• Simply defining Gödel Incompleteness and Tarski Un

    From Keith Thompson@21:1/5 to olcott on Wed Aug 12 18:36:35 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    [...]
    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken. Since truth itself cannot
    possibly be broken human understanding of the nature of truth is
    necessarily incorrect.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    According to some stories (I don't know whether this is 100%
    accurate), the Pythagoreans were horrified to discover that the
    length of the diagonal of a square cannot be expressed as the ratio
    of two integers. In modern terms, sqrt(2) is irrational. This fact
    attacked their concept of mathematical reality. They might have
    said that truth itself is fundamentally broken.

    But there was a rigorous proof of this disturbing fact, and they
    could not refute it. (Supposedly one person was murdered for
    divulging it.)

    They were deeply offended by the irrationality of sqrt(2).
    (Note that "irrational" here merely means that it's not a ratio,
    not that it's in any way insane.) And today mathematicians simply
    accept that not all numbers are rational.

    https://en.wikipedia.org/wiki/Square_root_of_2#History
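    The rigorous proof in question is short; a standard sketch by
    contradiction:

```latex
% Suppose \sqrt{2} = p/q in lowest terms, with p, q integers.
\sqrt{2} = \frac{p}{q} \;\Longrightarrow\; p^2 = 2q^2
% Then p^2 is even, so p is even: write p = 2k.
p^2 = 4k^2 = 2q^2 \;\Longrightarrow\; q^2 = 2k^2
% Then q is also even, contradicting "lowest terms".
```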

    The idea that there are statements that are true within a system
    but cannot be proven (or disproven) within that system is perhaps
    equally disturbing. But again, there is a rigorous proof,
    and mathematicians have no choice but to accept it. Gödel was
    right, and the world has not collapsed. Mathematical truth is not
    controlled by our prejudices.

    You might as well dedicate your life to proving that sqrt(2) is
    rational.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Wed Aug 12 21:01:15 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/12/2020 8:36 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken. Since truth itself cannot
    possibly be broken human understanding of the nature of truth is
    necessarily incorrect.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    According to some stories (I don't know whether this is 100%
    accurate), the Pythagoreans were horrified to discover that the
    length of the diagonal of a square cannot be expressed as the ratio
    of two integers. In modern terms, sqrt(2) is irrational. This fact
    attacked their concept of mathematical reality. They might have
    said that truth itself is fundamentally broken.

    But there was a rigorous proof of this disturbing fact, and they
    could not refute it. (Supposedly one person was murdered for
    divulging it.)

    They were deeply offended by the irrationality of sqrt(2).
    (Note that "irrational" here merely means that it's not a ratio,
    not that it's in any way insane.) And today mathematicians simply
    accept that not all numbers are rational.

    https://en.wikipedia.org/wiki/Square_root_of_2#History

    The idea that there are statements that are true within a system
    but cannot be proven (or disproven) within that system is perhaps

    It is simply impossible because satisfiability has provability as an
    aspect of itself. It is like adding integers together without using
    any numbers. Since integers *are* numbers, this is impossible.

    equally disturbing. But again, there is a rigorous proof,
    and mathematicians have no choice but to accept it. Gödel was
    right, and the world has not collapsed. Mathematical truth is not
    controlled by our prejudices.

    You might as well dedicate your life to proving that sqrt(2) is
    rational.



    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Wed Aug 12 22:39:13 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 7:21 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 5:00 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/12/2020 3:36 PM, Ben Bacarisse wrote:
    <cut>
    You babbled. Throwing terms around that you don't understand is not a
    good look for anyone. That whole post was a dog's dinner.

    Let me break it down for you using more precise terms:

    No. I know what you are saying because it's not at all complex, unusual
    or new. Your response stems from the same problem that leads you to
    think people are mad or lying or playing mind games. Sometimes people
    think you are babbling, not because they don't understand, but because
    you are tossing a word salad.

    A precise degree of partial Turing completeness can be defined as:
    The subset of Turing-computable functions that are computable on an
    abstract machine or programming language.

    The second sentence is fine. It's standard fare in computability
    theory, though we go a bit further by not limiting ourselves to Turing
    computable functions.

    The babbling comes from using phrases like Turing completeness
    inappropriately. The latest, "partial Turing completeness", is of
    course an oxymoron. Were you aiming for some archaic rhetorical effect?
    I want to establish the theoretical foundation that an x86 based
    refutation of the conventional halting problem counter-examples would
    immediately and directly apply to Turing Machines.

    <screeching of brakes> Reality check: what does what you call "partial
    Turing completeness" have to do with your claim? Are you planning to
    demonstrate something about a computational model that can't compute
    all the functions a Turing machine can? Do you plan to extrapolate
    that up to Turing computability?

    I want to explicitly account for the cases that may be computable in
    finite memory that are not computable in unlimited memory. It was
    proposed that such cases can possibly exist.

    There are no such cases.

    To do this it must be universally understood that the computability of
    an x86 based machine is identical to the computability of a Turing
    machine on a subset of functions.

    Two years ago, when you were still lying about having a Turing machine,
    people (certainly me but I think others too) counselled you to use
    anything but a Turing machine because, due to computational
    equivalence, there was no need for the mess that is an actual Turing
    machine.

    Apparently some people believe that there is a difference between
    Turing complete and Turing equivalent.

    Of course there is. Essentially it's the difference between X >= Y and
    X = Y. But it does not matter for what you claim.

    https://scienceblogs.com/goodmath/2007/01/05/turing-equivalent-vs-turing-co

    Not a good article in my opinion.

    But you, being you, have managed to pick the worst possible
    alternative. I am not sure that this is an accident. A pile of x86 is
    a great place to hide.

    Not in this case. I did it this way to make my solution as easy as
    possible to fully understand. I could have buried it in a few billion
    Turing Machine interpreter instructions where simply accessing the
    memory location @ 0xffff takes 65,000 steps.

    I was suggesting a high-level language. Of course even there you could
    bury your error, but it would be harder.

    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken.

    You mean mathematical truth is not what you always thought it was.

    Since truth itself cannot possibly be broken human understanding of
    the nature of truth is necessarily incorrect.

    Mathematics does not bend to your pronouncements. Mathematical truth is
    what it is, and your declaring it broken has no effect at all.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    PO-provability may be an aspect of PO-satisfiability and PO-truth and
    PO-provable may not diverge, but the sentence as you wrote it (with
    words you don't yet understand) is just wrong.


    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch. What-so-ever definitive basis that
    you have to show that a proposition is definitely true is the proof of
    this proposition. THERE IS NO WAY OUT OF THIS


    --
    Copyright 2020 Pete Olcott

  • From André G. Isaak@21:1/5 to olcott on Thu Aug 13 03:00:18 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-12 18:54, olcott wrote:

    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken. Since truth itself cannot possibly
    be broken human understanding of the nature of truth is necessarily incorrect.

    None of these people claim that truth is 'fundamentally broken'. They
    simply point out the limitations of what certain types of systems are
    able to achieve.

    The fact that something has limitations doesn't mean it is
    fundamentally broken. Your car, for example, cannot travel at
    1000 mph. That's a limitation. It doesn't mean your car is broken.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    Not according to standard definitions of those terms.

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From André G. Isaak@21:1/5 to olcott on Thu Aug 13 02:56:23 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch. What-so-ever definitive basis that
    you have to show that a proposition is definitely true is the proof of
    this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same thing.
    There are many propositions which must either be true or false but which
    we don't know which. For example 'Tachyons exist.' is either true or it
    is false since there are no things which both exist and do not exist.
    But we do not know whether 'tachyons exist' is true.

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From olcott@21:1/5 to Kaz Kylheku on Thu Aug 13 10:32:28 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 2:45 AM, Kaz Kylheku wrote:
    On 2020-08-12, olcott <NoOne@NoWhere.com> wrote:
    I want to establish the theoretical foundation that an x86 based
    refutation of the conventional halting problem counter-examples would
    immediately and directly apply to Turing Machines.

    Your idea for this is intellectually dishonest, invalid, and
    frankly, stupid.


    This statement is presumptuous.

    The impossibility of determining halting for all machines is
    demonstrated by proofs involving a trick which breaks the halting
    detector by embedding it in a machine, which applies it to itself.


    That is very well put. It is indeed a trick. A smart halt decider can
    see through this trick using a key insight that no one ever noticed before.
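
    The trick Kaz describes is the standard diagonal construction. A
    C-style pseudocode sketch (an illustration, not code from the thread;
    `halts` is the hypothetical decider and cannot actually be
    implemented):

```c
/* Pseudocode sketch of the classic halting-problem construction.
   halts(p, i) is the assumed total decider: nonzero iff program p
   halts on input i. */
int halts(program p, input i);

void D(program p)
{
    if (halts(p, p))   /* decider says "p(p) halts"...        */
        for (;;) ;     /* ...so D loops forever instead       */
    /* decider says "p(p) loops", so D halts immediately      */
}

/* Now ask what halts(D, D) returns: either answer contradicts
   D's actual behavior, so no total decider halts can exist.   */
```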

    Your plan is, evidently, to make the halting detector unavailable to
    researchers (through trade secrets and denial of licensing) for
    that sort of embedding. Only running it as a stand-alone black box will
    be permitted. Or perhaps even as a remote service over a network,
    with no access to the technology. That's likely why you're hinting at
    IPC mechanisms in your vague descriptions.

    This statement is preposterous. I am merely refraining from releasing
    my most significant results until those results are in their best
    form. I will most likely release my results as an open source code
    repository. My whole idea all along is to provide a complete execution
    trace of every step of the halt deciding process.

    I might also port my x86 based UTM to a webservice. My UTM directly
    executes the COFF object files produced by the Microsoft Visual C++
    compiler with some caveats. The Microsoft compiler generates code that
    is much easier to understand than the gcc/g++ compiler code.

    But the halting theorem is an assertion about all machines. Machines
    that you disallow researchers to have still exist, mathematically.

    The following does not fly: "Halting is solved for the machines you're
    allowed to test with; believe me that it also works for the secret
    ones you're not allowed to test." That is not science; it is
    intellectual fraud.

    One small problem is also that you're not capable of developing a
    halting detector of such a pedigree that testers will actually *have*
    to resort to self-referential test cases to break it. It will either
    give
    the wrong answer or else itself fail to halt on all sorts of ordinary
    inputs that do not perpetrate any trickery.


    I have decided to release the code repository of the x86 based UTM
    long before I release my halting problem results because there is
    nothing about the x86 based UTM that need be kept secret. The little
    bit of code that I posted yesterday is all the code that is complete.
    I spend about 90% of my time on design, 5% on coding, and 5% on
    debugging. Back in the bad old days it was 5% on design, 15% on
    coding, and 80% on debugging.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Thu Aug 13 10:40:09 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken.

    You mean mathematical truth is not what you always thought it was.

    Since truth itself cannot possibly be broken human understanding of
    the nature of truth is necessarily incorrect.

    Mathematics does not bend to your pronouncements. Mathematical truth is >>> what it is, and your declaring it broken has no effect at all.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    PO-provability may be an aspect of PO-satisfiability and PO-truth and
    PO-provable may not diverge, but the sentence as you wrote it (with
    words you don't yet understand) is just wrong.

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch.

    Every arithmetical formula is either true or false. Not every
    arithmetical formula is provable or refutable. Deal with it.

    The only possible way to correctly conclude that a WFF is true or false
    is some kind of proof. If anyone concludes that a WFF is true or false
    without some kind of proof this is necessarily mere presumption.


    What-so-ever definitive basis that you have to show that a proposition
    is definitely true is the proof of this proposition.

    In a sound system, if you have a proof, the proposition is true. No
    one doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.


    Sure, that is correct, but that is not what is asserted by the Gödel
    proof. The Gödel proof asserts that G is definitely true and
    definitely unprovable. That is the same thing as calculating the sum
    of two integers without using any numbers (integers are numbers).

    THERE IS NO WAY OUT OF THIS

    Yes there is: education. You don't have to keep using words you don't
    understand.



    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Thu Aug 13 10:44:57 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 5:16 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 10:11 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 6:39 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:


    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }

    Shocking. But then you are not a programmer are you?

    In other words you don't understand the code.

    Says the person who had to post a correction to their own code because
    they got the levels of pointer indirection wrong! As far as I know you
    /still/ haven't posted a correct declaration for AllocateHeap (not
    that anyone cares).


    The fact that preliminary code has bugs is why I refrain from posting
    preliminary code. The trickiest code is SpawnProcess(). I think that I
    have that totally figured out. I will not post that code until I
    publish my whole UTM to a code repository.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Thu Aug 13 10:48:19 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 3:56 AM, André G. Isaak wrote:
    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch. What-so-ever definitive basis
    that you have to show that a proposition is definitely true is the
    proof of this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same thing.
    There are many propositions which must either be true or false but which
    we don't know which. For example 'Tachyons exist.' is either true or it
    is false since there are no things which both exist and do not exist.
    But we do not know whether 'tachyons exist' is true.

    André


    That Gödel's G can be known to be definitely true and known to be
    definitely unprovable is just as impossible as determining the sum of a
    pair of integers without using any numbers (integers are numbers).



    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Thu Aug 13 10:56:44 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 4:00 AM, André G. Isaak wrote:
    On 2020-08-12 18:54, olcott wrote:

    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken. Since truth itself cannot
    possibly be broken human understanding of the nature of truth is
    necessarily incorrect.

    None of these people claim that truth is 'fundamentally broken'.

    Of course they don't. If they understood that this is the logical
    result of their work, they would have the insight to know that their
    work cannot possibly be correct.

    They
    simply point out the limitations of what certain types of systems are
    able to achieve.

    The fact that something has limitations doesn't mean it is
    fundamentally broken. Your car, for example, cannot travel at
    1000 mph. That's a limitation. It doesn't mean your car is broken.

    The only possible way that we can know that any analytical expression is
    true is through an analytical proof that applies truth preserving
    operations to true premises.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    Not according to standard definitions of those terms.

    André

    Maybe these terms of the art are defined in absurd ways such as the way
    that a Turing Machine that only immediately halts is called a decider.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Thu Aug 13 12:12:34 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 11:51 AM, André G. Isaak wrote:
    On 2020-08-13 09:48, olcott wrote:
    On 8/13/2020 3:56 AM, André G. Isaak wrote:
    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch. What-so-ever definitive basis
    that you have to show that a proposition is definitely true is the
    proof of this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same thing.
    There are many propositions which must either be true or false but
    which we don't know which. For example 'Tachyons exist.' is either
    true or it is false since there are no things which both exist and do
    not exist. But we do not know whether 'tachyons exist' is true.

    André


    That Gödel's G can be known to be definitely true and known to be
    definitely unprovable is just as impossible as determining the sum of
    a pair of integers without using any numbers (integers are numbers).

    You should probably learn what Gödel's proof actually says and address
    that rather than what you think it says.

    Gödel's G is a statement such that neither G nor ¬G can be proven.

    That entails that there is *some* true statement which cannot be proven.
    We do not know, however, whether that true statement is G or ¬G. But one
    of those two statements must be true.
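
    Stated in standard notation (a textbook formulation, not quoted from
    the thread):

```latex
% First incompleteness theorem: for any consistent, effectively
% axiomatized theory T interpreting elementary arithmetic, there
% is a sentence G such that
T \nvdash G \qquad \text{and} \qquad T \nvdash \lnot G
% Classically, either G or \lnot G is true in the standard model
% of arithmetic, so some true arithmetical sentence is unprovable
% in T.
```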

    André


    If it was as simple as that there would not be a problem.
    What is actually meant by "true and unprovable" is that it is true that
    G is unprovable yet it is not provable that G is unprovable.

    The only possible way to know for sure that G is definitely unprovable
    is a proof that G is unprovable; anything less than this is not
    definite.

    Gödel: I know for sure that G is unprovable!
    Pete: Can you prove that?
    Gödel: No
    Pete: Then your "knowledge" about G is mere presumption.

    --
    Copyright 2020 Pete Olcott

  • From André G. Isaak@21:1/5 to olcott on Thu Aug 13 10:51:56 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-13 09:48, olcott wrote:
    On 8/13/2020 3:56 AM, André G. Isaak wrote:
    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch. What-so-ever definitive basis
    that you have to show that a proposition is definitely true is the
    proof of this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same thing.
    There are many propositions which must either be true or false but
    which we don't know which. For example 'Tachyons exist.' is either
    true or it is false since there are no things which both exist and do
    not exist. But we do not know whether 'tachyons exist' is true.

    André


    That Gödel's G can be known to be definitely true and known to be
    definitely unprovable is just as impossible as determining the sum of a
    pair of integers without using any numbers (integers are numbers).

    You should probably learn what Gödel's proof actually says and address
    that rather than what you think it says.

    Gödel's G is a statement such that neither G nor ¬G can be proven.

    That entails that there is *some* true statement which cannot be proven.
    We do not know, however, whether that true statement is G or ¬G. But one
    of those two statements must be true.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From André G. Isaak@21:1/5 to olcott on Thu Aug 13 12:03:31 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-13 11:12, olcott wrote:
    On 8/13/2020 11:51 AM, André G. Isaak wrote:
    On 2020-08-13 09:48, olcott wrote:
    On 8/13/2020 3:56 AM, André G. Isaak wrote:
    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it
    is true besides a wild guess or a hunch. What-so-ever definitive
    basis that you have to show that a proposition is definitely true
    is the proof of this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same
    thing. There are many propositions which must either be true or
    false but which we don't know which. For example 'Tachyons exist.'
    is either true or it is false since there are no things which both
    exist and do not exist. But we do not know whether 'tachyons exist'
    is true.

    André


    That Gödel's G can be known to be definitely true and known to be
    definitely unprovable is just as impossible as determining the sum of
    a pair of integers without using any numbers (integers are numbers).

    You should probably learn what Gödel's proof actually says and address
    that rather than what you think it says.

    Gödel's G is a statement such that neither G nor ¬G can be proven.

    That entails that there is *some* true statement which cannot be
    proven. We do not know, however, whether that true statement is G or
    ¬G. But one of those two statements must be true.

    André


    If it was as simple as that there would not be a problem.

    It is as simple as that.

    What is actually meant by "true and unprovable" is that it is true that
    G is unprovable yet it is not provable that G is unprovable.

    Where on earth do you get that idea from? Gödel proves that both G and
    ¬G are unprovable.

    The only possible way to know for sure that G is definitely unprovable
    is a proof that G is unprovable; anything less than this is not
    definite.

    Gödel: I know for sure that G is unprovable!
    Pete:  Can you prove that?
    Gödel: No

    Except that Gödel's answer would be yes.

    Pete:  Then your "knowledge" about G is mere presumption.

    Your correct response would be 'then I obviously don't understand what
    your proof really says and I should probably go and actually read
    *and understand* it before pontificating about it'.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From André G. Isaak@21:1/5 to Keith Thompson on Thu Aug 13 12:46:29 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-13 12:34, Keith Thompson wrote:

    Whatever you're doing, it's unlikely that the code you've shown is
    the right way to do it.

    Also, even if we make assumptions about what this code is *intended* to
    do based on the names given to the undefined functions, none of it has
    anything to do with determining whether something halts.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From Keith Thompson@21:1/5 to olcott on Thu Aug 13 11:34:37 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    [...]
    This is the interprocess communication protocol that I have
    established for a slave UTM to request three different kinds of
    operating system functions. The code for these three functions is
    fully designed. I only need to code the "spawn process" code and then
    the master UTM will be essentially complete.

    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }
    [assembly code snipped]

    OK, let's go over this C program.

    What you've written above is of course not a complete program.
    Your functions AllocateHeap, SaveState, and RestoreState are
    not declared, and as of C99 calling an undeclared function is a
    constraint violation. The type name uint32_t is not visible unless
    you have a #include directive for <stdint.h> (or <inttypes.h>).
    (And the fact that you're using uint32_t, which was introduced in
    C99, implies that you're not using C90, which did allow calls to
    undeclared functions.)

    S1 = (uint32_t*) 16; // size param passed to AllocateHeap

    This makes S1 point to a uint32_t object at address 16. That's
    ... odd. Converting a constant integer value to a pointer is
    rarely meaningful, unless you're on some kind of embedded system
    where you know the exact memory layout. I suppose if you intend to
    run this code on your x86 simulator, you can guarantee that there
    is addressable memory at address 16. Why 16? Is some known value
    stored at that location by some other mechanism? But the following
    lines make me think that's not what you intended.

    AllocateHeap(&S1); // Allocates 16 uint32_t

    I presume you have reasons not to use the standard malloc function.
    If I understand the comment correctly, this is intended to allocate
    16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
    for 4-byte elements). If the argument is intended to specify the
    number of uint32_t elements to allocate, why on Earth would you
    pass a uint32_t** argument rather than just a uint32_t (or an int,
    or a size_t)?

    Can you describe what's stored at address 16, and why AllocateHeap
    doesn't take a simple integer argument? And why does it not return a
    result? (Or if it does, you discard it.) Allocation functions
    typically return a pointer to the allocated memory.

    I'm fairly sure I've misunderstood your code because I can only
    see what you've written, not what you think it's supposed to mean.

    Given the above, I won't try to guess how SaveState and RestoreState
    are supposed to work. They both take arguments of type uint32_t**,
    but it's hard to imagine what they'd do with them.

    Whatever you're doing, it's unlikely that the code you've shown is
    the right way to do it. Frankly, it looks like it was written by
    someone who doesn't yet understand what "*" and "&" mean, and who has
    just been adding arbitrary symbols to make the code look plausible.
    (I would have said it looks like you added arbitrary symbols until
    it compiled, which is a particularly dangerous approach in C,
    but it doesn't even compile.)

    Or am I missing something?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Thu Aug 13 14:09:24 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 1:46 PM, André G. Isaak wrote:
    On 2020-08-13 12:34, Keith Thompson wrote:

    Whatever you're doing, it's unlikely that the code you've shown is
    the right way to do it.

    Also, even if we make assumptions about what this code is *intended* to
    do based on the names given to the undefined functions, none of it has
    anything to do with determining whether something halts.

    André


    The only thing that I can share is details about my x86 based UTM.
    I will not share any details about my halt decider until a very long
    time after I post the x86 based UTM to a code repository and have made
    all of the changes suggested by peer review.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Thu Aug 13 14:17:51 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 1:03 PM, André G. Isaak wrote:
    On 2020-08-13 11:12, olcott wrote:
    On 8/13/2020 11:51 AM, André G. Isaak wrote:
    On 2020-08-13 09:48, olcott wrote:
    On 8/13/2020 3:56 AM, André G. Isaak wrote:
    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it
    is true besides a wild guess or a hunch. What-so-ever definitive
    basis that you have to show that a proposition is definitely true
    is the proof of this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same
    thing. There are many propositions which must either be true or
    false but which we don't know which. For example 'Tachyons exist.'
    is either true or it is false since there are no things which both
    exist and do not exist. But we do not know whether 'tachyons exist'
    is true.

    André


    That Gödel's G can be known to be definitely true and known to be
    definitely unprovable is just as impossible as determining the sum
    of a pair of integers without using any numbers (integers are numbers).
    You should probably learn what Gödel's proof actually says and
    address that rather than what you think it says.

    Gödel's G is a statement such that neither G nor ¬G can be proven.

    That entails that there is *some* true statement which cannot be
    proven. We do not know, however, whether that true statement is G or
    ¬G. But one of those two statements must be true.

    André


    If it was as simple as that there would not be a problem.

    It is as simple as that.

    What is actually meant by "true and unprovable" is that it is true
    that G is unprovable yet it is not provable that G is unprovable.

    Where on earth do you get that idea from? Gödel proves that both G and
    ¬G are unprovable.

    The only possible way to know for sure that G is definitely unprovable
    is a proof that G is unprovable; anything less than this is not definite.

    Gödel: I know for sure that G is unprovable!
    Pete:  Can you prove that?
    Gödel: No

    Except that Gödel's answer would be yes.

    Any formal proposition that only asserts its own unprovability is
    unprovable for the same reason that the liar paradox is untrue.

    Gödel used a very convoluted process to make it seem that his
    proposition was not actually referring to an assertion about its own
    unprovability; nonetheless his proposition was at least logically
    equivalent to a proposition that asserts its own unprovability, and
    has a vacuous truth value for the same reason and in the same way
    that a proposition that only asserts its own falsity has a vacuous
    truth value.


    This sentence is true.
    What is it true about?
    It is not true about anything; it is simply true.
    That is not the way that it works.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Keith Thompson on Thu Aug 13 14:50:11 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 1:34 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    This is the interprocess communication protocol that I have
    established for a slave UTM to request three different kinds of
    operating system functions. The code for these three functions is
    fully designed. I only need to code the "spawn process" code and then
    the master UTM will be essentially complete.

    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }
    [assembly code snipped]

    OK, let's go over this C program.

    What you've written above is of course not a complete program.
    Your functions AllocateHeap, SaveState, and RestoreState are
    not declared, and as of C99 calling an undeclared function is a
    constraint violation. The type name uint32_t is not visible unless
    you have a #include directive for <stdint.h> (or <inttypes.h>).
    (And the fact that you're using uint32_t, which was introduced in
    C99, implies that you're not using C90, which did allow calls to
    undeclared functions.)

    S1 = (uint32_t*) 16; // size param passed to AllocateHeap

    This makes S1 point to a uint32_t object at address 16. That's
    ... odd. Converting a constant integer value to a pointer is
    rarely meaningful, unless you're on some kind of embedded system
    where you know the exact memory layout. I suppose if you intend to
    run this code on your x86 simulator, you can guarantee that there
    is addressable memory at address 16. Why 16? Is some known value
    stored at that location by some other mechanism? But the following
    lines make me think that's not what you intended.

    This is merely a short-cut way to send the requested size parameter
    to AllocateHeap(). If we do this using normal "C" conventions, it
    generates additional machine code, increasing the complexity of the
    assembly language execution trace.

    AllocateHeap(&S1); // Allocates 16 uint32_t

    I presume you have reasons not to use the standard malloc function.

    Yes. The master UTM is its own self-contained operating system and all
    memory is a single contiguous block as if it was an actual UTM tape.
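A self-contained heap carved out of one contiguous block is commonly done as a bump allocator. The sketch below is illustrative only; the names (`HeapSpace`, `BumpAlloc`) and sizes are hypothetical and are not taken from the actual UTM code.

```c
#include <stdint.h>

/* Illustrative bump allocator over one contiguous block, the sort of
   scheme a self-contained "tape"-style heap might use. */
#define HEAP_WORDS 4096

static uint32_t HeapSpace[HEAP_WORDS];  /* the single contiguous block */
static uint32_t HeapNext = 0;           /* index of the next free word */

static uint32_t *BumpAlloc(uint32_t count)
{
    if (HeapNext + count > HEAP_WORDS)
        return 0;                       /* out of tape */
    uint32_t *block = &HeapSpace[HeapNext];
    HeapNext += count;
    return block;
}
```

Successive allocations are adjacent in the block, so the whole heap reads like one linear tape.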

    If I understand the comment correctly, this is intended to allocate
    16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
    for 4-byte elements). If the argument is intended to specify the
    number of uint32_t elements to allocate, why on Earth would you
    pass a uint32_t** argument rather than just a uint32_t (or an int,
    or a size_t)?

    I must pass uint32_t** because the caller returns the address of the
    allocated memory block in this pointer.

    Can you describe what's stored at address 16, and why AllocateHeap
    doesn't take a simple integer argument? And why does it not return a
    result? (Or if it does, you discard it.) Allocation functions
    typically return a pointer to the allocated memory.

    The 16 is not used as an address. It is used as the single integer
    parameter. I did it this unconventional way to reduce the number of
    machine language instructions to make the assembly language debug trace
    easier to understand.

    I'm fairly sure I've misunderstood your code because I can only
    see what you've written, not what you think it's supposed to mean.

    Given the above, I won't try to guess how SaveState and RestoreState
    are supposed to work. They both take arguments of type uint32_t**,
    but it's hard to imagine what they'd do with them.


    SaveState(&S1); // Saves 16 32-bit registers
    As it says SaveState saves all of the 16 registers that an x86 machine
    has. The conventions required to make all of the underlying slave UTMs
    work correctly are that they never define or use any global data. All of
    their data is on their stack or in the heap. Every slave UTM that is
    created with SpawnProcess() gets its own StackSpace from the HeapSpace.

    RestoreState(&S1); // Process context switch
    Copies the saved registers back to the master UTM machine registers.
    This causes a context switch between slave UTMs, AKA time slicing.

    Whatever you're doing, it's unlikely that the code you've shown is
    the right way to do it. Frankly, it looks like it was written by
    someone who doesn't yet understand what "*" and "&" mean, and who has
    just been adding arbitrary symbols to make the code look plausible.
    (I would have said it looks like you added arbitrary symbols until
    it compiled, which is a particularly dangerous approach in C,
    but it doesn't even compile.)

    Or am I missing something?


    Yes you are missing the fact that the "C" code is written to minimize
    the complexity of the generated machine code. I might go the other way
    with this and use regular "c" conventions allowing the assembly language
    code to have this additional complexity.

    The standard code generated by a "c" compiler seems to have about double
    the complexity of the simplest hand-written assembly language. The
    simplest hand written assembly language is very easy to understand. The
    code generated by the "c" compiler not so much.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Thu Aug 13 14:55:57 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 2:38 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    PO-provability may be an aspect of PO-satisfiability and PO-truth and
    PO-provable may not diverge, but the sentence as you wrote it (with
    words you don't yet understand) is just wrong.

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch.

    Every arithmetical formula is either true or false. Not every
    arithmetical formula is provable or refutable. Deal with it.

    The only possible way to correctly conclude that a WFF is true or
    false is some kind of proof.

    Do you see how that is not a reply to my point? Probably not.

    If anyone concludes that a WFF is true or false without some kind of
    proof this is necessarily mere presumption.

    If... So many things if... But fortunately the antecedent is false.
    No one (apart from you) does that.

    (Do you remember that way back in 2012 when you declared: "If God can
    not solve the Halting Problem, then there is something wrong with the
    problem"?)

    What-so-ever definitive basis that you have to show that a proposition
    is definitely true is the proof of this proposition.

    In a sound system, if you have a proof, the proposition is true. No one
    doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.


    Sure, that is correct. But that is not what is asserted by the Gödel
    proof.

    Yes it is. You've never read it, have you? You would fall at the
    first page.

    The Gödel proof asserts that G is definitely true and definitely
    unprovable.

    The paper has an extra sting in the tail: that g must be true. But this
    is a metamathematical deduction from outside PM.


    Ah great you are directly delving into the actual heart of it. The key
    mistake is that true outside of PM does not count as true inside of PM.

    You once thought it would be wise to learn what words like "true" mean
    in a mathematical context, but that did not last long. You stumbled
    when it turned out you did not know what a function was.


    We can get back to that. I still need to know the conventional meanings
    of the related set of terms.

    That is the same thing as calculating the sum of two
    integers without using any numbers (integers are numbers).

    Analogies like this are only useful for people like me to see just how
    little you understand. They won't persuade anyone, but the fact that
    you think this is an analogy is very revealing.

    THERE IS NO WAY OUT OF THIS

    Yes there is: education. You don't have to keep using words you don't
    understand.

    You briefly thought you might try this way out...



    --
    Copyright 2020 Pete Olcott

  • From Keith Thompson@21:1/5 to olcott on Thu Aug 13 15:14:07 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/13/2020 1:34 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    This is the interprocess communication protocol that I have
    established for a slave UTM to request three different kinds of
    operating system functions. The code for these three functions is
    fully designed. I only need to code the "spawn process" code and then
    the master UTM will be essentially complete.

    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }
    [assembly code snipped]

    OK, let's go over this C program.

    What you've written above is of course not a complete program.
    Your functions AllocateHeap, SaveState, and RestoreState are
    not declared, and as of C99 calling an undeclared function is a
    constraint violation. The type name uint32_t is not visible unless
    you have a #include directive for <stdint.h> (or <inttypes.h>).
    (And the fact that you're using uint32_t, which was introduced in
    C99, implies that you're not using C90, which did allow calls to
    undeclared functions.)

    S1 = (uint32_t*) 16; // size param passed to AllocateHeap

    This makes S1 point to a uint32_t object at address 16. That's
    ... odd. Converting a constant integer value to a pointer is
    rarely meaningful, unless you're on some kind of embedded system
    where you know the exact memory layout. I suppose if you intend to
    run this code on your x86 simulator, you can guarantee that there
    is addressable memory at address 16. Why 16? Is some known value
    stored at that location by some other mechanism? But the following
    lines make me think that's not what you intended.

    This is merely a short-cut way to send the requested size parameter
    to AllocateHeap(). If we do this using normal "C" conventions, it
    generates additional machine code, increasing the complexity of the
    assembly language execution trace.

    If I understand you correctly, rather than passing an integer
    argument and returning a pointer result, you've decided to pass a
    single pointer-to-pointer argument and do some ugly type conversions,
    because you think that passing a single argument rather than either
    (a) passing two arguments or (b) passing an argument and returning
    a result makes the code simpler. Is that what you're saying?

    AllocateHeap(&S1); // Allocates 16 uint32_t

    I presume you have reasons not to use the standard malloc function.

    Yes. The master UTM is its own self-contained operating system and all
    memory is a single contiguous block as if it was an actual UTM tape.

    If I understand the comment correctly, this is intended to allocate
    16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
    for 4-byte elements). If the argument is intended to specify the
    number of uint32_t elements to allocate, why on Earth would you
    pass a uint32_t** argument rather than just a uint32_t (or an int,
    or a size_t)?

    I must pass uint32_t** because the caller returns the address of the
    allocated memory block in this pointer.

    What do you mean by this? The caller is the main function, which
    doesn't explicitly return anything (it implicitly returns 0 to the
    calling environment).

    Can you describe what's stored at address 16, and why AllocateHeap
    doesn't take a simple integer argument? And why does it not return a
    result? (Or if it does, you discard it.) Allocation functions
    typically return a pointer to the allocated memory.

    The 16 is not used as an address. It is used as the single integer
    parameter. I did it this unconventional way to reduce the number of
    machine language instructions to make the assembly language debug
    trace easier to understand.

    Does AllocateHeap internally convert its argument back from uint32_t**
    to some integer type before using its value?

    Your C code, interpreted as C, is horribly convoluted. Apparently you
    think there's some benefit that makes this worthwhile.

    I'm fairly sure I've misunderstood your code because I can only
    see what you've written, not what you think it's supposed to mean.

    Given the above, I won't try to guess how SaveState and RestoreState
    are supposed to work. They both take arguments of type uint32_t**,
    but it's hard to imagine what they'd do with them.


    SaveState(&S1); // Saves 16 32-bit registers
    As it says SaveState saves all of the 16 registers that an x86 machine
    has. The conventions required to make all of the underlying slave UTMs
    work correctly is that they never define or use any global data. All
    of their data is on their stack or in the heap. Every slave UTM that
    is created with SpawnProcess() gets its own StackSpace from the
    HeapSpace.

    Saves them where? Is the entire state stored in the 16 32-bit
    registers?

    RestoreState(&S1); // Process context switch
    Copies the saved registers back to the master UTM machine registers.
    This causes a context switch between slave UTMs, AKA time slicing.

    Whatever you're doing, it's unlikely that the code you've shown is
    the right way to do it. Frankly, it looks like it was written by
    someone who doesn't yet understand what "*" and "&" mean, and who has
    just been adding arbitrary symbols to make the code look plausible.
    (I would have said it looks like you added arbitrary symbols until
    it compiled, which is a particularly dangerous approach in C,
    but it doesn't even compile.)

    Or am I missing something?

    Yes you are missing the fact that the "C" code is written to minimize
    the complexity of the generated machine code. I might go the other way
    with this and use regular "c" conventions allowing the assembly
    language code to have this additional complexity.

    The standard code generated by a "c" compiler seems to have about
    double the complexity of the simplest hand-written assembly
    language. The simplest hand-written assembly language is very easy to
    understand. The code generated by the "c" compiler not so much.

    Why not write assembly code in the first place?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From olcott@21:1/5 to Keith Thompson on Thu Aug 13 18:23:25 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 5:14 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/13/2020 1:34 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    This is the interprocess communication protocol that I have
    established for a slave UTM to request three different kinds of
    operating system functions. The code for these three functions is
    fully designed. I only need to code the "spawn process" code and then
    the master UTM will be essentially complete.

    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }
    [assembly code snipped]

    OK, let's go over this C program.

    What you've written above is of course not a complete program.
    Your functions AllocateHeap, SaveState, and RestoreState are
    not declared, and as of C99 calling an undeclared function is a
    constraint violation. The type name uint32_t is not visible unless
    you have a #include directive for <stdint.h> (or <inttypes.h>).
    (And the fact that you're using uint32_t, which was introduced in
    C99, implies that you're not using C90, which did allow calls to
    undeclared functions.)

    S1 = (uint32_t*) 16; // size param passed to AllocateHeap

    This makes S1 point to a uint32_t object at address 16. That's
    ... odd. Converting a constant integer value to a pointer is
    rarely meaningful, unless you're on some kind of embedded system
    where you know the exact memory layout. I suppose if you intend to
    run this code on your x86 simulator, you can guarantee that there
    is addressable memory at address 16. Why 16? Is some known value
    stored at that location by some other mechanism? But the following
    lines make me think that's not what you intended.

    This is merely a short-cut way to send the requested size parameter to
    AllocateHeap(). If we do this using normal "c" conventions it
    generates additional machine code increasing the complexity of the
    assembly language execution trace.

    If I understand you correctly, rather than passing an integer
    argument and returning a pointer result, you've decided to pass a
    single pointer-to-pointer argument and do some ugly type conversions,
    because you think that passing a single argument rather than either
    (a) passing two arguments or (b) passing an argument and returning
    a result makes the code simpler. Is that what you're saying?

    At the machine code level my method is simpler.
    There are no actual coded type conversions.


    AllocateHeap(&S1); // Allocates 16 uint32_t

    I presume you have reasons not to use the standard malloc function.

    Yes. The master UTM is its own self-contained operating system and all
    memory is a single contiguous block as if it was an actual UTM tape.

    If I understand the comment correctly, this is intended to allocate
    16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
    for 4-byte elements). If the argument is intended to specify the
    number of uint32_t elements to allocate, why on Earth would you
    pass a uint32_t** argument rather than just a uint32_t (or an int,
    or a size_t)?

    I must pass uint32_t** because the caller returns the address of the
    allocated memory block in this pointer.

    What do you mean by this? The caller is the main function, which
    doesn't explicitly return anything (it implicitly returns 0 to the
    calling environment).

    You have to look at the assembly language to understand why I
    included it.

    Can you describe what's stored at address 16, and why AllocateHeap
    doesn't take a simple integer argument? And why does it not return a
    result? (Or if it does, you discard it.) Allocation functions
    typically return a pointer to the allocated memory.

    The 16 is not used as an address. It is used as the single integer
    parameter. I did it this unconventional way to reduce the number of
    machine language instructions to make the assembly language debug
    trace easier to understand.

    Does AllocateHeap internally convert its argument back from uint32_t**
    to some integer type before using its value?

    At the assembly language level machine addresses and unsigned integers
    are exactly the same thing.
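In portable C that identification has to be made explicit via `uintptr_t`; the sketch below (the function name is invented for illustration) shows the conversions the quoted code performs implicitly. Strictly, the integer-to-pointer direction is implementation-defined, but on a flat 32-bit x86 model it is an identity.

```c
#include <stdint.h>

/* Carries an integer through a pointer variable and recovers it,
   making explicit the address/integer identification that the
   S1 = (uint32_t*) 16 trick relies on. */
static uintptr_t SmuggleAndRecover(uintptr_t size)
{
    uint32_t *S1 = (uint32_t *)size;  /* integer smuggled in a pointer */
    return (uintptr_t)S1;             /* recovered as an integer again */
}
```

On the flat memory model assumed here, `SmuggleAndRecover(16)` yields 16 again.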

    Your C code, interpreted as C, is horribly convoluted. Apparently you
    think there's some benefit that makes this worthwhile.

    I'm fairly sure I've misunderstood your code because I can only
    see what you've written, not what you think it's supposed to mean.

    Given the above, I won't try to guess how SaveState and RestoreState
    are supposed to work. They both take arguments of type uint32_t**,
    but it's hard to imagine what they'd do with them.


    SaveState(&S1); // Saves 16 32-bit registers
    As it says SaveState saves all of the 16 registers that an x86 machine
    has. The conventions required to make all of the underlying slave UTMs
    work correctly is that they never define or use any global data. All
    of their data is on their stack or in the heap. Every slave UTM that
    is created with SpawnProcess() gets its own StackSpace from the
    HeapSpace.

    Saves them where? Is the entire state stored in the 16 32-bit
    registers?

    The only part of the machine state that needs to be saved when
    switching contexts between processes is stored in the registers. Each
    process also has its own StackSpace.
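A saved context of that description could be laid out as 16 contiguous 32-bit words. The struct below is a hypothetical layout, not the actual one; note that 32-bit x86 has 8 general-purpose registers, so the remaining slots are assumed here to hold EIP, EFLAGS, and segment/miscellaneous state to reach the stated 16.

```c
#include <stdint.h>

/* Hypothetical per-process saved state: 16 x 32-bit words, matching
   the claim that the registers are all that must survive a context
   switch (the saved ESP locates the process's private StackSpace). */
typedef struct {
    uint32_t eax, ebx, ecx, edx;   /* general registers        */
    uint32_t esi, edi, ebp, esp;   /* index/stack registers    */
    uint32_t eip, eflags;          /* control state            */
    uint32_t seg_and_misc[6];      /* assumed remaining slots  */
} SavedState;                      /* 16 * 4 = 64 bytes total  */
```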

    RestoreState(&S1); // Process context switch
    Copies the saved registers back to the master UTM machine registers.
    This causes a context switch between slave UTMs, AKA time slicing.

    Whatever you're doing, it's unlikely that the code you've shown is
    the right way to do it. Frankly, it looks like it was written by
    someone who doesn't yet understand what "*" and "&" mean, and who has
    just been adding arbitrary symbols to make the code look plausible.
    (I would have said it looks like you added arbitrary symbols until
    it compiled, which is a particularly dangerous approach in C,
    but it doesn't even compile.)

    Or am I missing something?

    Yes you are missing the fact that the "C" code is written to minimize
    the complexity of the generated machine code. I might go the other way
    with this and use regular "c" conventions allowing the assembly
    language code to have this additional complexity.

    The standard code generated by a "c" compiler seems to have about
    double the complexity of the simplest hand-written assembly
    language. The simplest hand-written assembly language is very easy to
    understand. The code generated by the "c" compiler not so much.

    Why not write assembly code in the first place?


    Because what can best be done in "C" is much easier to understand in "C".

    --
    Copyright 2020 Pete Olcott

  • From André G. Isaak@21:1/5 to olcott on Thu Aug 13 22:02:38 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-13 13:17, olcott wrote:
    On 8/13/2020 1:03 PM, André G. Isaak wrote:
    On 2020-08-13 11:12, olcott wrote:
    On 8/13/2020 11:51 AM, André G. Isaak wrote:
    On 2020-08-13 09:48, olcott wrote:
    On 8/13/2020 3:56 AM, André G. Isaak wrote:
    On 2020-08-12 21:39, olcott wrote:

    If a proposition is true you must have some basis to know that it
    is true besides a wild guess or a hunch. What-so-ever definitive
    basis that you have to show that a proposition is definitely true
    is the proof of this proposition. THERE IS NO WAY OUT OF THIS

    You are conflating truth with knowledge. They are not the same
    thing. There are many propositions which must either be true or
    false but which we don't know which. For example 'Tachyons exist.'
    is either true or it is false since there are no things which both
    exist and do not exist. But we do not know whether 'tachyons
    exist' is true.

    André


    That Gödel's G can be known to be definitely true and known to be
    definitely unprovable is just as impossible as determining the sum
    of a pair of integers without using any numbers (integers are
    numbers).

    You should probably learn what Gödel's proof actually says and
    address that rather than what you think it says.

    Gödel's G is a statement such that neither G nor ¬G can be proven.

    That entails that there is *some* true statement which cannot be
    proven. We do not know, however, whether that true statement is G or
    ¬G. But one of those two statements must be true.

    André


    If it was as simple as that there would not be a problem.

    It is as simple as that.

    What is actually meant by "true and unprovable" is that it is true
    that G is unprovable yet it is not provable that G is unprovable.

    Where on earth do you get that idea from? Gödel proves that both G and
    ¬G are unprovable.

    The only possible way to know for sure that G is definitely
    unprovable is a proof that G is unprovable; anything less than this
    is not definite.

    Gödel: I know for sure that G is unprovable!
    Pete:  Can you prove that?
    Gödel: No

    Except that Gödel's answer would be yes.

    Any formal proposition that only asserts its own unprovability is
    unprovable for the same reason that the liar paradox is untrue.

    You really need to actually *read* Gödel's proof. G is an assertion
    about arithmetic. IOW it does not simply say 'I am not provable'. The
    problem is that you are determined to try to understand what the first
    incompleteness theorem says without actually trying to understand the
    math. You are basing your entire interpretation on a footnote which you
    do not actually understand and on some sentence of natural language
    which is not even formalizable in standard logic.

    If you think there is some flaw in the proof, then identify the specific
    step in the proof which you think is fallacious and identify exactly why
    it is fallacious. If you cannot do that (and you can't), then your
    objection is entirely without foundation.

    Gödel used a very convoluted process to make it seem that his
    proposition was not actually referring to an assertion about its own
    unprovability; nonetheless his proposition was at least logically
    equivalent to a proposition that asserts its own unprovability, and
    has a vacuous truth value for the same reason and in the same way
    that a proposition that only asserts its own falsity has a vacuous
    truth value.


    This sentence is true.
    What is it true about?
    It is not true about anything; it is simply true.
    That is not the way that it works.

    And Gödel's G bears no resemblance to the above except in your mind.

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From olcott@21:1/5 to Andy Walker on Fri Aug 14 12:10:49 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 10:41 AM, Andy Walker wrote:
    On 14/08/2020 13:59, Ben Bacarisse wrote:
    [I wrote:]
        Whoa!  I'm not an "x86" expert, but every standard m/c code
    that I am familiar with is perfectly capable of encoding any one of
    the standard UTMs [and therefore any TM computation]. [...]
    x86 + support chips + some OS, yes.  x86 no.

        In the early '70s, we had a PDP 11/05 with 8Kb of storage,
    no disc, no OS, not much of anything bar a paper tape punch, paper
    tape reader, and a teletype.  That would have been enough for a UTM,
    'tho I suspect the dept would not have appreciated the bill for the
    reels of paper tape had we implemented it.

                            And, to be frank, the idea
    that PO has designed a memory and device access mechanism and an
    addressing scheme that is genuinely unbounded is not credible.

        Well, yes, tho' people here have dropped enough hints.  I
    find it hard to imagine what he even thinks he might have that could
    be interesting.  The best I can make of it is that there are a whole
    lot of PO-words that fit into a PO-theory of PO-computing in which
    PO-halting and other PO-problems are PO-soluble.  The alternative
    is that he's a ridiculously successful troll.  Take your pick.


    A machine that has a tape head that can be advanced in increments of
    0 to 0xffffffff an unlimited number of times does specify a model that
    has access to unlimited memory. The technical name for memory addressing
    based on the displacement from the current memory address is relative
    addressing.

    32-bit x86 function calls already use signed 32-bit relative addressing.

    void Bar(int N)
    {
    Foo(5);
    }


    void Foo(int N)
    {
    Bar(6);
    }

    101: 55 push ebp
    102: 8BEC mov ebp, esp
    104: 6A05 push 0x5
    106: E823000000 call 0x12e ; +23h bytes little endian
    10b: 83C404 add esp, 0x4 ; of 00000023h
    10e: 5D pop ebp
    10f: C3 ret

    12e: 55 push ebp
    12f: 8BEC mov ebp, esp
    131: 6A06 push 0x6
    133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
    138: 83C404 add esp, 0x4 ; of FFFFFFC9h
    13b: 5D pop ebp
    13c: C3 ret


    The x86 architecture specifies a Turing complete model of computation
    for its control flow operations:

    Signed 32-bit relative addressing for function calls. (shown above).
    Signed 32-bit relative addressing for unconditional branches.
    Signed relative addressing for all conditional branches.

    It can also be understood (although this is more difficult to see) that
    the above Turing complete model of computation applied to the control
    flow of the x86 architecture can be leveraged to provide access to
    unlimited data.

    I won't provide the details of this to avoid getting into endless debate
    with people that do not understand that they do not understand.




    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Andy Walker on Fri Aug 14 13:28:46 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 6:34 AM, Andy Walker wrote:
    On 14/08/2020 11:29, Ben Bacarisse wrote:
    [...] x86 code can't represent any TM computation and even
    an x86 simulator can't execute any x86 computation.

        Whoa!  I'm not an "x86" expert, but every standard m/c code
    that I am familiar with is perfectly capable of encoding any one of
    the standard UTMs [and therefore any TM computation].  All that is
    needed beyond a quite simple FSM, such as that on the front cover of
    Minsky's "Computation", is a mechanism for accessing arbitrarily much
    storage, eg by the device I mentioned a few days ago -- that when the
    [simulated] tape head reaches one or other end of actual storage, the
    computer prints a message "please remove current USB stick and insert
    the next/previous one"*.


    As I already fully elaborated in another post, the abstract machine that
    the x86 programming language specifies already has relative addressing
    such that all of its control flow operations can be encoded within this
    Turing complete model.

    Signed 32-bit relative addressing for function calls.
    Signed 32-bit relative addressing for unconditional branches.
    Signed relative addressing for all conditional branches.

    Any software engineer of ordinary skill in the art of x86 programming
    would be able to leverage this Turing complete control flow to obtain
    Turing complete access to data.

        With a certain amount of buffering and a few obvious efficiency hacks, this process can even be reasonably practicable.  For things
    like "computable numbers" [another topic where PO's limited knowledge
    of the theory is beginning to show], you would perhaps want several
    USB sticks as otherwise you spend far too much time searching up and
    down on one "tape" for items that are arbitrarily widely separated.

        If you're going to object that my proposal requires human
    intervention, then I "merely" place the computer inside a
    robot-controlled factory in which robots are responsible for feeding the
    computer with USB sticks which they manufacture ....  Somewhere
    along the line, the situation becomes absurd, and misses the real
    point of the theory.

    That the x86 language already specifies a Turing complete model of
    computation makes your great idea of memory sticks moot.


    _______
      * You can't, of course, in general say "please insert stick 1234"
        as the serial numbers may grow too large to be contained in the
        accessible storage.  But there is always a "next" and a "previous"
        stick.  Where you keep them all is another matter.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Fri Aug 14 11:52:22 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    An abstract machine that has a tape head that can be advanced in 0 to 0xffffffff increments an unlimited number of times does specify a
    model of computation that has access to unlimited memory. The
    technical name for memory addressing based on the displacement from
    the current memory address is relative addressing.
    [43 lines deleted]
    Any software engineer of ordinary skill in the art of x86 programming
    would be able to leverage this Turing complete control flow to obtain
    Turing complete access to data.

    I've lost count of how many times I've read this same text in multiple
    articles you've posted. There are probably minor changes in wording,
    but I can't be bothered to check.

    If you want people to read what you write, decide what you want to say
    before you post, and post it just once.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Fri Aug 14 14:11:22 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 1:52 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    An abstract machine that has a tape head that can be advanced in 0 to
    0xffffffff increments an unlimited number of times does specify a
    model of computation that has access to unlimited memory. The
    technical name for memory addressing based on the displacement from
    the current memory address is relative addressing.
    [43 lines deleted]
    Any software engineer of ordinary skill in the art of x86 programming
    would be able to leverage this Turing complete control flow to obtain
    Turing complete access to data.

    I've lost count of how many times I've read this same text in multiple articles you've posted. There are probably minor changes in wording,
    but I can't be bothered to check.

    If you want people to read what you write, decide what you want to say
    before you post, and post it just once.


    I kept posting it as its own new thread and Google groups kept appending
    it to a prior thread. If you notice, the one with new material has a new
    version number. Version 40 is the current version.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Fri Aug 14 12:35:44 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/14/2020 1:52 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    An abstract machine that has a tape head that can be advanced in 0 to
    0xffffffff increments an unlimited number of times does specify a
    model of computation that has access to unlimited memory. The
    technical name for memory addressing based on the displacement from
    the current memory address is relative addressing.
    [43 lines deleted]
    Any software engineer of ordinary skill in the art of x86 programming
    would be able to leverage this Turing complete control flow to obtain
    Turing complete access to data.

    I've lost count of how many times I've read this same text in multiple
    articles you've posted. There are probably minor changes in wording,
    but I can't be bothered to check.

    If you want people to read what you write, decide what you want to say
    before you post, and post it just once.

    I kept posting it as its own new thread and Google groups kept
    appending it to a prior thread. If you notice the one with new
    material has a new version number. Version 40 is the current version.

    That doesn't affect what I wrote. I don't keep track of your version
    numbers. Every time you start a new thread, you still quote everything
    that went before, so as far as I'm concerned it's just one very long
    thread. There's no indication of what you've changed, and I can't be
    bothered to compare to previous versions.

    Of course you'll continue to do what you want to do.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to Keith Thompson on Fri Aug 14 12:38:22 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    olcott <NoOne@NoWhere.com> writes:
    On 8/14/2020 1:52 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    An abstract machine that has a tape head that can be advanced in 0 to
    0xffffffff increments an unlimited number of times does specify a
    model of computation that has access to unlimited memory. The
    technical name for memory addressing based on the displacement from
    the current memory address is relative addressing.
    [43 lines deleted]
    Any software engineer of ordinary skill in the art of x86 programming
    would be able to leverage this Turing complete control flow to obtain
    Turing complete access to data.

    I've lost count of how many times I've read this same text in multiple
    articles you've posted. There are probably minor changes in wording,
    but I can't be bothered to check.

    If you want people to read what you write, decide what you want to say
    before you post, and post it just once.

    I kept posting it as its own new thread and Google groups kept
    appending it to a prior thread. If you notice the one with new
    material has a new version number. Version 40 is the current version.

    That doesn't affect what I wrote. I don't keep track of your version numbers. Every time you start a new thread, you still quote everything
    that went before, so as far as I'm concerned it's just one very long
    thread. There's no indication of what you've changed, and I can't be bothered to compare to previous versions.

    My mistake, you don't quote previous text when you start a new thread
    (at least you don't always do so).

    Of course you'll continue to do what you want to do.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Fri Aug 14 14:55:48 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 2:35 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/14/2020 1:52 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    An abstract machine that has a tape head that can be advanced in 0 to
    0xffffffff increments an unlimited number of times does specify a
    model of computation that has access to unlimited memory. The
    technical name for memory addressing based on the displacement from
    the current memory address is relative addressing.
    [43 lines deleted]
    Any software engineer of ordinary skill in the art of x86 programming
    would be able to leverage this Turing complete control flow to obtain
    Turing complete access to data.

    I've lost count of how many times I've read this same text in multiple
    articles you've posted. There are probably minor changes in wording,
    but I can't be bothered to check.

    If you want people to read what you write, decide what you want to say
    before you post, and post it just once.

    I kept posting it as its own new thread and Google groups kept
    appending it to a prior thread. If you notice the one with new
    material has a new version number. Version 40 is the current version.

    That doesn't affect what I wrote. I don't keep track of your version numbers. Every time you start a new thread, you still quote everything
    that went before, so as far as I'm concerned it's just one very long
    thread. There's no indication of what you've changed, and I can't be bothered to compare to previous versions.

    Of course you'll continue to do what you want to do.


    I only post a new thread for the most important new developments.

    That I can now easily show that the abstract model of computation of the
    x86 programming language is already Turing Complete is a key new
    development. Version 41 is much better than its prior versions because
    of the replacement of its last two paragraphs.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Fri Aug 14 15:56:30 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 5:17 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 2:38 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
    <cut>
    In a sound system, if you have a proof, the proposition is true. No one doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.

    Sure, that is correct. But that is not what is asserted by the Gödel
    proof.

    Yes it is. You've never read it, have you? You would fall at the first page.

    The Gödel proof asserts that G is definitely true and definitely
    unprovable.

    The paper has an extra sting in the tail: that g must be true. But this is a metamathematical deduction from outside PM.

    Ah great you are directly delving into the actual heart of it. The key
    mistake is that true outside of PM does not count as true inside of
    PM.

    Nonsense. There is no sign that you want to know what "true" means
    anymore so it would seem pointless to try to explain. Until you know,
    it would be absurd for me to reply. I only know the conventional
    meanings for words.

    The problem is that the conventional terms of the art from which the
    formal notion of truth is composed:

    {Interpretation, Satisfaction, logical consequence, decidability}
    (please let me know if I missed any)

    suffer from the Sapir–Whorf hypothesis, thus making the correct nature
    of analytical truth inexpressible.

    https://en.wikipedia.org/wiki/Linguistic_relativity

    It seems that the key error of the conventional notion of truth is the
    concept of decidability. The assumption that every closed WFF maps to a
    Boolean value is simply a false assumption.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Fri Aug 14 17:54:08 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/13/2020 1:34 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    This is the interprocess communication protocol that I have
    established for a slave UTM to request three different kinds of
    operating system functions. The code for these three functions is
    fully designed. I only need to code the "spawn process" code and then
    the master UTM will be essentially complete.

    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }
    [assembly code snipped]

    OK, let's go over this C program.

    What you've written above is of course not a complete program.
    Your functions AllocateHeap, SaveState, and RestoreState are
    not declared, and as of C99 calling an undeclared function is a
    constraint violation. The type name uint32_t is not visible unless
    you have a #include directive for <stdint.h> (or <inttypes.h>).
    (And the fact that you're using uint32_t, which was introduced in
    C99, implies that you're not using C90, which did allow calls to
    undeclared functions.)

    S1 = (uint32_t*) 16; // size param passed to AllocateHeap

    This makes S1 point to a uint32_t object at address 16. That's
    ... odd. Converting a constant integer value to a pointer is
    rarely meaningful, unless you're on some kind of embedded system
    where you know the exact memory layout. I suppose if you intend to
    run this code on your x86 simulator, you can guarantee that there
    is addressable memory at address 16. Why 16? Is some known value
    stored at that location by some other mechanism? But the following
    lines make me think that's not what you intended.

    AllocateHeap(&S1); // Allocates 16 uint32_t

    I am going to go with the code that is the easiest to understand at the
    "C" level.

    uint32_t* master_state;
    master_state = AllocateHeap(16);

    b8: 6A10 push 0x10
    ba: E84F000000 call 0x10e
    bf: 83C404 add esp, 0x4
    c2: 8945FC mov dword ptr [ebp-0x4], eax




    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Fri Aug 14 19:34:35 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 6:48 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    32-bit x86 function calls already use signed 32-bit relative addressing.

    void Bar(int N)
    {
    Foo(5);
    }


    void Foo(int N)
    {
    Bar(6);
    }

    101: 55 push ebp
    102: 8BEC mov ebp, esp
    104: 6A05 push 0x5
    106: E823000000 call 0x12e ; +23h bytes little endian
    10b: 83C404 add esp, 0x4 ; of 00000023h
    10e: 5D pop ebp
    10f: C3 ret

    12e: 55 push ebp
    12f: 8BEC mov ebp, esp
    131: 6A06 push 0x6
    133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
    138: 83C404 add esp, 0x4 ; of FFFFFFC9h
    13b: 5D pop ebp
    13c: C3 ret

    The x86 architecture specifies a Turing complete model of computation
    for its control flow operations:

    Signed 32-bit relative addressing for function calls. (shown above).
    Signed 32-bit relative addressing for unconditional branches.
    Signed relative addressing for all conditional branches.

    There are two notions knocking around that we must distinguish between.
    There is an informal, rather casual, notion of Turing complete that
    glosses over all the reasons why some notation or other is not really
    Turing complete. Then there is the more formal demonstration that some notation really is Turing complete by demonstrating (often in general
    terms) how that notation can simulate any Turing machine computation.

    From the above, you seem to be going for the casual meaning because you
    don't explain how you would deal with having, say, 87 trillion
    functions. Just doing 32-bit relative calls won't work in that case.


    We don't have to actually have the hardware. We only really need the
    abstract model of computation. To specify 87 trillion functions merely
    requires a linked list with 87 trillion elements. If we assume signed
    32-bit relative addressing then each function can have 2^31 memory
    elements.

    I didn't specify the details because there are so many different ways
    that the details can be implemented.

    On 8/14/2020 6:34 AM, Andy Walker wrote:
    every standard m/c code
    that I am familiar with is perfectly capable of encoding any one of
    the standard UTMs [and therefore any TM computation]. All that is
    needed beyond a quite simple FSM, such as that on the front cover of Minsky's "Computation", is a mechanism for accessing arbitrarily much storage...

    In V41 (x86 is Turing Complete)
    I replace the last two sentences with this:

    On 8/14/2020 2:32 PM, olcott wrote:
    Data access is also based on relative addressing therefore: The x86
    language specifies an abstract model of computation that is Turing
    complete.

    proving (to anyone who knows how easy it is to simulate a Turing
    Machine, such as Andy and me) that the x86 programming language
    specifies an abstract model of computation that is Turing complete.




    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to David Kleinecke on Fri Aug 14 20:14:09 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 7:58 PM, David Kleinecke wrote:
    On Friday, August 14, 2020 at 3:08:28 PM UTC-7, olcott wrote:
    On 8/14/2020 4:28 PM, David Kleinecke wrote:
    On Friday, August 14, 2020 at 1:56:36 PM UTC-7, olcott wrote:

    The assumption that every closed WFF maps to a
    Boolean value is simply a false assumption.

    As Godel proved almost a century ago.


    No that is not what he proved.

    He proved that formal systems are incomplete on the basis that they
    cannot determine a Boolean value that is assumed to certainly exist.

    And that differs from "it is false that "every closed WFF maps to a
    Boolean value""? In what way?

    The Gödel sentence does not map to a Boolean value because it is not a
    truth bearer.

    In classical logic a sentence in a language is true or false under (and
    only under) an interpretation and is therefore a truth-bearer.

    https://en.wikipedia.org/wiki/Truth-bearer#Sentences_in_languages_of_classical_logic


    Is human reasoning incomplete because no human can provide a correct
    (yes or no) answer to the question: "What time is it?"



    That the only reason formal systems cannot determine a Boolean value is
    that no such value exists would not make them incomplete.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Fri Aug 14 23:29:36 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 6:48 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    32-bit x86 function calls already use signed 32-bit relative addressing.
    void Bar(int N)
    {
    Foo(5);
    }


    void Foo(int N)
    {
    Bar(6);
    }

    101: 55 push ebp
    102: 8BEC mov ebp, esp
    104: 6A05 push 0x5
    106: E823000000 call 0x12e ; +23h bytes little endian
    10b: 83C404 add esp, 0x4 ; of 00000023h
    10e: 5D pop ebp
    10f: C3 ret

    12e: 55 push ebp
    12f: 8BEC mov ebp, esp
    131: 6A06 push 0x6
    133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
    138: 83C404 add esp, 0x4 ; of FFFFFFC9h
    13b: 5D pop ebp
    13c: C3 ret

    The x86 architecture specifies a Turing complete model of computation
    for its control flow operations:

    Signed 32-bit relative addressing for function calls. (shown above).
    Signed 32-bit relative addressing for unconditional branches.
    Signed relative addressing for all conditional branches.

    There are two notions knocking around that we must distinguish between.
    There is an informal, rather casual, notion of Turing complete that
    glosses over all the reasons why some notation or other is not really
    Turing complete. Then there is the more formal demonstration that some
    notation really is Turing complete by demonstrating (often in general
    terms) how that notation can simulate any Turing machine computation.

    From the above, you seem to be going for the casual meaning because you don't explain how you would deal with having, say, 87 trillion
    functions. Just doing 32-bit relative calls won't work in that case.

    We don't have to actually have the hardware. We only really need the
    abstract model of computation.

    But you are declaring it to be x86 code. You can banish the hardware,
    but x86 code embodies that hardware in the notation.


    The abstract model of computation that the x86 language specifies is
    Turing complete.

    To specify 87 trillion functions merely requires a linked list with 87
    trillion elements.

    What does that look like in x86 code? And whatever answer you come up
    with, be sure it extends further. There must be no finite limit. This
    is a genuine question. I know you like to just ignore questions and
    move on to say more wrong stuff, but please take a moment to prove me
    wrong. How is this data structure implemented in x86 code?


    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the basis
    of a signed 32-bit relative jump instruction.
    https://www.fermimn.edu.it/linux/quarta/x86/jmp.htm

    "C" code that generated machine language that I patched with a
    hexadecimal editor.
    goto HERE;
    __asm nop
    __asm nop
    __asm nop
    goto HERE;
    __asm nop
    __asm nop
    __asm nop

    Intel XED disassembly of the hand-coded machine code.
    16d: E9E0FFFFFF jmp 0x152 ;jumps to relative address -32
    172: E9FFFFFF7F jmp 0x80000176 ;jumps to relative address +7fffffffh

    This disassembler displays the absolute memory address that is
    equivalent to the relative memory address that the machine will actually
    use. The signed 32-bit (little-endian) integer encoded in the first
    relative jump instruction is 0xffffffE0, which specifies a -32 relative
    address. The next relative jump is +7fffffffh.

    Even if you can do it, picking an actual machine language was a silly
    idea. High-level languages are usually defined without reference to hardware, so they have no such limits in the notation. You don't have
    to do any of this work in, say, Haskell.


    Examining the actual control flow (as every halt decider must do) is
    much simpler if every state is connected together with integer
    addresses. This already fully specifies a complete directed graph of
    all control flow.

    On 8/14/2020 6:34 AM, Andy Walker wrote:
    every standard m/c code
    that I am familiar with is perfectly capable of encoding any one of
    the standard UTMs [and therefore any TM computation]. All that is
    needed beyond a quite simple FSM, such as that on the front cover of
    Minsky's "Computation", is a mechanism for accessing arbitrarily much
    storage...

    In V41 (x86 is Turing Complete)
    I replace the last two sentences with this:

    On 8/14/2020 2:32 PM, olcott wrote:
    Data access is also based on relative addressing therefore: The x86
    language specifies an abstract model of computation that is Turing
    complete.

    proving (to anyone who knows how easy it is to simulate a Turing
    Machine, such as Andy and me)

    How do you choose whose claims you accept without proof?

    Because I studied this one http://www.lns.mit.edu/~dsw/turing/turing.html
    for the purpose of extending it to enormously more complex machines, I
    know first-hand how easy it is to simulate a Turing machine, which gave
    me the understanding that Andy's claim is correct.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Andy Walker on Sat Aug 15 09:13:41 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/15/2020 6:13 AM, Andy Walker wrote:
    On 14/08/2020 21:27, Ben Bacarisse wrote:
    [I wrote:]

        However, this is a moderately daft debate, and I don't really
    want to pursue it any further.  Unless you reply with something
    outrageous [not your style!], I'll drop it here, and return to
    watching from the sidelines while we wait for PO's magnum opus.


    However I do believe that I have shown that the abstract model of
    computation defined by the x86 programming language is Turing Complete.

    Ben was unable to see how easily a Turing machine could be simulated.
    http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
    http://www.lns.mit.edu/~dsw/turing/turing.html

    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the basis
    of a signed 32-bit relative jump instruction. Reference to data already
    uses relative addressing.

    https://www.fermimn.edu.it/linux/quarta/x86/jmp.htm

    "C" code that generated machine language that I patched with a
    hexadecimal editor.
    goto HERE;
    __asm nop
    __asm nop
    __asm nop
    goto HERE;
    __asm nop
    __asm nop
    __asm nop

    Intel XED disassembly of the hand-coded machine code.
    16d: E9E0FFFFFF jmp 0x152 ; jumps to relative address -32
    172: E9FFFFFF7F jmp 0x80000176 ; jumps to relative address +7fffffffh

    This disassembler displays the absolute memory address that is
    equivalent to the relative memory address that the machine will actually
    use. The machine actually uses the relative address.

    The signed 32-bit (little-endian) integer encoded in the first relative
    jump instruction is 0xffffffE0 which specifies a -32 relative address.
    The jmp instruction @ 172 jumps to relative address +7fffffffh


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 15 12:07:31 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/15/2020 11:25 AM, Ben Bacarisse wrote:
    Andy Walker <anw@cuboid.co.uk> writes:

    On 14/08/2020 21:27, Ben Bacarisse wrote:
    [I wrote:]
    In the early '70s, we had a PDP 11/05 with 8Kb of storage,
    no disc, no OS, not much of anything bar a paper tape punch, paper
    tape reader, and a teletype. That would have been enough for a UTM,
    'tho I suspect the dept would not have appreciated the bill for the
    reels of paper tape had we implemented it.
    Nowadays we might say "support chips" but in those days it would be
    "peripherals". The PDP 11 instruction set does not define a machine
    that is universal and, if I'm going to be picky, (I am!) you wrote the
    OS -- just enough of one to drive those peripherals.

    Um. Well, given the "Unibus" construction of the PDP 11/05,
    the tape reader/punch consisted of setting a status byte, writing or
    reading a data byte and dealing with an interrupt when the status
    changed [or busy-waiting to the same effect]. None of this was in
    any way protected from other users, indeed there were no other users
    or other programs, and to describe half a dozen instructions as an
    operating system is really stretching the definition:

    " An operating system is system software that manages computer
    " hardware, software resources, and provides common services for
    " computer programs. " [Wiki]

    In the case at hand, it wasn't "system" software, the hardware was
    "managed" by direct access to its status and data bytes, there were
    no software resources provided, and no common services. I rest my
    case, m'lud.

    Access to hardware can help because the hardware might have the right
    sort of "unbounded" interface. Don't remember if that was the case for
    the paper tape reader/writer but I think they often did have a stepping function.

    This is why the only sane theoretical approach is to use a high-level
    language.

    Approach to ...?

    To what PO appears to be trying to do. I don't think he's simulating a
    UTM, he's writing a TM simulator that is supposed to be universal. Mind
    you, as with so much PO stuff, the universality might be a red herring.
    It's not at all clear that he is still making any claim that is
    "interesting" (by which I mean "logically impossible") as you originally pointed out he was.

    I don't recall ever seeing a HLL used to
    demonstrate the *theory*.

    I only meant the theoretical claim that some particular programming
    notation is TM complete.

    <cut>


    Yes and your correction of my incorrect use of technical terminology has
    been very helpful. This is one key area where you are way ahead of me.

I can now say that the abstract model of computation specified by the
x86 language is Turing complete on the basis that this model can easily
simulate this quintuple-based Turing machine:
http://www.lns.mit.edu/~dsw/turing/turing.html

and relative addressing for control flow and data access provides the
abstract model for unlimited memory access.

More difficult to understand is that direct memory access within a
memory-address-sized block of memory does not break Turing completeness
as long as access to an unlimited number of these memory-address-sized
blocks of memory is defined within this abstract model of computation.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 15 14:31:11 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/15/2020 11:49 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    We don't have to actually have the hardware. We only really need the
    abstract model of computation.

    But you are declaring it to be x86 code. You can banish the hardware,
    but x86 code embodies that hardware in the notation.

    The abstract model of computation that the x86 language specifies is
    Turing complete.

    I am glad you believe that. I wonder why you worked so hard in the past
    (and continue to do so below) to suggest you were working within finite limits.

To specify 87 trillion functions merely requires a linked list with 87
trillion elements.

    What does that look like in x86 code? And whatever answer you come up
    with, be sure it extends further. There must be no finite limit. This
    is a genuine question. I know you like to just ignore questions and
    move on to say more wrong stuff, but please take a moment to prove me
    wrong. How is this data structure implemented in x86 code?

    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the
    basis of a signed 32-bit relative jump instruction.

    I asked what the data structure would look like. Jumps have nothing to
    do with that. What does the data structure for an essentially unbounded linked list look like?


    We have an unlimited number of memory address sized blocks of memory
    where the beginning of each block has a relative jump into the prior
    block and the end of each block has a relative jump into the next block.

    https://c9x.me/x86/html/file_module_x86_id_147.html

    // displacement relative to next instruction.
16d: E9FBFEFFFF jmp 0x6d ; displacement -105h (any signed 32-bit value)
172: E9F6FFFFFF jmp 0x16d ; displacement -0Ah (first code of 4GB block)
177: E900000000 jmp 0x17c ; displacement +0h (last code of 4GB block)
17c: E900010000 jmp 0x281 ; displacement +100h (any signed 32-bit value)

    The above hand-coded machine language was disassembled with Intel XED

    If we imagine that the code @ 177H is really @ FFFFFFFAH then the exact
    same machine code would jump into the first code of the next 4GB block
    of code.

    If we imagine that the code @ 172H is really @ 00000000H then the exact
    same machine code would jump into the last code of the prior 4GB block
    of code.

    How do you choose whose claims you accept without proof?

    That I directly know how easy it is to simulate a Turing machine
    because I studied this one
    http://www.lns.mit.edu/~dsw/turing/turing.html

    That is not a Turing-complete implementation.

    http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
    Here is a Turing Complete adaptation, that I derived two years ago:

    All integers are specified as hexadecimal digits [0-9A-F] encoded as: H.
    Each element of the quintuple is space delimited. One Turing Machine Description language instruction per line:

    H+ HHHHHHHH HHHHHHHH H+ [RL]

    States are of arbitrary finite length strings of hexadecimal digits.
    Tape elements are unsigned 32-bit integers.

    This all appears to be a rabbit hole down which there is nothing to be learned. No one doubts that something very like x86 code (as assembler,
    not machine code) can be tweaked to remove limits and would therefore be Turing-complete. But you were at pains to point out how this was really
    x86 code in every detail.

    My halting problem research will be presented as a program that runs on
    an x86 based UTM. It has to be machine code because the most important
aspect of this is the halt decider that must directly "see" all of the
control flow as a directed graph connected by integer machine
    addresses.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 15 16:49:48 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/15/2020 4:34 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 11:49 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
To specify 87 trillion functions merely requires a linked list with 87
trillion elements.

What does that look like in x86 code? And whatever answer you come up
with, be sure it extends further. There must be no finite limit. This
is a genuine question. I know you like to just ignore questions and
move on to say more wrong stuff, but please take a moment to prove me
wrong. How is this data structure implemented in x86 code?

    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the
    basis of a signed 32-bit relative jump instruction.

I asked what the data structure would look like. Jumps have nothing to
do with that. What does the data structure for an essentially unbounded
linked list look like?

    We have an unlimited number of memory address sized blocks of memory
    where the beginning of each block has a relative jump into the prior
    block and the end of each block has a relative jump into the next
    block.

    What do jumps have to do with it?

    You are either totally lost or you are trying to construct a
    distraction. Since it seems you can't describe the data structure, just
    show the code that changes every 0 to a 1 (these can be any fixed size)
    data in the list elements. This loop will make the data structure clear
    by showing how the code moves from one element to the next, and how the
    end of the list is detected.

    How do you choose whose claims you accept without proof?

    That I directly know how easy it is to simulate a Turing machine
    because I studied this one
    http://www.lns.mit.edu/~dsw/turing/turing.html

    That is not a Turing-complete implementation.

    http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
    Here is a Turing Complete adaptation, that I derived two years ago:

    All integers are specified as hexadecimal digits [0-9A-F] encoded as:
    H. Each element of the quintuple is space delimited. One Turing
    Machine Description language instruction per line:

The original input language is Turing complete. It's the implementation
that is not.

    H+ HHHHHHHH HHHHHHHH H+ [RL]

    States are of arbitrary finite length strings of hexadecimal digits.
    Tape elements are unsigned 32-bit integers.

That was a pointless change from the point of view of the TM notation.
The original and your extended one are both Turing complete. (It's an
odd term for a notation that is simply a way of writing TMs.)

    He had a very small finite number of states.


And if you make corresponding changes to the implementation (I don't
think you did) I very much suspect it was no more Turing complete than
the original.


    My redesign obviously has an unlimited number of states.

    You should stick with the casual meaning. No one will object to a bit
    of hand-waving on this point (unless it is somehow central to whatever
    it is you are claiming these days). The trouble starts when you
    over-specify and state, as facts, things that need to be proven.


    Changing from the number of states that can be specified as a single
    ASCII character to the number of states that can be specified by a
    hexadecimal string of arbitrary length is not hand waving.

This all appears to be a rabbit hole down which there is nothing to be
learned. No one doubts that something very like x86 code (as assembler,
not machine code) can be tweaked to remove limits and would therefore be
Turing-complete. But you were at pains to point out how this was really
x86 code in every detail.

    My halting problem research will be presented as a program that runs
    on an x86 based UTM.

    In what sense is this x86 "thing" U and in what sense is it a TM?

    This is apparently over your head.

    Thankfully more credible reviewers concur with my assessment:
    The abstract model of computation specified by the x86 language is
    Turing complete.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 15 21:07:54 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    I have shown that the abstract model of computation specified by the
    x86 language is Turing complete pertaining to access to unlimited
    memory.

    No, you have asserted it. In fact you've dodged requests to show it.
    Write the loop I talked about. That will overcome the problem you have
    in describing data structures.

    Your jmp examples are just silly (EIP is fixed-width and it's no help
    for data accesses).


    Within the abstract model of computation specified by the x86 language
    we would understand that jumping into the next 4GB block resets the EIP
    to an offset within this block. Since this is not currently defined
    within the current specification it would be a required adjustment to
the current definition. You have definitely made a valid point here.
    There might even be other teeny tiny little adjustments that would be
    required.

It is common knowledge in the field that the requirements for
simulating a Turing Machine are trivial.

    You have no idea what is common knowledge in this field. What certainly
    is common is a lot of hand-waving about limits being arbitrary and so
    not crucial. And, to be frank, that's pretty much all anyone cares
    about. But the technical detail -- can some abstract notation really describe potentially unbounded access to storage -- is much less
    commonly discussed.

    I invite you to show that it's possible.

    I already did, although your caveat was totally apt. What I was
    envisioning that may still work is the assumption that all the direct
memory addressing operations are eliminated and the Turing complete
    abstract machine would only use the relative addressing subset. In this
    case the EIP would never have any numerical value. Its abstract value
    would be CURRENT_LOCATION.

    I would probably prefer to use my unstated assumption that the EIP is
    only relative to the current 4GB memory block because this lets me keep
    the x86 language essentially as it is.

    This seems to indicate that there is some sort of rigorous formal
    definition of UTM:

    https://cstheory.stackexchange.com/questions/10207/whats-the-simplest-noncontroversial-2-state-universal-turing-machine


    https://en.wikipedia.org/wiki/Wolfram%27s_2-state_3-symbol_Turing_machine

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 16 10:46:52 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/16/2020 7:23 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    I have shown that the abstract model of computation specified by the
    x86 language is Turing complete pertaining to access to unlimited
    memory.

    No, you have asserted it. In fact you've dodged requests to show it.
    Write the loop I talked about. That will overcome the problem you have
    in describing data structures.

    Your jmp examples are just silly (EIP is fixed-width and it's no help
    for data accesses).

    Within the abstract model of computation specified by the x86 language
    we would understand that jumping into the next 4GB block resets the EIP
    to an offset within this block. Since this is not currently defined
    within the current specification it would be a required adjustment to
the current definition. You have definitely made a valid point
    here. There might even be other teeny tiny little adjustments that
    would be required.

    Yes, since jumps are of no use for the key data structure, you need the

    Jumps <are> moving the tape head.

    same "teeny tiny" change to the data addressing modes and, hey presto,

    Data addressing modes are already relative.

    you have access to unbounded storage implicitly defined in the abstract machine. The notation itself takes on the burden of there always being
    a "next" data block. You can't do it in pure x86 code.


    It can certainly be done in pure x86 code. Current x86 hardware does not interpret the x86 code this way.

    Within the abstract model of computation specified by the x86 language
    the EIP would be understood to be an unsigned integer.

    Relative displacements added to the EIP that exceed its maximum value
    would roll over (like an odometer) into the next EIP sized block of
    memory: 0xffffffff + 1 = 0x00000000.

    Relative displacements subtracted from the EIP roll the EIP backward
    like an odometer: 0x00000000 - 1 = 0xffffffff.

So now, since you can't use x86 addressing to access unbounded store,
why bother with the mess that is x86 code?

    I just showed how the abstract model of computation specified by the x86 language can be understood to provide unlimited memory access.

The fact that existing hardware and/or operating systems do not
    interpret this model this way does not change the actual model. That the operating system would reject memory access outside of the current 4GB
    block when in 32-bit mode is not an aspect of the abstract machine
    specified by the x86 programming language.

    You could invent your own
abstract machine language with the few simple instructions you need. Or
you could use an existing one. Or you could use a high-level language
    that has the right concepts already defined.


Yet, as I have said many many times, that does not provide the directed
acyclic graph of every node of all control flow specified as integer
addresses.

x86 code is the wrong choice. Taken literally it does not work. You
need to invent some new entirely abstract addressing mode for it, but
you still have all the complication of it being, in all other respects,
an actual machine language.


    That you do not understand how it works is no actual limit of this
    abstract model of computation.

    Relative addressing allows the x86 proxy for the tape head to move to
    any place on the unbounded tape. That this feature is not implemented in operating systems or hardware does not change the abstract model of
    computation that the x86 language specifies.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 16 15:29:04 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/16/2020 2:52 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 2:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 11:41 AM, Ben Bacarisse wrote:
    <cut>
    Show me a loop that scans the tape replacing zeros with ones.

    I have already shown you enough code that any coder would get it.

    How curious, though, that you have not shown the simple loop I
    requested. Is it a "trade secret"?

    You are apparently not a coder.

    You know, I don't think that will be the conclusion that other readers
    draw!

    I already have other readers of exceptional credibility that have
    certified that my analysis is precisely correct:

    The relative addressing of the x86 language does specify an abstract
    model of computation with access to unlimited memory.

    Who said that? I need to go and correct that post too.

Meanwhile, a simple unbounded loop is still not possible (or at least,
    you refuse to post your secret code).

    It is self-evident to any software engineer and incomprehensible to
    people without sufficient software engineering background.

    Kaz can probably tell you how it is done and you may not understand him
    either, yet you would believe him.

This is the simplest explanation of the process that I can make:

If we reduce your problem to filling all of unlimited memory with
    zeroes we simply have the code keep moving itself forward as it fills
    the memory area before it with zeroes. To minimize the number of moves
    it always moves itself exactly as far as it can reach with its relative addressing.

    One thing that I do have to praise is your ability to translate a very
    abstract problem into a perfectly concrete problem. That is a very
    important skill and very helpful for my work. This example that you
    proposed is the best example that I can imagine that proves actual
    unlimited memory access. It is both very simple and totally complete.

    By the way, you can be right
    without there being any such loop. There are many obscure routes to
    being Turing complete, so you can just say that such a loop is not
    possible without having to admit you are necessarily wrong.

    If you disagree with this I will assume ignorance on your part rather
    than dishonesty.

    Assume away, but your track record over the years would have been
    improved if you included a third option: that you might be wrong.
    Remember you called bullshit on a post of mine reminding you of what you
    had said? Remember that I was, according to you, obviously mad to say a denumerable sequence can have a finite range? Both recent cases where considering that you might be wrong would have avoided you making these
    sorts of accusation.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Sun Aug 16 18:13:59 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/16/2020 6:01 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/16/2020 2:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 11:41 AM, Ben Bacarisse wrote:
    <cut>
    Show me a loop that scans the tape replacing zeros with ones.

    I have already shown you enough code that any coder would get it.

    How curious, though, that you have not shown the simple loop I
    requested. Is it a "trade secret"?

    You are apparently not a coder.

    You know, I don't think that will be the conclusion that other readers
    draw!

    I already have other readers of exceptional credibility that have
    certified that my analysis is precisely correct:

    The relative addressing of the x86 language does specify an abstract
    model of computation with access to unlimited memory.

    If you disagree with this I will assume ignorance on your part rather
    than dishonesty.

    You've admitted being wrong before. You were wrong about Ben being a
    liar, and you admitted it. You're wrong about Ben not being a coder.
    And unless you show the concrete demonstration that Ben requested,
    I'll just assume that you're wrong about that as well.

    Show me a loop that scans the tape replacing zeros with ones.

    I have already shown you enough code that any coder would get it.

    Are you a coder? Are you able to "get it"? Can you show actual
    code that scans the tape replacing zeros with ones? The point is
    not whether Ben or I, or "any software engineer of ordinary skill
    in the art of x86 programming", can do it. You're being asked to demonstrate, without handwaving, that *you* can do it.


    No that never has been the point.
    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory is the point.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Mon Aug 17 12:16:03 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 10:35 AM, André G. Isaak wrote:
    On 2020-08-17 09:20, olcott wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

    That the relative addressing of the x86 language specifies an
    abstract model of computation with access to unlimited memory really
    is blatantly obvious.

The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    André


    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.
    They simply play head games. One person recently has argued against me
    that knows that they don't even understand the underlying subject
    matter. Their whole basis was ad hominem.

To prove unlimited memory access in this case merely requires
    conventional coding. Ben's example surely is a great one.

    If an actual program could be written that fills unlimited memory with
    zeroes this would most unequivocally prove my point.

    The question, though, was about creating an unbounded linked list using
    x86 code, which you have claimed is possible.

    André


No, this question has gone through two stages of simplification that
    equally prove that the x86 language defines an abstract machine with
    unlimited memory access.

    On 8/16/2020 11:41 AM, Ben Bacarisse wrote:
    Show me a loop that scans the tape replacing zeros with ones.

    I simplified this even further and proposed filling all of unlimited
    memory with zeroes.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Kaz Kylheku on Mon Aug 17 13:43:21 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

That the relative addressing of the x86 language specifies an abstract
model of computation with access to unlimited memory really is
    blatantly obvious.

The problem is that you have a poor track record with respect to things
    you claim are 'blatantly obvious'.

    André


    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary school
    kids know about computers.


    https://c9x.me/x86/html/file_module_x86_id_147.html

    The abstract model of computation specified by the x86 language provides
    access to unlimited memory via this instruction:

    E9 cd JMP rel32
    Jump near, relative, displacement relative to next instruction.

    I hand-coded it using a hexadecimal editor applied to generated machine language and then disassembled it using the Intel XED disassembler to
    make sure that my understanding of how the machine language was coded is correct.

    // displacement relative to next instruction.
16d: E9FBFEFFFF jmp 0x6d ; displacement -105h (any signed 32-bit value)
172: E9F6FFFFFF jmp 0x16d ; displacement -0Ah (first code of 4GB block)
177: E900000000 jmp 0x17c ; displacement +0h (last code of 4GB block)
17c: E900010000 jmp 0x281 ; displacement +100h (any signed 32-bit value)

    The absolute memory addresses provided by the disassembler are merely to
    aid human understanding. The immediate values stored in the machine
language itself are relative values and not absolute values. They are
    stored as signed 32-bit integers in little-endian order.

    Now we could build a Universal Turing Machine with even something far
    less powerful than x86; some microcontroller with storage measured in
    bytes. It just has to be hooked up to a tape transport mechanism with read/write abilities; then it's just a matter of giving it enough tape
    for whatever problem we feed it.

    But the x86 relative addressing mode isn't equivalent to that indefinite
    tape mechanism, sorry.

    I just showed how it is.


    They simply play head games.

    Get a better head, then you can participate.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Mon Aug 17 13:21:19 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 12:53 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

    That the relative addressing of the x86 language specifies an
    abstract model of computation with access to unlimited memory
    really is blatantly obvious.

The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am
    right. They simply play head games.

    I don't believe anyone here has argued against you while knowing that
    you're right. You seem to have an extremely overinflated sense of how obviously correct your ideas are.

    I was not referring to the last six months.

    One person recently has argued
    against me that knows that they don't even understand the underlying
    subject matter. Their whole basis was ad hominem.

    Was that referring to me? (I've acknowledged several times my lack of expertise in certain areas.)

    No I am not referring to you, Ben, Andre, David, Andy, or Kaz.

To prove unlimited memory access in this case merely requires
    conventional coding. Ben's example surely is a great one.

    If an actual program could be written that fills unlimited memory with
    zeroes this would most unequivocally prove my point.

    What is preventing you from writing such a program and showing it to us?

    The fact that the existence of such a program would be self-evident to
    anyone truly understanding what I am saying tends to provide very strong evidence that this would be a significant waste of my time.

    If people simply don't believe this self-evident truth then they
    probably won't believe it when all the details are fully elaborated.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Mon Aug 17 12:19:46 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 12:53 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
To prove unlimited memory access in this case merely requires
    conventional coding. Ben's example surely is a great one.

    If an actual program could be written that fills unlimited memory with
    zeroes this would most unequivocally prove my point.

    What is preventing you from writing such a program and showing it to us?

    The fact that the existence of such a program would be self-evident to
    anyone truly understanding what I am saying tends to provide very
    strong evidence that this would be a significant waste of my time.

    If people simply don't believe this self-evident truth then they
    probably won't believe it when all the details are fully elaborated.

    I think the issue being disputed here is not just whether such a
    program can be written. It's also whether you are competent to
    write it. The small samples of C and/or C++ code that I've seen
    you post here do not make me optimistic on that point.

    Please please stop assuming that the only possible reasons someone
    could disagree with you is a lack of understanding. *Even if
    you're right*, telling people who disagree with you that they're
    not capable of understanding is a waste of everyone's time.

    Let's back up a bit. You're presenting some controversial ideas,
    and you're discussing them with some very smart people, many of
    whom are experts in relevant fields. Imagine that, at some point
    in the future, you've managed to persuade at least one person that
    you're right. (I presume that's your goal here. If not, please
    clarify what your actual goal is.) Now think about how that can
    have happened.

    This:
    Olcott: I can demonstrate this.
    Someone else: Fine, please do so.
    Olcott: No, it would be a waste of my time. It's obvious that
    I can demonstrate it so I don't actually need to do
    so. You wouldn't understand my demonstration anyway.
    Someone else: Ah yes, you're right, I see it now.
    is not a plausible path to your goal.

    What is? How do you convince us? How do you convince *anyone*?

    Is that even your goal?

    (I acknowledge that I'm putting words in your mouth here, but the above captures the gist of how you appear to me, and probably to others.)

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From olcott@21:1/5 to Keith Thompson on Mon Aug 17 16:41:26 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 2:19 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 12:53 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
To prove unlimited memory access in this case merely requires
    conventional coding. Ben's example surely is a great one.

If an actual program could be written that fills unlimited memory with
zeroes this would most unequivocally prove my point.

    What is preventing you from writing such a program and showing it to us?

    The fact that the existence of such a program would be self-evident to
    anyone truly understanding what I am saying tends to provide very
    strong evidence that this would be a significant waste of my time.

    If people simply don't believe this self-evident truth then they
    probably won't believe it when all the details are fully elaborated.

    I think the issue being disputed here is not just whether such a
    program can be written. It's also whether you are competent to
    write it. The small samples of C and/or C++ code that I've seen
    you post here do not make me optimistic on that point.

    Please please stop assuming that the only possible reasons someone
    could disagree with you is a lack of understanding. *Even if
    you're right*, telling people who disagree with you that they're
    not capable of understanding is a waste of everyone's time.

    Let's back up a bit. You're presenting some controversial ideas,
    and you're discussing them with some very smart people, many of
    whom are experts in relevant fields. Imagine that, at some point
    in the future, you've managed to persuade at least one person that
    you're right. (I presume that's your goal here. If not, please
    clarify what your actual goal is.) Now think about how that can
    have happened.


My eventual goal is to have my refutation of the halting problem proofs
published in the Communications of the ACM. That is how Edsger Dijkstra
became known.
http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html

    This has several sub-goals. Proving the computational equivalence of the
    model of computation that I am using for my refutation with the Turing
    Machine model would tend to increase the acceptance of my proof.

    This:
    Olcott: I can demonstrate this.
    Someone else: Fine, please do so.
    Olcott: No, it would be a waste of my time. It's obvious that
    I can demonstrate it so I don't actually need to do
    so. You wouldn't understand my demonstration anyway.
    Someone else: Ah yes, you're right, I see it now.
    is not a plausible path to your goal.

    What is? How do you convince us? How do you convince *anyone*?

    Is that even your goal?

    (I acknowledge that I'm putting words in your mouth here, but the above captures the gist of how you appear to me, and probably to others.)


    (As always) I didn't realize how much simpler I could make my
    explanation until after I simplified it.

    Below I show the two key mechanisms that the current x86 language
    already has that would provide very simple access to unlimited memory
    when we assume that the x86 language model of computation has a specific implementation detail.

    When we assume that the abstract model of computation specified by the
    x86 language has an (implementation detail of) underlying memory
    architecture of an unlimited number of adjacent 4GB memory blocks then
    the currently defined x86 language would have access to unlimited memory
    by the following two existing (control flow and data access) mechanisms.

    (1) signed 32-bit relative jumps
    // Jumps to the first address of the next 4GB block
    FFFFFFF0: E90B000000 ; signed 32-bit relative jumps

    (2) signed 32-bit relative offsets
// Moves a 32-bit integer from the first address of the next 4GB block
xxxxxxxx: BBFFFFFFFF mov ebx, 0xffffffff
    xxxxxxxx: 8B4301 mov eax, dword ptr [ebx+0x1]



    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Kaz Kylheku on Mon Aug 17 16:58:49 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

That the relative addressing of the x86 language specifies an abstract
model of computation with access to unlimited memory really is
blatantly obvious.

The problem is that you have a poor track record with respect to things
    you claim are 'blatantly obvious'.

    André


    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary school
kids know about computers.

The abstract model of computation specified by the x86 language already
    has access to unlimited memory when we assume the implementation detail
    that the underlying memory architecture of this abstract model is
    organized as an unlimited sequence of adjacent 4GB blocks.

    (1) signed 32-bit relative jumps
    // Jumps to the first address of the next 4GB block
    FFFFFFF0: E90B000000 ; signed 32-bit relative jumps

    (2) signed 32-bit relative offsets
// Moves a 32-bit integer from the first address of the next 4GB block
xxxxxxxx: BBFFFFFFFF mov ebx, 0xffffffff
    xxxxxxxx: 8B4301 mov eax, dword ptr [ebx+0x1]

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to olcott on Tue Aug 18 12:23:09 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 7:20 PM, olcott wrote:
    The abstract model of computation specified by the x86 language already
    has access to unlimited memory when we assume the implementation detail
    that the underlying memory architecture of this abstract model is
    organized as an unlimited sequence of adjacent 4GB blocks.

    (1) signed 32-bit relative jumps
    // Jumps to the first address of the next 4GB block
    FFFFFFF0: E90B000000 ; signed 32-bit relative jumps

    (2) signed 32-bit relative offsets
// Moves a 32-bit integer from the first address of the next 4GB block
xxxxxxxx:  BBFFFFFFFF       mov ebx, 0xffffffff
    xxxxxxxx:  8B4301           mov eax, dword ptr [ebx+0x1]

    x86 Instruction Set Reference
    https://c9x.me/x86/


    The correctness of the above machine language instructions was verified
    with Intel XED.

The X86 Encoder Decoder (XED) is a software library (and associated
headers) for encoding and decoding X86 (IA32 and Intel64) instructions.
https://intelxed.github.io/

    E9 cd JMP rel32
    Jump near, relative, displacement relative to next instruction.

    ed: E90B000000 jmp 0xfd
    f2: 90 nop
    f3: 90 nop
    f4: 90 nop
    f5: 90 nop
    f6: 90 nop
    f7: 90 nop
    f8: 90 nop
    f9: 90 nop
    fa: 90 nop
    fb: 90 nop
    fc: 90 nop
    fd: BBFFFFFFFF mov ebx, 0xffffffff
    102: 8B4301 mov eax, dword ptr [ebx+0x1]


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Thu Aug 20 19:58:19 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/16/2020 7:23 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    So now, since you can't use x86 addressing to access unbounded store,
why bother with the mess that is x86 code? You could invent your own
abstract machine language with the few simple instructions you need. Or
you could use an existing one. Or you could use a high-level language
    that has the right concepts already defined.

    An abstract machine having a tape head that can be advanced in 0 to
    0xFFFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for
    memory addressing based on displacement from the current memory address
    is relative addressing.

    The abstract model of computation specified by the x86 language already
    has access to unlimited memory when we assume the implementation detail
    that the underlying memory architecture of this abstract model is
    organized as an unlimited sequence of adjacent 4GB blocks.

    The x86 language has control flow instructions having {8,16,32} bit
    signed offsets to the current EIP (instruction pointer) value. It also
    has data access instructions having {8,16,32} bit signed offsets to the
    base register.

    Signed {8,16,32} bit Jumps relative to EIP
    ; Jump to the first address of the next 4GB block
    FFFFFFF0: E90B000000

    Signed {8,16,32} offsets relative to Base Register
    ; Moves a 32-bit integer from the first address of the next 4GB block
    e4: BBFFFFFFFF mov ebx, 0xffffffff
    f5: 8B4301 mov eax, dword ptr [ebx+0x1]

    Because both the control flow and data access instructions of the x86
    language have relative addressing the x86 language specifies an abstract
    model of computation having access to unlimited memory.

The implementation details of this abstract model are not a part of the
abstract model itself.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Fri Aug 21 09:31:22 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    So you don't have the gumption to admit when you have been proved
    wrong?

    Me: Since it seems you can't describe the data structure, just show the
    code that changes every 0 to a 1 (these can be any fixed size) data
    in the list elements. This loop will make the data structure clear
    by showing how the code moves from one element to the next, and how
    the end of the list is detected.


    So you are saying that you are incapable of this? If you are incapable
    of this then that would also seem to indicate that you are incapable of evaluating this when it is completed.

If you cannot see how these two key features of the x86 language
    already provide access to unlimited memory within the abstract model of computation defined by this language then you would not be able to
    understand any solution provided for your challenge.

    Signed {8,16,32} bit Jumps relative to EIP and
    Signed {8,16,32} offsets relative to Base Register
    already specify access to unlimited memory.

    You: <silence>

    Still waiting...

    Still waiting for you to admit you lied when you said you had an "actual Turing machine" pair "exactly and precisely the Peter Linz H and Ĥ" way
    back in 2018.
    Still waiting for any evidence at all that you have anything to post
    other than empty boasts.


So you are acknowledging that this is over your head?

    An abstract machine having a tape head that can be advanced in 0 to
    0xFFFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for
    memory addressing based on displacement from the current memory address
    is relative addressing.

    The abstract model of computation specified by the x86 language already
    has access to unlimited memory when we assume the implementation detail
    that the underlying memory architecture of this abstract model is
    organized as an unlimited sequence of adjacent 4GB blocks.

    The x86 language has control flow instructions having {8,16,32} bit
    signed offsets to the current EIP (instruction pointer) value. It also
    has data access instructions having {8,16,32} bit signed offsets to the
    base register.

    Signed {8,16,32} bit Jumps relative to EIP
    ; Jump to the first address of the next 4GB block
    FFFFFFF0: E90B000000

    Signed {8,16,32} offsets relative to Base Register
    ; Moves a 32-bit integer from the first address of the next 4GB block
    e4: BBFFFFFFFF mov ebx, 0xffffffff
    f5: 8B4301 mov eax, dword ptr [ebx+0x1]

    Because both the control flow and data access instructions of the x86
    language have relative addressing the x86 language specifies an abstract
    model of computation having access to unlimited memory.

The implementation details of this abstract model are not a part of the
abstract model itself.



    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Fri Aug 21 13:46:34 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/21/2020 1:17 PM, André G. Isaak wrote:
    On 2020-08-21 08:31, olcott wrote:
    On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    So you don't have the gumption to admit when you have been proved
    wrong?

    Me: Since it seems you can't describe the data structure, just show the
         code that changes every 0 to a 1 (these can be any fixed size) data
         in the list elements.  This loop will make the data structure clear
         by showing how the code moves from one element to the next, and how
         the end of the list is detected.


    So you are saying that you are incapable of this?

    From where I stand, your unwillingness to do this would seem to
    indicate that you are incapable of doing it.

    If you are incapable of this then that would also seem to indicate
    that you are incapable of evaluating this when it is completed.

    How is he supposed to evaluate that which you have not provided?

    The part that I did provide that you disingenuously removed.
    Your assertion that I did not provide this is verifiably false.

    A tactic used to win a debate that shows bias against the truth can be reasonably construed as dishonest.

If you cannot see how these two key features of the x86 language
    already provide access to unlimited memory within the abstract model of computation defined by this language then you would not be able to
    understand any solution provided for your challenge.

    Signed {8,16,32} bit Jumps relative to EIP and
    Signed {8,16,32} offsets relative to Base Register
    already specify access to unlimited memory.

    An abstract machine having a tape head that can be advanced in 0 to
    0xFFFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for
    memory addressing based on displacement from the current memory address
    is relative addressing.

    The abstract model of computation specified by the x86 language already
    has access to unlimited memory when we assume the implementation detail
    that the underlying memory architecture of this abstract model is
    organized as an unlimited sequence of adjacent 4GB blocks.

    The x86 language has control flow instructions having {8,16,32} bit
    signed offsets to the current EIP (instruction pointer) value. It also
    has data access instructions having {8,16,32} bit signed offsets to the
    base register.

    Signed {8,16,32} bit Jumps relative to EIP
    ; Jump to the first address of the next 4GB block
    FFFFFFF0: E90B000000

    Signed {8,16,32} offsets relative to Base Register
    ; Moves a 32-bit integer from the first address of the next 4GB block
    e4: BBFFFFFFFF mov ebx, 0xffffffff
    f5: 8B4301 mov eax, dword ptr [ebx+0x1]

    Because both the control flow and data access instructions of the x86
    language have relative addressing the x86 language specifies an abstract
    model of computation having access to unlimited memory.

The implementation details of this abstract model are not a part of the
abstract model itself.



    --
    Copyright 2020 Pete Olcott

  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Fri Aug 21 12:17:25 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-21 08:31, olcott wrote:
    On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    So you don't have the gumption to admit when you have been proved
    wrong?

Me: Since it seems you can't describe the data structure, just show the
     code that changes every 0 to a 1 (these can be any fixed size) data
     in the list elements.  This loop will make the data structure clear
     by showing how the code moves from one element to the next, and how
     the end of the list is detected.


    So you are saying that you are incapable of this?

    From where I stand, your unwillingness to do this would seem to
    indicate that you are incapable of doing it.

If you are incapable of this then that would also seem to indicate that
you are incapable of evaluating this when it is completed.

    How is he supposed to evaluate that which you have not provided? If your
    claim is true, then you should be able to provide the example he asked for.

    André

    <snip remainder>

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Fri Aug 21 14:14:26 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-21 12:46, olcott wrote:
    On 8/21/2020 1:17 PM, André G. Isaak wrote:
    On 2020-08-21 08:31, olcott wrote:
    On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    So you don't have the gumption to admit when you have been proved
    wrong?

Me: Since it seems you can't describe the data structure, just show the
     code that changes every 0 to a 1 (these can be any fixed size) data
     in the list elements.  This loop will make the data structure clear
     by showing how the code moves from one element to the next, and how
     the end of the list is detected.


    So you are saying that you are incapable of this?

     From where I stand, your unwillingness to do this would seem to
    indicate that you are incapable of doing it.

    If you are incapable of this then that would also seem to indicate
    that you are incapable of evaluating this when it is completed.

    How is he supposed to evaluate that which you have not provided?

    The part that I did provide that you disingenuously removed.
    Your assertion that I did not provide this is verifiably false.

    I didn't include this part because you've already posted it around ten
    times. Everyone following this has already seen it. But it doesn't
    actually address Ben's question.

    A tactic used to win a debate that shows bias against the truth can be reasonably construed as dishonest.

If you cannot see how these two key features of the x86 language
    already provide access to unlimited memory within the abstract model of computation defined by this language then you would not be able to
    understand any solution provided for your challenge.

    Signed {8,16,32} bit Jumps relative to EIP and
    Signed {8,16,32} offsets relative to Base Register
    already specify access to unlimited memory.

    An abstract machine having a tape head that can be advanced in 0 to 0xFFFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for memory addressing based on displacement from the current memory address
    is relative addressing.

    The abstract model of computation specified by the x86 language already
    has access to unlimited memory when we assume the implementation detail
    that the underlying memory architecture of this abstract model is
    organized as an unlimited sequence of adjacent 4GB blocks.

    The x86 language has control flow instructions having {8,16,32} bit
    signed offsets to the current EIP (instruction pointer) value. It also
    has data access instructions having {8,16,32} bit signed offsets to the
    base register.

    Signed {8,16,32} bit Jumps relative to EIP
    ; Jump to the first address of the next 4GB block
    FFFFFFF0: E90B000000

    Signed {8,16,32} offsets relative to Base Register
    ; Moves a 32-bit integer from the first address of the next 4GB block
    e4: BBFFFFFFFF mov ebx, 0xffffffff
    f5: 8B4301     mov eax, dword ptr [ebx+0x1]

    So why not simply show how to use this to address Ben's question?
    Provide a simple program that zeros out some data structure of an
    arbitrary length.

    Because both the control flow and data access instructions of the x86 language have relative addressing the x86 language specifies an abstract model of computation having access to unlimited memory.

    Relative addressing is always relative to *something*, whether it be the program counter, an index register, or whatever. Unless those registers
    can hold a value of unbounded size, relative addressing doesn't give you
    access to unlimited memory. And if you want to claim that your "abstract
    model" has program counters or index registers of unlimited size, then
    what's the point of invoking relative addressing at all? Why not just
    claim that all address values are unbounded? In either case, though,
    you're not talking about x86 assembly.

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From olcott@21:1/5 to All on Fri Aug 21 16:27:00 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/21/2020 3:14 PM, André G. Isaak wrote:
    On 2020-08-21 12:46, olcott wrote:
    On 8/21/2020 1:17 PM, André G. Isaak wrote:
    On 2020-08-21 08:31, olcott wrote:
    On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    So you don't have the gumption to admit when you have been proved
    wrong?

Me: Since it seems you can't describe the data structure, just show the
     code that changes every 0 to a 1 (these can be any fixed size) data
     in the list elements.  This loop will make the data structure clear
     by showing how the code moves from one element to the next, and how
     the end of the list is detected.


    So you are saying that you are incapable of this?

     From where I stand, your unwillingness to do this would seem to
    indicate that you are incapable of doing it.

    If you are incapable of this then that would also seem to indicate
    that you are incapable of evaluating this when it is completed.

    How is he supposed to evaluate that which you have not provided?

    The part that I did provide that you disingenuously removed.
    Your assertion that I did not provide this is verifiably false.

    I didn't include this part because you've already posted it around ten
    times. Everyone following this has already seen it. But it doesn't
    actually address Ben's question.

    A tactic used to win a debate that shows bias against the truth can be
    reasonably construed as dishonest.

If you cannot see how these two key features of the x86 language
    already provide access to unlimited memory within the abstract model
    of computation defined by this language then you would not be able to
    understand any solution provided for your challenge.

    Signed {8,16,32} bit Jumps relative to EIP and
    Signed {8,16,32} offsets relative to Base Register
    already specify access to unlimited memory.

    An abstract machine having a tape head that can be advanced in 0 to
    0xFFFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical name
    for memory addressing based on displacement from the current memory
    address is relative addressing.

    The abstract model of computation specified by the x86 language
    already has access to unlimited memory when we assume the
    implementation detail that the underlying memory architecture of this
    abstract model is organized as an unlimited sequence of adjacent 4GB
    blocks.

    The x86 language has control flow instructions having {8,16,32} bit
    signed offsets to the current EIP (instruction pointer) value. It also
    has data access instructions having {8,16,32} bit signed offsets to
    the base register.

    Signed {8,16,32} bit Jumps relative to EIP
    ; Jump to the first address of the next 4GB block
    FFFFFFF0: E90B000000

    Signed {8,16,32} offsets relative to Base Register
    ; Moves a 32-bit integer from the first address of the next 4GB block
    e4: BBFFFFFFFF mov ebx, 0xffffffff
    f5: 8B4301     mov eax, dword ptr [ebx+0x1]

    So why not simply show how to use this to address Ben's question?
    Provide a simple program that zeros out some data structure of an
    arbitrary length.

    Because both the control flow and data access instructions of the x86
    language have relative addressing the x86 language specifies an
    abstract model of computation having access to unlimited memory.

    Relative addressing is always relative to *something*, whether it be the program counter, an index register, or whatever. Unless those registers
    can hold a value of unbounded size, relative addressing doesn't give you access to unlimited memory.

    An abstract machine having a tape head that can be advanced in 0 to
    0xFFFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for
    memory addressing based on displacement from the current memory address
    is relative addressing.

    The x86 language defines such an abstract model of computation with the following x86 instructions:

    -- Signed {8,16,32} bit Jumps relative to EIP and
    -- Signed {8,16,32} offsets relative to Base Register

    __asm mov ebx, 0ffffffffh
    __asm mov eax, [ebx+07fffffffh]

    17e: BBFFFFFFFF mov ebx, 0xffffffff
    183: 8B83FFFFFF7F mov eax, dword ptr [ebx+0x7fffffff]

    The above is a concrete example of accessing the data nearly 2GB beyond
    a 32-bit model's boundary, thus proving that the x86 language is not
    really limited by such a boundary.

    When hardware implementations of the abstract model of computation
    prohibit access to memory above a certain limit this is an
    implementation detail that is not part of the abstract model itself.

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to addresses
    within the current 4GB block then it becomes rather obvious that the x86 language exactly as it currently exists specifies access to unlimited
    memory.

    Can you see how an x86 program could be encoded in the existing x86
    language that would fill every location of unlimited memory with zeroes?

    Hint Hint, the code would have to keep moving itself to increasingly
    higher memory and then do a relative jump to its new location.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 23 10:18:09 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/22/2020 8:04 PM, Ben Bacarisse wrote:
    <cut>
    So don't prove me wrong! I don't mind.

But I wonder what you gain by not posting the x86 program (the one that
scans a linked list changing zeros to ones, but where the list is not in
any way bounded in size). It's odd that you have decided not to show
/anyone/ that you are right, because you think I am biased. Why give me
that power? The only loser in that strategy is you.

    Its a pearls before swine thing.

    As I say, you owe me nothing (though an apology for calling me a pig
    would not go amiss). By all means, keep all your great discoveries
    secret forever, but I don't see how that benefits you one iota.

Oddly, when I got bored of telling you you were wrong about the x86 and stopped
replying you complained about being ignored. It seems only being told
    you are right is acceptable.

    You consistently prove that you lack the required prerequisites:

    As does anyone who knows the x86. The required prerequisite is to
    accept your extended architecture as being called "x86".

    No one disputes that suitable abstract instruction sets, unbounded
    registers,

    Yet you continue to fail to understand (or acknowledge) that the 32-bit
    wide fixed width registers defined by one of the two x86 abstract models
    of computation would by themselves provide access to unlimited memory
without having to actually change the abstract model of computation
    defined by this x86 language at all.

    It is the relative addressing of the x86 abstract model of computation
    defined by the x86 language that provides memory access and control flow
    access to an unlimited number of 4GB blocks of memory.

    It is only hardware implementations of this abstract model that place
    any limit on memory access. These hardware limits are implementation
    details that are not any aspect of the abstract model itself.

    The x86 language itself allows the instruction pointer and/or the data
pointer to move forward in 1 to 0x7fffffff increments an unlimited
    number of times from their current location.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 23 11:33:07 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 10:42 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
In other words you continue to refrain from agreeing that this is true:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    The antecedent is false,

    You and I both know that implementation details are not a part of an
    abstract model of computation itself.

    You and I both know that the capability of moving a tape head forward
    relative to its current location an unlimited number of times would
    define access to unlimited storage.

    That the abstract model of computation defined by the x86 language
    allows its instruction pointer and its data pointer to move forward
    relative to their current location an unlimited number of times also
    very obviously defines access to unlimited storage.

    You and I also both know that when the physical hardware implementations
    of this abstract model place limits on the access to storage that these
    limits are not a part of the abstract model itself.

    Thus we can conclude that the abstract model of computation defined by
    the x86 language is fully Turing complete pertaining to the requirement
    that a Turing machine has access to unlimited memory.
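The claim above turns on whether "move forward an unlimited number of times" implies unbounded storage. A small sketch (an editor's toy model, not code from either poster; the widths and step counts are illustrative) shows the distinction the two sides are arguing about: a Turing machine head position is an unbounded integer, while arithmetic on a fixed-width pointer wraps modulo its register width.

```python
# Toy model only: contrasts an unbounded tape-head position with a
# fixed-width pointer. Widths and step counts are illustrative.

def tm_head(steps):
    """Turing-machine-style head: position is an unbounded integer."""
    pos = 0
    for _ in range(steps):
        pos += 1  # every step reaches a brand-new cell
    return pos

def fixed_ptr(steps, width=32):
    """Fixed-width pointer: arithmetic wraps modulo 2**width."""
    mask = (1 << width) - 1
    ptr = 0
    for _ in range(steps):
        ptr = (ptr + 1) & mask  # bounded: wraps after 2**width steps
    return ptr

print(tm_head(10))         # 10
print(fixed_ptr(10))       # 10: the two agree while under 2**width steps
print(fixed_ptr(2**8, 8))  # 0: an 8-bit pointer is back where it started
```

Whether that wrap belongs to the abstract model or only to its hardware implementations is exactly the question the rest of the thread disputes.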


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 23 12:29:49 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 11:51 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/22/2020 8:04 PM, Ben Bacarisse wrote:
    <cut>
    So don't prove me wrong! I don't mind.

    But I wonder what you gain by not posting the x86 program (the one that
    scans a linked list changing zeros to ones, but where the list is not in
    any way bounded in size). It's odd that you have decided not to show
    /anyone/ that you are right, because you think I am biased. Why give me
    that power? The only loser in that strategy is you.

    It's a pearls-before-swine thing.

    As I say, you owe me nothing (though an apology for calling me a pig
    would not go amiss). By all means, keep all your great discoveries
    secret forever, but I don't see how that benefits you one iota.

    Oddly, when I got bored telling you that you were wrong about the x86
    and stopped replying you complained about being ignored. It seems only
    being told you are right is acceptable.

    You consistently prove that you lack the required prerequisites:

    As does anyone who knows the x86. The required prerequisite is to
    accept your extended architecture as being called "x86".

    No one disputes that suitable abstract instruction sets, unbounded
    registers,

    Yet you continue to fail to understand (or acknowledge) that the
    32-bit fixed-width registers defined by one of the two x86
    abstract models of computation would by themselves provide access to
    unlimited memory without having to change the abstract model of
    computation defined by this x86 language at all.

    I cannot "understand" a falsehood. Why do you say "would" not "do"?
    Maybe you really do know that the x86 does not provide access to
    theoretically unlimited memory, so you merely say that it "would"? Why
    the subjunctive implying conditionality? You mean it would if it could?

    It is the relative addressing of the x86 abstract model of computation
    defined by the x86 language that provides memory access and control
    flow access to an unlimited number of 4GB blocks of memory.

    No. The x86 has no way to access unbounded memory.

    It is only hardware implementations of this abstract model that place
    any limit on memory access. These hardware limits are implementation
    details that are not any aspect of the abstract model itself.

    No, it is the design of the x86 that prevents it. Did you try to write
    the loop I suggested?

    This is a very definitive statement from me: it's not possible. In
    fact, I may be wrong. There could be some corner of the specification
    that would permit access to some unbounded data (like the PDP-11 paper
    tape) so I am relying on other people to review what I write (and they
    may all have given up). I know /you/ have not demonstrated it, but

    The x86 language itself allows the instruction pointer and/or the data
    pointer

    There is no "data pointer" in this context. Relative addressing is
    RIP-relative only. You can (when not in IA-32 and compatibility mode)
    use RIP-relative addressing to access data, but your remark suggests it
    applies to registers other than the instruction pointer. Mind you, you
    have not been clear if you are talking about 64-bit mode. All your
    previous posts seemed to suggest IA-32.

    to move forward in increments of 1 to 0x7fffffff an unlimited number
    of times from their current location.

    To see why you are wrong (even about that) you need to read the details
    about that addressing mode. But you don't do details.


    Ah great you have finally provided some evidence that you can understand
    these things. Here is a detail that you missed.

    Some x86 jump instructions have a relative address (displacement) stored
    as a signed 32-bit immediate value.

    https://c9x.me/x86/html/file_module_x86_id_147.html

    Some x86 move instructions have a relative displacement from their
    base register stored as a signed 32-bit immediate value.

    In both these cases of relative addressing the specified address is
    based on an offset/displacement from a current address.

    So when we look at the abstract model of computation specified by the
    x86 language separately from its conventional Intel Architecture
    physical implementation we can see that this abstract model does specify
    access to unlimited memory.



    --
    Copyright 2020 Pete Olcott

  • From Keith Thompson@21:1/5 to olcott on Sun Aug 23 14:49:13 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory really is
    blatantly obvious.

    The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary school
    kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

    Physical implementation details are never a part of an abstract model.

    You keep talking about the "x86 language" as something distinct from
    the x86 physical architecture. Where is this "x86 language" defined?
    Is there a published document that specifies it, independently of
    the hardware implementation? Is there some specific document that
    you're relying on for your assertions? (This is a serious question.)

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From olcott@21:1/5 to Kaz Kylheku on Sun Aug 23 16:29:00 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory really is
    blatantly obvious.

    The problem is that you have a poor track record with respect to things
    you claim are 'blatantly obvious'.

    André


    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary school
    kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

    Physical implementation details are never a part of an abstract model.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Keith Thompson on Sun Aug 23 17:40:17 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 4:49 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:

    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory really is
    blatantly obvious.

    The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary school
    kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

    Physical implementation details are never a part of an abstract model.

    You keep talking about the "x86 language" as something distinct from
    the x86 physical architecture. Where is this "x86 language" defined?
    Is there a published document that specifies it, independently of
    the hardware implementation? Is there some specific document that
    you're relying on for your assertions? (This is a serious question.)


    That is a very good question.

    The documentation that directly pertains to how each individual
    instruction is encoded as a sequence of bytes, and to the primary
    change of state that executing the instruction is defined to cause,
    specifies the abstract model of computation that the x86 language
    defines.

    Jumping to a machine address is defined to have a primary change of
    state of changing the instruction pointer to the target address.
    https://c9x.me/x86/html/file_module_x86_id_147.html

    If the jump happens to be in the middle of the operating system or
    somewhere outside of physical memory these are implementation details
    that are not defined by the abstract model of the x86 language itself.






    --
    Copyright 2020 Pete Olcott

  • From Keith Thompson@21:1/5 to olcott on Sun Aug 23 17:08:41 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 4:49 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:
    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory really is
    blatantly obvious.

    The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary
    school kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

    Physical implementation details are never a part of an abstract model.

    You keep talking about the "x86 language" as something distinct from
    the x86 physical architecture. Where is this "x86 language" defined?
    Is there a published document that specifies it, independently of
    the hardware implementation? Is there some specific document that
    you're relying on for your assertions? (This is a serious question.)

    That is a very good question.

    The documentation that directly pertains to how each individual
    instruction is encoded as a sequence of bytes, and to the primary
    change of state that executing the instruction is defined to cause,
    specifies the abstract model of computation that the x86 language
    defines.

    Jumping to a machine address is defined to have a primary change of
    state of changing the instruction pointer to the target address.
    https://c9x.me/x86/html/file_module_x86_id_147.html

    If the jump happens to be in the middle of the operating system or
    somewhere outside of physical memory these are implementation details
    that are not defined by the abstract model of the x86 language itself.

    c9x.me appears to be a personal web site owned by one person.
    (I think I've found that person's name, but it's not particularly
    easy to find so I won't mention it here.)

    I would have thought that a definitive document defining the "x86
    language" would have been published by Intel, or perhaps AMD, IEEE,
    or some other organization. I have no particular reason to doubt
    the accuracy of the web page you linked to, but it doesn't appear
    to be definitive.

    Do you have a link to a *definitive* description of x86 (either the
    architecture or what you call the "x86 language")? I'm looking for
    something that I or someone else can look at to confirm or refute
    your claims about infinite addressability.

    I rarely deal with assembly or machine language, and x86 isn't
    the architecture I'm most familiar with, but my understanding
    (supplemented by a quick Google search just now) is that the
    architecture includes an "instruction pointer" or "IP", a register
    that holds the memory address of the next instruction that will be
    executed. Depending on the mode, the IP can be 16 bits ("ip"),
    32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
    part of the model of computation defined by the x86 architecture.
    A jump instruction, whether it's absolute or relative, can only
    specify one of 2**N possible targets, where N is no greater than 64.

    A program running on an x86 in 32-bit mode could in principle execute
    a trillion jump instructions, but those instructions cannot possibly
    refer to more than 2**32 distinct target addresses.
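The pigeonhole point above can be checked mechanically with a scaled-down sketch (an editor's illustration, not code from the thread; an 8-bit width stands in for the 32-bit EIP so the full cycle is cheap to enumerate). Repeatedly applying the largest forward displacement still produces at most 2**WIDTH distinct targets before repeating.

```python
# Scaled-down sketch: an 8-bit "instruction pointer" stands in for the
# 32-bit EIP so the whole cycle can be enumerated quickly.

WIDTH = 8                         # real EIP would be 32
MASK = (1 << WIDTH) - 1
REL_MAX = (1 << (WIDTH - 1)) - 1  # analogue of the 0x7fffffff rel32

def jump(ip, rel=REL_MAX):
    """Relative jump: IP <- (IP + rel) mod 2**WIDTH."""
    return (ip + rel) & MASK

ip, seen = 0, set()
while ip not in seen:  # run until some target repeats
    seen.add(ip)
    ip = jump(ip)

print(len(seen))  # 256: however many jumps execute, only 2**WIDTH targets exist
```

The same arithmetic with WIDTH = 32 gives 2**32 targets: a large set, but a finite one.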

    It's entirely possible that I'm missing something.

    What I'm looking for is (a) definitive documentation of the "x86
    language" and (b) an explanation for why the fixed size of the
    instruction pointer is not part of the abstract model of computation.

    Now of course someone could define such an abstract model in which
    jumps are *not* defined in terms of a fixed-width instruction
    pointer, and that could provide a basis for accessing unbounded
    storage. (Then again, defining a paper tape interface could do
    the same thing more straightforwardly.)

    But as far as I can tell, such an abstract model would necessarily
    be inconsistent with the x86 architecture. My hunch is that your
    assertion that the fixed width of the IP is not part of the abstract
    model is arbitrary and not supported by any definitive document,
    but I'm prepared to be proven wrong.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From olcott@21:1/5 to Keith Thompson on Sun Aug 23 20:19:12 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 7:08 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 4:49 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:
    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory really is
    blatantly obvious.

    The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.

    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary
    school kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

    Physical implementation details are never a part of an abstract model.

    You keep talking about the "x86 language" as something distinct from
    the x86 physical architecture. Where is this "x86 language" defined?
    Is there a published document that specifies it, independently of
    the hardware implementation? Is there some specific document that
    you're relying on for your assertions? (This is a serious question.)

    That is a very good question.

    The documentation that directly pertains to how each individual
    instruction is encoded as a sequence of bytes, and to the primary
    change of state that executing the instruction is defined to cause,
    specifies the abstract model of computation that the x86 language
    defines.

    Jumping to a machine address is defined to have a primary change of
    state of changing the instruction pointer to the target address.
    https://c9x.me/x86/html/file_module_x86_id_147.html

    If the jump happens to be in the middle of the operating system or
    somewhere outside of physical memory these are implementation details
    that are not defined by the abstract model of the x86 language itself.

    c9x.me appears to be a personal web site owned by one person.
    (I think I've found that person's name, but it's not particularly
    easy to find so I won't mention it here.)

    I would have thought that a definitive document defining the "x86
    language" would have been published by Intel, or perhaps AMD, IEEE,
    or some other organization. I have no particular reason to doubt
    the accuracy of the web page you linked to, but it doesn't appear
    to be definitive.

    Do you have a link to a *definitive* description of x86 (either the
    architecture or what you call the "x86 language")? I'm looking for
    something that I or someone else can look at to confirm or refute
    your claims about infinite addressability.


    It is like the definitive guide of arithmetic: they all say exactly
    the same thing. The crucial thing is where I drew the line between the
    language and its implementation.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the
    instruction (AKA the most basic semantics of the instruction).

    I rarely deal with assembly or machine language, and x86 isn't
    the architecture I'm most familiar with, but my understanding
    (supplemented by a quick Google search just now) is that the
    architecture includes an "instruction pointer" or "IP", a register
    that holds the memory address of the next instruction that will be
    executed. Depending on the mode, the IP can be 16 bits ("ip"),
    32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
    part of the model of computation defined by the x86 architecture.
    A jump instruction, whether it's absolute or relative, can only
    specify one of 2**N possible targets, where N is no greater than 64.

    A program running on an x86 in 32-bit mode could in principle execute
    a trillion jump instructions, but those instructions cannot possibly
    refer to more than 2**32 distinct target addresses.


    Sure they can because each one of them can jump 0x7fffffff bytes
    forward. This is documented on the link that I provided.

    What happens when they exceed 0xffffffff is an implementation detail.

    It's entirely possible that I'm missing something.

    What I'm looking for is (a) definitive documentation of the "x86
    language" and (b) an explanation for why the fixed size of the
    instruction pointer is not part of the abstract model of computation.

    Now of course someone could define such an abstract model in which
    jumps are *not* defined in terms of a fixed-width instruction
    pointer, and that could provide a basis for accessing unbounded
    storage. (Then again, defining a paper tape interface could do
    the same thing more straightforwardly.)

    But as far as I can tell, such an abstract model would necessarily
    be inconsistent with the x86 architecture. My hunch is that your
    assertion that the fixed width of the IP is not part of the abstract
    model is arbitrary and not supported by any definitive document,
    but I'm prepared to be proven wrong.


    Study the link that I provided.



    --
    Copyright 2020 Pete Olcott

  • From Keith Thompson@21:1/5 to olcott on Sun Aug 23 20:47:54 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 7:08 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 4:49 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:
    That the relative addressing of the x86 language specifies an abstract
    model of computation with access to unlimited memory really is
    blatantly obvious.

    The problem is that you have a poor track record with respect to
    things you claim are 'blatantly obvious'.

    It really seems that some people in these forums have an established
    track record of arguing against me when they know that I am right.
    The Intel x86 instruction set defines a fixed address space.

    That limitation is not simply an instance of an infinite abstraction;
    it is the abstraction.

    No addressing mode provides unlimited memory.

    I.e. you're dead wrong here about something that some elementary
    school kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

    Physical implementation details are never a part of an abstract model.
    You keep talking about the "x86 language" as something distinct from
    the x86 physical architecture. Where is this "x86 language" defined?
    Is there a published document that specifies it, independently of
    the hardware implementation? Is there some specific document that
    you're relying on for your assertions? (This is a serious question.)

    That is a very good question.

    The documentation that directly pertains to how each individual
    instruction is encoded as a sequence of bytes, and to the primary
    change of state that executing the instruction is defined to cause,
    specifies the abstract model of computation that the x86 language
    defines.

    Jumping to a machine address is defined to have a primary change of
    state of changing the instruction pointer to the target address.
    https://c9x.me/x86/html/file_module_x86_id_147.html

    If the jump happens to be in the middle of the operating system or
    somewhere outside of physical memory these are implementation details
    that are not defined by the abstract model of the x86 language itself.

    c9x.me appears to be a personal web site owned by one person.
    (I think I've found that person's name, but it's not particularly
    easy to find so I won't mention it here.)

    I would have thought that a definitive document defining the "x86
    language" would have been published by Intel, or perhaps AMD, IEEE,
    or some other organization. I have no particular reason to doubt
    the accuracy of the web page you linked to, but it doesn't appear
    to be definitive.

    Do you have a link to a *definitive* description of x86 (either the
    architecture or what you call the "x86 language")? I'm looking for
    something that I or someone else can look at to confirm or refute
    your claims about infinite addressability.

    It is like the definitive guide of arithmetic: they all say exactly
    the same thing. The crucial thing is where I drew the line between the
    language and its implementation.

    Right, the crucial thing is where *you* draw the line.

    I haven't seen any x86 documentation referring to it as a "language".
    There is a language (machine code) that runs on an x86 system, but it's
    the system architecture itself that defines the behavior.

    Surely you can provide a more definitive source of information than some
    guy's web site. (No disrespect to the author of that site.)

    Why is the width of the EIP register an implementation detail?
    Explain how you decided to draw that line.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the
    instruction (AKA the most basic semantics of the instruction).

    Only by its instructions? Not by, say, the sizes of its registers?

    I rarely deal with assembly or machine language, and x86 isn't
    the architecture I'm most familiar with, but my understanding
    (supplemented by a quick Google search just now) is that the
    architecture includes an "instruction pointer" or "IP", a register
    that holds the memory address of the next instruction that will be
    executed. Depending on the mode, the IP can be 16 bits ("ip"),
    32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
    part of the model of computation defined by the x86 architecture.
    A jump instruction, whether it's absolute or relative, can only
    specify one of 2**N possible targets, where N is no greater than 64.

    A program running on an x86 in 32-bit mode could in principle execute
    a trillion jump instructions, but those instructions cannot possibly
    refer to more than 2**32 distinct target addresses.

    Sure they can because each one of them can jump 0x7fffffff bytes
    forward. This is documented on the link that I provided.

    Circular reasoning. You say it can do it because it can do it.

    What happens when they exceed 0xffffffff is an implementation detail.

    Hardly.

    It's entirely possible that I'm missing something.

    What I'm looking for is (a) definitive documentation of the "x86
    language" and (b) an explanation for why the fixed size of the
    instruction pointer is not part of the abstract model of computation.

    Now of course someone could define such an abstract model in which
    jumps are *not* defined in terms of a fixed-width instruction
    pointer, and that could provide a basis for accessing unbounded
    storage. (Then again, defining a paper tape interface could do
    the same thing more straightforwardly.)

    But as far as I can tell, such an abstract model would necessarily
    be inconsistent with the x86 architecture. My hunch is that your
    assertion that the fixed width of the IP is not part of the abstract
    model is arbitrary and not supported by any definitive document,
    but I'm prepared to be proven wrong.

    Study the link that I provided.

    From that link (and I'd still like to see something *definitive*):

    A relative offset (rel8, rel16, or rel32) is generally specified
    as a label in assembly code, but at the machine code level, it
    is encoded as a signed 8-, 16-, or 32-bit immediate value. This
    value is added to the value in the EIP register. (Here, the
    EIP register contains the address of the instruction following
    the JMP instruction). When using relative offsets, the opcode
    (for short vs. near jumps) and the operand-size attribute (for
    near relative jumps) determines the size of the target operand
    (8, 16, or 32 bits).

    The 8-, 16-, or 32-bit value is added to the value in the EIP register.
    Do you claim that the EIP register can be infinitely wide? Do you claim
    that it can hold an address whose value exceeds 2**64?

    (I know you've already acknowledged that the relative offset can't
    exceed 32 bits, so don't bother arguing about that.)

    Again, I can imagine that there might be some abstract x86-based
    architecture *with significant changes* that could support an
    arbitrarily large address range. But it wouldn't be "the x86 language"
    as you call it.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From olcott@21:1/5 to Keith Thompson on Sun Aug 23 23:34:56 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 10:47 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 7:08 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 4:49 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
    On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
    On 8/17/2020 1:44 AM, André G. Isaak wrote:
    On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract
model of computation with access to unlimited memory really is
blatantly obvious.

The problem is that you have a poor track record with respect to things
you claim are 'blatantly obvious'.

It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
    The Intel x86 instruction set defines a fixed address space.

That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.

    No addressing mode provides unlimited memory.

I.e. you're dead wrong here about something that some elementary school
kids know about computers.

    You made the rookie mistake of conflating the abstract model of
    computation defined by the x86 language with its typical Intel
    architecture physical implementation.

Physical implementation details are never a part of an abstract model.
You keep talking about the "x86 language" as something distinct from
the x86 physical architecture. Where is this "x86 language" defined?
Is there a published document that specifies it, independently of
the hardware implementation? Is there some specific document that
you're relying on for your assertions? (This is a serious question.)
    That is a very good question.

    The documentation that directly pertains how each individual
    instruction is encoded as a sequence of bytes and the primary change
    of state that executing the instruction is defined to cause specifies
    the abstract model of computation that the x86 language defines.

    Jumping to a machine address is defined to have a primary change of
    state of changing the instruction pointer to the target address.
    https://c9x.me/x86/html/file_module_x86_id_147.html

If the jump happens to be in the middle of the operating system or
somewhere outside of physical memory, these are implementation details
that are not defined by the abstract model of the x86 language itself.

    c9x.me appears to be a personal web site owned by one person.
    (I think I've found that person's name, but it's not particularly
    easy to find so I won't mention it here.)

    I would have thought that a definitive document defining the "x86
    language" would have been published by Intel, or perhaps AMD, IEEE,
    or some other organization. I have no particular reason to doubt
    the accuracy of the web page you linked to, but it doesn't appear
    to be definitive.

    Do you have a link to a *definitive* description of x86 (either the
    architecture or what you call the "x86 language")? I'm looking for
    something that I or someone else can look at to confirm or refute
your claims about infinite addressability.

It is like the definitive guide to arithmetic: they all say exactly the
same thing. The crucial thing is where I drew the line between the
language and its implementation.

    Right, the crucial thing is where *you* draw the line.

    I haven't seen any x86 documentation referring to it as a "language".
    There is a language (machine code) that runs on an x86 system, but it's
    the system architecture itself that defines the behavior.

Surely you can provide a more definitive source of information than
some guy's web site. (No disrespect to the author of that site.)


    I have many sources, this guy explained it more clearly. I usually refer
    to the Intel programmers reference manual for a specific CPU, yet the
    x86 language is not defined by any single chip.

    Why is the width of the EIP register an implementation detail?
    Explain how you decided to draw that line.

    My end goal was to provide the easiest possible way to see every single
    detail of the execution trace of the halting problem counter-example.
    I needed the highest possible abstraction that was Turing complete.

    People needed to be able to see the overall architecture of the program
and not have it buried in a quadrillion details. My actual halting
problem refutation will be in "C"; its execution trace has to be in x86
assembly language, and the "C" program examines the x86 machine
language.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the
    instruction. (AKA the most basic semantics of the instruction).

    Only by its instructions? Not by, say, the sizes of its registers?

The size of the registers is an implementation detail that I count as
part of the language specification.

    I rarely deal with assembly or machine language, and x86 isn't
    the architecture I'm most familiar with, but my understanding
    (supplemented by a quick Google search just now) is that the
architecture includes an "instruction pointer" or "IP", a register
that holds the memory address of the next instruction that will be
executed. Depending on the mode, the IP can be 16 bits ("ip"),
32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
    part of the model of computation defined by the x86 architecture.
    A jump instruction, whether it's absolute or relative, can only
    specify one of 2**N possible targets, where N is no greater than 64.

A program running on an x86 in 32-bit mode could in principle execute
    a trillion jump instructions, but those instructions cannot possibly
    refer to more than 2**32 distinct target addresses.

    Sure they can because each one of them can jump 0x7fffffff bytes
    forward. This is documented on the link that I provided.

    Circular reasoning. You say it can do it because it can do it.


    The syntax of the x86 language stipulates that an instruction can be
    specified as correct syntax that jumps to an address 50% greater than
    the maximum address that fits in a 32-bit register.

    What happens when they exceed 0xffffffff is an implementation detail.

    Hardly.

An operating system exception is certainly not an aspect whatsoever of
any abstract model of computation.


    It's entirely possible that I'm missing something.

    What I'm looking for is (a) definitive documentation of the "x86
    language" and (b) an explanation for why the fixed size of the
    instruction pointer is not part of the abstract model of computation.

    Now of course someone could define such an abstract model in which
    jumps are *not* defined in terms of a fixed-width instruction
    pointer, and that could provide a basis for accessing unbounded
    storage. (Then again, defining a paper tape interface could do
    the same thing more straightforwardly.)

    But as far as I can tell, such an abstract model would necessarily
    be inconsistent with the x86 architecture. My hunch is that your
    assertion that the fixed width of the IP is not part of the abstract
    model is arbitrary and not supported by any definitive document,
    but I'm prepared to be proven wrong.

    Study the link that I provided.

    From that link (and I'd still like to see something *definitive*):

    A relative offset (rel8, rel16, or rel32) is generally specified
    as a label in assembly code, but at the machine code level, it
    is encoded as a signed 8-, 16-, or 32-bit immediate value. This
    value is added to the value in the EIP register. (Here, the
    EIP register contains the address of the instruction following
    the JMP instruction). When using relative offsets, the opcode
    (for short vs. near jumps) and the operand-size attribute (for
    near relative jumps) determines the size of the target operand
    (8, 16, or 32 bits).

    The 8-, 16-, or 32-bit value is added to the value in the EIP register.
    Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?


    The syntax of the language specifies that the EIP would roll over like
    an odometer and reference memory in the next adjacent 4GB block.
    0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
    The above arithmetic shows that it has to roll over like an odometer.

    The semantics of a memory address that is 07FFFFFFF bytes above
    0FFFFFFFF specifies that this address is reachable.


    (I know you've already acknowledged that the relative offset can't
    exceed 32 bits, so don't bother arguing about that.)

    Again, I can imagine that there might be some abstract x86-based
    architecture *with significant changes* that could support an
    arbitrarily large address range. But it wouldn't be "the x86 language"
    as you call it.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeff Barnett@21:1/5 to olcott on Sun Aug 23 23:15:13 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 9:57 PM, olcott wrote:
    The abstract model of computation specified by the x86 language provides access to unlimited memory.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the instruction. (AKA the most basic semantics of the instruction).

    The syntax of this language specifies that relative jumps can be made to 0x7FFFFFFF bytes above the maximum value that the EIP register can hold.
    When this syntax is implemented having semantics that it specifies then:

    (a) Jumps could be made to memory location addresses that would not fit
    in the EIP register. This entails that the EIP register would have to
    roll over like an odometer.

    (b) This higher memory would have to exist.

    (c) This process could be repeated an unlimited number of times.

    I see that the prior version lasted for less than 20 minutes. You really shouldn't masturbate and type at the same time. It breaks your limited
    ability to concentrate. You keep flipping crap out there without a clue.
    Try thinking about what you are typing. You are a joke - no one can take
    you seriously when you keep spinning out one mistake laden email after
    another. I suppose those copyright symbols you include are meant to
    protect your readers. If someone quoted you as a serious source, they
would be declared to be a nut case. So my serious advice to you: get
    your hand out of your pants and think. Also don't type until thought
    penetrates your skull.
    --
    Jeff Barnett

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Jeff Barnett on Mon Aug 24 00:35:21 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 12:15 AM, Jeff Barnett wrote:
    On 8/23/2020 9:57 PM, olcott wrote:
    The abstract model of computation specified by the x86 language
    provides access to unlimited memory.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the
    instruction. (AKA the most basic semantics of the instruction).

    The syntax of this language specifies that relative jumps can be made
    to 0x7FFFFFFF bytes above the maximum value that the EIP register can
    hold.
    When this syntax is implemented having semantics that it specifies then:

    (a) Jumps could be made to memory location addresses that would not
    fit in the EIP register. This entails that the EIP register would have
    to roll over like an odometer.

    (b) This higher memory would have to exist.

    (c) This process could be repeated an unlimited number of times.

I see that the prior version lasted for less than 20 minutes. You really
shouldn't masturbate and type at the same time. It breaks your limited
ability to concentrate. You keep flipping crap out there without a clue.
Try thinking about what you are typing. You are a joke - no one can take
you seriously when you keep spinning out one mistake laden email after
another. I suppose those copyright symbols you include are meant to
protect your readers. If someone quoted you as a serious source, they
would be declared to be a nut case. So my serious advice to you: get
your hand out of your pants and think. Also don't type until thought
penetrates your skull.

    Those that get pleasure out of denigrating others prove how tiny their
    soul is. I am pleased to see that you actually really understand some of
    this stuff.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Sun Aug 23 22:47:16 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 10:47 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    Study the link that I provided.

    From that link (and I'd still like to see something *definitive*):

    A relative offset (rel8, rel16, or rel32) is generally specified
    as a label in assembly code, but at the machine code level, it
    is encoded as a signed 8-, 16-, or 32-bit immediate value. This
    value is added to the value in the EIP register. (Here, the
    EIP register contains the address of the instruction following
    the JMP instruction). When using relative offsets, the opcode
    (for short vs. near jumps) and the operand-size attribute (for
    near relative jumps) determines the size of the target operand
    (8, 16, or 32 bits).

    The 8-, 16-, or 32-bit value is added to the value in the EIP register.
    Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?

    The syntax of the language specifies that the EIP would roll over like
    an odometer and reference memory in the next adjacent 4GB block.
    0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
    The above arithmetic shows that it has to roll over like an odometer.

    The semantics of a memory address that is 07FFFFFFF bytes above
    0FFFFFFFF specifies that this address is reachable.

    Explain to me how the "syntax of the language" specifies that. Cite a
    source that says that's what it means. Why does it *have to* roll over
    like an odometer? If I have an actual x86 chip where it *doesn't* roll
    over and address 17FFFFFFE *isn't* accessible, does that chip violate
    the "x86 language"?

    Or is it true because you assert it?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Mon Aug 24 01:52:48 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 12:47 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 10:47 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    Study the link that I provided.

    From that link (and I'd still like to see something *definitive*):

    A relative offset (rel8, rel16, or rel32) is generally specified
    as a label in assembly code, but at the machine code level, it
    is encoded as a signed 8-, 16-, or 32-bit immediate value. This
    value is added to the value in the EIP register. (Here, the
    EIP register contains the address of the instruction following
    the JMP instruction). When using relative offsets, the opcode
    (for short vs. near jumps) and the operand-size attribute (for
    near relative jumps) determines the size of the target operand
    (8, 16, or 32 bits).

    The 8-, 16-, or 32-bit value is added to the value in the EIP register.
Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?

    The syntax of the language specifies that the EIP would roll over like
    an odometer and reference memory in the next adjacent 4GB block.
    0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
    The above arithmetic shows that it has to roll over like an odometer.

    The semantics of a memory address that is 07FFFFFFF bytes above
    0FFFFFFFF specifies that this address is reachable.

    Explain to me how the "syntax of the language" specifies that.

    ; This specifies a jump 2GB above the maximum value of EIP
    fffffff0: EBFFFFFF7F

    ; Since it specifies a jump 2GB above the maximum value of EIP
    ; then according to the syntax of the language this memory exists.

Cite a source that says that's what it means.

I already cited the source. It is the same source.

    Why does it *have to* roll over like an odometer?
    That is how fixed width arithmetic works.
    fffffff0 + 5 + 7FFFFFFF = 7FFFFFF4

If I have an actual x86 chip where it *doesn't* roll over and address
17FFFFFFE *isn't* accessible, does that chip violate the "x86 language"?

The hardware implementation overrides the syntax of the language.

    Here is the architecture that the x86 language specifies:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to addresses
within the current 4GB block then it becomes rather obvious that the x86
language exactly as it currently exists specifies access to unlimited
memory.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Mon Aug 24 00:30:51 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/24/2020 12:47 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 10:47 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    Study the link that I provided.

    From that link (and I'd still like to see something *definitive*):

A relative offset (rel8, rel16, or rel32) is generally specified
as a label in assembly code, but at the machine code level, it
    is encoded as a signed 8-, 16-, or 32-bit immediate value. This
    value is added to the value in the EIP register. (Here, the
    EIP register contains the address of the instruction following
    the JMP instruction). When using relative offsets, the opcode
    (for short vs. near jumps) and the operand-size attribute (for
    near relative jumps) determines the size of the target operand
    (8, 16, or 32 bits).

The 8-, 16-, or 32-bit value is added to the value in the EIP register.
Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?

    The syntax of the language specifies that the EIP would roll over like
    an odometer and reference memory in the next adjacent 4GB block.
    0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
    The above arithmetic shows that it has to roll over like an odometer.

    The semantics of a memory address that is 07FFFFFFF bytes above
    0FFFFFFFF specifies that this address is reachable.

    Explain to me how the "syntax of the language" specifies that.

    ; This specifies a jump 2GB above the maximum value of EIP
    fffffff0: EBFFFFFF7F

    The *syntax* says no such thing. You're talking about semantics.

    ; Since it specifies a jump 2GB above the maximum value of EIP
    ; then according to the syntax of the language this memory exists.

Cite a source that says that's what it means.

I already cited the source. It is the same source.

    Why does it *have to* roll over like an odometer?
    That is how fixed width arithmetic works.
    fffffff0 + 5 + 7FFFFFFF = 7FFFFFF4

That says nothing about going to the fffffff0: EBFFFFFF7F that you
claimed above.

If I have an actual x86 chip where it *doesn't* roll over and address
17FFFFFFE *isn't* accessible, does that chip violate the "x86 language"?

The hardware implementation overrides the syntax of the language.

    Here is the architecture that the x86 language specifies:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    That's your claim. Can you support it by citing something that you
    didn't write?

    I understand that you're defining some kind of *extension* to the x86.
    And yet you refuse to acknowledge that it's an extension, and insist
    that everything you say about it is implied by the syntax.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Mon Aug 24 10:55:24 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 2:30 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/24/2020 12:47 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/23/2020 10:47 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    Study the link that I provided.

From that link (and I'd still like to see something *definitive*):

A relative offset (rel8, rel16, or rel32) is generally specified
as a label in assembly code, but at the machine code level, it
is encoded as a signed 8-, 16-, or 32-bit immediate value. This
value is added to the value in the EIP register. (Here, the
EIP register contains the address of the instruction following
the JMP instruction). When using relative offsets, the opcode
(for short vs. near jumps) and the operand-size attribute (for
near relative jumps) determines the size of the target operand
(8, 16, or 32 bits).

The 8-, 16-, or 32-bit value is added to the value in the EIP register.
Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?

The syntax of the language specifies that the EIP would roll over like
an odometer and reference memory in the next adjacent 4GB block.
    0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
    The above arithmetic shows that it has to roll over like an odometer.

    The semantics of a memory address that is 07FFFFFFF bytes above
    0FFFFFFFF specifies that this address is reachable.

    Explain to me how the "syntax of the language" specifies that.

    ; This specifies a jump 2GB above the maximum value of EIP
    fffffff0: E9FFFFFF7F

    We will start with just the facts:

    https://c9x.me/x86/html/file_module_x86_id_147.html
    E9 cd JMP rel32
    Jump near, relative, displacement relative to next instruction.

    FFFFFFF0: E9FFFFFF7F
    Indicates that the EIP is to be changed to a value that is 7FFFFFFF
    greater than FFFFFFF5, which is the next address after fffffff0.
    FFFFFFF5 + 7FFFFFFF = 17FFFFFF4

    The instruction is @ address FFFFFFF0
    The Opcode: E9 indicates a jmp to a signed 32-bit
    displacement from the next instruction @ FFFFFFF5

The relative displacement is the immediate value encoded directly in
the instruction as FFFFFF7F; because these bytes are in little-endian
order they are listed backwards. They actually refer to the signed
32-bit integer 7FFFFFFF.

    So the instruction says to jump to FFFFFFF5 + 7FFFFFFF
    which is an address that is nearly 2GB larger than the EIP can hold.

EIP = FFFFFFF5 + 7FFFFFFF truncates the carry out of the 32-bit width,
rolling the EIP over like an odometer: 7FFFFFF4

    Before we proceed with any interpretation of these facts lets first come
    to a mutual agreement about these facts themselves.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeff Barnett@21:1/5 to olcott on Mon Aug 24 10:29:08 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 11:35 PM, olcott wrote:
    On 8/24/2020 12:15 AM, Jeff Barnett wrote:
    On 8/23/2020 9:57 PM, olcott wrote:
    The abstract model of computation specified by the x86 language
    provides access to unlimited memory.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the
    instruction. (AKA the most basic semantics of the instruction).

    The syntax of this language specifies that relative jumps can be made
    to 0x7FFFFFFF bytes above the maximum value that the EIP register can
    hold.
When this syntax is implemented having semantics that it specifies then:
    (a) Jumps could be made to memory location addresses that would not
    fit in the EIP register. This entails that the EIP register would
    have to roll over like an odometer.

    (b) This higher memory would have to exist.

    (c) This process could be repeated an unlimited number of times.

    I see that the prior version lasted for less than 20 minutes. You
    really shouldn't masturbate and type at the same time. It breaks your
    limited ability to concentrate. You keep flipping crap out there
    without a clue. Try thinking about what you are typing. You are a joke
    - no one can take you seriously when you keep spinning out one mistake
    laden email after another. I suppose those copyright symbols you
    include are meant to protect your readers. If someone quoted you as a
    serious source, they would be declared to be a nut case. So my serious
advice to you: get your hand out of your pants and think. Also don't
    type until thought penetrates your skull.

    Those that get pleasure out of denigrating others prove how tiny their
    soul is. I am pleased to see that you actually really understand some of
    this stuff.

If you did, these threads would be far fewer and far shorter. You type
and assert without knowledge or thought. That's just a form of mental
masturbation. You are so certain and so wrong so often that I'm sure
records have been set by you for display of total ignorance of a subject
in which you wish to excel. Whether a troll or an ignoramus makes little
difference; if the former, you are starting to believe your nonsense.
    --
    Jeff Barnett

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Mon Aug 24 11:40:35 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to addresses
within the current 4GB block then it becomes rather obvious that the x86
language exactly as it currently exists specifies access to unlimited
    memory.

    If all absolute addresses refer to the 'current block', how exactly does
    one identify which block is 'current'? AFAIK the x86 does not have a
    'current block pointer'.

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Jeff Barnett on Mon Aug 24 13:24:03 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 11:29 AM, Jeff Barnett wrote:
    On 8/23/2020 11:35 PM, olcott wrote:
    On 8/24/2020 12:15 AM, Jeff Barnett wrote:
    On 8/23/2020 9:57 PM, olcott wrote:
    The abstract model of computation specified by the x86 language
    provides access to unlimited memory.

    The x86 language is defined by its instructions:
    (a) Exactly how each instruction is encoded as a sequence of bytes.
    (b) The primary change to the machine state caused by executing the
    instruction. (AKA the most basic semantics of the instruction).

    The syntax of this language specifies that relative jumps can be
    made to 0x7FFFFFFF bytes above the maximum value that the EIP
    register can hold.
    When this syntax is implemented having semantics that it specifies
    then:

    (a) Jumps could be made to memory location addresses that would not
    fit in the EIP register. This entails that the EIP register would
    have to roll over like an odometer.

    (b) This higher memory would have to exist.

    (c) This process could be repeated an unlimited number of times.

    I see that the prior version lasted for less than 20 minutes. You
    really shouldn't masturbate and type at the same time. It breaks your
    limited ability to concentrate. You keep flipping crap out there
    without a clue. Try thinking about what you are typing. You are a
    joke - no one can take you seriously when you keep spinning out one
    mistake laden email after another. I suppose those copyright symbols
    you include are meant to protect your readers. If someone quoted you
    as a serious source, they would be declared to be a nut case. So my
    serious advise to you: get your hand out of your pants and think.
    Also don't type until thought penetrates your skull.

    Those that get pleasure out of denigrating others prove how tiny their
    soul is. I am pleased to see that you actually really understand some
    of this stuff.

If you did, these threads would be far fewer and far shorter. You type
and assert without knowledge or thought. That's just a form of mental
masturbation. You are so certain and so wrong so often that I'm sure
records have been set by you for display of total ignorance of a subject
in which you wish to excel. Whether a troll or an ignoramus makes little
difference; if the former, you are starting to believe your nonsense.

    The reasonably plausible basis for such an assessment is that I have
    been trying to convey meanings using conventional terminology as the
    first approximation of the meanings that I intend to convey because
there are no actual terms of the art that convey the meanings that I
intend.

    Initially I made sure to not learn the precise meanings of these
conventional terms of the art so that my thinking would not be
    confined to the box of conventional thinking.

Ideas such as the Sapir-Whorf hypothesis show that language can confine
thinking by limiting the ideas that can possibly be expressed.

    The strong version says that language determines thought and that
linguistic categories limit and determine cognitive categories.
https://en.wikipedia.org/wiki/Linguistic_relativity

    Now that the ideas that I need to convey have sufficiently gelled in my
    own mind there is no risk of learning the conventional terms of the art.
    For the near future I will focus on my computer science work thus coding
    is mostly what I will be doing. My code speaks for itself.

    The key idea that I need to convey now is:
    The abstract model of computation defined by the x86 language specifies
    access to unlimited memory.

    I may post tested x86 machine language code to support this position.
    This machine language code would begin @ 0x100, zero out the preceding
256 bytes, move itself to a memory address that is 256 bytes higher, and
    then do a relative jump to that address. I would be able to demonstrate
    this code by an execution trace.
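No code ever appears in the thread, so as an illustration only, here is a minimal Python sketch of the relocation loop described above. The function name and the dictionary standing in for memory are my own inventions, not anything from the posts; Python's unbounded integers are doing the work that fixed-width x86 registers cannot, which is the very point in dispute.

```python
# Illustrative sketch only: this emulates the *described* behaviour of the
# self-relocating program using Python's unbounded integers as addresses.
# A real EIP is a fixed-width register, which is exactly what the rest of
# this thread argues about.

def run_relocating_loop(cycles):
    """Start with the 'code' at 0x100; each cycle zeroes the 256 bytes
    below the current base, then takes a relative jump 0x100 higher."""
    memory = {}      # sparse stand-in for an unbounded address space
    base = 0x100     # code assumed loaded at address 0x100
    for _ in range(cycles):
        for addr in range(base - 0x100, base):
            memory[addr] = 0     # zero out the preceding 256 bytes
        base += 0x100            # 'copy' the code higher and jump to it
    return base, memory

base, memory = run_relocating_loop(4)
# After four cycles the code sits at 0x500 and bytes 0x000-0x3FF are zero.
```

The loop can run forever here only because Python integers never wrap; reduce `base` modulo 2**32 and it revisits the same 4GB of addresses.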

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Mon Aug 24 13:26:56 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    If all absolute addresses refer to the 'current block', how exactly does
    one identify which block is 'current'? AFAIK the x86 does not have a
    'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.
    There are an unlimited number of 4GB blocks before and after the current
    4GB block.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Mon Aug 24 13:23:54 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    If all absolute addresses refer to the 'current block', how exactly
    does one identify which block is 'current'? AFAIK the x86 does not
    have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the current
    4GB block.

    So how do I know which of those blocks the EIP is pointing at? Talking
    about the 'current' 4GB block is meaningless unless you have some way of differentiating between these blocks.

    What changes in the system when it moves from one of these 4GB blocks to
    the next or previous block?

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Mon Aug 24 16:02:07 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather
    obvious that the x86 language exactly as it currently exists
    specifies access to unlimited memory.

    If all absolute addresses refer to the 'current block', how exactly
    does one identify which block is 'current'? AFAIK the x86 does not
    have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at? Talking
    about the 'current' 4GB block is meaningless unless you have some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current location
    of the Turing machine tape head.

    What changes in the system when it moves from one of these 4GB blocks to
    the next or previous block?

    André


    A relative jump into a next 4GB memory block or a prior 4GB memory block
    only changes the value of the EIP register. Relative memory addressing
    allows code and data to be copied between 4GB boundaries.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Mon Aug 24 18:18:29 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 4:14 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 10:42 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
In other words you continue to refrain from agreeing that this is true:

If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.

    The antecedent is false,

    You and I both know that implementation details are not a part of an
    abstract model of computation itself.

    We do.

    You and I both know that the capability of moving a tape head forward
    relative to its current location an unlimited number of times would
    define access to unlimited storage.

    Provided you can read and write the data, and go back as well as
    forward. But given your disliking of details, that's quibbling.


    Those extra details are required.

    That the abstract model of computation defined by the x86 language
    allows its instruction pointer and its data pointer to move forward
    relative to their current location an unlimited number of times also
    very obviously defines access to unlimited storage.
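The tape-head property being appealed to here is easy to state precisely. As a hedged sketch with names of my own choosing: a two-way unbounded tape can be modelled as a dictionary keyed by arbitrary integers, so the head may move forward or backward any number of times without ever running out of cells.

```python
from collections import defaultdict

class Tape:
    """Minimal two-way unbounded tape: any integer is a valid cell index,
    and unvisited cells read as the blank symbol."""

    def __init__(self, blank=0):
        self.cells = defaultdict(lambda: blank)
        self.head = 0

    def read(self):
        return self.cells[self.head]

    def write(self, symbol):
        self.cells[self.head] = symbol

    def move(self, delta):
        self.head += delta   # Python ints never wrap, unlike EIP/RIP

tape = Tape()
tape.move(2 ** 40)   # far beyond any 32-bit address
tape.write(1)
```

The contrast with the x86 is that `head` here has no fixed width; a 32-bit or 64-bit register cannot index cells this way without wrapping.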

No, we don't know that. The architecture of the x86 is very clear about
what relative addresses are and how they are calculated. RIP is a
fixed-width register.

    -- You and I both know that implementation details are not a part of an
    -- abstract model of computation itself.

The Intel architecture of the x86 is a physical implementation
    detail that is not any part of the abstract model of computation
    defined by the x86 language.

    Don't be silly. That registers have a fixed width is not an
    implementation detail.

    You and I also both know that when the physical hardware
    implementations of this abstract model place limits on the access to
    storage that these limits are not a part of the abstract model itself.

    Yes, we do.

    Good.

Thus we can conclude that the abstract model of computation defined by
the x86 language is fully Turing complete pertaining to the requirement
that a Turing machine has access to unlimited memory.

    No we can't conclude that.

You might not understand the abstract model of computation defined by
    the x86 language well enough to conclude that.

    Write the loop I suggested and I will admit I was wrong.
    Alternatively,
    (the alternative was to admit you can't)

    Writing an x86 program that would fill an unlimited amount of memory
    with zeroes requires understanding the details of how the abstract
    model of computation defined by the x86 language overcomes memory
    limits with relative addressing.

    Until you understand this you won't be able to understand the program.

That is a cowardly answer and I struggle to believe that you are being
sincere, because it is such a childish excuse. The truth is you know you
can't do as you boast. It's as plain as day from the description of how
the x86 architecture works.

    But suit yourself. As I said, boasting and insulting is the main
    pleasure for a crank like you, and you won't stop just because I call
    you out on it. Have fun.


    I don't disrespect you. I disrespect your disrespecting me.
    The code is completed, it is only 65 bytes of x86 machine code.

    The code begins @ machine address 0x100. Fills the 256 bytes preceding
    it with “@@@@”. Dynamically modifies its own first instruction to refer
    to its new code address. Copies itself to a new address that is exactly
    0x100 bytes above its current code address and then does a relative jump
    to this new address.

    I had the emulator terminate its execution after the first four full
    cycles. I output the first 1024 bytes of memory before and after
    execution and output complete execution trace of all of the execution.
    This is 23 pages long.

    According to the meaning of the syntax of the x86 language this program
    would continue to operate after it completed its execution @ machine
    address: FFFFFF00.

    The only reason that it would not continue to operate would be that the physical implementation of the abstract model did not implement the
    semantics specified by the language syntax.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Mon Aug 24 20:21:58 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 7:59 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
    <cut>
    But suit yourself. As I said, boasting and insulting is the main
    pleasure for a crank like you, and you won't stop just because I call
    you out on it. Have fun.

    I don't disrespect you. I disrespect your disrespecting me.
    The code is completed, it is only 65 bytes of x86 machine code.

    Another empty boast -- still no code. You can't seem to stop doing it.
    Or maybe repeating the excuse for not posting code -- that it would be "pearls before swine" -- would not look good next to your claim that you don't disrespect me!

    The code begins @ machine address 0x100. Fills the 256 bytes preceding
    it with “@@@@”. Dynamically modifies its own first instruction to
    refer to its new code address. Copies itself to a new address that is
    exactly 0x100 bytes above its current code address and then does a
    relative jump to this new address.

    Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
    worded description. Despite the poor wording, this is clearly not the
    code you said you could produce (it's not in any way a linked list) but that's OK. You say enough for me to know that even this more limited
    loop does not meet your objective. Post the code and you can find out
why it can only access a finite amount of memory. Don't post the code and
you can boast away to your heart's content. It's up to you.

    I had the emulator terminate its execution after the first four full
    cycles. I output the first 1024 bytes of memory before and after
    execution and output complete execution trace of all of the execution.
    This is 23 pages long.

    Wow! You wrote code and got 23 pages of output. You must be right.
    I'll call the editor of the JACM right away.

    According to the meaning of the syntax of the x86 language this
    program would continue to operate after it completed its execution @
    machine address: FFFFFF00.

    By studiously hiding the code you can avoid anyone being able to be
    specific, but assuming a reasonable interpretation of your non-code
    waffle, the jumps will always be to addresses within the current code segment. I'm pretty sure there are other problems with it (hint: you
    talk of EIP and not RIP so data access is also a problem) but your
    hiding the code protects it from more specific analysis.

    The only reason that it would not continue to operate would be that
    the physical implementation of the abstract model did not implement
    the semantics specified by the language syntax.

    No. For more details, post the code.


    You have not yet shown that you understand the subject matter well
    enough that you would not baselessly denigrate it. You have shown that
    you are capable of understanding the subject matter well enough by your reference to RIP relative addressing.

    I do have to admit that your idea of creating code such as this was a
    great idea. By the fully executing code it is conclusively proven that
    the unlimited memory access provided by the x86 model of computation is
    fully operational for actual coding and not merely a hypothetical
    theoretical construct that does not work in practice.

RIP relative addressing is much simpler: data can be read and written
directly relative to the instruction pointer. It is much easier to see
that RIP relative addressing more directly provides an abstract model of
computation with unlimited memory access.
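For concreteness, this is roughly how an x86-64 RIP-relative effective address is computed (a sketch; the function name is mine): a signed 32-bit displacement is added to the address of the next instruction, with the sum truncated to 64 bits. The truncation is why the reachable address space, while enormous, is still finite.

```python
def rip_relative_ea(next_rip, disp32):
    """Effective address for x86-64 RIP-relative addressing (sketch).

    next_rip: address of the instruction *following* the current one.
    disp32:   the 32-bit displacement field, taken as a signed value.
    The result is reduced modulo 2**64, mirroring the fixed register width.
    """
    if disp32 & 0x80000000:          # sign-extend the 32-bit displacement
        disp32 -= 1 << 32
    return (next_rip + disp32) & 0xFFFFFFFFFFFFFFFF
```

For example, `rip_relative_ea(0x1000, 0x10)` gives 0x1010, and a displacement of 0xFFFFFFF0 (that is, -16) gives 0xFF0.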

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Mon Aug 24 21:29:34 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 8:21 PM, olcott wrote:
    On 8/24/2020 7:59 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
    <cut>
    But suit yourself.  As I said, boasting and insulting is the main
    pleasure for a crank like you, and you won't stop just because I call
    you out on it.  Have fun.

    I don't disrespect you. I disrespect your disrespecting me.
    The code is completed, it is only 65 bytes of x86 machine code.

    Another empty boast -- still no code.  You can't seem to stop doing it.
    Or maybe repeating the excuse for not posting code -- that it would be
    "pearls before swine" -- would not look good next to your claim that you
    don't disrespect me!

    The code begins @ machine address 0x100. Fills the 256 bytes preceding
    it with “@@@@”. Dynamically modifies its own first instruction to
    refer to its new code address. Copies itself to a new address that is
    exactly 0x100 bytes above its current code address and then does a
    relative jump to this new address.

    Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
    worded description.  Despite the poor wording, this is clearly not the
    code you said you could produce (it's not in any way a linked list) but
    that's OK.  You say enough for me to know that even this more limited
    loop does not meet your objective.  Post the code and you can find out
why it can only access a finite amount of memory.  Don't post the code and
you can boast away to your heart's content.  It's up to you.

    I had the emulator terminate its execution after the first four full
    cycles. I output the first 1024 bytes of memory before and after
    execution and output complete execution trace of all of the execution.
    This is 23 pages long.

    Wow!  You wrote code and got 23 pages of output.  You must be right.
    I'll call the editor of the JACM right away.

    According to the meaning of the syntax of the x86 language this
    program would continue to operate after it completed its execution @
    machine address: FFFFFF00.

    By studiously hiding the code you can avoid anyone being able to be
    specific, but assuming a reasonable interpretation of your non-code
    waffle, the jumps will always be to addresses within the current code
    segment.  I'm pretty sure there are other problems with it (hint: you
    talk of EIP and not RIP so data access is also a problem) but your
    hiding the code protects it from more specific analysis.

    The only reason that it would not continue to operate would be that
    the physical implementation of the abstract model did not implement
    the semantics specified by the language syntax.

    No.  For more details, post the code.


    You have not yet shown that you understand the subject matter well
    enough that you would not baselessly denigrate it. You have shown that

    I need to rephrase that: (I re-read your reply)

    You have proven that you do not understand the subject matter well
    enough and would baselessly denigrate it on the basis of your lack of understanding.

When you understand RIP relative addressing better you will more
directly see how it makes the instruction pointer equivalent to a Turing
machine tape head. An understanding of RIP relative addressing can then
be leveraged to see how the x86 model achieves the same thing in a more
complicated way.

    you are capable of understanding the subject matter well enough by your reference to RIP relative addressing.

    I do have to admit that your idea of creating code such as this was a
    great idea. By the fully executing code it is conclusively proven that
    the unlimited memory access provided by the x86 model of computation is
    fully operational for actual coding and not merely a hypothetical
    theoretical construct that does not work in practice.

RIP relative addressing is much simpler: data can be read and written
directly relative to the instruction pointer. It is much easier to see
that RIP relative addressing more directly provides an abstract model of
computation with unlimited memory access.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Mon Aug 24 20:30:25 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather
    obvious that the x86 language exactly as it currently exists
    specifies access to unlimited memory.

    If all absolute addresses refer to the 'current block', how exactly
    does one identify which block is 'current'? AFAIK the x86 does not
    have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at? Talking
    about the 'current' 4GB block is meaningless unless you have some way
    of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current location
    of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    What changes in the system when it moves from one of these 4GB blocks
    to the next or previous block?

If you actually answered the above maybe your analogy would become
clearer. What exactly changes in the system when a different 4GB block
    becomes the new 'current' block?

    André

    André


    A relative jump into a next 4GB memory block or a prior 4GB memory block
    only changes the value of the EIP register. Relative memory addressing
    allows code and data to be copied between 4GB boundaries.


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Tue Aug 25 08:56:43 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-25 08:34, olcott wrote:
    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the
    x86 language was an unlimited sequence of contiguous 4GB blocks
and the absolute addressing modes of the x86 language only
    referred to addresses within the current 4GB block then it
    becomes rather obvious that the x86 language exactly as it
    currently exists specifies access to unlimited memory.

    If all absolute addresses refer to the 'current block', how
    exactly does one identify which block is 'current'? AFAIK the x86
    does not have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you have
    some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    --- So how do I know which of those blocks the EIP is pointing at?
Just as a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Everywhere you are
is only relative to where you have been before.

    There's no analog to a Turing Machine head in the x86. The x86 uses
    random access memory unlike a Turing Machine. So whatever it is you are
    trying to claim here is entirely unclear.

    So say I have access to one of your modified x86s which has been running
    for some unspecified amount of time and I can examine any aspect of it.
    How do I determine *which* 4GB block the contents of the EIP is
    referring to?

    If you made a Turing Machine that counted to infinity and stored the
    result the relative index of your current location would eventually wrap across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting the
    numeric digit strings with a space character. Whether we used base 10
ASCII digits or a base 256 number system we will eventually have a
googolplex of 4GB blocks filled with the representation of a single
    integer value.

    RIP relative addressing of 64-bit architectures physically implements
    this abstract model of computation directly.

    No it doesn't. RIP is a 64-bit register which imposes an upper limit on
    the number of memory locations which can be addressed (and AFAIK no x86
    chip actually has a 64-bit address bus so the real limit will be
    considerably less).

    The abstract model itself
    specifies that an unlimited number of 0x7FFFFFFF jumps can be made to
    higher memory. When one of these jumps exceeds installed memory a

    Only if your relative addressing is done in something wider than
    32-bits. Relative addressing simply adds an offset to the EIP. Since
    this arithmetic is performed in 32-bits, it can never get to this
    hypothetical next block. It would wrap around to the beginning of the
    current block (except I still have no idea what 'current block' means).
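The wraparound claim made here can be checked with a few lines of arithmetic; the helper name is mine, and the sketch assumes the standard EIP-plus-rel32 semantics of a 32-bit relative jump, computed modulo 2**32.

```python
def eip_relative_jump(eip, rel32):
    """Target of a 32-bit relative jump: EIP plus a signed 32-bit offset,
    reduced modulo 2**32 as a fixed-width register forces."""
    if rel32 & 0x80000000:       # sign-extend the offset
        rel32 -= 1 << 32
    return (eip + rel32) & 0xFFFFFFFF

# A forward jump from near the top of the 4 GB space wraps around to low
# memory rather than reaching any hypothetical 'next block'.
target = eip_relative_jump(0xFFFFFF00, 0x200)   # wraps to 0x100
```

The same arithmetic also handles backward jumps: an offset of 0xFFFFFFF0 is -16, so a jump from 0x100 lands at 0xF0.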

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Tue Aug 25 09:34:13 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and
the absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather
obvious that the x86 language exactly as it currently exists
specifies access to unlimited memory.

    If all absolute addresses refer to the 'current block', how exactly
    does one identify which block is 'current'? AFAIK the x86 does not
    have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you have
    some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    --- So how do I know which of those blocks the EIP is pointing at?
Just as a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Everywhere you are
is only relative to where you have been before.

    If you made a Turing Machine that counted to infinity and stored the
    result the relative index of your current location would eventually wrap
    across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting the
    numeric digit strings with a space character. Whether we used base 10
ASCII digits or a base 256 number system we will eventually have a
googolplex of 4GB blocks filled with the representation of a single
    integer value.

    RIP relative addressing of 64-bit architectures physically implements
    this abstract model of computation directly. The abstract model itself specifies that an unlimited number of 0x7FFFFFFF jumps can be made to
    higher memory. When one of these jumps exceeds installed memory a
hardware fault occurs.
  • From olcott@21:1/5 to Ben Bacarisse on Tue Aug 25 09:55:58 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 9:37 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/24/2020 7:59 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
    <cut>
    But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call
you out on it. Have fun.

    I don't disrespect you. I disrespect your disrespecting me.
    The code is completed, it is only 65 bytes of x86 machine code.

    Another empty boast -- still no code. You can't seem to stop doing it.
    Or maybe repeating the excuse for not posting code -- that it would be
"pearls before swine" -- would not look good next to your claim that you
don't disrespect me!

The code begins @ machine address 0x100. Fills the 256 bytes preceding
it with “@@@@”. Dynamically modifies its own first instruction to
    refer to its new code address. Copies itself to a new address that is
    exactly 0x100 bytes above its current code address and then does a
    relative jump to this new address.

    Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
    worded description. Despite the poor wording, this is clearly not the
    code you said you could produce (it's not in any way a linked list) but
    that's OK. You say enough for me to know that even this more limited
    loop does not meet your objective. Post the code and you can find out
why it can only access a finite amount of memory. Don't post the code and
you can boast away to your heart's content. It's up to you.

    I had the emulator terminate its execution after the first four full
    cycles. I output the first 1024 bytes of memory before and after
execution and output complete execution trace of all of the execution.
This is 23 pages long.

    Wow! You wrote code and got 23 pages of output. You must be right.
    I'll call the editor of the JACM right away.

    According to the meaning of the syntax of the x86 language this
    program would continue to operate after it completed its execution @
    machine address: FFFFFF00.

    By studiously hiding the code you can avoid anyone being able to be
    specific, but assuming a reasonable interpretation of your non-code
    waffle, the jumps will always be to addresses within the current code
    segment. I'm pretty sure there are other problems with it (hint: you
    talk of EIP and not RIP so data access is also a problem) but your
    hiding the code protects it from more specific analysis.

    The only reason that it would not continue to operate would be that
    the physical implementation of the abstract model did not implement
    the semantics specified by the language syntax.

    No. For more details, post the code.

    You have not yet shown that you understand the subject matter well
    enough that you would not baselessly denigrate it.

    Oh dear. Why are you are afraid of baseless criticism? I'm not.

    When 20% of the population utterly rejects medical science on the basis
    of persuasive political rhetoric it becomes clear that baseless
    denigrating assertions can even cost many thousands of lives. If they
    can cost thousands of lives they can certainly damage reputations.

No, you don't want to show the code because you know its behaviour has
finite limits imposed by the silly choice of architecture.

    The abstract model of computation defined by the meaning of the syntax
    of the language specifies access to unlimited memory.

    RIP relative addressing of 64-bit architectures physically implements
    this abstract model of computation directly. The abstract model itself specifies that an unlimited number of 0x7FFFFFFF jumps can be made to
    higher memory. When one of these jumps exceeds installed memory a
    hardware fault occurs. The hardware fault is certainly not any part of
the abstract model.

Anyway, I see it's business as usual at crank HQ: boast, boast and boast
    some more, but never post anything technical.

I just did post a key technical detail of RIP relative addressing.

    Certainly not the
    impossible TMs that you lied about having in 2018. And now not even a
    short loop in x86 code that has been your main excuse for not posting
    the TMs you didn't have.

    But why choose this to be wrong about? Everyone accepts that a single

    You are apparently bright enough to see that I am not wrong. You are not currently knowledgeable enough to see that I am correct and you shun
    acquiring this knowledge so that you can continue to denigrate my work
    without actually lying.

    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in seeing
    that I am correct is understanding RIP relative addressing.

    conceptual shift (such as providing unbounded registers) will make an x86-like notation Turing complete. And if you have to make that shift,
    you could just as well use a very much simpler architecture making
    whatever you want to do with it much clearer. And surely designing and emulating such a thing would enable you to delay having to post any
    evidence of your non-existent H and H^ just as much as whatever it is
    you are doing with the x86?

    At least you are managing to deflect attention away from the big lie and
    the big question you will never answer -- what was it you really had
    when you claimed to have those two impossible Turing machines in Dec
    2018. So well done on that front.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From André G. Isaak@21:1/5 to olcott on Tue Aug 25 09:59:56 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-25 09:34, olcott wrote:
    On 8/25/2020 9:56 AM, André G. Isaak wrote:
    On 2020-08-25 08:34, olcott wrote:
    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
    absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    If all absolute addresses refer to the 'current block', how
    exactly does one identify which block is 'current'? AFAIK the
    x86 does not have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you
    have some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    --- So how do I know which of those blocks the EIP is pointing at?
    Just as a Turing Machine tape head has no uniquely defined index
    identifier, neither does the sequence of 4GB blocks. Where you are is
    only relative to where you have been before.

    There's no analog to a Turing Machine head in the x86. The x86 uses
    random access memory unlike a Turing Machine. So whatever it is you
    are trying to claim here is entirely unclear.

    So say I have access to one of your modified x86s which has been
    running for some unspecified amount of time and I can examine any
    aspect of it. How do I determine *which* 4GB block the contents of the
    EIP is referring to?


    You have no way of knowing that and no need to know that.

    If there is no way of knowing this, then how does this hypothetical
    computer know which block it is working with?

    The values in
    the current 4GB block tell you everything that you need to know. One of
    the things that you might need to know is that the result of a
    computation spans however many 4GB blocks until a space character is
    encountered. Such a TM could count to infinity and store the result as
    space delimited
    Don't call this a TM. You are not describing a TM, you are describing
    some vaguely defined modification of x86.

    Your goal here seemed to be to somehow demonstrate that x86 architecture
    is Turing Complete (which it isn't), or that you could pretend there was
    some Turing Complete 'language' based on x86 assembly language. But
    being written in a language which is Turing Complete doesn't make
    something into a Turing Machine.

    And since your whole reason for doing this was, as far as I can tell, to somehow justify your misuse of the term 'Turing Machine', then Turing Completeness isn't even the property you want. A language is Turing
    Complete if it is capable of performing any computation that a Turing
    Machine can perform. But if you want to claim that there is some Turing
    Machine that corresponds to your still-not-presented code, then you
    would need to show that every computation which can be performed by an
    X86 can be performed by a Turing Machine. That's not Turing
    Completeness. That's something else.

    And it still wouldn't make your code a Turing Machine.

    ASCII digit strings.

    If you made a Turing Machine that counted to infinity and stored the
    result the relative index of your current location would eventually
    wrap across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting the
    numeric digit strings with a space character. Whether we used base 10
    ASCII digits or a base 256 number system we will eventually have a
    googolplex of 4GB blocks filled with the representation of a single
    integer value.
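    The unbounded-counter computation described above can be sketched in a
    few lines of Python (purely an illustration, not anyone's actual code;
    the Python list stands in for the unbounded tape, which is exactly the
    resource a fixed-width architecture lacks):

```python
# Illustrative sketch only: a counter that writes the numbers 1..n onto a
# tape-like list as space-delimited ASCII digit strings, following the
# Turing Machine convention described above.

def count_on_tape(n):
    """Write the decimal representations of 1..n, space delimited."""
    tape = []
    for i in range(1, n + 1):
        tape.extend(str(i))  # one ASCII digit per tape cell
        tape.append(' ')     # space delimiter between numbers
    return tape

# The tape grows without any architectural bound:
assert ''.join(count_on_tape(3)) == '1 2 3 '
```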

    RIP relative addressing of 64-bit architectures physically implements
    this abstract model of computation directly.

    No it doesn't. RIP is a 64-bit register which imposes an upper limit
    on the number of memory locations which can be addressed (and AFAIK no
    x86 chip actually has a 64-bit address bus so the real limit will be
    considerably less).


    An abstract machine having a tape head that can be advanced in 0 to 0x7FFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for memory addressing based on displacement from the current memory address
    is relative addressing.

    Relative addressing is always relative to something. If it is relative
    to the current block, you need a way of specifying which block is the
    current block,

    You also need a way of specifying which block is the next block.

    And it should be noted that relative addresses are translated into
    absolute addresses whenever memory is actually accessed. So all you're
    doing here is imagining a system in which there are unbounded address
    registers which you don't talk about that somehow pick out each 4GB
    block. Then you try to pretend this unbounded address doesn't exist by
    invoking all sorts of needless relative offsets. Wouldn't it be much
    simpler to simply state you are talking about some hypothetical system
    which is like the x86 except it has no bounds on addressing?
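    The objection above can be made concrete: in any "unlimited sequence of
    4GB blocks" scheme, the unbounded quantity has to live somewhere. A
    hypothetical Python sketch (the (block, offset) representation and the
    name add_offset are illustrative assumptions, not anything from the x86
    specification) makes the hidden unbounded register explicit:

```python
# Hypothetical sketch: addresses as (block, offset) pairs, where the
# block index is an arbitrary-precision Python int -- precisely the
# unbounded address register that a real x86 does not have.

BLOCK = 1 << 32  # one 4GB block

def add_offset(block, offset, disp):
    """Advance a (block, offset) address by a signed displacement,
    carrying across 4GB block boundaries."""
    flat = block * BLOCK + offset + disp
    return divmod(flat, BLOCK)  # (new_block, new_offset)

# Crossing a block boundary silently consumes the unbounded block index:
assert add_offset(0, 0xFFFFFFF0, 0x20) == (1, 0x10)
```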

    RIP relative addressing directly implements this in hardware. The

    RIP implements nothing of the sort. RIP is simply a 64-bit instruction
    pointer register.

    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When a
    jump is made outside of the bounds of installed memory a hardware fault occurs. Hardware faults are not part of any abstract model.

    That RIP relative addressing happens to be currently implemented on
    a 64-bit architecture is an implementation detail that is not a part of

    Um. The 'R' in RIP specifies a 64-bit register. That's what 'R' means.
    There is no x86 architecture that is defined in some register-width
    independent fashion.

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 11:06:23 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.

    Clearly? How is that clear?


    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for
    memory addressing based on displacement from the current memory address
    is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When a
    jump is made outside of the bounds of installed memory a hardware fault
    occurs. Hardware faults are not part of any abstract model.

    That RIP relative addressing happens to be currently implemented on
    a 64-bit architecture is an implementation detail that is not a part of
    the abstract model of computation itself.

    As far as I can tell, all you can reasonably say is that relative
    addressing doesn't specify a limit. If you try to do a relative addressing
    operation that goes beyond the range of addresses the fixed-width
    instruction pointer can hold, there is as far as I know no specification
    in the x86 architecture of what happens. (Or perhaps there is such a specification. If so, perhaps you can find it in some document.)

    To implement the semantics specified by the x86 language would require
    memory organized as an unlimited sequence of contiguous 4GB blocks.
    Absolute addressing modes of the x86 language would then only refer to
    addresses within the current 4GB block. The concept of a current 4GB
    block would be analogous to the current location of a Turing Machine
    tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does define unlimited memory. Only the physical implementations of this abstract
    model place limits on memory access.

    Why would a specification for a real-world CPU specify unlimited memory
    when no actual hardware has unlimited memory?

    RIP relative addressing is a key example of this. Code written for the
    RIP relative addressing model will run exactly the same on every memory architecture that supports it.

    If we suddenly have a technical breakthrough such that googolplex-sized
    RAM chips can be produced, RIP relative addressing machine code would run
    from this RAM unmodified.

    Can you cite some x86 document that *actually says* anything about
    access to unlimited memory? I'm asking for something other than your
    inference of unlimited memory from the lack of a specified limit.


    Look into RIP relative addressing as a simpler way of understanding the
    same thing that I am saying.

    I understand it would be convenient *for you* if the "x86 language"
    specified access to unlimited memory. That doesn't imply that it does.

    It will make my Halting Problem refutation provided as x86 code have
    much higher credibility and complete applicability to actual Turing
    Machines.

    BTW, this would all be irrelevant if you simply stated that, for your purposes, you were working with an extended x86 that allows for
    unlimited memory.

    Not really, not at all. As long as it is understood that the abstract
    model of computation defined by the x86 language is fully Turing
    Complete physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.

    We might for example create a higher level language defined in terms of
    32-bit signed RIP relative addressing that is itself 100% Turing
    Complete and then translate this higher level language into specific
    hardware platforms.

    Any such program needing less than 4GB RAM would run on existing 32-bit
    platforms. Any such program needing less than 1.0 TB of RAM would run on
    existing 64-bit platforms.

    The key advantage of this is that theory of computation concepts can be empirically validated concretely using a much higher level abstraction.
    The "C" programming language could be mapped to such a 32-bit signed RIP relative addressing model such that Turing complete machines could be
    written directly in "C".

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Tue Aug 25 10:34:35 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 9:56 AM, André G. Isaak wrote:
    On 2020-08-25 08:34, olcott wrote:
    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
    absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    If all absolute addresses refer to the 'current block', how
    exactly does one identify which block is 'current'? AFAIK the x86
    does not have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.
    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you
    have some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    --- So how do I know which of those blocks the EIP is pointing at?
    Just as a Turing Machine tape head has no uniquely defined index
    identifier, neither does the sequence of 4GB blocks. Where you are is
    only relative to where you have been before.

    There's no analog to a Turing Machine head in the x86. The x86 uses
    random access memory unlike a Turing Machine. So whatever it is you are trying to claim here is entirely unclear.

    So say I have access to one of your modified x86s which has been running
    for some unspecified amount of time and I can examine any aspect of it.
    How do I determine *which* 4GB block the contents of the EIP is
    referring to?


    You have no way of knowing that and no need to know that. The values in
    the current 4GB block tell you everything that you need to know. One of
    the things that you might need to know is that the result of a
    computation spans however many 4GB blocks until a space character is
    encountered. Such a TM could count to infinity and store the result as
    space delimited ASCII digit strings.

    If you made a Turing Machine that counted to infinity and stored the
    result the relative index of your current location would eventually
    wrap across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting the
    numeric digit strings with a space character. Whether we used base 10
    ASCII digits or a base 256 number system we will eventually have a
    googolplex of 4GB blocks filled with the representation of a single
    integer value.

    RIP relative addressing of 64-bit architectures physically implements
    this abstract model of computation directly.

    No it doesn't. RIP is a 64-bit register which imposes an upper limit on
    the number of memory locations which can be addressed (and AFAIK no x86
    chip actually has a 64-bit address bus so the real limit will be
    considerably less).


    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for
    memory addressing based on displacement from the current memory address
    is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When a
    jump is made outside of the bounds of installed memory a hardware fault
    occurs. Hardware faults are not part of any abstract model.

    That RIP relative addressing happens to be currently implemented on
    a 64-bit architecture is an implementation detail that is not a part of
    the abstract model of computation itself.

    The abstract model itself specifies that an unlimited number of
    0x7FFFFFFF jumps can be made to higher memory. When one of these jumps
    exceeds installed memory a

    Only if your relative addressing is done in something wider than
    32-bits. Relative addressing simply adds an offset to the EIP. Since
    this arithmetic is performed in 32-bits, it can never get to this hypothetical next block. It would wrap around to the beginning of the
    current block (except I still have no idea what 'current block' means).
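    The wraparound point above is easy to check concretely. A minimal
    Python sketch of 32-bit relative-jump arithmetic (an illustration of
    the modular arithmetic only, not of any CPU's full semantics):

```python
# Sketch of 32-bit EIP arithmetic: adding a signed rel32 displacement
# modulo 2**32 can never leave a 4GB address space, no matter how many
# jumps are taken.

MASK32 = 0xFFFFFFFF

def jmp_rel32(eip, rel32):
    """New EIP after a near jump with a signed 32-bit displacement."""
    assert -0x80000000 <= rel32 <= 0x7FFFFFFF
    return (eip + rel32) & MASK32  # wraps around, stays within 4GB

# Even repeated maximum-displacement jumps only cycle within 4GB:
eip = 0
for _ in range(10):
    eip = jmp_rel32(eip, 0x7FFFFFFF)
assert eip == (10 * 0x7FFFFFFF) & MASK32
```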

    André




    --
    Copyright 2020 Pete Olcott

  • From Keith Thompson@21:1/5 to olcott on Tue Aug 25 08:34:19 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    [...]
    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.

    Clearly? How is that clear?

    As far as I can tell, all you can reasonably say is that relative
    addressing doesn't specify a limit. If you try to do a relative addressing
    operation that goes beyond the range of addresses the fixed-width
    instruction pointer can hold, there is as far as I know no specification
    in the x86 architecture of what happens. (Or perhaps there is such a specification. If so, perhaps you can find it in some document.)

    That's not at all the same thing as specifying unlimited memory.

    Why would a specification for a real-world CPU specify unlimited memory
    when no actual hardware has unlimited memory?

    Can you cite some x86 document that *actually says* anything about
    access to unlimited memory? I'm asking for something other than your
    inference of unlimited memory from the lack of a specified limit.

    I understand it would be convenient *for you* if the "x86 language"
    specified access to unlimited memory. That doesn't imply that it does.

    BTW, this would all be irrelevant if you simply stated that, for your
    purposes, you were working with an extended x86 that allows for
    unlimited memory.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From olcott@21:1/5 to All on Tue Aug 25 12:00:35 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 10:59 AM, André G. Isaak wrote:
    On 2020-08-25 09:34, olcott wrote:
    On 8/25/2020 9:56 AM, André G. Isaak wrote:
    On 2020-08-25 08:34, olcott wrote:
    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
    absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    If all absolute addresses refer to the 'current block', how
    exactly does one identify which block is 'current'? AFAIK the
    x86 does not have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB block.

    There are an unlimited number of 4GB blocks before and after the
    current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you
    have some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    --- So how do I know which of those blocks the EIP is pointing at?
    Just as a Turing Machine tape head has no uniquely defined index
    identifier, neither does the sequence of 4GB blocks. Where you are is
    only relative to where you have been before.

    There's no analog to a Turing Machine head in the x86. The x86 uses
    random access memory unlike a Turing Machine. So whatever it is you
    are trying to claim here is entirely unclear.

    So say I have access to one of your modified x86s which has been
    running for some unspecified amount of time and I can examine any
    aspect of it. How do I determine *which* 4GB block the contents of
    the EIP is referring to?


    You have no way of knowing that and no need to know that.

    If there is no way of knowing this, then how does this hypothetical
    computer know which block it is working with?

    The values in the current 4GB block tell you everything that you need
    to know. One of the things that you might need to know is that the
    result of a computation spans however many 4GB blocks until a space
    character is encountered. Such a TM could count to infinity and store
    the result as space delimited

    Don't call this a TM. You are not describing a TM, you are describing
    some vaguely defined modification of x86.

    Your goal here seemed to be to somehow demonstrate that x86 architecture
    is Turing Complete (which it isn't), or that you could pretend there was
    some Turing Complete 'language' based on x86 assembly language. But
    being written in a language which is Turing Complete doesn't make
    something into a Turing Machine.

    And since your whole reason for doing this was, as far as I can tell, to somehow justify your misuse of the term 'Turing Machine', then Turing Completeness isn't even the property you want. A language is Turing
    Complete if it is capable of performing any computation that a Turing
    Machine can perform. But if you want to claim that there is some Turing Machine that corresponds to your still-not-presented code, then you
    would need to show that every computation which can be performed by an
    X86 can be performed by a Turing Machine. That's not Turing
    Completeness. That's something else.

    And it still wouldn't make your code a Turing Machine.

    ASCII digit strings.

    If you made a Turing Machine that counted to infinity and stored the
    result the relative index of your current location would eventually
    wrap across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting the
    numeric digit strings with a space character. Whether we used base
    10 ASCII digits or a base 256 number system we will eventually have
    a googolplex of 4GB blocks filled with the representation of a
    single integer value.

    RIP relative addressing of 64-bit architectures physically
    implements this abstract model of computation directly.

    No it doesn't. RIP is a 64-bit register which imposes an upper limit
    on the number of memory locations which can be addressed (and AFAIK
    no x86 chip actually has a 64-bit address bus so the real limit will
    be considerably less).


    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical name
    for memory addressing based on displacement from the current memory
    address is relative addressing.

    Relative addressing is always relative to something. If it is relative
    to the current block, you need a way of specifying which block is the
    current block,

    You also need a way of specifying which block is the next block.

    And it should be noted that relative addresses are translated into
    absolute addresses whenever memory is actually accessed. So all you're
    doing here is imagining a system in which there are unbounded address
    registers which you don't talk about that somehow pick out each 4GB
    block. Then you try to pretend this unbounded address doesn't exist by invoking all sorts of needless relative offsets. Wouldn't it be much
    simpler to simply state you are talking about some hypothetical system
    which is like the x86 except it has no bounds on addressing?

    RIP relative addressing directly implements this in hardware. The

    RIP implements nothing of the sort. RIP is simply a 64-bit instruction pointer register.

    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of

    Um. The 'R' in RIP specifies a 64-bit register. That's what 'R' means.
    There is no x86 architecture that is defined in some register-width independent fashion.

    André



    "C" is not Turing Complete, but it could be: If we mapped the "C"
    programming language to the abstract model of computation of 32-bit
    signed RIP relative addressing that is implemented in the 64-bit machine architecture then "C" would have Turing Complete access to unlimited memory.

    To keep this abstraction pure the generated code would never directly
    refer to any machine address. This ensures that the actual generated
    code is always Turing Complete. Post processing could translate this
    generated code into specific hardware architectures.

    RIP relative addressing implemented in the 64-bit architecture is not
    directly tied to any fixed sized addressing. This same abstract model of computation could be implemented in an unlimited number of physical
    address sizes.

    By doing this, theoretical computer science problems could be empirically validated using the much higher level abstraction of the "C" programming language. It is much easier to examine the overall structure of an
    algorithm using this much higher level of abstraction.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 12:51:51 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.

    Clearly? How is that clear?

    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical name
    for memory addressing based on displacement from the current memory
    address is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of the abstract model of computation itself.

    Yes, I read all that the first N times you posted it. I'm not
    failing to understand in a way that would be solved by repetition.
    I just disagree.


    Valid disagreement always has a basis provided.

    As far as I can tell, all you can reasonably say is that relative
    addressing doesn't specify a limit. If you try to do a relative addressing
    operation that goes beyond the range of addresses the fixed-width
    instruction pointer can hold, there is as far as I know no specification
    in the x86 architecture of what happens. (Or perhaps there is such a
    specification. If so, perhaps you can find it in some document.)

    To implement the semantics specified by the x86 language would require
    memory organized as an unlimited sequence of contiguous 4GB
    blocks. Absolute addressing modes of the x86 language would then only
    refer to addresses within the current 4GB block. The concept of a
    current 4GB block would be analogous to the current location of a
    Turing Machine tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does
    define unlimited memory. Only the physical implementations of this
    abstract model place limits on memory access.

    Why would a specification for a real-world CPU specify unlimited memory
    when no actual hardware has unlimited memory?

    RIP relative addressing is a key example of this. Code written for the
    RIP relative addressing model will run exactly the same on every
    memory architecture that supports it.

    If we suddenly have a technical breakthrough such that googolplex-sized
    RAM chips can be produced, RIP relative addressing machine code
    would run from this RAM unmodified.

    Can you cite some x86 document that *actually says* anything about
    access to unlimited memory? I'm asking for something other than your
    inference of unlimited memory from the lack of a specified limit.


    Look into RIP relative addressing as a simpler way of understanding
    the same thing that I am saying.

    I understand it would be convenient *for you* if the "x86 language"
    specified access to unlimited memory. That doesn't imply that it does.

    It will make my Halting Problem refutation provided as x86 code have
    much higher credibility and complete applicability to actual Turing
    Machines.

    BTW, this would all be irrelevant if you simply stated that, for your
    purposes, you were working with an extended x86 that allows for
    unlimited memory.

    Not really, not at all. As long as it is understood that the abstract
    model of computation defined by the x86 language is fully Turing
    Complete physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.

    We might for example create a higher level language defined in terms
    of 32-bit signed RIP relative addressing that is itself 100% Turing
    Complete and then translate this higher level language into specific
    hardware platforms.

    Any such program needing less than 4GB RAM would run on existing
    32-bit platforms. Any such program needing less than 1.0 TB of RAM
    would run on existing 64-bit platforms.

    The key advantage of this is that theory of computation concepts can
    be empirically validated concretely using a much higher level
    abstraction. The "C" programming language could be mapped to such a
    32-bit signed RIP relative addressing model such that Turing complete
    machines could be written directly in "C".

    Your interpretation of somebody's web page describing the x86 JMP
    instruction is that it implies access to unlimited memory. Since I have
    asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.


    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    This is a 1995 manual, thus a 32-bit architecture.
    https://en.wikipedia.org/wiki/List_of_Intel_microprocessors#Original_Pentium

    From https://c9x.me/x86/html/file_module_x86_id_147.html :

    A relative offset (rel8, rel16, or rel32) is generally specified as
    a label in assembly code, but at the machine code level, it is
    encoded as a signed 8-, 16-, or 32-bit immediate value. This value
    is added to the value in the EIP register. (Here, the EIP register
    contains the address of the instruction following the JMP
    instruction).

    The EIP register is 64 bits. That's part of the x86 architecture. The
    source you cite contradicts your claim.


    You are confusing the 32-bit EIP register with the 64-bit RIP register.

    Now if you want to assume an x86-like architecture that can access
    unlimited memory, that's fine.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Tue Aug 25 10:28:00 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.

    Clearly? How is that clear?

    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical name
    for memory addressing based on displacement from the current memory
    address is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That that RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of the abstract model of computation itself.

    Yes, I read all that the first N times you posted it. I'm not
    failing to understand in a way that would be solved by repetition.
    I just disagree.

    As far as I can tell, all you can reasonably say is that relative addressing
    doesn't specify a limit. If you try to do a relative addressing
    operation that goes beyond the range of addresses the fixed-width
    instruction pointer can hold, there is as far as I know no specification
    in the x86 architecture of what happens. (Or perhaps there is such a
    specification. If so, perhaps you can find it in some document.)

    To implement the semantics specified by the x86 language would require
    memory organized as an unlimited sequence of contiguous 4GB
    blocks. Absolute addressing modes of the x86 language would then only
    refer to addresses within the current 4GB block. The concept of a
    current 4GB block would be analogous to the current location of a
    Turing Machine tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does
    define unlimited memory. Only the physical implementations of this
    abstract model place limits on memory access.

    Why would a specification for a real-world CPU specify unlimited memory
    when no actual hardware has unlimited memory?

    RIP relative addressing is a key example of this. Code written for the
    RIP relative addressing model will run exactly the same on every
    memory architecture that supports it.

    If we suddenly have a technical breakthrough such that googolplex
    sized RAM chips can be produced, RIP relative addressing machine code
    would run from this RAM unmodified.

    Can you cite some x86 document that *actually says* anything about
    access to unlimited memory? I'm asking for something other than your
    inference of unlimited memory from the lack of a specified limit.


    Look into RIP relative addressing as a simpler way of understanding
    the same thing that I am saying.

    I understand it would be convenient *for you* if the "x86 language"
    specified access to unlimited memory. That doesn't imply that it does.

    It will make my Halting Problem refutation provided as x86 code have
    much higher credibility and complete applicability to actual Turing
    Machines.

    BTW, this would all be irrelevant if you simply stated that, for your
    purposes, you were working with an extended x86 that allows for
    unlimited memory.

    Not really, not at all. As long as it is understood that the abstract
    model of computation defined by the x86 language is fully Turing
    Complete physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.

    We might for example create a higher level language defined in terms
    of 32-bit signed RIP relative addressing that is itself 100% Turing
    Complete and then translate this higher level language into specific
    hardware platforms.

    Any such program needing less than 4GB RAM would run on existing
    32-bit platforms. Any such program needing less than 1.0 TB of RAM
    would run on existing 64-bit platforms.

    The key advantage of this is that theory of computation concepts can
    be empirically validated concretely using a much higher level
    abstraction. The "C" programming language could be mapped to such a
    32-bit signed RIP relative addressing model such that Turing complete
    machines could be written directly in "C".

    Your interpretation of somebody's web page describing the x86 JMP
    instruction is that it implies access to unlimited memory. Since I have
    asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.

    From https://c9x.me/x86/html/file_module_x86_id_147.html :

    A relative offset (rel8, rel16, or rel32) is generally specified as
    a label in assembly code, but at the machine code level, it is
    encoded as a signed 8-, 16-, or 32-bit immediate value. This value
    is added to the value in the EIP register. (Here, the EIP register
    contains the address of the instruction following the JMP
    instruction).

    The EIP register is 64 bits. That's part of the x86 architecture. The
    source you cite contradicts your claim.

    Now if you want to assume an x86-like architecture that can access
    unlimited memory, that's fine.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Tue Aug 25 12:39:24 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-25 11:00, olcott wrote:
    On 8/25/2020 10:59 AM, André G. Isaak wrote:
    On 2020-08-25 09:34, olcott wrote:
    On 8/25/2020 9:56 AM, André G. Isaak wrote:
    On 2020-08-25 08:34, olcott wrote:
    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of
    the x86 language was an unlimited sequence of contiguous 4GB
    blocks and the absolute addressing modes of the x86 language
    only referred to addresses within the current 4GB block then
    it becomes rather obvious that the x86 language exactly as it
    currently exists specifies access to unlimited memory.

    If all absolute addresses refer to the 'current block', how
    exactly does one identify which block is 'current'? AFAIK the
    x86 does not have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB
    block.

    There are an unlimited number of 4GB blocks before and after
    the current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you
    have some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    So how do I know which of those blocks the EIP is pointing at?
    Just as a Turing Machine tape head has no uniquely defined index
    identifier, neither does the sequence of 4GB blocks. Where you
    are is only relative to where you have been before.

    There's no analog to a Turing Machine head in the x86. The x86 uses
    random access memory unlike a Turing Machine. So whatever it is you
    are trying to claim here is entirely unclear.

    So say I have access to one of your modified x86s which has been
    running for some unspecified amount of time and I can examine any
    aspect of it. How do I determine *which* 4GB block the contents of
    the EIP is referring to?


    You have no way of knowing that and no need to know that.

    If there is no way of knowing this, then how does this hypothetical
    computer know which block it is working with?

    The values in the current 4GB tell you everything that you need to
    know. One of the things that you might need to know is that the
    result of a computation spans however many 4GB blocks until space
    character is encountered. Such a TM could count to infinity and store
    the result as space delimited

    Don't call this a TM. You are not describing a TM, you are describing
    some vaguely defined modification of x86.

    Your goal here seemed to be to somehow demonstrate that x86
    architecture is Turing Complete (which it isn't), or that you could
    pretend there was some Turing Complete 'language' based on x86
    assembly language. But being written in a language which is Turing
    Complete doesn't make something into a Turing Machine.

    And since your whole reason for doing this was, as far as I can tell,
    to somehow justify your misuse of the term 'Turing Machine', then
    Turing Completeness isn't even the property you want. A language is
    Turing Complete if it is capable of performing any computation that a
    Turing Machine can perform. But if you want to claim that there is
    some Turing Machine that corresponds to your still-not-presented code,
    then you would need to show that every computation which can be
    performed by an X86 can be performed by a Turing Machine. That's not
    Turing Completeness. That's something else.

    And it still wouldn't make your code a Turing Machine.

    ASCII digit strings.

    If you made a Turing Machine that counted to infinity and stored
    the result the relative index of your current location would
    eventually wrap across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting
    the numeric digit strings with a space character. Whether we used
    base 10 ASCII digits or a base 256 number system we will eventually
    have a googolplex of 4GB blocks filled with the representation of a
    single integer value.
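
    The space-delimited digit-string convention described above can be
    sketched in a few lines of C. The fixed buffer is merely a stand-in
    for unbounded tape storage; this is an illustration of the storage
    convention, not of any real machine.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch of the convention above: a counter whose successive
     * results are appended to a growing "tape" as space-delimited
     * base-10 ASCII digit strings. */
    int main(void) {
        char tape[64] = "";
        size_t used = 0;
        for (int n = 1; n <= 4; n++)
            used += (size_t)snprintf(tape + used, sizeof tape - used,
                                     "%d ", n);
        assert(strcmp(tape, "1 2 3 4 ") == 0);
        return 0;
    }
    ```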

    RIP relative addressing of 64-bit architectures physically
    implements this abstract model of computation directly.

    No it doesn't. RIP is a 64-bit register which imposes an upper limit
    on the number of memory locations which can be addressed (and AFAIK
    no x86 chip actually has a 64-bit address bus so the real limit will
    be considerably less).


    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical
    name for memory addressing based on displacement from the current
    memory address is relative addressing.

    Relative addressing is always relative to something. If it is relative
    to the current block, you need a way of specifying which block is the
    current block,

    You also need a way of specifying which block is the next block.

    And it should be noted that relative addresses are translated into
    absolute addresses whenever memory is actually accessed. So all you're
    doing here is imagining a system in which there are unbounded address
    registers which you don't talk about that somehow pick out each 4GB
    block. Then you try to pretend this unbounded address doesn't exist by
    invoking all sorts of needless relative offsets. Wouldn't it be much
    simpler to simply state you are talking about some hypothetical system
    which is like the x86 except it has no bounds on addressing?

    RIP relative addressing directly implements this in hardware. The

    RIP implements nothing of the sort. RIP is simply a 64-bit instruction
    pointer register.

    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That that RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of

    Um. The 'R' in RIP specifies a 64-bit register. That's what 'R' means.
    There is no x86 architecture that is defined in some register-width
    independent fashion.

    André



    "C" is not Turing Complete, but it could be: If we mapped the "C"
    programming language to the abstract model of computation of 32-bit
    signed RIP relative addressing that is implemented in the 64-bit
    machine architecture then "C" would have Turing Complete access to
    unlimited memory.

    Why are you suddenly talking about C? Before you were talking about x86.
    Is this a change of topic? If so, why are you giving it as a reply to my
    post?

    And why on earth would C be related to RIP relative addressing? C,
    unlike x86 assembly, is not tied to any specific processor. It makes no
    assumptions about which registers or addressing modes exist.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Tue Aug 25 12:30:06 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-25 11:51, olcott wrote:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:

    Your interpretation of somebody's web page describing the x86 JMP
    instruction is that it implies access to unlimited memory.  Since I have
    asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.


    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    p25-192 of that document describes the jump instruction. It most
    definitely *does not* claim that this instruction gives access to
    unlimited memory.

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    The above one certainly does not. If you think otherwise, please provide
    an actual quote from the above which talks about access to unlimited memory.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Tue Aug 25 13:50:02 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 1:30 PM, André G. Isaak wrote:
    On 2020-08-25 11:51, olcott wrote:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:

    Your interpretation of somebody's web page describing the x86 JMP
    instruction is that it implies access to unlimited memory.  Since I have
    asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.


    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    p25-192 of that document describes the jump instruction. It most
    definitely *does not* claim that this instruction gives access to
    unlimited memory.

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    The above one certainly does not. If you think otherwise, please provide
    an actual quote from the above which talks about access to unlimited
    memory.

    André


    The syntax of the language specifies this:
    Jumps to an address that is 0x7FFFFFFF greater than 0xFFFFFFF5
    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop
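
    The arithmetic of this example can be checked directly. Under 32-bit
    EIP semantics the displacement is added to the address of the next
    instruction and the sum is truncated to 32 bits, so a sketch of that
    computation looks like:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Target of JMP rel32 when EIP is 32 bits wide: the signed
     * displacement is added to the address of the next instruction
     * and the sum is truncated to 32 bits. */
    static uint32_t jmp_rel32_target(uint32_t next_eip, int32_t rel32) {
        return next_eip + (uint32_t)rel32;   /* wraps modulo 2^32 */
    }

    int main(void) {
        /* The instruction at 0xfffffff0 is 5 bytes long, so the next
         * EIP is 0xfffffff5; the encoded displacement is 0x7fffffff. */
        uint32_t target = jmp_rel32_target(0xfffffff5u, 0x7fffffff);
        printf("target = 0x%08x\n", target);
        assert(target == 0x7ffffff4u);  /* stays inside the 4GB space */
        return 0;
    }
    ```

    With 32-bit truncation the sum wraps back into the same 4GB address
    space rather than reaching a higher block.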

    In any case the abstract model of computation defined by RIP relative
    addressing much more directly provides access to unlimited memory.

    An abstract model of computation having all memory access for both code
    and data specified as signed 32-bit offsets relative to the instruction
    pointer would define Turing complete access to memory when the size of
    the instruction pointer is unspecified.

    The "C" programming language could be mapped to such an abstract model
    thus allowing Turing complete programs to be written directly in "C".


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Tue Aug 25 14:37:08 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 1:39 PM, André G. Isaak wrote:
    On 2020-08-25 11:00, olcott wrote:
    On 8/25/2020 10:59 AM, André G. Isaak wrote:
    On 2020-08-25 09:34, olcott wrote:
    On 8/25/2020 9:56 AM, André G. Isaak wrote:
    On 2020-08-25 08:34, olcott wrote:
    On 8/24/2020 9:30 PM, André G. Isaak wrote:
    On 2020-08-24 15:02, olcott wrote:
    On 8/24/2020 2:23 PM, André G. Isaak wrote:
    On 2020-08-24 12:26, olcott wrote:
    On 8/24/2020 12:40 PM, André G. Isaak wrote:
    On 2020-08-24 00:52, olcott wrote:

    If the implementation detail of the memory architecture of
    the x86 language was an unlimited sequence of contiguous 4GB
    blocks and the absolute addressing modes of the x86 language
    only referred to addresses within the current 4GB block then
    it becomes rather obvious that the x86 language exactly as it
    currently exists specifies access to unlimited memory.
    If all absolute addresses refer to the 'current block', how
    exactly does one identify which block is 'current'? AFAIK the
    x86 does not have a 'current block pointer'.

    André



    The idea is that it is exactly like a Turing Machine tape head.
    The EIP points to the current byte address in the current 4GB
    block.

    There are an unlimited number of 4GB blocks before and after
    the current 4GB block.

    So how do I know which of those blocks the EIP is pointing at?
    Talking about the 'current' 4GB block is meaningless unless you
    have some way of differentiating between these blocks.


    The current 4GB block is exactly the same thing as the current
    location of the Turing machine tape head.

    I have absolutely no idea how to interpret that analogy.

    So how do I know which of those blocks the EIP is pointing at?
    Just as a Turing Machine tape head has no uniquely defined index
    identifier, neither does the sequence of 4GB blocks. Where you
    are is only relative to where you have been before.

    There's no analog to a Turing Machine head in the x86. The x86 uses
    random access memory unlike a Turing Machine. So whatever it is you
    are trying to claim here is entirely unclear.

    So say I have access to one of your modified x86s which has been
    running for some unspecified amount of time and I can examine any
    aspect of it. How do I determine *which* 4GB block the contents of
    the EIP is referring to?


    You have no way of knowing that and no need to know that.

    If there is no way of knowing this, then how does this hypothetical
    computer know which block it is working with?

    The values in the current 4GB tell you everything that you need to
    know. One of the things that you might need to know is that the
    result of a computation spans however many 4GB blocks until space
    character is encountered. Such a TM could count to infinity and
    store the result as space delimited

    Don't call this a TM. You are not describing a TM, you are describing
    some vaguely defined modification of x86.

    Your goal here seemed to be to somehow demonstrate that x86
    architecture is Turing Complete (which it isn't), or that you could
    pretend there was some Turing Complete 'language' based on x86
    assembly language. But being written in a language which is Turing
    Complete doesn't make something into a Turing Machine.

    And since your whole reason for doing this was, as far as I can tell,
    to somehow justify your misuse of the term 'Turing Machine', then
    Turing Completeness isn't even the property you want. A language is
    Turing Complete if it is capable of performing any computation that a
    Turing Machine can perform. But if you want to claim that there is
    some Turing Machine that corresponds to your still-not-presented
    code, then you would need to show that every computation which can be
    performed by an X86 can be performed by a Turing Machine. That's not
    Turing Completeness. That's something else.

    And it still wouldn't make your code a Turing Machine.

    ASCII digit strings.

    If you made a Turing Machine that counted to infinity and stored
    the result the relative index of your current location would
    eventually wrap across 4GB boundaries.

    We could use the typical Turing Machine convention of delimiting
    the numeric digit strings with a space character. Whether we used
    base 10 ASCII digits or a base 256 number system we will
    eventually have a googolplex of 4GB blocks filled with the
    representation of a single integer value.

    RIP relative addressing of 64-bit architectures physically
    implements this abstract model of computation directly.

    No it doesn't. RIP is a 64-bit register which imposes an upper
    limit on the number of memory locations which can be addressed (and
    AFAIK no x86 chip actually has a 64-bit address bus so the real
    limit will be considerably less).


    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical
    name for memory addressing based on displacement from the current
    memory address is relative addressing.

    Relative addressing is always relative to something. If it is
    relative to the current block, you need a way of specifying which
    block is the current block,

    You also need a way of specifying which block is the next block.

    And it should be noted that relative addresses are translated into
    absolute addresses whenever memory is actually accessed. So all
    you're doing here is imagining a system in which there are unbounded
    address registers which you don't talk about that somehow pick out
    each 4GB block. Then you try to pretend this unbounded address
    doesn't exist by invoking all sorts of needless relative offsets.
    Wouldn't it be much simpler to simply state you are talking about
    some hypothetical system which is like the x86 except it has no
    bounds on addressing?

    RIP relative addressing directly implements this in hardware. The

    RIP implements nothing of the sort. RIP is simply a 64-bit
    instruction pointer register.

    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory.
    When a jump is made outside of the bounds of installed memory a
    hardware fault occurs. Hardware faults are not part of any abstract
    model.

    That that RIP relative addressing happens to be currently
    implemented on a 64-bit architecture is an implementation detail
    that is not a part of

    Um. The 'R' in RIP specifies a 64-bit register. That's what 'R'
    means. There is no x86 architecture that is defined in some
    register-width independent fashion.

    André



    "C" is not Turing Complete, but it could be: If we mapped the "C"
    programming language to the abstract model of computation of 32-bit
    signed RIP relative addressing that is implemented in the 64-bit
    machine architecture then "C" would have Turing Complete access to
    unlimited memory.

    Why are you suddenly talking about C?

    The ultimate goal is to define the highest level abstraction that is
    Turing Complete.

    Before you were talking about x86.
    Is this a change of topic? If so, why are you giving it as a reply to my post?

    And why on earth would C be related to RIP relative addressing? C,
    unlike x86 assembly, is not tied to any specific processor. It makes no
    assumptions about which registers or addressing modes exist.

    André


    When "C" compilers generate code on 64-bit architecture machines the
    code is based on RIP relative addressing.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction pointer.

    The abstract model of computation itself does not assume that the RIP
    register is 64 bits. Details such as this are not actually required
    for instruction pointer relative addressing.

    Because we know that the generic instruction pointer relative addressing
    model of computation is Turing complete any "C" code that could be
    translated into this model would also be Turing complete.
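
    As a purely hypothetical sketch of what an instruction pointer of
    unspecified size might mean, one could model a pointer wider than any
    fixed register and add sign-extended 32-bit displacements to it with
    carry. Nothing like this exists in real x86; the 128-bit hi:lo pair
    below is an invented illustration of the abstraction being claimed.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical wider-than-64-bit instruction pointer, modeled as
     * a 128-bit hi:lo pair.  A signed 32-bit displacement is
     * sign-extended and added with carry, so repeated relative jumps
     * are not confined to any fixed-size address space. */
    typedef struct { uint64_t hi, lo; } WideIP;

    static WideIP ip_add_rel32(WideIP ip, int32_t rel32) {
        uint64_t ext_lo = (uint64_t)(int64_t)rel32;     /* sign-extend */
        uint64_t ext_hi = (rel32 < 0) ? UINT64_MAX : 0;
        WideIP r;
        r.lo = ip.lo + ext_lo;
        r.hi = ip.hi + ext_hi + (r.lo < ip.lo ? 1 : 0); /* carry out */
        return r;
    }

    int main(void) {
        /* A forward jump near the top of the low 64-bit region carries
         * into the high word instead of wrapping around. */
        WideIP ip = { 0, UINT64_MAX - 10 };
        ip = ip_add_rel32(ip, 0x7fffffff);
        assert(ip.hi == 1 && ip.lo == 0x7fffffffu - 11);
        /* Jumping back by the same displacement returns exactly. */
        ip = ip_add_rel32(ip, -0x7fffffff);
        assert(ip.hi == 0 && ip.lo == UINT64_MAX - 10);
        return 0;
    }
    ```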


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Tue Aug 25 12:29:14 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.

    Clearly? How is that clear?

    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical name
    for memory addressing based on displacement from the current memory
    address is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That that RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of the abstract model of computation itself.

    Yes, I read all that the first N times you posted it. I'm not
    failing to understand in a way that would be solved by repetition.
    I just disagree.

    Valid disagreement always has a basis provided.

    I disagree.

    As far as I can tell, all you can reasonably say is that relative addressing
    doesn't specify a limit. If you try to do a relative addressing
    operation that goes beyond the range of addresses the fixed-width
    instruction pointer can hold, there is as far as I know no specification
    in the x86 architecture of what happens. (Or perhaps there is such a
    specification. If so, perhaps you can find it in some document.)

    To implement the semantics specified by the x86 language would require
    memory organized as an unlimited sequence of contiguous 4GB
    blocks. Absolute addressing modes of the x86 language would then only
    refer to addresses within the current 4GB block. The concept of a
    current 4GB block would be analogous to the current location of a
    Turing Machine tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does
    define unlimited memory. Only the physical implementations of this
    abstract model place limits on memory access.

    Why would a specification for a real-world CPU specify unlimited memory
    when no actual hardware has unlimited memory?

    RIP relative addressing is a key example of this. Code written for the
    RIP relative addressing model will run exactly the same on every
    memory architecture that supports it.

    If we suddenly have a technical breakthrough such that googolplex
    sized RAM chips can be produced, RIP relative addressing machine code
    would run from this RAM unmodified.

    Can you cite some x86 document that *actually says* anything about
    access to unlimited memory? I'm asking for something other than your
inference of unlimited memory from the lack of a specified limit.


    Look into RIP relative addressing as a simpler way of understanding
    the same thing that I am saying.

    I understand it would be convenient *for you* if the "x86 language"
specified access to unlimited memory. That doesn't imply that it does.
    It will make my Halting Problem refutation provided as x86 code have
    much higher credibility and complete applicability to actual Turing
    Machines.

    BTW, this would all be irrelevant if you simply stated that, for your
    purposes, you were working with an extended x86 that allows for
    unlimited memory.

    Not really, not at all. As long as it is understood that the abstract
    model of computation defined by the x86 language is fully Turing
    Complete physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.

    We might for example create a higher level language defined in terms
    of 32-bit signed RIP relative addressing that is itself 100% Turing
    Complete and then translate this higher level language into specific
    hardware platforms.

Any such program needing less than 4GB RAM would run on existing
    32-bit platforms. Any such program needing less than 1.0 TB of RAM
    would run on existing 64-bit platforms.

    The key advantage of this is that theory of computation concepts can
    be empirically validated concretely using a much higher level
    abstraction. The "C" programming language could be mapped to such a
    32-bit signed RIP relative addressing model such that Turing complete
    machines could be written directly in "C".

    Your interpretation of somebody's web page describing the x86 JMP
    instruction is that it implies access to unlimited memory. Since I have
    asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.

    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    Great, that's actually a document from Intel. I wonder why you couldn't
    have cited that earlier when I asked you to.

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    IF instruction = relative JMP (* i.e. operand is rel8, rel16, or rel32 *)
    THEN
    EIP ← EIP + rel8/16/32;
    IF OperandSize = 16
    THEN EIP ← EIP AND 0000FFFFH;
    FI;
    FI;

    A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
    contradicts your claim.
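The wraparound implied by the manual's pseudocode above can be sketched concretely. This is a hypothetical Python model (not anything Intel publishes) of a fixed-width 32-bit EIP:

```python
# Hypothetical model of the pseudocode above: EIP is a fixed-width
# 32-bit register, so "EIP <- EIP + rel32" wraps modulo 2**32. The
# reachable address space is exactly 4 GiB, not unlimited.

MASK32 = 0xFFFFFFFF

def relative_jmp(eip, rel32):
    """New EIP after `JMP rel32` (rel32 is a signed 32-bit displacement)."""
    return (eip + rel32) & MASK32

# A +0x7FFFFFFF jump from near the top of the address space wraps back
# into low memory rather than reaching any "higher" address:
print(hex(relative_jmp(0xFFFFFFF5, 0x7FFFFFFF)))  # 0x7ffffff4
```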

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    No source that you've cited says that x86 can access unlimited memory.
    This one in particular seems clear that a relative JMP cannot do so.

    Is that a good enough basis for valid disagreement?

This is a 1995 manual and thus describes the 32-bit architecture.
https://en.wikipedia.org/wiki/List_of_Intel_microprocessors#Original_Pentium

    From https://c9x.me/x86/html/file_module_x86_id_147.html :

    A relative offset (rel8, rel16, or rel32) is generally specified as
    a label in assembly code, but at the machine code level, it is
    encoded as a signed 8-, 16-, or 32-bit immediate value. This value
    is added to the value in the EIP register. (Here, the EIP register
    contains the address of the instruction following the JMP
    instruction).

    The EIP register is 64 bits. That's part of the x86 architecture. The
    source you cite contradicts your claim.

    You are confusing the 32-bit EIP register with the 64-bit RIP register.

Quite possibly. In any case, the instruction pointer has a fixed width
(whatever it is) and can only point to one of a finite number of memory
locations.

    Now if you want to assume an x86-like architecture that can access
    unlimited memory, that's fine.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 14:41:10 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 2:29 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
The abstract model of computation defined by RIP relative addressing
clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.

    Clearly? How is that clear?

    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical name
for memory addressing based on displacement from the current memory
    address is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That that RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of the abstract model of computation itself.

    Yes, I read all that the first N times you posted it. I'm not
    failing to understand in a way that would be solved by repetition.
    I just disagree.

    Valid disagreement always has a basis provided.

    I disagree.

As far as I can tell, all you can reasonably say is that relative addressing
doesn't specify a limit. If you try to do a relative addressing
operation that goes beyond the range of addresses the fixed-width
instruction pointer can hold, there is as far as I know no specification
in the x86 architecture of what happens. (Or perhaps there is such a
specification. If so, perhaps you can find it in some document.)

To implement the semantics specified by the x86 language would require
memory organized as an unlimited sequence of contiguous 4GB
blocks. Absolute addressing modes of the x86 language would then only
    refer to addresses within the current 4GB block. The concept of a
    current 4GB block would be analogous to the current location of a
    Turing Machine tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does
    define unlimited memory. Only the physical implementations of this
    abstract model place limits on memory access.

Why would a specification for a real-world CPU specify unlimited memory
when no actual hardware has unlimited memory?

RIP relative addressing is a key example of this. Code written for the
RIP relative addressing model will run exactly the same on every
    memory architecture that supports it.

If we suddenly have a technical breakthrough such that googolplex-sized
RAM chips can be produced, RIP relative addressing machine code
    would run from this RAM unmodified.

    Can you cite some x86 document that *actually says* anything about
access to unlimited memory? I'm asking for something other than your
inference of unlimited memory from the lack of a specified limit.


    Look into RIP relative addressing as a simpler way of understanding
    the same thing that I am saying.

    I understand it would be convenient *for you* if the "x86 language"
specified access to unlimited memory. That doesn't imply that it does.
    It will make my Halting Problem refutation provided as x86 code have
    much higher credibility and complete applicability to actual Turing
    Machines.

BTW, this would all be irrelevant if you simply stated that, for your
purposes, you were working with an extended x86 that allows for
    unlimited memory.

    Not really, not at all. As long as it is understood that the abstract
    model of computation defined by the x86 language is fully Turing
    Complete physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.

    We might for example create a higher level language defined in terms
    of 32-bit signed RIP relative addressing that is itself 100% Turing
    Complete and then translate this higher level language into specific
    hardware platforms.

Any such program needing less than 4GB RAM would run on existing
    32-bit platforms. Any such program needing less than 1.0 TB of RAM
    would run on existing 64-bit platforms.

    The key advantage of this is that theory of computation concepts can
    be empirically validated concretely using a much higher level
    abstraction. The "C" programming language could be mapped to such a
    32-bit signed RIP relative addressing model such that Turing complete
    machines could be written directly in "C".

    Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.

    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    Great, that's actually a document from Intel. I wonder why you couldn't
    have cited that earlier when I asked you to.

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

IF instruction = relative JMP (* i.e. operand is rel8, rel16, or rel32 *)
THEN
    EIP ← EIP + rel8/16/32;
    IF OperandSize = 16
    THEN EIP ← EIP AND 0000FFFFH;
    FI;
    FI;

    A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
    contradicts your claim.

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    No source that you've cited says that x86 can access unlimited memory.
    This one in particular seems clear that a relative JMP cannot do so.

    Is that a good enough basis for valid disagreement?

This is a 1995 manual and thus describes the 32-bit architecture.
https://en.wikipedia.org/wiki/List_of_Intel_microprocessors#Original_Pentium
    From https://c9x.me/x86/html/file_module_x86_id_147.html :

A relative offset (rel8, rel16, or rel32) is generally specified as
a label in assembly code, but at the machine code level, it is
encoded as a signed 8-, 16-, or 32-bit immediate value. This value
is added to the value in the EIP register. (Here, the EIP register
contains the address of the instruction following the JMP
    instruction).

    The EIP register is 64 bits. That's part of the x86 architecture. The
    source you cite contradicts your claim.

    You are confusing the 32-bit EIP register with the 64-bit RIP register.

Quite possibly. In any case, the instruction pointer has a fixed width
(whatever it is) and can only point to one of a finite number of memory
locations.

    Now if you want to assume an x86-like architecture that can access
    unlimited memory, that's fine.


    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction pointer.

    The abstract model of computation itself does not assume that the RIP
register is 64-bits. Details such as this are not actually required
for instruction pointer relative addressing.

    Because we know that the generic instruction pointer relative addressing
    model of computation is Turing complete any "C" code that could be
    translated into this model would also be Turing complete.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Tue Aug 25 12:50:05 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 1:30 PM, André G. Isaak wrote:
    On 2020-08-25 11:51, olcott wrote:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.


    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    p25-192 of that document describes the jump instruction. It most
    definitely *does not* claim that this instruction gives access to
    unlimited memory.

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    The above one certainly does not. If you think otherwise, please
    provide an actual quote from the above which talks about access to
    unlimited memory.

    The syntax of the language specifies this:
    Jumps to an address that is 7FFFFFFF greater than FFFFFFF5
    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop
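For reference, the encoding in the example above can be decoded mechanically (a hypothetical Python sketch; whether the resulting sum may exceed 32 bits is exactly what is under dispute):

```python
import struct

# Hypothetical decoding of the five bytes shown above. E9 is the opcode
# for JMP rel32; the remaining four bytes are a little-endian *signed*
# 32-bit displacement added to the address of the next instruction.

code = bytes.fromhex("e9ffffff7f")
opcode = code[0]
rel32 = struct.unpack("<i", code[1:])[0]
assert opcode == 0xE9

next_ip = 0xFFFFFFF5          # address of the instruction after the JMP
print(hex(rel32))                           # 0x7fffffff
print(hex((next_ip + rel32) & 0xFFFFFFFF))  # 0x7ffffff4 with a 32-bit EIP
```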

    The syntax of the language specifies no such thing. You're talking
    about semantics.

In any case the abstract model of computation defined by RIP relative
addressing much more directly provides access to unlimited memory.

    An abstract model of computation having all memory access for both
    code and data specified as signed 32-bit offsets relative to the
    instruction pointer would define Turing complete access to memory when
    the size of the instruction pointer is unspecified.

    You've decided, with no particular basis, which parts of the x86
    architecture are part of your "abstract model" and which are not.
    You accept that offsets are 32 bits, but you ignore the fact that
    the instruction pointer is no more than 64 bits.

    If you want to call that *your* abstract model, that's fine. Your
    assertion that the x86 language/architecture/whatever implies access
    to unlimited memory is disproven by the very links you've posted.

    Please find and post a link to a definitive x86 description that
    actually *says* that it can access unlimited memory. Something that
    merely doesn't specify a limit does not count. You've claimed
    repeatedly that this is "common knowledge and every source agrees
    that it is a verified fact". If so, surely you can find *something*
    that actually says so.

    The "C" programming language could be mapped to such an abstract model
    thus allowing Turing complete programs to be written directly in "C".

So Turing completeness is an attribute of programs? I thought it was an
attribute of languages and similar entities. Are you talking about a
    program that implements some Turing-complete system?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Tue Aug 25 14:51:19 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/24/2020 7:59 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
    <cut>
    But suit yourself. As I said, boasting and insulting is the main
    pleasure for a crank like you, and you won't stop just because I call
    you out on it. Have fun.

    I don't disrespect you. I disrespect your disrespecting me.
    The code is completed, it is only 65 bytes of x86 machine code.

    Another empty boast -- still no code. You can't seem to stop doing it.
Or maybe repeating the excuse for not posting code -- that it would be
"pearls before swine" -- would not look good next to your claim that you
don't disrespect me!

    The code begins @ machine address 0x100. Fills the 256 bytes preceding
    it with “@@@@”. Dynamically modifies its own first instruction to
    refer to its new code address. Copies itself to a new address that is
    exactly 0x100 bytes above its current code address and then does a
    relative jump to this new address.

    Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
    worded description. Despite the poor wording, this is clearly not the
code you said you could produce (it's not in any way a linked list) but
that's OK. You say enough for me to know that even this more limited
    loop does not meet your objective. Post the code and you can find out
why it can only access a finite amount of memory. Don't post the code and
you can boast away to your heart's content. It's up to you.

    I had the emulator terminate its execution after the first four full
    cycles. I output the first 1024 bytes of memory before and after
    execution and output complete execution trace of all of the execution.
    This is 23 pages long.

    Wow! You wrote code and got 23 pages of output. You must be right.
    I'll call the editor of the JACM right away.

    According to the meaning of the syntax of the x86 language this
    program would continue to operate after it completed its execution @
    machine address: FFFFFF00.

    By studiously hiding the code you can avoid anyone being able to be
    specific, but assuming a reasonable interpretation of your non-code
waffle, the jumps will always be to addresses within the current code
segment. I'm pretty sure there are other problems with it (hint: you
    talk of EIP and not RIP so data access is also a problem) but your
    hiding the code protects it from more specific analysis.

    The only reason that it would not continue to operate would be that
    the physical implementation of the abstract model did not implement
    the semantics specified by the language syntax.

    No. For more details, post the code.


This whole line of reasoning is now moot:
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    Now I am showing that any "C" language program that can be translated
    into signed 32-bit addressing relative to an instruction pointer of
    unspecified size would have Turing complete memory access.

    This is the exactly same abstract model of computation that is
    physically implemented in 64-bit architectures relative to a 64-bit
    instruction pointer.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Tue Aug 25 12:54:33 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 2:29 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
The abstract model of computation defined by RIP relative addressing
clearly specifies access to unlimited memory. Your first step in
seeing that I am correct is understanding RIP relative addressing.
    Clearly? How is that clear?

    An abstract machine having a tape head that can be advanced in 0 to
0x7FFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical name
for memory addressing based on displacement from the current memory
    address is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

That that RIP relative addressing happens to be currently implemented
on a 64-bit architecture is an implementation detail that is not a
    part of the abstract model of computation itself.

    Yes, I read all that the first N times you posted it. I'm not
    failing to understand in a way that would be solved by repetition.
    I just disagree.

    Valid disagreement always has a basis provided.

    I disagree.

As far as I can tell, all you can reasonably say is that relative addressing
doesn't specify a limit. If you try to do a relative addressing
operation that goes beyond the range of addresses the fixed-width
instruction pointer can hold, there is as far as I know no specification
in the x86 architecture of what happens. (Or perhaps there is such a
specification. If so, perhaps you can find it in some document.)

To implement the semantics specified by the x86 language would require
memory organized as an unlimited sequence of contiguous 4GB
blocks. Absolute addressing modes of the x86 language would then only
refer to addresses within the current 4GB block. The concept of a
    current 4GB block would be analogous to the current location of a
    Turing Machine tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does
    define unlimited memory. Only the physical implementations of this
    abstract model place limits on memory access.

Why would a specification for a real-world CPU specify unlimited memory
when no actual hardware has unlimited memory?

RIP relative addressing is a key example of this. Code written for the
RIP relative addressing model will run exactly the same on every
    memory architecture that supports it.

If we suddenly have a technical breakthrough such that googolplex-sized
RAM chips can be produced, RIP relative addressing machine code
would run from this RAM unmodified.

Can you cite some x86 document that *actually says* anything about
access to unlimited memory? I'm asking for something other than your
inference of unlimited memory from the lack of a specified limit.

    Look into RIP relative addressing as a simpler way of understanding
    the same thing that I am saying.

I understand it would be convenient *for you* if the "x86 language"
specified access to unlimited memory. That doesn't imply that it does.
It will make my Halting Problem refutation provided as x86 code have
much higher credibility and complete applicability to actual Turing
    Machines.

BTW, this would all be irrelevant if you simply stated that, for your
purposes, you were working with an extended x86 that allows for
    unlimited memory.

Not really, not at all. As long as it is understood that the abstract
model of computation defined by the x86 language is fully Turing
    Complete physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.

    We might for example create a higher level language defined in terms >>>>> of 32-bit signed RIP relative addressing that is itself 100% Turing
Complete and then translate this higher level language into specific
hardware platforms.

Any such program needing less than 4GB RAM would run on existing
    32-bit platforms. Any such program needing less than 1.0 TB of RAM
    would run on existing 64-bit platforms.

The key advantage of this is that theory of computation concepts can
be empirically validated concretely using a much higher level
abstraction. The "C" programming language could be mapped to such a
32-bit signed RIP relative addressing model such that Turing complete
machines could be written directly in "C".

    Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
    conclusion and you have failed to do so, I conclude that you cannot.

    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    Great, that's actually a document from Intel. I wonder why you couldn't
    have cited that earlier when I asked you to.

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    IF instruction = relative JMP (* i.e. operand is rel8, rel16, or rel32 *)
    THEN
    EIP ← EIP + rel8/16/32;
    IF OperandSize = 16
    THEN EIP ← EIP AND 0000FFFFH;
    FI;
    FI;

    A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
    contradicts your claim.

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    No source that you've cited says that x86 can access unlimited memory.
    This one in particular seems clear that a relative JMP cannot do so.

    Is that a good enough basis for valid disagreement?

    This is a 1995 manual thus 32-bit architecture.
    https://en.wikipedia.org/wiki/List_of_Intel_microprocessors#Original_Pentium

    From https://c9x.me/x86/html/file_module_x86_id_147.html :

A relative offset (rel8, rel16, or rel32) is generally specified as
a label in assembly code, but at the machine code level, it is
encoded as a signed 8-, 16-, or 32-bit immediate value. This value
is added to the value in the EIP register. (Here, the EIP register
contains the address of the instruction following the JMP
    instruction).

The EIP register is 64 bits. That's part of the x86 architecture. The
source you cite contradicts your claim.

    You are confusing the 32-bit EIP register with the 64-bit RIP register.

    Quite possibly. In any case, the instruction pointer has a fixed width
    (whatever it is) and can only point to one of a finite number of memory
    locations.

    Now if you want to assume an x86-like architecture that can access
    unlimited memory, that's fine.

    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

The abstract model of computation itself does not assume that the RIP
register is 64-bits. Details such as this are not actually required
    for instruction pointer relative addressing.

    Because we know that the generic instruction pointer relative
    addressing model of computation is Turing complete any "C" code that
    could be translated into this model would also be Turing complete.

    That *almost* sounds like an admission that you were wrong.

    Is it? If so, will you acknowledge the time and effort I and others
    have spent convincing you of that? Do you finally realize that the
    sources you've cited in support of your claims actually contradict them?

    Will you acknowledge that what you call "The abstract model of
    computation" is not something that follows from the x86 architecture,
    but that you've invented yourself?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 15:14:11 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 2:50 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 1:30 PM, André G. Isaak wrote:
    On 2020-08-25 11:51, olcott wrote:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
conclusion and you have failed to do so, I conclude that you cannot.

    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    p25-192 of that document describes the jump instruction. It most
    definitely *does not* claim that this instruction gives access to
    unlimited memory.

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    The above one certainly does not. If you think otherwise, please
    provide an actual quote from the above which talks about access to
    unlimited memory.

    The syntax of the language specifies this:
    Jumps to an address that is 7FFFFFFF greater than FFFFFFF5
    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop

    The syntax of the language specifies no such thing. You're talking
    about semantics.

    In any case the abstract model of computation defined by RIP relative
    addressing much more directly provides access to unlimited memory.

    An abstract model of computation having all memory access for both
    code and data specified as signed 32-bit offsets relative to the
    instruction pointer would define Turing complete access to memory when
    the size of the instruction pointer is unspecified.

    You've decided, with no particular basis, which parts of the x86
    architecture are part of your "abstract model" and which are not.
    You accept that offsets are 32 bits, but you ignore the fact that
    the instruction pointer is no more than 64 bits.

    I am forming a bridge between existing concrete programming models and
    their Turing complete equivalents.

    If you want to call that *your* abstract model, that's fine. Your
    assertion that the x86 language/architecture/whatever implies access
    to unlimited memory is disproven by the very links you've posted.


    Any x86 code executed using only RIP relative addressing on actual
    hardware would have identical behavior to the behavior of the exact same
    code in the above specified Turing complete abstract model.

    If we only have RIP relative addressing then both memory size and the
    width of the instruction pointer can remain abstractions within an
    abstract model. With relative addressing there is no such thing as
    memory address size. All of the other concrete details about the x86
    32-bit or 64-bit architecture can remain as they are fully specified.

    Please find and post a link to a definitive x86 description that
    actually *says* that it can access unlimited memory. Something that
    merely doesn't specify a limit does not count. You've claimed
    repeatedly that this is "common knowledge and every source agrees
    that it is a verified fact". If so, surely you can find *something*
    that actually says so.

    The "C" programming language could be mapped to such an abstract model
    thus allowing Turing complete programs to be written directly in "C".

    So Turing completeness is an attribute of programs? I thought it was an attribute of languages and similar entities. Are you talking about a
    program that implements some Turing-complete system?



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 15:27:07 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 2:54 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 2:29 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 12:28 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 10:34 AM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    The abstract model of computation defined by RIP relative addressing
    clearly specifies access to unlimited memory. Your first step in
    seeing that I am correct is understanding RIP relative addressing.
    Clearly? How is that clear?

    An abstract machine having a tape head that can be advanced in 0 to
    0x7FFFFFFF increments an unlimited number of times specifies a model
    of computation that has access to unlimited memory. The technical name
    for memory addressing based on displacement from the current memory
    address is relative addressing.

    RIP relative addressing directly implements this in hardware. The
    abstract model of computation specified by RIP relative addressing
    allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
    a jump is made outside of the bounds of installed memory a hardware
    fault occurs. Hardware faults are not part of any abstract model.

    That RIP relative addressing happens to be currently implemented
    on a 64-bit architecture is an implementation detail that is not a
    part of the abstract model of computation itself.

    Yes, I read all that the first N times you posted it. I'm not
    failing to understand in a way that would be solved by repetition.
    I just disagree.

    Valid disagreement always has a basis provided.

    I disagree.

    As far as I can tell, all you can reasonably say is that relative
    addressing doesn't specify a limit. If you try to do a relative
    addressing operation that goes beyond the range of addresses the
    fixed-width instruction pointer can hold, there is as far as I know no
    specification in the x86 architecture of what happens. (Or perhaps
    there is such a specification. If so, perhaps you can find it in some
    document.)

    To implement the semantics specified by the x86 language would require
    memory organized as an unlimited sequence of contiguous 4GB
    blocks. Absolute addressing modes of the x86 language would then only
    refer to addresses within the current 4GB block. The concept of a
    current 4GB block would be analogous to the current location of a
    Turing Machine tape head.

    That's not at all the same thing as specifying unlimited memory.

    The abstract model of computation specified by the language does
    define unlimited memory. Only the physical implementations of this
    abstract model place limits on memory access.

    Why would a specification for a real-world CPU specify unlimited
    memory when no actual hardware has unlimited memory?

    RIP relative addressing is a key example of this. Code written for the
    RIP relative addressing model will run exactly the same on every
    memory architecture that supports it.

    If we suddenly have a technical breakthrough such that googolplex
    sized RAM chips can be produced, RIP relative addressing machine code
    would run from this RAM unmodified.

    Can you cite some x86 document that *actually says* anything about
    access to unlimited memory? I'm asking for something other than your
    inference of unlimited memory from the lack of a specified limit.

    Look into RIP relative addressing as a simpler way of understanding
    the same thing that I am saying.

    I understand it would be convenient *for you* if the "x86 language"
    specified access to unlimited memory. That doesn't imply that it does.

    It will make my Halting Problem refutation provided as x86 code have
    much higher credibility and complete applicability to actual Turing
    Machines.

    BTW, this would all be irrelevant if you simply stated that, for your
    purposes, you were working with an extended x86 that allows for
    unlimited memory.

    Not really, not at all. As long as it is understood that the abstract
    model of computation defined by the x86 language is fully Turing
    Complete, physical implementations of this model that are not fully
    Turing complete would still be understood to be totally applicable.
    We might for example create a higher level language defined in terms
    of 32-bit signed RIP relative addressing that is itself 100% Turing
    Complete and then translate this higher level language into specific
    hardware platforms.

    Any such program needing less than 4GB RAM would run on existing
    32-bit platforms. Any such program needing less than 1.0 TB of RAM
    would run on existing 64-bit platforms.

    The key advantage of this is that theory of computation concepts can
    be empirically validated concretely using a much higher level
    abstraction. The "C" programming language could be mapped to such a
    32-bit signed RIP relative addressing model such that Turing complete
    machines could be written directly in "C".

    Your interpretation of somebody's web page describing the x86 JMP
    instruction is that it implies access to unlimited memory. Since I
    have asked you several times to cite some document that states that
    same conclusion and you have failed to do so, I conclude that you
    cannot.

    http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF

    Great, that's actually a document from Intel. I wonder why you couldn't
    have cited that earlier when I asked you to.

    page 25-192

    E9 cd JMP rel32 1 Jump near, displacement relative to next
    instruction

    IF instruction = relative JMP (* i.e. operand is rel8, rel16, or rel32 *)
    THEN
    EIP ← EIP + rel8/16/32;
    IF OperandSize = 16
    THEN EIP ← EIP AND 0000FFFFH;
    FI;
    FI;

    A relative JMP is defined in terms of its effect on EIP, which is
    specified in that document as a 32-bit register. This directly
    contradicts your claim.
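    The disagreement over the quoted pseudocode can be restated in a few
    lines. A sketch (Python; the helper names are hypothetical): under the
    pseudocode as published, EIP &lt;- EIP + rel32 is held in a 32-bit
    register, so the sum is truncated modulo 2^32; only by modeling the
    instruction pointer as unbounded does the jump land where the assembly
    text literally says.

```python
MASK32 = 0xFFFFFFFF

def jmp_rel32_eip(eip: int, rel32: int) -> int:
    """EIP <- EIP + rel32 as in the quoted pseudocode: the result is
    held in a 32-bit register, so it is truncated modulo 2**32."""
    return (eip + rel32) & MASK32

def jmp_rel32_unbounded(ip: int, rel32: int) -> int:
    """The same displacement applied to an unbounded instruction pointer."""
    return ip + rel32

next_insn = 0xFFFFFFF5   # address following the JMP in the later example
rel32 = 0x7FFFFFFF       # maximal positive 32-bit displacement

truncated = jmp_rel32_eip(next_insn, rel32)        # wraps to a lower address
unbounded = jmp_rel32_unbounded(next_insn, rel32)  # lands beyond the 4GB boundary
```

    The two functions differ only in whether the register width is part of
    the model, which is exactly the line of demarcation argued over below.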

    I said it is common knowledge and every source agrees that it is a
    verified fact.

    No source that you've cited says that x86 can access unlimited memory.
    This one in particular seems clear that a relative JMP cannot do so.

    Is that a good enough basis for valid disagreement?

    This is a 1995 manual thus 32-bit architecture.
    https://en.wikipedia.org/wiki/List_of_Intel_microprocessors#Original_Pentium

    From https://c9x.me/x86/html/file_module_x86_id_147.html :

    A relative offset (rel8, rel16, or rel32) is generally specified as
    a label in assembly code, but at the machine code level, it is
    encoded as a signed 8-, 16-, or 32-bit immediate value. This value
    is added to the value in the EIP register. (Here, the EIP register
    contains the address of the instruction following the JMP
    instruction).

    The EIP register is 64 bits. That's part of the x86 architecture. The
    source you cite contradicts your claim.

    You are confusing the 32-bit EIP register with the 64-bit RIP register.
    Quite possibly. In any case, the instruction pointer has a fixed width
    (whatever it is) and can only point to one of a finite number of memory
    locations.

    Now if you want to assume an x86-like architecture that can access
    unlimited memory, that's fine.

    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

    The abstract model of computation itself does not assume that the RIP
    register is 64-bits. Details such as this are not actually required
    for instruction pointer relative addressing.

    Because we know that the generic instruction pointer relative
    addressing model of computation is Turing complete any "C" code that
    could be translated into this model would also be Turing complete.

    That *almost* sounds like an admission that you were wrong.

    Is it? If so, will you acknowledge the time and effort I and others
    have spent convincing you of that? Do you finally realize that the
    sources you've cited in support of your claims actually contradict them?


    I am not wrong. The only reason that anyone thought that I was wrong is
    that they did not divide the line between the abstract model and its
    physical implementation correctly. They may persist in doing that
    intentionally because they really like to say that I am wrong.

    Will you acknowledge that what you call "The abstract model of
    computation" is not something that follows from the x86 architecture,
    but that you've invented yourself?


    It is exactly as I have always said it:
    The defined byte sequence of the machine code and the primary change of
    machine state that the instruction is defined to accomplish.

    Jumps to an address that is 7FFFFFFF greater than FFFFFFF5
    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop

    The above machine code is syntactically correct and says that it will
    jump to machine address: 0xfffffff5 + 0x7fffffff in the 32-bit model.

    When it does not actually jump to this address it is a failure of the
    hardware to implement what the machine language clearly specifies.

    If the hardware did jump to 0xfffffff5 + 0x7fffffff in the 32-bit model
    then there is no reason why it could not do this again and again.

    The abstract machine that the 32-bit model of the x86 language specifies
    really does have unlimited memory access.
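    The "again and again" step is where the two readings diverge, and a few
    lines make the difference explicit (a sketch in Python, not taken from
    any cited document): with an unbounded instruction pointer, each maximal
    jump lands strictly higher without limit, while a 32-bit pointer can
    only ever revisit the same 2^32 addresses.

```python
def repeated_jumps(start: int, rel: int, steps: int, ip_bits=None):
    """Apply the same relative jump `steps` times; if ip_bits is given,
    truncate the instruction pointer to that width after each jump."""
    ip, trace = start, []
    for _ in range(steps):
        ip += rel
        if ip_bits is not None:
            ip &= (1 << ip_bits) - 1   # fixed-width register wraps around
        trace.append(ip)
    return trace

start, rel = 0xFFFFFFF5, 0x7FFFFFFF

unbounded = repeated_jumps(start, rel, 4)            # strictly increasing
wrapped = repeated_jumps(start, rel, 4, ip_bits=32)  # always below 2**32
```

    The unbounded variant is what the post above asserts; the wrapped
    variant is what a fixed-width instruction pointer actually permits.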

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Tue Aug 25 13:38:29 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 2:54 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

    The abstract model of computation itself does not assume that the RIP
    register is 64-bits. Details such as this are not actually required
    for instruction pointer relative addressing.

    Because we know that the generic instruction pointer relative
    addressing model of computation is Turing complete any "C" code that
    could be translated into this model would also be Turing complete.

    That *almost* sounds like an admission that you were wrong.

    Is it? If so, will you acknowledge the time and effort I and others
    have spent convincing you of that? Do you finally realize that the
    sources you've cited in support of your claims actually contradict them?

    I am not wrong. The only reason that anyone thought that I was wrong
    is that they did not divide the line between the abstract model and
    its physical implementation correctly. They may persist in doing that intentionally because they really like to say that I am wrong.

    And what makes the way you drew that line correct?

    Will you acknowledge that what you call "The abstract model of
    computation" is not something that follows from the x86 architecture,
    but that you've invented yourself?

    It is exactly as I have always said it:
    The defined byte sequence of the machine code and the primary change
    of machine state that the instruction is defined to accomplish.

    Jumps to an address that is 7FFFFFFF greater than FFFFFFF5
    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop

    The above machine code is syntactically correct and says that it will
    jump to machine address: 0xfffffff5 + 0x7fffffff in the 32-bit model.

    When it does not actually jump to this address it is a failure of the hardware to implement what the machine language clearly specifies.

    No, the link you posted makes it clear that the effect of a JMP
    instruction is defined by its effect on the instruction pointer
    register, which has a fixed size and cannot address more than 2**N
    memory locations.

    If the hardware did jump to 0xfffffff5 + 0x7fffffff in the 32-bit
    model then there is no reason why it could not do this again and
    again.

    The abstract machine that the 32-bit model of the x86 language
    specifies really does have unlimited memory access.

    And anyone to whom this isn't obvious just isn't intelligent enough to understand your wisdom, and your decision that the 32-bit limit on
    relative jumps is part of the abstract machine while the fixed width of
    the instruction pointer is merely an implementation detail isn't at all arbitrary, even though you can't cite a reference to anyone who actually
    agrees with you.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 16:36:06 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 3:38 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 2:54 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    [...]
    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

    The abstract model of computation itself does not assume that the RIP
    register is 64-bits. Details such as this are not actually required
    for instruction pointer relative addressing.

    Because we know that the generic instruction pointer relative
    addressing model of computation is Turing complete any "C" code that
    could be translated into this model would also be Turing complete.

    That *almost* sounds like an admission that you were wrong.

    Is it? If so, will you acknowledge the time and effort I and others
    have spent convincing you of that? Do you finally realize that the
    sources you've cited in support of your claims actually contradict them?

    I am not wrong. The only reason that anyone thought that I was wrong
    is that they did not divide the line between the abstract model and
    its physical implementation correctly. They may persist in doing that
    intentionally because they really like to say that I am wrong.

    And what makes the way you drew that line correct?

    Will you acknowledge that what you call "The abstract model of
    computation" is not something that follows from the x86 architecture,
    but that you've invented yourself?

    It is exactly as I have always said it:
    The defined byte sequence of the machine code and the primary change
    of machine state that the instruction is defined to accomplish.

    Jumps to an address that is 7FFFFFFF greater than FFFFFFF5
    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop

    The above machine code is syntactically correct and says that it will
    jump to machine address: 0xfffffff5 + 0x7fffffff in the 32-bit model.

    When it does not actually jump to this address it is a failure of the
    hardware to implement what the machine language clearly specifies.

    No, the link you posted makes it clear that the effect of a JMP
    instruction is defined by its effect on the instruction pointer
    register, which has a fixed size and cannot address more than 2**N
    memory locations.


    The primary change of state effect of the instruction is to jump to
    address 0xfffffff5 + 0x7fffffff. The physical implementation way of
    achieving that effect results in an overflow to the EIP register.

    If we went with a more abstract model than the abstraction that I
    specified, the size of the registers would be unspecified and the
    abstraction would unequivocally allow unlimited memory access.

    If we went with a less abstract model than the one that I specified such
    that the sum of 0xfffffff5 + 0x7fffffff rolls over to address lower
    memory, then this does not allow unlimited memory access and the
    language specification is somewhat self-contradictory.

    If we draw the line of demarcation exactly where I drew it and assume
    that the language specification is not self-contradictory then the only
    way for jmp 0xfffffff5 + 0x7fffffff to actually be executed to mean
    exactly what it says that it means is to roll over the EIP like an
    odometer and jump into the next adjacent 4GB block of memory.
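    The odometer reading described here can also be written down directly
    (a sketch in Python; the (block, offset) split is an interpretation of
    the post above, not anything an x86 manual specifies): the 32-bit EIP
    holds the offset within the current 4GB block, and overflow carries
    into an unbounded block index.

```python
BLOCK = 1 << 32   # one 4GB block

def odometer_jmp(block: int, eip: int, rel32: int):
    """Roll the 32-bit EIP over like an odometer: overflow (or borrow,
    for negative displacements) carries into the unbounded block index."""
    total = eip + rel32
    return block + (total // BLOCK), total % BLOCK

# The disputed jump: EIP overflows, carrying into the next 4GB block.
block, eip = odometer_jmp(0, 0xFFFFFFF5, 0x7FFFFFFF)
```

    Python's floor division handles both the carry on overflow and the
    borrow on a negative displacement, matching the odometer picture.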

    If the hardware did jump to 0xfffffff5 + 0x7fffffff in the 32-bit
    model then there is no reason why it could not do this again and
    again.

    The abstract machine that the 32-bit model of the x86 language
    specifies really does have unlimited memory access.

    And anyone to whom this isn't obvious just isn't intelligent enough to understand your wisdom, and your decision that the 32-bit limit on
    relative jumps is part of the abstract machine while the fixed width of
    the instruction pointer is merely an implementation detail isn't at all arbitrary, even though you can't cite a reference to anyone who actually agrees with you.


    Since I am focusing on building a precise bridge between concrete
    programming models (none of which can possibly be Turing Complete) and
    the closest abstract model that is Turing complete we must necessarily
    make at least minimal adaptations to the concrete model.

    The concrete model of computation requiring the least changes to adapt
    it to become a Turing complete abstract model of computation is the
    32-bit signed RIP relative addressing of the x86-64.

    If we take this machine language and get rid of all absolute addressing
    modes, and leave the size of memory and the width of the instruction
    pointer unspecified, then with these minimal changes x86-64 becomes
    Turing complete.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Tue Aug 25 20:07:28 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 7:07 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 2:29 PM, Keith Thompson wrote:
    <cut>
    A relative JMP is defined in terms of its effect on EIP, which is
    specifed in that document as a 32-bit register. This directly
    contradicts your claim.
    <cut>
    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

    Finally! (Bravo Keith.) You will never admit that this change of gears
    means you were wrong, but at least there is now no need for you to keep
    saying the same wrong thing over and over again.

    I was not wrong when I defined the line of demarcation between the
    abstract model and the physical implementation where I did.

    That you refuse to draw this line of demarcation where I specified
    does not make me wrong.

    I never said that drawing the line where you did was wrong. What you
    were wrong about was your claim that your choice of where to draw the
    line was forced by the "x86 language" that you never fully defined.


    I defined exactly how to interpret the hundreds of pages of existing
    documentation and provided a link.

    You defined (or tried to define) an abstract model based on the x86.
    Nothing wrong with that. You defined some aspects of the x86
    definition as part of the abstraction and others as implementation
    details. Nothing wrong with that either.

    OK

    But you insisted that the
    specific line between abstraction and implementation details would
    be obvious to anyone who read the references you cited -- when in
    fact those references made it clear that the instruction pointer
    has a fixed width of no more than 64 bits, and that jumping to an
    address exceeding 2**64 could not make sense given the published
    x86 specifications.

    You got these basic facts incorrect: it is 32-bits not 64-bits. I don't
    think any machine on Earth has 2^64 bytes of RAM.

    When I drew the line of demarcation at the functional effect that the
    instruction was designed to achieve, not including any of the details
    of how it achieves this functional effect, I am still correct that the
    32-bit x86 language does specify unlimited memory access.

    When you say that this simply will not work in terms of how the
    processor is defined to achieve those functional effects you step
    outside of the line of demarcation that I drew.

    fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90 nop
    That says that it will jump to memory address: 0xfffffff5 + 0x7fffffff
    That it does not actually do this is an implementation detail that is
    outside of the scope of the abstraction.

    That it cannot do this because the 32-bit EIP cannot hold a memory
    address of that size is also an implementation detail that is outside of
    the scope of the line of demarcation that I drew.

    If we define the abstract model of computation of the x86 language only
    by the functional results that its instructions were designed to achieve
    then this abstract model does specify unlimited memory access.

    If we also include how the x86 architecture achieves these functional
    results then the x86 language does not specify unlimited memory access.


    It's great that you're no longer making that claim, but I'm annoyed
    that I spent so much time and effort arguing with you about it,
    you've effectively changed your argument to be consistent with what
    I've been saying, and you pretend that that never happened.

    Now, what was it you had when you falsely claimed to have Linz's H and H^?
    That's the only interesting part of this whole sorry mess -- your claim
    to have something impossible. That unbounded register machines are
    Turing complete, unlike the x86, is just computability 101, despite it
    taking you weeks to get there.

    A lie requires an intentional falsehood. Human communication generally
    allows some subjective leeway of interpretation. I have always
    construed everything computationally identical to a Turing machine as
    one-and-the-same thing as a Turing Machine. That you do not define it
    this way does not make me a liar.

    Now that I have provided a way to directly map the "C" programming
    language to an abstract model of computation that is Turing complete
    there is a direct bridge to show that programs specified in the "C"
    programming language specify Turing complete virtual machines.



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Wed Aug 26 16:32:09 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/25/2020 8:07 PM, olcott wrote:
    On 8/25/2020 7:07 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 2:29 PM, Keith Thompson wrote:
    <cut>
    A relative JMP is defined in terms of its effect on EIP, which is
    specifed in that document as a 32-bit register.  This directly
    contradicts your claim.
    <cut>
    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing code have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

    Finally!  (Bravo Keith.)  You will never admit that this change of
    gears means you were wrong, but at least there is now no need for you
    to keep saying the same wrong thing over and over again.

    I was not wrong when I defined the line of demarcation between the
    abstract model and the physical implementation where I did.

    That you refuse to draw this line of demarcation where I specified
    does not make me wrong.

    I never said that drawing the line where you did was wrong.  What you
    were wrong about was your claim that your choice of where to draw the
    line was forced by the "x86 language" that you never fully defined.


    I defined exactly how to interpret the hundreds of pages of existing
    documentation and provided a link.

    You defined (or tried to define) an abstract model based on the x86.
    Nothing wrong with that.  You defined some aspects of the x86
    definition as part of the abstraction and others as implementation
    details.  Nothing wrong with that either.

    OK

    But you insisted that the
    specific line between abstraction and implementation details would
    be obvious to anyone who read the references you cited -- when in
    fact those references made it clear that the instruction pointer
    has a fixed width of no more than 64 bits, and that jumping to an
    address exceeding 2**64 could not make sense given the published
    x86 specifications.

    You got these basic facts incorrect: it is 32-bits not 64-bits. I don't
    think any machine on Earth has 2^64 bytes of RAM.

    When I drew the line of demarcation at the functional effect that the
    instruction was designed to achieve, not including any of the details
    of how it achieves this functional effect, I am still correct that the
    32-bit x86 language does specify unlimited memory access.

    When you say that this simply will not work in terms of how the
    processor is defined to achieve those functional effects you step
    outside of the line of demarcation that I drew.

    fffffff0: e9ffffff7f        jmp 0xfffffff5 + 0x7fffffff
    fffffff5: 90                nop
    That says that it will jump to memory address: 0xfffffff5 + 0x7fffffff
    That it does not actually do this is an implementation detail that is
    outside of the scope of the abstraction.

    That it cannot do this because the 32-bit EIP cannot hold a memory
    address of that size is also an implementation detail that is outside of
    the scope of the line of demarcation that I drew.

    If we define the abstract model of computation of the x86 language only
    by the functional results that its instructions were designed to achieve
    then this abstract model does specify unlimited memory access.

    If we also include how the x86 architecture achieves these functional
    results then the x86 language does not specify unlimited memory access.

    I will take the lack of response as an indication that I finally found
    words clear enough to prove this point.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to olcott on Wed Aug 26 15:40:21 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 8:07 PM, olcott wrote:
    [...]
    If we define the abstract model of computation of the x86 language
    only by the functional results that its instructions were designed
    to achieve then this abstract model does specify unlimited memory
    access.

    If we also include how the x86 architecture achieves these
    functional results then the x86 language does not specify unlimited
    memory access.

    I will take the lack of response as an indication that I finally found
    words clear enough to prove this point.

    You can take the lack of response as anything you like. You'd be
    wrong, but I'm ok with that.

    In fact I didn't respond because I didn't think there would be any point
    in doing so. You have not proven anything. Since you've now "switched
    gears", I decided not to spend any more time on it.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Wed Aug 26 18:12:16 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/26/2020 5:40 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 8:07 PM, olcott wrote:
    [...]
    If we define the abstract model of computation of the x86 language
    only by the functional results that its instructions were designed
    to achieve then this abstract model does specify unlimited memory
    access.

    If we also include how the x86 architecture achieves these
    functional results then the x86 language does not specify unlimited
    memory access.

    I will take the lack of response as an indication that I finally found
    words clear enough to prove this point.

    You can take the lack of response as anything you like. You'd be
    wrong, but I'm ok with that.

    In fact I didn't respond because I didn't think there would be any point
    in doing so. You have not proven anything. Since you've now "switched gears", I decided not to spend any more time on it.


    So you either implicitly acknowledge that there is nothing more to say
    about it because I did prove my point and you know it.

    Or the distinction between the functional definition of the x86 language
    (what each instruction was designed to achieve and nothing more) and alternative definitions that include how the functional result is
    achieved is still too subtle for you to grasp.

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Keith Thompson on Wed Aug 26 19:52:21 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/26/2020 7:31 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/26/2020 5:40 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 8:07 PM, olcott wrote:
    [...]
    If we define the abstract model of computation of the x86 language
    only by the functional results that its instructions were designed
    to achieve then this abstract model does specify unlimited memory
    access.

    If we also include how the x86 architecture achieves these
    functional results then the x86 language does not specify unlimited
    memory access.

    I will take the lack of response as an indication that I finally found
    words clear enough to prove this point.

    You can take the lack of response as anything you like. You'd be
    wrong, but I'm ok with that.

    In fact I didn't respond because I didn't think there would be any point
    in doing so. You have not proven anything. Since you've now "switched
    gears", I decided not to spend any more time on it.

    So you either implicitly acknowledge that there is nothing more to say
    about it because I did prove my point and you know it.

    Nope.

    Or the distinction between the functional definition of the x86
    language (what each instruction was designed to achieve and nothing
    more) and alternative definitions that include how the functional
    result is achieved is still too subtle for you to grasp.

    Nope.


    I assert that your disagreement has no possible correct basis.
    I could have simply said: Nope^2


    --
    Copyright 2020 Pete Olcott

  • From Keith Thompson@21:1/5 to olcott on Wed Aug 26 17:31:01 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    On 8/26/2020 5:40 PM, Keith Thompson wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/25/2020 8:07 PM, olcott wrote:
    [...]
    If we define the abstract model of computation of the x86 language
    only by the functional results that its instructions were designed
    to achieve then this abstract model does specify unlimited memory
    access.

    If we also include how the x86 architecture achieves these
    functional results then the x86 language does not specify unlimited
    memory access.

    I will take the lack of response as an indication that I finally found
    words clear enough to prove this point.

    You can take the lack of response as anything you like. You'd be
    wrong, but I'm ok with that.

    In fact I didn't respond because I didn't think there would be any point
    in doing so. You have not proven anything. Since you've now "switched
    gears", I decided not to spend any more time on it.

    So you either implicitly acknowledge that there is nothing more to say
    about it because I did prove my point and you know it.

    Nope.

    Or the distinction between the functional definition of the x86
    language (what each instruction was designed to achieve and nothing
    more) and alternative definitions that include how the functional
    result is achieved is still too subtle for you to grasp.

    Nope.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips Healthcare
    void Void(void) { Void(); } /* The recursive call of the void */

  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Fri Aug 28 10:42:15 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-28 10:25, olcott wrote:
    On 8/28/2020 10:39 AM, André G. Isaak wrote:
    On 2020-08-28 09:12, olcott wrote:
    On 8/25/2020 10:12 PM, olcott wrote:
    On 8/25/2020 9:37 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
    <cut>
    Now, what was it you had when falsely claimed to have Linz's H
    and H^?

    Note: unanswered.

    That's the only interesting part of this whole sorry mess -- your claim
    to have something impossible.  That unbounded register machines are
    Turing complete, unlike the x86, is just computability 101, despite it
    taking you weeks to get there.

    A lie requires an intentional falsehood. Human communication generally
    allows some subjective leeway of interpretation.  I have always
    construed everything computationally identical to a Turing machine as
    one-and-the-same thing as a Turing Machine.

    No, you did not always do that.  That excuse came later.  At the time
    you made every effort to persuade anyone reading your posts that you
    meant a Turing machine as defined by the book you kept referencing
    (and, for that matter, by everyone else):

       "The key thing is the H and Ĥ are 100% fully encoded as actual >>>>> Turing
       machines."

       "it is exactly and precisely the Peter Linz H and Ĥ"


    Yes and the halting decider aspect of this was fully encoded as
    pseudo-code essentially identical to "C". The inessential non-halt
    deciding aspect of this was not encoded.

    That you do not define it
    this way does not make me a liar.

    Indeed.  You started out either deluded or wrong.  What makes you a liar
    is the deception that followed your deluded post of Dec 2018.  On Jan
    4th you had only "a pair of virtual machines functionally identical to H
    and Ĥ" and by 12th Jan 2019 you claimed to have had only "an algorithm"

    So you don't understand that an algorithm and a virtual machine are
    one-and-the-same thing?

    in Dec 2018 and that it was not encoded until "about a week ago"

    It was encoded the very same night that my little dog died.

    (i.e. some time after having "fully encoded actual Turing machines").
    A "C" program that is computationally identical to a Turing machine
    implementing the same algorithm is a fully encoded virtual machine.

    Every post altering the story a little but there was never any admission
    that you did not have what you had claimed to have.  Even to this day
    you have not admitted that you did not have "exactly and precisely the
    Peter Linz H and Ĥ ... fully encoded as actual Turing machines".


    I only had the key halt deciding aspect fully encoded.
    If you think it is an exaggeration to say that I had it fully
    encoded on the basis that I did not have the conventional ordinary
    skill in the art aspects encoded, I would say that you are being
    unreasonable.

    If you say that I did not have a Turing machine because I did not
    have a five-tuple machine I say that you and I define our terms
    differently.
    Any code that is functionally identical to a Turing machine is for
    all practical purposes one-and-the-same as a Turing machine.

    The story has continued to unravel, including the more recent idea that
    pretty much anything can be called a Turing machine, but the clinching
    evidence is that you refuse, to this day, to say what you really had
    when you made that deluded post in Dec 2018.  I invite you, yet again,
    to come clean.  What did you really have on that day?


    I had the halt deciding aspect of H_Hat correctly deciding halting
    on itself fully encoded in pseudo-code that was essentially
    identical to "C". No other aspect of H_Hat was encoded at all. It
    was not a five-tuple machine it was a snippet of "C". Since I was
    doing it with pen and paper and I only had two pages of paper some
    of the syntactical requirements of "C" may have been skipped.


    Only because the snippet of "c" equivalent code contained every
    single detail of exactly how H_Hat would correctly decide halting on
    itself and all the remaining details of converting this snippet of
    code into a complete virtual machine required no more than ordinary
    skill-of-the-art of software engineering can my claim be construed as
    reasonable and not a wild exaggeration.

    So why not just post that snippet?

    André


    In its current state it would not be accurately reviewed.

    If the snippet really does contain "every single detail of exactly how
    H_Hat would correctly decide halting on itself", then how could it not
    be accurately reviewed in its current state?


    Only by seeing this snippet in the context of its execution in an x86
    based UTM will it be able to be accurately reviewed. Since I created one

    Once again, you need to stop using the term 'UTM' when clearly you do
    not mean UTM. A UTM is a Turing Machine. Turing Machines are described
    by a septuple as has been previously discussed. What you have (or claim
    to have) is an x86-based computer program.

    Why would your snippet require such a program to be accurately reviewed?
    Or at all, for that matter?

    André

    single instance of H_Hat correctly deciding halting on itself I created
    two whole classes of such instances. This update is enormously more
    robust than my 2018-12-13 7:00 PM work.

    It has consistently been the case that all of my work posted on these
    forums has been very consistently dismissed out-of-hand until I address
    every single detail of such a dismissal proving point-by-point,
    item-by-item, detail-by-detail that every possible dismissal is baseless.

    Normally people never provide any details whatsoever of why my work is
    dismissed. I have to very rigorously challenge them to provide a basis
    or no basis whatsoever is provided. In the final analysis the ultimate
    "reason" that my work is dismissed is that people really really don't
    believe that I am correct. They can never pin down an exhaustive set of
    complete reasoning supporting this belief.

    To say this in other words all of the dismissal of my work has been
    entirely based on my lack of credibility it has never been on the basis
    of pointing out actual errors in the essence of my reasoning.

    With my halting problem work the code totally speaks for itself. My incompleteness theorem work requires a complete paradigm shift to be understood.




    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From olcott@21:1/5 to All on Fri Aug 28 11:25:20 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/28/2020 10:39 AM, André G. Isaak wrote:
    On 2020-08-28 09:12, olcott wrote:
    On 8/25/2020 10:12 PM, olcott wrote:
    On 8/25/2020 9:37 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
    <cut>
    Now, what was it you had when falsely claimed to have Linz's H and H^?

    Note: unanswered.

    That's the only interesting part of this whole sorry mess -- your claim
    to have something impossible.  That unbounded register machines are
    Turing complete, unlike the x86, is just computability 101, despite it
    taking you weeks to get there.

    A lie requires an intentional falsehood. Human communication generally
    allows some subjective leeway of interpretation.  I have always
    construed everything computationally identical to a Turing machine as
    one-and-the-same thing as a Turing Machine.

    No, you did not always do that.  That excuse came later.  At the time
    you made every effort to persuade anyone reading your posts that you
    meant a Turing machine as defined by the book you kept referencing
    (and, for that matter, by everyone else):

       "The key thing is the H and Ĥ are 100% fully encoded as actual
    Turing
       machines."

       "it is exactly and precisely the Peter Linz H and Ĥ"


    Yes and the halting decider aspect of this was fully encoded as
    pseudo-code essentially identical to "C". The inessential non-halt
    deciding aspect of this was not encoded.

    That you do not define it
    this way does not make me a liar.

    Indeed.  You started out either deluded or wrong.  What makes you a liar
    is the deception that followed your deluded post of Dec 2018.  On Jan
    4th you had only "a pair of virtual machines functionally identical to H
    and Ĥ" and by 12th Jan 2019 you claimed to have had only "an algorithm"
    So you don't understand that an algorithm and a virtual machine are
    one-and-the-same thing?

    in Dec 2018 and that it was not encoded until "about a week ago"

    It was encoded the very same night that my little dog died.

    (i.e. some time after having "fully encoded actual Turning machines").

    A "C" program that is computationally identical to a Turing machine
    implementing the same algorithm is a fully encoded virtual machine.

    Every post altering the story a little but there was never any admission
    that you did not have what you had claimed to have.  Even to this day
    you have not admitted that you did not have "exactly and precisely the
    Peter Linz H and Ĥ ... fully encoded as actual Turing machines".


    I only had the key halt deciding aspect fully encoded.
    If you think it is an exaggeration to say that I had it fully encoded
    on the basis that I did not have the conventional ordinary skill in
    the art aspects encoded, I would say that you are being unreasonable.

    If you say that I did not have a Turing machine because I did not
    have a five-tuple machine I say that you and I define our terms
    differently.
    Any code that is functionally identical to a Turing machine is for
    all practical purposes one-and-the-same as a Turing machine.

    The story has continued to unravel, including the more recent idea that
    pretty much anything can be called a Turing machine, but the clinching
    evidence is that you refuse, to this day, to say what you really had
    when you made that deluded post in Dec 2018.  I invite you, yet again,
    to come clean.  What did you really have on that day?


    I had the halt deciding aspect of H_Hat correctly deciding halting on
    itself fully encoded in pseudo-code that was essentially identical to
    "C". No other aspect of H_Hat was encoded at all. It was not a
    five-tuple machine it was a snippet of "C". Since I was doing it with
    pen and paper and I only had two pages of paper some of the
    syntactical requirements of "C" may have been skipped.


    Only because the snippet of "c" equivalent code contained every single
    detail of exactly how H_Hat would correctly decide halting on itself
    and all the remaining details of converting this snippet of code into
    a complete virtual machine required no more than ordinary
    skill-of-the-art of software engineering can my claim be construed as
    reasonable and not a wild exaggeration.

    So why not just post that snippet?

    André


    In its current state it would not be accurately reviewed.
    Only by seeing this snippet in the context of its execution in an x86
    based UTM will it be able to be accurately reviewed. Since I created one
    single instance of H_Hat correctly deciding halting on itself I created
    two whole classes of such instances. This update is enormously more
    robust than my 2018-12-13 7:00 PM work.

    It has consistently been the case that all of my work posted on these
    forums has been very consistently dismissed out-of-hand until I address
    every single detail of such a dismissal proving point-by-point,
    item-by-item, detail-by-detail that every possible dismissal is baseless.

    Normally people never provide any details whatsoever of why my work is
    dismissed. I have to very rigorously challenge them to provide a basis
    or no basis whatsoever is provided. In the final analysis the ultimate
    "reason" that my work is dismissed is that people really really don't
    believe that I am correct. They can never pin down an exhaustive set of
    complete reasoning supporting this belief.

    To say this in other words all of the dismissal of my work has been
    entirely based on my lack of credibility it has never been on the basis
    of pointing out actual errors in the essence of my reasoning.

    With my halting problem work the code totally speaks for itself. My incompleteness theorem work requires a complete paradigm shift to be understood.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to olcott on Fri Aug 28 11:45:06 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/28/2020 11:25 AM, olcott wrote:
    On 8/28/2020 10:39 AM, André G. Isaak wrote:
    On 2020-08-28 09:12, olcott wrote:
    On 8/25/2020 10:12 PM, olcott wrote:
    On 8/25/2020 9:37 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
    <cut>
    Now, what was it you had when falsely claimed to have Linz's H
    and H^?

    Note: unanswered.

    That's the only interesting part of this whole sorry mess -- your claim
    to have something impossible.  That unbounded register machines are
    Turing complete, unlike the x86, is just computability 101, despite it
    taking you weeks to get there.

    A lie requires an intentional falsehood. Human communication generally
    allows some subjective leeway of interpretation.  I have always
    construed everything computationally identical to a Turing machine as
    one-and-the-same thing as a Turing Machine.

    No, you did not always do that.  That excuse came later.  At the time
    you made every effort to persuade anyone reading your posts that you
    meant a Turing machine as defined by the book you kept referencing
    (and, for that matter, by everyone else):

       "The key thing is the H and Ĥ are 100% fully encoded as actual >>>>> Turing
       machines."

       "it is exactly and precisely the Peter Linz H and Ĥ"


    Yes and the halting decider aspect of this was fully encoded as
    pseudo-code essentially identical to "C". The inessential non-halt
    deciding aspect of this was not encoded.

    That you do not define it
    this way does not make me a liar.

    Indeed.  You started out either deluded or wrong.  What makes you a liar
    is the deception that followed your deluded post of Dec 2018.  On Jan
    4th you had only "a pair of virtual machines functionally identical to H
    and Ĥ" and by 12th Jan 2019 you claimed to have had only "an algorithm"

    So you don't understand that an algorithm and a virtual machine are
    one-and-the-same thing?

    in Dec 2018 and that it was not encoded until "about a week ago"

    It was encoded the very same night that my little dog died.

    (i.e. some time after having "fully encoded actual Turing machines").
    A "C" program that is computationally identical to a Turing machine
    implementing the same algorithm is a fully encoded virtual machine.

    Every post altering the story a little but there was never any admission
    that you did not have what you had claimed to have.  Even to this day
    you have not admitted that you did not have "exactly and precisely the
    Peter Linz H and Ĥ ... fully encoded as actual Turing machines".


    I only had the key halt deciding aspect fully encoded.
    If you think it is an exaggeration to say that I had it fully
    encoded on the basis that I did not have the conventional ordinary
    skill in the art aspects encoded, I would say that you are being
    unreasonable.

    If you say that I did not have a Turing machine because I did not
    have a five-tuple machine I say that you and I define our terms
    differently.
    Any code that is functionally identical to a Turing machine is for
    all practical purposes one-and-the-same as a Turing machine.

    The story has continued to unravel, including the more recent idea that
    pretty much anything can be called a Turing machine, but the clinching
    evidence is that you refuse, to this day, to say what you really had
    when you made that deluded post in Dec 2018.  I invite you, yet again,
    to come clean.  What did you really have on that day?


    I had the halt deciding aspect of H_Hat correctly deciding halting
    on itself fully encoded in pseudo-code that was essentially
    identical to "C". No other aspect of H_Hat was encoded at all. It
    was not a five-tuple machine it was a snippet of "C". Since I was
    doing it with pen and paper and I only had two pages of paper some
    of the syntactical requirements of "C" may have been skipped.


    Only because the snippet of "c" equivalent code contained every
    single detail of exactly how H_Hat would correctly decide halting on
    itself and all the remaining details of converting this snippet of
    code into a complete virtual machine required no more than ordinary
    skill-of-the-art of software engineering can my claim be construed as
    reasonable and not a wild exaggeration.

    So why not just post that snippet?

    André


    In its current state it would not be accurately reviewed.
    Only by seeing this snippet in the context of its execution in an x86
    based UTM will it be able to be accurately reviewed. Since I created one single instance of H_Hat correctly deciding halting on itself I created
    two whole classes of such instances. This update is enormously more
    robust than my 2018-12-13 7:00 PM work.

    It has consistently been the case that all of my work posted on these
    forums has been very consistently dismissed out-of-hand until I address
    every single detail of such a dismissal proving point-by-point,
    item-by-item, detail-by-detail that every possible dismissal is baseless.

    Normally people never provide any details whatsoever of why my work is
    dismissed. I have to very rigorously challenge them to provide a basis
    or no basis whatsoever is provided. In the final analysis the ultimate
    "reason" that my work is dismissed is that people really really don't
    believe that I am correct. They can never pin down an exhaustive set of
    complete reasoning supporting this belief.

    To say this in other words all of the dismissal of my work has been
    entirely based on my lack of credibility it has never been on the basis
    of pointing out actual errors in the essence of my reasoning.

    With my halting problem work the code totally speaks for itself. My incompleteness theorem work requires a complete paradigm shift to be understood.



    Since one key aspect of righteousness requires maintaining a
    lighthearted spirit I will make a silly little pun:

    If you had two 10 cent coins you would have a pair-o-dime(s).
    If you moved them around you would have a pair-o-dime shift.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Fri Aug 28 11:59:43 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/28/2020 11:42 AM, André G. Isaak wrote:
    On 2020-08-28 10:25, olcott wrote:
    On 8/28/2020 10:39 AM, André G. Isaak wrote:
    On 2020-08-28 09:12, olcott wrote:
    On 8/25/2020 10:12 PM, olcott wrote:
    On 8/25/2020 9:37 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
    <cut>
    Now, what was it you had when falsely claimed to have Linz's H and H^?

    Note: unanswered.

    That's the only interesting part of this whole sorry mess -- your claim
    to have something impossible.  That unbounded register machines are
    Turing complete, unlike the x86, is just computability 101, despite it
    taking you weeks to get there.

    A lie requires an intentional falsehood. Human communication generally
    allows some subjective leeway of interpretation.  I have always
    construed everything computationally identical to a Turing machine as
    one-and-the-same thing as a Turing Machine.

    No, you did not always do that.  That excuse came later.  At the time
    you made every effort to persuade anyone reading your posts that you
    meant a Turing machine as defined by the book you kept referencing
    (and, for that matter, by everyone else):

       "The key thing is the H and Ĥ are 100% fully encoded as actual >>>>>> Turing
       machines."

       "it is exactly and precisely the Peter Linz H and Ĥ"


    Yes and the halting decider aspect of this was fully encoded as
    pseudo-code essentially identical to "C". The inessential non-halt
    deciding aspect of this was not encoded.

    That you do not define it
    this way does not make me a liar.

    Indeed.  You started out either deluded or wrong.  What makes you a liar
    is the deception that followed your deluded post of Dec 2018.  On Jan
    4th you had only "a pair of virtual machines functionally identical to H
    and Ĥ" and by 12th Jan 2019 you claimed to have had only "an algorithm"

    So you don't understand that an algorithm and a virtual machine are
    one-and-the-same thing?

    in Dec 2018 and that it was not encoded until "about a week ago"

    It was encoded the very same night that my little dog died.

    (i.e. some time after having "fully encoded actual Turing machines").

    A "C" program that is computationally identical to a Turing machine
    implementing the same algorithm is a fully encoded virtual machine.

    Every post altering the story a little but there was never any admission
    that you did not have what you had claimed to have.  Even to this day
    you have not admitted that you did not have "exactly and precisely the
    Peter Linz H and Ĥ ... fully encoded as actual Turing machines".


    I only had the key halt deciding aspect fully encoded.
    If you think it is an exaggeration to say that I had it fully
    encoded on the basis that I did not have the conventional ordinary
    skill in the art aspects encoded, I would say that you are being
    unreasonable.

    If you say that I did not have a Turing machine because I did not
    have a five-tuple machine I say that you and I define our terms
    differently.
    Any code that is functionally identical to a Turing machine is for
    all practical purposes one-and-the-same as a Turing machine.

    The story has continued to unravel, including the more recent idea that
    pretty much anything can be called a Turing machine, but the clinching
    evidence is that you refuse, to this day, to say what you really had
    when you made that deluded post in Dec 2018.  I invite you, yet again,
    to come clean.  What did you really have on that day?


    I had the halt deciding aspect of H_Hat correctly deciding halting
    on itself fully encoded in pseudo-code that was essentially
    identical to "C". No other aspect of H_Hat was encoded at all. It
    was not a five-tuple machine it was a snippet of "C". Since I was
    doing it with pen and paper and I only had two pages of paper some
    of the syntactical requirements of "C" may have been skipped.


    Only because the snippet of "c" equivalent code contained every
    single detail of exactly how H_Hat would correctly decide halting on
    itself and all the remaining details of converting this snippet of
    code into a complete virtual machine required no more than ordinary
    skill-of-the-art of software engineering can my claim be construed
    as reasonable and not a wild exaggeration.

    So why not just post that snippet?

    André


    In its current state it would not be accurately reviewed.

    If the snippet really does contain "every single detail of exactly how
    H_Hat would correctly decide halting on itself", then how could it not
    be accurately reviewed in its current state?


    It is a systematic error of reasoning that seems to be the predominant
    aspect of human reasoning that is directly hard-wired into brains called
    bias. It is bias that makes the human mind woefully fallible.


    Only by seeing this snippet in the context of its execution in an x86
    based UTM will it be able to be accurately reviewed. Since I created one

    Once again, you need to stop using the term 'UTM' when clearly you do
    not mean UTM. A UTM is a Turing Machine. Turing Machines are described
    by a septuple as has been previously discussed. What you have (or claim
    to have) is an x86-based computer program.


    Any virtual machine that is computationally identical to a Turing
    machine is for all practical purposes an actual Turing machine. Now that
    I have shown how the abstract machine defined by the x86 language has
    access to unlimited memory I have shown that this abstract machine is computationally identical to a Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
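    [Editor's note: the claim above turns on on-demand growth of storage.
    A minimal sketch of that idea, with a hypothetical `Tape` type and
    `tape_touch` helper invented here for illustration: growth is unbounded
    only in the abstract, since any real run is bounded by whatever memory
    the host (here, `realloc`) can actually provide -- which is exactly the
    point in dispute.]

    ```c
    /* Sketch: a grow-on-demand tape. Hypothetical names; not code from
       the thread. realloc failure is the real-world memory bound. */
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        unsigned char *cells;
        size_t len;
    } Tape;

    /* Ensure the tape covers index i, doubling capacity as needed.
       Returns 0 when the host can provide no more memory. */
    static int tape_touch(Tape *t, size_t i) {
        if (i >= t->len) {
            size_t n = t->len ? t->len : 1;
            while (n <= i) n *= 2;
            unsigned char *p = realloc(t->cells, n);
            if (!p) return 0;                    /* the real-world bound */
            memset(p + t->len, 0, n - t->len);   /* blank the new cells */
            t->cells = p;
            t->len = n;
        }
        return 1;
    }

    int main(void) {
        Tape t = {0, 0};
        assert(tape_touch(&t, 10000));   /* grows well past initial size */
        t.cells[10000] = 1;
        assert(t.len >= 10001 && t.cells[10000] == 1);
        free(t.cells);
        return 0;
    }
    ```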

    Why would your snippet require such a program to be accurately reviewed?
    Or at all, for that matter?

    André


    The key woefully fallible aspect of human nature called bias.

    single instance of H_Hat correctly deciding halting on itself I
    created two whole classes of such instances. This update is enormously
    more robust than my 2018-12-13 7:00 PM work.

    It has consistently been the case that all of my work posted on these
    forums has been very consistently dismissed out-of-hand until I
    address every single detail of such a dismissal proving
    point-by-point, item-by-item, detail-by-detail that every possible
    dismissal is baseless.

    Normally people never provide any details whatsoever of why my work
    is dismissed. I have to very rigorously challenge them to provide a
    basis or no basis whatsoever is provided. In the final analysis the
    ultimate "reason" that my work is dismissed is that people really
    really don't believe that I am correct. They can never pin down an
    exhaustive set of complete reasoning supporting this belief.

    To say this in other words all of the dismissal of my work has been
    entirely based on my lack of credibility it has never been on the
    basis of pointing out actual errors in the essence of my reasoning.

    With my halting problem work the code totally speaks for itself. My
    incompleteness theorem work requires a complete paradigm shift to be
    understood.






    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Sat Aug 29 08:27:50 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/28/2020 11:34 PM, André G. Isaak wrote:
    On 2020-08-28 11:50, olcott wrote:
    On 8/28/2020 12:19 PM, André G. Isaak wrote:
    On 2020-08-28 11:08, olcott wrote:

    Any virtual machine that is computationally identical to a Turing
    machine is for all practical purposes an actual Turing machine. Now
    that I have shown how the abstract machine defined by the x86
    language has access to unlimited memory I have shown that this
    abstract machine is computationally identical to a Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access




    Before you claimed to have shown that x86 was Turing Complete. (You
    didn't, but that was your claim).

    But above you are now claiming that it is Turing Equivalent. Those
    are not the same thing.

    If you could show that x86 was Turing Equivalent, then you could
    argue that for every x86 program, a corresponding Turing Machine must
    exist. This, however, does not follow from x86 being Turing Complete.


    Turing completeness
    A computational system that can compute every Turing-computable
    function is called Turing-complete (or Turing-powerful).
    Alternatively, such a system is one that can simulate a universal
    Turing machine.

    Turing equivalence
    A Turing-complete system is called Turing-equivalent if every function
    it can compute is also Turing-computable; i.e., it computes precisely
    the same class of functions as do Turing machines. Alternatively, a
    Turing-equivalent system is one that can simulate, and be simulated
    by, a universal Turing machine.

    Within the Church-Turing thesis we get essentially that a Turing
    machine can compute any damn thing that can possibly be computed.

        There are various equivalent formulations of the
        Church-Turing thesis. A common one is that every
        effective computation can be carried out by a
        Turing machine. https://plato.stanford.edu/entries/church-turing/

    A Turing complete language is defined to compute this same set.

    This makes the conventional term of the art: "Turing equivalence"
    redundant, and thus "Turing completeness" and "Turing equivalence" are
    merely different words for the exact same thing.
    https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions

    You seem to be missing my point.

    It is possible that Turing Complete and Turing Equivalent have the same extension, but they mean different things.

    So a Turing machine can accept the entire set of recursively enumerable languages and another machine can accept this same entire set of
    recursively enumerable languages and somehow through the magic of
    ambiguity one set is larger than the other?

    I am pointing out that you
    are being sloppy with your terminology, just as you are being sloppy
    with your use of terms like 'Turing Machine' and UTM.

    You claim that you want to eventually publish your work. Even if it
    turns out that you have something interesting to show, no journal is
    going to take you seriously if your presentation is so rife with misuse
    of terminology since it won't be clear what you are trying to say.

    When people here point out your terminological abuse, they are trying to
    be *helpful*. They are pointing out that you need to ensure that what
    you say is what you actually mean, and that the only way to actually
    achieve this is to master the terminology of the field. When errors in terminology are pointed out, you are far better off to acknowledge those errors and correct your usage than to go to absurd lengths to try to
    justify your redefinition of accepted terms.

    André

    And even if you could demonstrate the existence of a Turing Machine
    which performs the same computation as your x86 program, that
    wouldn't make your x86 program a Turing Machine.

    If you want to be understood, why not invest the time in learning
    existing terminology and then using it correctly (by which I mean the
    same way the rest of the world uses it)? Then people wouldn't
    constantly argue with you over all those things which you consider to
    be pesky details.

    André

    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Sat Aug 29 07:44:00 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-29 07:27, olcott wrote:
    On 8/28/2020 11:34 PM, André G. Isaak wrote:
    On 2020-08-28 11:50, olcott wrote:
    On 8/28/2020 12:19 PM, André G. Isaak wrote:
    On 2020-08-28 11:08, olcott wrote:

    Any virtual machine that is computationally identical to a Turing
    machine is for all practical purposes an actual Turing machine. Now
    that I have shown how the abstract machine defined by the x86
    language has access to unlimited memory I have shown that this
    abstract machine is computationally identical to a Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access





    Before you claimed to have shown that x86 was Turing Complete. (You
    didn't, but that was your claim).

    But above you are now claiming that it is Turing Equivalent. Those
    are not the same thing.

    If you could show that x86 was Turing Equivalent, then you could
    argue that for every x86 program, a corresponding Turing Machine
    must exist. This, however, does not follow from x86 being Turing
    Complete.


    Turing completeness
    A computational system that can compute every Turing-computable
    function is called Turing-complete (or Turing-powerful).
    Alternatively, such a system is one that can simulate a universal
    Turing machine.

    Turing equivalence
    A Turing-complete system is called Turing-equivalent if every
    function it can compute is also Turing-computable; i.e., it computes
    precisely the same class of functions as do Turing machines.
    Alternatively, a Turing-equivalent system is one that can simulate,
    and be simulated by, a universal Turing machine.

    Within the Church-Turing thesis we get essentially that a Turing
    machine can compute any damn thing that can possibly be computed.

        There are various equivalent formulations of the
        Church-Turing thesis. A common one is that every
        effective computation can be carried out by a
        Turing machine. https://plato.stanford.edu/entries/church-turing/
    A Turing complete language is defined to compute this same set.

    This makes the conventional term of the art: "Turing equivalence"
    redundant, and thus "Turing completeness" and "Turing equivalence"
    are merely different words for the exact same thing.
    https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions

    You seem to be missing my point.

    It is possible that Turing Complete and Turing Equivalent have the
    same extension, but they mean different things.

    So a Turing machine can accept the entire set of recursively enumerable languages and another machine can accept this same entire set of
    recursively enumerable languages and somehow through the magic of
    ambiguity one set is larger than the other?

    Something is Turing Equivalent if the set of problems it can compute
    is identical to that of a Turing Machine.

    Something is Turing Complete if the set of problems it can compute is
    identical to *or greater than* that of a Turing Machine.

    If it could be shown that everything that can be computed can be
    computed by a Turing Machine, then those two would have the same
    extension. But this is *not* something that has been proven.

    If you somehow managed to prove that your (still undefined) 'x86
    language' was Turing complete, if your program happened to be in that
    'greater than' portion of Turing Complete, then there would be no
    corresponding Turing Machine.

    I'm not saying this is likely. I am pointing out that you are using your
    terms incorrectly.

    And even if there is a Turing Machine which does the same computation as
    your program, that still doesn't make your program a Turing Machine.

    Again, you are using the terms incorrectly. If you don't want to deal
    with Turing Machines, then talk about C programs or x86 programs, not
    Turing Machines. A program that decided halting in C would be no less interesting (and no less impossible), so why do you insist on calling it
    a Turing Machine?

    André


    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?B?QW5kcsOpIEcuIElzYWFr?=@21:1/5 to olcott on Sat Aug 29 08:58:05 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-29 08:26, olcott wrote:
    On 8/29/2020 8:44 AM, André G. Isaak wrote:
    On 2020-08-29 07:27, olcott wrote:
    On 8/28/2020 11:34 PM, André G. Isaak wrote:
    On 2020-08-28 11:50, olcott wrote:
    On 8/28/2020 12:19 PM, André G. Isaak wrote:
    On 2020-08-28 11:08, olcott wrote:

    Any virtual machine that is computationally identical to a Turing
    machine is for all practical purposes an actual Turing machine.
    Now that I have shown how the abstract machine defined by the x86
    language has access to unlimited memory I have shown that this
    abstract machine is computationally identical to a Turing machine.
    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    Before you claimed to have shown that x86 was Turing Complete.
    (You didn't, but that was your claim).

    But above you are now claiming that it is Turing Equivalent. Those
    are not the same thing.

    If you could show that x86 was Turing Equivalent, then you could
    argue that for every x86 program, a corresponding Turing Machine
    must exist. This, however, does not follow from x86 being Turing
    Complete.


    Turing completeness
    A computational system that can compute every Turing-computable
    function is called Turing-complete (or Turing-powerful).
    Alternatively, such a system is one that can simulate a universal
    Turing machine.

    Turing equivalence
    A Turing-complete system is called Turing-equivalent if every
    function it can compute is also Turing-computable; i.e., it
    computes precisely the same class of functions as do Turing
    machines. Alternatively, a Turing-equivalent system is one that can
    simulate, and be simulated by, a universal Turing machine.

    Within the Church-Turing thesis we get essentially that a Turing
    machine can compute any damn thing that can possibly be computed.

        There are various equivalent formulations of the
        Church-Turing thesis. A common one is that every
        effective computation can be carried out by a
        Turing machine. https://plato.stanford.edu/entries/church-turing/
    A Turing complete language is defined to compute this same set.

    This makes the conventional term of the art: "Turing equivalence"
    redundant, and thus "Turing completeness" and "Turing equivalence"
    are merely different words for the exact same thing.
    https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions

    You seem to be missing my point.

    It is possible that Turing Complete and Turing Equivalent have the
    same extension, but they mean different things.

    So a Turing machine can accept the entire set of recursively
    enumerable languages and another machine can accept this same entire
    set of recursively enumerable languages and somehow through the
    magic of ambiguity one set is larger than the other?

    Something is Turing Equivalent if the set of problems it can
    compute is identical to that of a Turing Machine.

    Something is Turing Complete if the set of problems it can compute is
    identical to *or greater than* that of a Turing Machine.


    I knew that it said that, yet within the Church-Turing thesis this is
    impossible.

    Again, you are missing my point.

    The Church-Turing thesis is a *conjecture*. It is not something that has
    been proven.

    And even if it were proven, it still makes sense to use the term which
    more accurately captures what you mean to say. Just because two terms
    have the same extension doesn't make them interchangeable.

    If I want to claim that something moves at the speed of light, I would
    claim that v = c. I would not claim that v >= c and then justify this on
    the grounds that it isn't actually possible to travel faster than light, therefore the two mean the same thing. They don't mean the same thing.

    If it could be shown that everything that can be computed can be
    computed by a Turing Machine, then those two would have the same
    extension. But this is *not* something that has been proven.

    If you somehow managed to prove that your (still undefined) 'x86
    language' was Turing complete, if your program happened to be in that
    'greater than' portion of Turing Complete, then there would be no
    corresponding Turing Machine.

    I'm not saying this is likely. I am pointing out that you are using
    your terms incorrectly.


    If we start with Church-Turing as a "given" then the terms themselves
    are incorrect.

    And even if there is a Turing Machine which does the same computation
    as your program, that still doesn't make your program a Turing Machine.


    Two machines are functionally identical if these two machines compute
    the exact same set of functions and no more.

    Just because two things compute the same function doesn't make them the
    same thing.

    The following two programs might be considered to be functionally
    equivalent:

    #include <stdio.h>

    int main(void) {
        printf("hello, world!\n");
    }

    PROGRAM HELLO;
    BEGIN
      WRITELN('hello, world!');
    END.

    That doesn't mean one can refer to the second as a C program, nor the
    first as a Pascal program. And neither can be referred to as a Turing
    Machine.

    I don't understand your resistance to simply using terminology in a way
    that conforms with how everyone else uses it. Is it your goal to be
    misunderstood and/or to deliberately confuse?

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to I knew that it on Sat Aug 29 09:26:50 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 8:44 AM, André G. Isaak wrote:
    On 2020-08-29 07:27, olcott wrote:
    On 8/28/2020 11:34 PM, André G. Isaak wrote:
    On 2020-08-28 11:50, olcott wrote:
    On 8/28/2020 12:19 PM, André G. Isaak wrote:
    On 2020-08-28 11:08, olcott wrote:

    Any virtual machine that is computationally identical to a Turing
    machine is for all practical purposes an actual Turing machine.
    Now that I have shown how the abstract machine defined by the x86
    language has access to unlimited memory I have shown that this
    abstract machine is computationally identical to a Turing machine.
    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    Before you claimed to have shown that x86 was Turing Complete. (You
    didn't, but that was your claim).

    But above you are now claiming that it is Turing Equivalent. Those
    are not the same thing.

    If you could show that x86 was Turing Equivalent, then you could
    argue that for every x86 program, a corresponding Turing Machine
    must exist. This, however, does not follow from x86 being Turing
    Complete.


    Turing completeness
    A computational system that can compute every Turing-computable
    function is called Turing-complete (or Turing-powerful).
    Alternatively, such a system is one that can simulate a universal
    Turing machine.

    Turing equivalence
    A Turing-complete system is called Turing-equivalent if every
    function it can compute is also Turing-computable; i.e., it computes
    precisely the same class of functions as do Turing machines.
    Alternatively, a Turing-equivalent system is one that can simulate,
    and be simulated by, a universal Turing machine.

    Within the Church-Turing thesis we get essentially that a Turing
    machine can compute any damn thing that can possibly be computed.

        There are various equivalent formulations of the
        Church-Turing thesis. A common one is that every
        effective computation can be carried out by a
        Turing machine. https://plato.stanford.edu/entries/church-turing/
    A Turing complete language is defined to compute this same set.

    This makes the conventional term of the art: "Turing equivalence"
    redundant, and thus "Turing completeness" and "Turing equivalence"
    are merely different words for the exact same thing.
    https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions

    You seem to be missing my point.

    It is possible that Turing Complete and Turing Equivalent have the
    same extension, but they mean different things.

    So a Turing machine can accept the entire set of recursively
    enumerable languages and another machine can accept this same entire
    set of recursively enumerable languages and somehow through the magic
    of ambiguity one set is larger than the other?

    Something is Turing Equivalent if the set of problems it can compute
    is identical to that of a Turing Machine.

    Something is Turing Complete if the set of problems it can compute is identical to *or greater than* that of a Turing Machine.


    I knew that it said that, yet within the Church-Turing thesis this is
    impossible.

    If it could be shown that everything that can be computed can be
    computed by a Turing Machine, then those two would have the same
    extension. But this is *not* something that has been proven.

    If you somehow managed to prove that your (still undefined) 'x86
    language' was Turing complete, if your program happened to be in that 'greater than' portion of Turing Complete, then there would be no corresponding Turing Machine.

    I'm not saying this is likely. I am pointing out that you are using your terms incorrectly.


    If we start with Church-Turing as a "given" then the terms themselves
    are incorrect.

    And even if there is a Turing Machine which does the same computation as
    your program, that still doesn't make your program a Turing Machine.


    Two machines are functionally identical if these two machines compute
    the exact same set of functions and no more.

    Again, you are using the terms incorrectly. If you don't want to deal
    with Turing Machines, then talk about C programs or x86 programs, not
    Turing Machines. A program that decided halting in C would be no less interesting (and no less impossible), so why do you insist on calling it
    a Turing Machine?

    André




    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Sat Aug 29 10:12:38 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 9:58 AM, André G. Isaak wrote:
    On 2020-08-29 08:26, olcott wrote:
    On 8/29/2020 8:44 AM, André G. Isaak wrote:
    On 2020-08-29 07:27, olcott wrote:
    On 8/28/2020 11:34 PM, André G. Isaak wrote:
    On 2020-08-28 11:50, olcott wrote:
    On 8/28/2020 12:19 PM, André G. Isaak wrote:
    On 2020-08-28 11:08, olcott wrote:

    Any virtual machine that is computationally identical to a
    Turing machine is for all practical purposes an actual Turing
    machine. Now that I have shown how the abstract machine defined
    by the x86 language has access to unlimited memory I have shown
    that this abstract machine is computationally identical to a
    Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    Before you claimed to have shown that x86 was Turing Complete.
    (You didn't, but that was your claim).

    But above you are now claiming that it is Turing Equivalent.
    Those are not the same thing.

    If you could show that x86 was Turing Equivalent, then you could
    argue that for every x86 program, a corresponding Turing Machine
    must exist. This, however, does not follow from x86 being Turing
    Complete.


    Turing completeness
    A computational system that can compute every Turing-computable
    function is called Turing-complete (or Turing-powerful).
    Alternatively, such a system is one that can simulate a universal
    Turing machine.

    Turing equivalence
    A Turing-complete system is called Turing-equivalent if every
    function it can compute is also Turing-computable; i.e., it
    computes precisely the same class of functions as do Turing
    machines. Alternatively, a Turing-equivalent system is one that
    can simulate, and be simulated by, a universal Turing machine.

    Within the Church-Turing thesis we get essentially that a Turing
    machine can compute any damn thing that can possibly be computed.

        There are various equivalent formulations of the
        Church-Turing thesis. A common one is that every
        effective computation can be carried out by a
        Turing machine. https://plato.stanford.edu/entries/church-turing/
    A Turing complete language is defined to compute this same set.

    This makes the conventional term of the art: "Turing equivalence"
    redundant, and thus "Turing completeness" and "Turing equivalence"
    are merely different words for the exact same thing.
    https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
    You seem to be missing my point.

    It is possible that Turing Complete and Turing Equivalent have the
    same extension, but they mean different things.

    So a Turing machine can accept the entire set of recursively
    enumerable languages and another machine can accept this same entire
    set of recursively enumerable languages and somehow through the
    magic of ambiguity one set is larger than the other?

    Something is Turing Equivalent if the set of problems it can
    compute is identical to that of a Turing Machine.

    Something is Turing Complete if the set of problems it can compute is
    identical to *or greater than* that of a Turing Machine.


    I knew that it said that, yet within the Church-Turing thesis this is
    impossible.

    Again, you are missing my point.

    The Church-Turing thesis is a *conjecture*. It is not something that has
    been proven.

    And even if it were proven, it still makes sense to use the term which
    more accurately captures what you mean to say. Just because two terms
    have the same extension doesn't make them interchangeable.

    If I want to claim that something moves at the speed of light, I would
    claim that v = c. I would not claim that v >= c and then justify this on
    the grounds that it isn't actually possible to travel faster than light, therefore the two mean the same thing. They don't mean the same thing.

    If it could be shown that everything that can be computed can be
    computed by a Turing Machine, then those two would have the same
    extension. But this is *not* something that has been proven.

    If you somehow managed to prove that your (still undefined) 'x86
    language' was Turing complete, if your program happened to be in that
    'greater than' portion of Turing Complete, then there would be no
    corresponding Turing Machine.

    I'm not saying this is likely. I am pointing out that you are using
    your terms incorrectly.


    If we start with Church-Turing as a "given" then the terms themselves
    are incorrect.

    And even if there is a Turing Machine which does the same computation
    as your program, that still doesn't make your program a Turing Machine.


    Two machines are functionally identical if these two machines compute
    the exact same set of functions and no more.

    Just because two things compute the same function doesn't make them the
    same thing.

    The following two programs might be considered to be functionally
    equivalent:

    #include <stdio.h>

    int main(void) {
        printf("hello, world!\n");
    }

    PROGRAM HELLO;
    BEGIN
      WRITELN('hello, world!');
    END.

    That doesn't mean one can refer to the second as a C program, nor the
    first as a Pascal program. And neither can be referred to as a Turing
    Machine.

    I don't understand your resistance to simply using terminology in a way
    that conforms with how everyone else uses it. Is it your goal to be
    misunderstood and/or to deliberately confuse?

    André



    The strong version of the Sapir–Whorf hypothesis says that language determines thought and that linguistic categories limit and determine
    cognitive categories.

    https://en.wikipedia.org/wiki/Linguistic_relativity

    Although linguists currently reject the strong version of the
    Sapir–Whorf hypothesis as it applies to ordinary natural language communication, I have found that the strong version does indeed apply to
    how sets of conventional terms-of-the-art apply to some technical
    subject matter.

    Some of the basic concepts at the foundation of logic, mathematics and
    computer science are pieced together incoherently yet their error is
    utterly inexpressible within the conventional terms of the art in these
    fields.

    There are errors at the foundation of logic, mathematics and computer
    science that cannot be discerned because they have been defined out-of-existence. The only way to see these errors requires the paradigm
    shift of reconstructing the essential semantic meanings of these
    conventional terms of the art from scratch.


    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Sat Aug 29 10:29:29 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 9:58 AM, André G. Isaak wrote:
    On 2020-08-29 08:26, olcott wrote:
    On 8/29/2020 8:44 AM, André G. Isaak wrote:
    On 2020-08-29 07:27, olcott wrote:
    On 8/28/2020 11:34 PM, André G. Isaak wrote:
    On 2020-08-28 11:50, olcott wrote:
    On 8/28/2020 12:19 PM, André G. Isaak wrote:
    On 2020-08-28 11:08, olcott wrote:

    Any virtual machine that is computationally identical to a
    Turing machine is for all practical purposes an actual Turing
    machine. Now that I have shown how the abstract machine defined
    by the x86 language has access to unlimited memory I have shown
    that this abstract machine is computationally identical to a
    Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    Before you claimed to have shown that x86 was Turing Complete.
    (You didn't, but that was your claim).

    But above you are now claiming that it is Turing Equivalent.
    Those are not the same thing.

    If you could show that x86 was Turing Equivalent, then you could
    argue that for every x86 program, a corresponding Turing Machine
    must exist. This, however, does not follow from x86 being Turing
    Complete.


    Turing completeness
    A computational system that can compute every Turing-computable
    function is called Turing-complete (or Turing-powerful).
    Alternatively, such a system is one that can simulate a universal
    Turing machine.

    Turing equivalence
    A Turing-complete system is called Turing-equivalent if every
    function it can compute is also Turing-computable; i.e., it
    computes precisely the same class of functions as do Turing
    machines. Alternatively, a Turing-equivalent system is one that
    can simulate, and be simulated by, a universal Turing machine.

    Within the Church-Turing thesis we get essentially that a Turing
    machine can compute any damn thing that can possibly be computed.

        There are various equivalent formulations of the
        Church-Turing thesis. A common one is that every
        effective computation can be carried out by a
        Turing machine. https://plato.stanford.edu/entries/church-turing/
    A Turing complete language is defined to compute this same set.

    This makes the conventional term of the art: "Turing equivalence"
    redundant, and thus "Turing completeness" and "Turing equivalence"
    are merely different words for the exact same thing.
    https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
    You seem to be missing my point.

    It is possible that Turing Complete and Turing Equivalent have the
    same extension, but they mean different things.

    So a Turing machine can accept the entire set of recursively
    enumerable languages and another machine can accept this same entire
    set of recursively enumerable languages and somehow through the
    magic of ambiguity one set is larger than the other?

    Something is Turing Equivalent if the set of problems it can
    compute is identical to that of a Turing Machine.

    Something is Turing Complete if the set of problems it can compute is
    identical to *or greater than* that of a Turing Machine.


    I knew that it said that, yet within the Church-Turing thesis this is
    impossible.

    Again, you are missing my point.

    The Church-Turing thesis is a *conjecture*. It is not something that has
    been proven.

    And even if it were proven, it still makes sense to use the term which
    more accurately captures what you mean to say. Just because two terms
    have the same extension doesn't make them interchangeable.

    If I want to claim that something moves at the speed of light, I would
    claim that v = c. I would not claim that v >= c and then justify this on
    the grounds that it isn't actually possible to travel faster than light, therefore the two mean the same thing. They don't mean the same thing.

    If it could be shown that everything that can be computed can be
    computed by a Turing Machine, then those two would have the same
    extension. But this is *not* something that has been proven.

    If you somehow managed to prove that your (still undefined) 'x86
    language' was Turing complete, if your program happened to be in that
    'greater than' portion of Turing Complete, then there would be no
    corresponding Turing Machine.

    I'm not saying this is likely. I am pointing out that you are using
    your terms incorrectly.


    If we start with Church-Turing as a "given" then the terms themselves
    are incorrect.

    And even if there is a Turing Machine which does the same computation
    as your program, that still doesn't make your program a Turing Machine.


    Two machines are functionally identical if these two machines compute
    the exact same set of functions and no more.

    Just because two things compute the same function doesn't make them the
    same thing.

    The following two programs might be considered to be functionally
    equivalent:

    #include <stdio.h>

    int main(void) {
        printf("hello, world!\n");
    }

    PROGRAM HELLO;
    BEGIN
      WRITELN('hello, world!');
    END.

    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    When a computer program written in "C" is construed to be functionally identical to its Turing machine equivalent, this "C" program is (for the
    purpose of studying the limits of algorithmic computation) one-and-the-same-thing as a Turing Machine.
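The claim above turns on C having access to unbounded storage. As a minimal sketch of that idea (this is illustrative only, not the construction from the linked paper; the names `Tape`, `tape_write`, and `tape_read` are invented), a Turing-machine tape can be modeled in C as a buffer that grows on demand:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch only: a tape whose storage grows on demand.
 * Unwritten cells read as the blank symbol 0. */
typedef struct {
    unsigned char *cells;
    size_t len;
} Tape;

static void tape_write(Tape *t, size_t pos, unsigned char sym) {
    if (pos >= t->len) {                      /* grow, filling with blanks */
        unsigned char *p = realloc(t->cells, pos + 1);
        if (p == NULL) abort();
        memset(p + t->len, 0, pos + 1 - t->len);
        t->cells = p;
        t->len = pos + 1;
    }
    t->cells[pos] = sym;
}

static unsigned char tape_read(const Tape *t, size_t pos) {
    return pos < t->len ? t->cells[pos] : 0;  /* blank beyond written area */
}
```

Note that `size_t` and the machine's address space are themselves bounded, so the tape is "unlimited" only up to what the allocator can supply; that boundedness is the substance of Ben Bacarisse's objection later in the thread.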



    --
    Copyright 2020 Pete Olcott

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 29 12:32:54 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended
    to say, but the C programming language (without the standard IO library)
    is not Turing complete[1]. I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that. It
    all comes down to what parts of the language are bounded by definition.

    [1] I am not 100% sure of this. I have seen no proof, but I can't see
    any way round the bounded nature of the language. C appears to be too low-level to be formally Turing complete, but I'd love to find out that
    I'm wrong about this.


    When the "C" programming language is mapped to this abstract model of
    computation, it becomes functionally identical to a Turing Machine if
    the following abstract model is functionally identical to a Turing
    Machine:

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access

    This is a concrete machine: within each 4GB block it is an ordinary
    4GB RAM machine. "C" already maps perfectly to a 4GB RAM machine.

    We augment this concrete model, transforming it into an abstract model,
    by simply hypothesizing an unlimited number of adjacent 4GB blocks.

    If necessary it can always drop into pure Turing machine mode and
    essentially have a tape head that can move forward or backward in
    increments of 1 to 0x7FFFFFFF bytes. In pure Turing machine mode it only
    has unconditional jumps and conditional jumps.

    Truly proving beyond all possible doubt that "C" maps to a machine that
    is functionally identical to a Turing machine would most readily be
    accomplished by mapping "C" to the above machine while this machine is
    in pure Turing machine mode.

    I figured out how to easily make a linked list that would keep track of
    2GB of information about every single atom in the whole universe updated
    every second since the beginning of time using ordinary x86-64 RIP
    relative addressing.

    What may be impossible even for a Turing machine is to make a truly
    unlimited linked list that can have any of its pointers updated as new
    data is inserted in sort order.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to olcott on Sat Aug 29 14:01:46 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 12:32 PM, olcott wrote:
    On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended
    to say, but the C programming language (without the standard IO library)
    is not Turing complete[1].  I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that.  It
    all comes down to what parts of the language are bounded by definition.

    [1] I am not 100% sure of this.  I have seen no proof, but I can't see
    any way round the bounded nature of the language.  C appears to be too
    low-level to be formally Turing complete, but I'd love to find out that
    I'm wrong about this.


    When the "C" programming language is mapped to this abstract model of
    computation, it becomes functionally identical to a Turing Machine if
    the following abstract model is functionally identical to a Turing
    Machine:

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access


    This is a concrete machine: within each 4GB block it is an ordinary
    4GB RAM machine. "C" already maps perfectly to a 4GB RAM machine.

    We augment this concrete model, transforming it into an abstract model,
    by simply hypothesizing an unlimited number of adjacent 4GB blocks.

    If necessary it can always drop into pure Turing machine mode and
    essentially have a tape head that can move forward or backward in
    increments of 1 to 0x7FFFFFFF bytes. In pure Turing machine mode it only
    has unconditional jumps and conditional jumps.

    Truly proving beyond all possible doubt that "C" maps to a machine that
    is functionally identical to a Turing machine would most readily be
    accomplished by mapping "C" to the above machine while this machine is
    in pure Turing machine mode.

    I figured out how to easily make a linked list that would keep track of
    2GB of information about every single atom in the whole universe updated every second since the beginning of time using ordinary x86-64 RIP
    relative addressing.

    What may be impossible even for a Turing machine is to make a truly
    unlimited linked list that can have any of its pointers updated as new
    data is inserted in sort order.


    We can transform a tiny subset of the x64 concrete model of computation
    into its Turing complete abstraction by simply hypothesizing unlimited
    memory.

    The only instructions of the x64 language that can definitely be
    retained are RIP-relative addressing and conditional jumps. A machine
    such as this can move its tape head (conditionally or unconditionally)
    forward or backward in increments of 1 to 0x7fffffff bytes. This model
    can execute conditional jumps based on {8,16,32,64} bit wide memory
    elements.

    Whatever aspects of the "C" programming language that map to the above
    abstract model of computation could be construed as Turing complete.

    We can't have pointers to absolute memory locations because, like a
    Turing Machine tape head's current position, there is no such thing as
    an absolute location. We can have pointers to relative memory locations
    that can have an arbitrary number of repetitions of 0x7fffffff jumps.

    We might be able to simulate pointers to absolute memory locations by
    translating them to relative addressing during compilation. If we can,
    then it seems that all of the essence of "C" (without the libraries)
    could be translated into this model. The one library function most
    useful would be malloc().

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 29 14:48:01 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 2:03 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended
    to say, but the C programming language (without the standard IO library)
    is not Turing complete[1]. I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that. It
    all comes down to what parts of the language are bounded by definition.

    [1] I am not 100% sure of this. I have seen no proof, but I can't see
    any way round the bounded nature of the language. C appears to be too
    low-level to be formally Turing complete, but I'd love to find out that
    I'm wrong about this.

    When the "C" programming language is mapped to this abstract model of
    computation it becomes functionally identical to a Turing Machine if
    the following abstract model is functionally identical to a Turing
    Machine:

    No. This is a technical question that can't be resolved with words like "mapped". Any non-Turing complete model such as a pushdown automaton
    can be "mapped" to a Turing machine (which is certainly Turing
    complete).

    You made that point.
    We map the "C" programming language to the tiny subset of the x64
    programming language and then validate that this "C" language can
    recognize the set of recursively enumerable languages.

    The trickiest thing for me was determining the feasibility of linked
    lists of literally unlimited length that allow arbitrary insertion of
    new elements in some sort order.

    A 96-byte pointer could keep track of a 2GB story about every atom in
    the universe's relation to each other atom in the whole universe at one
    second intervals since the beginning of time.

    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Sat Aug 29 16:26:16 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/29/2020 3:35 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 2:03 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended
    to say, but the C programming language (without the standard IO library)
    is not Turing complete[1]. I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that. It
    all comes down to what parts of the language are bounded by definition.

    [1] I am not 100% sure of this. I have seen no proof, but I can't see
    any way round the bounded nature of the language. C appears to be too
    low-level to be formally Turing complete, but I'd love to find out that
    I'm wrong about this.

    When the "C" programming language is mapped to this abstract model of
    computation it becomes functionally identical to a Turing Machine if
    the following abstract model is functionally identical to a Turing
    Machine:

    No. This is a technical question that can't be resolved with words like
    "mapped". Any non-Turing complete model such as a pushdown automaton
    can be "mapped" to a Turing machine (which is certainly Turing
    complete).

    You made that point.

    Obviously, but did you understand it?

    When we show all of the details required to map the "C" programming
    language to a Turing complete machine, the Church-Turing thesis
    becomes a formal proof relative to TMs and "C"; it is no longer a mere
    thesis within this scope.


    We map the "C" programming language to the tiny subset of the x64
    programming language and then validate that this "C" language can
    recognize the set of recursively enumerable languages.

    I don't think you know what you are talking about. Sorry, I'll
    re-phrase that. I am sure you don't know what you are talking about.


    I think that the only higher level construct required to make "C" truly
    Turing complete is the capability to create an unlimited number of
    totally unlimited linked lists. I have almost all of the details of how
    to do that figured out.

    Can you see how adding this to "C" would make "C" Turing complete?

    --
    Copyright 2020 Pete Olcott

  • From André G. Isaak@21:1/5 to olcott on Sun Aug 30 07:27:56 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-29 09:29, olcott wrote:

    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access


    When a computer program written in "C" is construed to be functionally
    identical to its Turing machine equivalent, this "C" program is (for
    the purpose of studying the limits of algorithmic computation)
    one-and-the-same-thing as a Turing Machine.

    To be blunt, the position you are taking here is patently ridiculous.

    The fact that I can drive from 9th St Station to Hoboken doesn't make a
    car one-and-the-same-thing as a PATH train.

    One thousand pennies may be equivalent to a ten-dollar bill, but that
    doesn't make them the same thing.

    If you use 'Turing Machine' to refer to pretty much anything that can do something a Turing Machine can do, then how do you refer to a Turing
    Machine?

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From André G. Isaak@21:1/5 to olcott on Sun Aug 30 07:38:46 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 2020-08-29 09:12, olcott wrote:

    The strong version of the Sapir–Whorf hypothesis says that language determines thought and that linguistic categories limit and determine cognitive categories.

    https://en.wikipedia.org/wiki/Linguistic_relativity

    Although linguists currently reject the strong version of the
    Sapir–Whorf hypothesis as it applies to ordinary natural language communication, I have found that the strong version does indeed apply to
    how sets of conventional terms-of-the-art apply to some technical
    subject matter.

    Some of the basic concepts at the foundation of logic, mathematics and
    computer science are pieced together incoherently, yet their error is
    utterly inexpressible within the conventional terms of the art in
    these fields.

    There are errors at the foundation of logic, mathematics and computer
    science that cannot be discerned because they have been defined
    out-of-existence. The only way to see these errors is through the
    paradigm shift of reconstructing the essential semantic meanings of
    these conventional terms of the art from scratch.

    This is yet another patently ridiculous notion.

    First of all, it's the Whorf Hypothesis. Calling it the Sapir-Whorf
    Hypothesis is a dead give-away that you are not familiar with it. That's
    what you get from trying to learn everything from Wikipedia articles (or
    worse, from George Orwell).

    The Whorf Hypothesis concerns the relation between thought and the
    grammatical categories of people's *native* language. Formal languages
    don't have native speakers. Nor do they have anything analogous to the
    types of grammatical categories with which Whorf was concerned.

    Absolutely nothing precludes you from proposing your own terms if you
    find existing ones inadequate *provided that you precisely define those
    terms*.

    What you can't do is simply use existing terms to mean entirely
    different things from their conventional meanings and expect to be
    understood.

    André

    --
    To email remove 'invalid' & replace 'gm' with well known Google mail
    service.

  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 30 10:21:59 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/30/2020 8:27 AM, André G. Isaak wrote:
    On 2020-08-29 09:29, olcott wrote:

    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access


    When a computer program written in "C" is construed to be functionally
    identical to its Turing machine equivalent, this "C" program is (for
    the purpose of studying the limits of algorithmic computation)
    one-and-the-same-thing as a Turing Machine.

    To be blunt, the position you are taking here is patently ridiculous.

    The fact that I can drive from 9th St Station to Hoboken doesn't make a
    car one-and-the-same-thing as a PATH train.


    That your destination and departure points were the same makes the trips
    functionally equivalent no matter what the underlying process was.

    When a pair of functions produces the same output from the same set of
    inputs, these functions are functionally equivalent. When a pair of
    machines always produces the same output for the same input, these
    machines are functionally equivalent.

    An abstract machine that is equivalent to a Turing machine can be
    implemented as a concrete machine in finite hardware.

    The abstract Turing-equivalent machine and its finite concrete
    implementation can have identical machine description languages. When
    the concrete machine directly executes the machine descriptions of an
    abstract model of computation, these computations are
    state-change-by-state-change, tape-head-move-by-tape-head-move
    identical between the concrete machine and the abstract machine.

    One thousand pennies may be equivalent to a ten-dollar bill, but that
    doesn't make them the same thing.

    If you use 'Turing Machine' to refer to pretty much anything that can do something a Turing Machine can do, then how do you refer to a Turing
    Machine?

    André


    I elaborated the distinction between functionally equivalent and
    identical computations.

    If a machine equivalent to a Turing machine has its exact machine
    description language implemented in concrete physical hardware, then
    every computation that can be computed on the physical machine is
    state-change-by-state-change, tape-head-move-by-tape-head-move
    identical to that of the Turing-equivalent machine.

    When we create a "C" compiler for the Turing-equivalent machine, every
    program written in this "C" language translates into a program that has
    identical execution to that of the Turing-equivalent machine.

    This would make programs written in this "C" language functionally
    equivalent to the same computation on an actual Turing Machine.

    The key bridge that ties it all together would be implementing a way to
    provide an arbitrary number of linked lists of arbitrary length
    having an identical implementation in the abstract and concrete models.

    On 8/29/2020 5:53 PM, Ben Bacarisse wrote:
    olcott<NoOne@NoWhere.com> writes:
    I think that the only higher level construct required to make "C"
    truly Turing complete is the capability to create an unlimited number
    of totally unlimited linked lists.

    You need much less than this, but that would do it.


    The implementation details of this doubly linked list would be hidden
    from the "C" abstraction. From the perspective of "C" it would seem
    like an easy way to obtain access to an unlimited number of Turing
    machine tapes.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to All on Sun Aug 30 11:11:40 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/30/2020 8:38 AM, André G. Isaak wrote:
    On 2020-08-29 09:12, olcott wrote:

    The strong version of the Sapir–Whorf hypothesis says that language
    determines thought and that linguistic categories limit and determine
    cognitive categories.

    https://en.wikipedia.org/wiki/Linguistic_relativity

    Although linguists currently reject the strong version of the
    Sapir–Whorf hypothesis as it applies to ordinary natural language
    communication, I have found that the strong version does indeed apply
    to how sets of conventional terms-of-the-art apply to some technical
    subject matter.

    Some of the basic concepts at the foundation of logic, mathematics and
    computer science are pieced together incoherently yet their error is
    utterly inexpressible within the conventional terms of the art in
    these fields.

    There are errors at the foundation of logic, mathematics and computer
    science that cannot be discerned because they have been defined
    out-of-existence. The only way to see these errors requires the
    paradigm shift of reconstructing the essential semantic meanings of
    these conventional terms of the art from scratch.

    This is yet another patently ridiculous notion.

    First of all, it's the Whorf Hypothesis. Calling it the Sapir-Whorf Hypothesis is a dead give-away that you are not familiar with it. That's
    what you get from trying to learn everything from Wikipedia articles (or worse, from George Orwell).

    The Whorf Hypothesis concerns the relation between thought and the grammatical categories of people's *native* language. Formal languages
    don't have native speakers. Nor do they have anything analogous to the
    types of grammatical categories with which Whorf was concerned.

    Absolutely nothing precludes you from proposing your own terms if you
    find existing ones inadequate *provided that you precisely define those terms*.

    What you can't do is simply use existing terms to mean entirely
    different things from their conventional meanings and expect to be understood.

    André


    That you do not understand that I am correct provides zero evidence that
    I am not correct. It turns out that key aspects of the infrastructure of mathematics are only tied together with natural language.

    I only need to focus on using the terms of the art of computer science correctly and/or creating new terms to refer to new computer science
    meanings. I am postponing my other non-computer science work until after
    my computer science work is complete.

    Functional equivalence is a term that may or may not have a
    corresponding term-of-the-art. I only initially defined it using broad
    strokes that may not capture the precise meaning that I intend.

    My initial guess is that an actual Turing machine that performs its
    computations with bit-sized addressing and single tape head moves to
    the right or the left would be (in the conventional term of the art)
    equivalent to the same machine extended to perform {8,16,32,64}-bit
    addressing and move its tape head in 1 to 0x7fffffff byte increments
    forward or backward.

    That such a machine can restrict itself to behaving exactly like a
    Turing machine proves that it is Turing complete. That the enhancements
    do not provide an increase in computational power seems intuitive.

    Within these two models of computation, computing the sum of two 63-bit
    unsigned integers would have a different input form, output form and
    set of operations between the two machines.

    In both cases the addends represent the same 63-bit integers and the sum
    is the same 64-bit integer.

    So to generalize what I mean by functionally equivalent computations:
    (1) The inputs are equivalent yet may be represented differently.
    (2) The outputs are equivalent yet may be represented differently.

    To generalize what I mean by identical computations:
    (1) The inputs are identical.
    (2) The outputs are identical.
    The computations are state change by state change and tape head move by
    tape head move identical.

    The above much more accurately defines my prior, rather blurry and
    imprecise intuition.


    --
    Copyright 2020 Pete Olcott

  • From olcott@21:1/5 to Ben Bacarisse on Sun Aug 30 11:42:35 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    On 8/30/2020 9:38 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 8:35 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The Turing complete stuff would be machine code that the "C" does not
    know about or care about. If "C" needs unlimited integers it makes
    them out of ASCII digits.

    You are confused. "Implementing" a non-Turing complete language by
    using a Turing complete notation (even a Turing machine itself) does not >>> make the original language Turing complete.

    You already agreed that it does.

    You are having trouble understanding what people (it's not just me) are saying to you. I'm not bothered if you don't know what I am saying, but
    I am happy to answer questions about it if you want to find out.

    Obviously it's simpler for me if you continue as you are.

    On 8/29/2020 5:53 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    I think that the only higher level construct required to make "C"
    truly Turing complete is the capability to create an unlimited number
    of totally unlimited linked lists.

    You need much less than this, but that would do it.

    Would you like some help to understand how this stuff works?

    Not interested?


    I have defined two terms:
    (a) Functionally equivalent machines / computations.
    (b) Identical computations

    I completely redefined this latter term; my initial blurry intuition
    was stated very incorrectly.

    My intention is that functionally equivalent machines are exactly the
    same thing as Turing-equivalent machines. I have also augmented
    Turing-equivalent machines with a possibly new term: Turing-equivalent
    computations.

    My initial guess is that an actual Turing machine that performs its
    computations with bit-sized addressing and single tape head moves to
    the right or the left would be (in the conventional term of the art)
    equivalent to the same machine extended to perform {8,16,32,64}-bit
    addressing and move its tape head in 1 to 0x7fffffff byte increments
    forward or backward.

    That such a machine can restrict itself to behaving exactly like a
    Turing machine proves that it is Turing complete. That the enhancements
    do not provide an increase in computation power seems intuitive.

    Within these two models of computation, computing the sum of two 63-bit
    unsigned integers would have a different input form, output form and a
    different set of operations. In both cases the addends represent the
    same 63-bit integers and the sum is the same 64-bit integer.

    So to generalize what I mean by functionally equivalent computations:
    (1) The inputs are equivalent yet may be represented differently.
    (2) The outputs are equivalent yet may be represented differently.
    For its entire set of possible inputs, each functionally equivalent
    machine has a functionally equivalent computation in the other machine.

    I totally redefined this from scratch:
    To generalize what I mean by identical computations:
    (1) The inputs are identical.
    (2) The outputs are identical.
    The computations are state change by state change and tape head move by
    tape head move identical.

    The tape head and the tape are imaginary on the abstract machine. In
    the concrete implementation of the abstract machine they are physically
    implemented as an instruction pointer and RAM.

    Both machines execute the exact same machine description language having
    the exact same execution trace for their computations.



    --
    Copyright 2020 Pete Olcott
