It boils down to this: If Turing, Gödel and Tarski are correct, then
truth itself is fundamentally broken. Since truth itself cannot
possibly be broken, human understanding of the nature of truth is
necessarily incorrect.
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
olcott <NoOne@NoWhere.com> writes:
[...]
It boils down to this: If Turing, Gödel and Tarski are correct, then
truth itself is fundamentally broken. Since truth itself cannot
possibly be broken, human understanding of the nature of truth is
necessarily incorrect.
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
According to some stories (I don't know whether this is 100%
accurate), the Pythagoreans were horrified to discover that the
length of the diagonal of a square cannot be expressed as the ratio
of two integers. In modern terms, sqrt(2) is irrational. This fact
attacked their concept of mathematical reality. They might have
said that truth itself is fundamentally broken.
But there was a rigorous proof of this disturbing fact, and they
could not refute it. (Supposedly one person was murdered for
divulging it.)
They were deeply offended by the irrationality of sqrt(2).
(Note that "irrational" here merely means that it's not a ratio,
not that it's in any way insane.) And today mathematicians simply
accept that not all numbers are rational.
https://en.wikipedia.org/wiki/Square_root_of_2#History
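For reference, the classical argument the Pythagoreans could not refute
runs as follows, in modern notation:
  \sqrt{2} = p/q \text{ with } \gcd(p, q) = 1
  \implies p^2 = 2 q^2 \implies p \text{ is even, say } p = 2k
  \implies q^2 = 2 k^2 \implies q \text{ is even},
contradicting \gcd(p, q) = 1. So no ratio of two integers squares to 2.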
The idea that there are statements that are true within a system
but cannot be proven (or disproven) within that system is perhaps
equally disturbing. But again, there is a rigorous proof,
and mathematicians have no choice but to accept it. Gödel was
right, and the world has not collapsed. Mathematical truth is not
controlled by our prejudices.
You might as well dedicate your life to proving that sqrt(2) is
rational.
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 7:21 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 5:00 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 3:36 PM, Ben Bacarisse wrote:
<cut>
You babbled. Throwing terms around that you don't understand is not a
good look for anyone. That whole post was a dog's dinner.
Let me break it down for you using more precise terms:
No. I know what you are saying because it's not at all complex, unusual
or new. Your response stems from the same problem that leads you to think
people are mad or lying or playing mind games. Sometimes people think
you are babbling, not because they don't understand, but because you are
tossing a word salad.
A precise degree of partial Turing completeness can be defined as:
The subset of Turing-computable functions that are computable on an
abstract machine or programming language.
The second sentence is fine. It's standard fare in computability
theory, though we go a bit further by not limiting ourselves to Turing
computable functions.
The babbling comes from using phrases like Turing completeness
inappropriately. The latest, "partial Turing completeness", is of
course an oxymoron. Were you aiming for some archaic rhetorical effect?
I want to establish the theoretical foundation that an x86 based
refutation of the conventional halting problem counter-examples would
immediately and directly apply to Turing Machines.
<screeching of brakes> Reality check: what does what you call "partial
Turing completeness" have to do with your claim? Are you planning to
demonstrate something about a computational model that can't compute all
the functions a Turing machine can? Do you plan to extrapolate that up
to Turing computability?
I want to explicitly account for the cases that may be computable in
finite memory that are not computable in unlimited memory. It was
proposed that such cases can possibly exist.
There are no such cases.
To do this it must be universally understood that the computability of
an x86 based machine is identical to the computability of a Turing
machine on a subset of functions.
Two years ago, when you were still lying about having a Turing machine,
people (certainly me but I think others too) counselled you to use
anything but a Turing machine because, due to computational equivalence,
there was no need for the mess that is an actual Turing machine.
Apparently some people believe that there is a difference between
Turing complete and Turing equivalent
Of course there is. Essentially it's the difference between X >= Y and
X = Y. But it does not matter for what you claim.
https://scienceblogs.com/goodmath/2007/01/05/turing-equivalent-vs-turing-co
Not a good article in my opinion.
But you, being you, have managed to pick the worst possible alternative.
I am not sure that this is an accident. A pile of x86 is a great place
to hide.
Not in this case. I did it this way to make my solution as easy as
possible to fully understand. I could have buried it in a few billion
Turing Machine interpreter instructions where simply accessing the
memory location @ 0xffff takes 65,000 steps.
I was suggesting a high-level language. Of course even there you could
bury your error, but it would be harder.
It boils down to this: If Turing, Gödel and Tarski are correct, then
truth itself is fundamentally broken.
You mean mathematical truth is not what you always thought it was.
Since truth itself cannot possibly be broken, human understanding of
the nature of truth is necessarily incorrect.
Mathematics does not bend to your pronouncements. Mathematical truth is
what it is, and your declaring it broken has no effect at all.
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
PO-provability may be an aspect of PO-satisfiability and PO-truth and PO-provable may not diverge, but the sentence as you wrote it (with
words you don't yet understand) is just wrong.
It boils down to this: If Turing, Gödel and Tarski are correct, then
truth itself is fundamentally broken. Since truth itself cannot possibly
be broken, human understanding of the nature of truth is necessarily incorrect.
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
If a proposition is true, you must have some basis to know that it is
true besides a wild guess or a hunch. Whatsoever definitive basis that
you have to show that a proposition is definitely true is the proof of
this proposition. THERE IS NO WAY OUT OF THIS
On 2020-08-12, olcott <NoOne@NoWhere.com> wrote:
I want to establish the theoretical foundation that an x86 based
refutation of the conventional halting problem counter-examples would
immediately and directly apply to Turing Machines.
Your idea for this is intellectually dishonest, invalid, and
frankly, stupid.
The impossibility of determining halting for all machines is
demonstrated by proofs involving a trick which breaks the halting
detector by embedding it in a machine, which applies it to itself.
Your plan is, evidently, to make the halting detector unavailable to researchers (through trade secrets and denial of licensing) for
that sort of embedding. Only running it as a stand-alone black box will
be permitted. Or perhaps even as a remote service over a network,
with no access to the technology. That's likely why you're hinting at
IPC mechanisms in your vague descriptions.
But the halting theorem is an assertion about all machines. Machines
that you disallow researchers to have still exist, mathematically.
The following does not fly: "Halting is solved for the machines you're allowed to test with; believe me that it also works for the secret ones you're not allowed to test." That is not science; it is intellectual
fraud.
One small problem is also that you're not capable of developing a
halting detector of such a pedigree that testers will actually *have* to
resort to self-referential test cases to break it. It will either give
the wrong answer or else itself fail to halt on all sorts of ordinary
inputs that do not perpetrate any trickery.
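For readers new to the trick Kaz describes, here is a toy sketch in C.
The names H and D are hypothetical stand-ins, not anyone's real code;
the point is only that whatever fixed answer H bakes in, D inverts it:
#include <stdio.h>
/* Sketch of the classical diagonal construction.  H stands in for a
   claimed halt decider: H(p, i) should return 1 iff program p halts
   on input i.  Programs are named by an index to keep this
   self-contained. */
enum prog { PROG_D };
static int H(enum prog p, enum prog i)
{
    (void)p; (void)i;
    return 1;               /* whatever fixed strategy H picks...  */
}
static int D(enum prog input)
{
    if (H(PROG_D, input))   /* ...D asks H about itself...         */
        for (;;) ;          /* ...and loops if H says "halts"...   */
    return 0;               /* ...or halts if H says "loops".      */
}
int main(void)
{
    int verdict = H(PROG_D, PROG_D);
    printf("H says D(D) %s\n", verdict ? "halts" : "loops");
    if (!verdict)
        printf("but D(D) = %d, i.e. it halts\n", D(PROG_D));
    /* If the verdict is 1 instead, running D(PROG_D) would loop
       forever; either way H is wrong about D. */
    return 0;
}
This is exactly the embedding Kaz describes: D contains H and applies
it to itself, so hiding H behind a black box does not stop D from
existing mathematically.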
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
<cut>
olcott <NoOne@NoWhere.com> writes:
It boils down to this: If Turing, Gödel and Tarski are correct, then
truth itself is fundamentally broken.
You mean mathematical truth is not what you always thought it was.
Since truth itself cannot possibly be broken, human understanding of
the nature of truth is necessarily incorrect.
Mathematics does not bend to your pronouncements. Mathematical truth is
what it is, and your declaring it broken has no effect at all.
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
PO-provability may be an aspect of PO-satisfiability and PO-truth and
PO-provable may not diverge, but the sentence as you wrote it (with
words you don't yet understand) is just wrong.
If a proposition is true, you must have some basis to know that it is
true besides a wild guess or a hunch.
Every arithmetical formula is either true or false. Not every
arithmetical formula is provable or refutable. Deal with it.
Whatsoever definitive basis that you have to show that a proposition
is definitely true is the proof of this proposition.
In a sound system, if you have a proof, the proposition is true. No one doubts that. But in the absence of a proof every proposition P
corresponds to one as yet unknown truth: either P or ~P.
THERE IS NO WAY OUT OF THIS
Yes there is: education. You don't have to keep using words you don't understand.
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 10:11 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 6:39 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
Shocking. But then you are not a programmer, are you?
In other words you don't understand the code.
Says the person who had to post a correction to their own code because
they got the levels of pointer indirection wrong! As far as I know, you
/still/ haven't posted a correct declaration for AllocateHeap (not that
anyone cares).
On 2020-08-12 21:39, olcott wrote:
If a proposition is true, you must have some basis to know that it is
true besides a wild guess or a hunch. Whatsoever definitive basis
that you have to show that a proposition is definitely true is the
proof of this proposition. THERE IS NO WAY OUT OF THIS
You are conflating truth with knowledge. They are not the same thing.
There are many propositions which must either be true or false but which
we don't know which. For example 'Tachyons exist.' is either true or it
is false since there are no things which both exist and do not exist.
But we do not know whether 'tachyons exist' is true.
André
On 2020-08-12 18:54, olcott wrote:
It boils down to this: If Turing, Gödel and Tarski are correct, then
truth itself is fundamentally broken. Since truth itself cannot
possibly be broken, human understanding of the nature of truth is
necessarily incorrect.
None of these people claim that truth is 'fundamentally broken'. They
simply point out the limitations of what certain types of systems are
able to achieve.
The fact that something has limitations doesn't mean it is fundamentally broken. Your car, for example, cannot travel at 1000 mph. That's a limitation. It doesn't mean your car is broken.
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
Not according to standard definitions of those terms.
André
On 2020-08-13 09:48, olcott wrote:
On 8/13/2020 3:56 AM, André G. Isaak wrote:
On 2020-08-12 21:39, olcott wrote:
If a proposition is true, you must have some basis to know that it is
true besides a wild guess or a hunch. Whatsoever definitive basis
that you have to show that a proposition is definitely true is the
proof of this proposition. THERE IS NO WAY OUT OF THIS
You are conflating truth with knowledge. They are not the same thing.
There are many propositions which must either be true or false but
which we don't know which. For example 'Tachyons exist.' is either
true or it is false since there are no things which both exist and do
not exist. But we do not know whether 'tachyons exist' is true.
André
That Gödel's G can be known to be definitely true and known to be
definitely unprovable is just as impossible as determining the sum of
a pair of integers without using any numbers (integers are numbers).
You should probably learn what Gödel's proof actually says and address
that rather than what you think it says.
Gödel's G is a statement such that neither G nor ¬G can be proven.
That entails that there is *some* true statement which cannot be proven.
We do not know, however, whether that true statement is G or ¬G. But one
of those two statements must be true.
André
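For reference, the sentence André describes is the one delivered by the
diagonal lemma: for a sufficiently strong formal system F there is a
sentence G such that
  F \vdash G \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G \urcorner)
Gödel showed that if F is consistent then G is not provable in F, and
if F is \omega-consistent then \neg G is not provable in F either
(Rosser later weakened \omega-consistency to plain consistency).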
On 8/13/2020 3:56 AM, André G. Isaak wrote:
On 2020-08-12 21:39, olcott wrote:
If a proposition is true, you must have some basis to know that it is
true besides a wild guess or a hunch. Whatsoever definitive basis
that you have to show that a proposition is definitely true is the
proof of this proposition. THERE IS NO WAY OUT OF THIS
You are conflating truth with knowledge. They are not the same thing.
There are many propositions which must either be true or false but
which we don't know which. For example 'Tachyons exist.' is either
true or it is false since there are no things which both exist and do
not exist. But we do not know whether 'tachyons exist' is true.
André
That Gödel's G can be known to be definitely true and known to be
definitely unprovable is just as impossible as determining the sum of a
pair of integers without using any numbers (integers are numbers).
On 8/13/2020 11:51 AM, André G. Isaak wrote:
On 2020-08-13 09:48, olcott wrote:
On 8/13/2020 3:56 AM, André G. Isaak wrote:
On 2020-08-12 21:39, olcott wrote:
If a proposition is true, you must have some basis to know that it
is true besides a wild guess or a hunch. Whatsoever definitive
basis that you have to show that a proposition is definitely true
is the proof of this proposition. THERE IS NO WAY OUT OF THIS
You are conflating truth with knowledge. They are not the same
thing. There are many propositions which must either be true or
false but which we don't know which. For example 'Tachyons exist.'
is either true or it is false since there are no things which both
exist and do not exist. But we do not know whether 'tachyons exist'
is true.
André
That Gödel's G can be known to be definitely true and known to be
definitely unprovable is just as impossible as determining the sum of
a pair of integers without using any numbers (integers are numbers).
You should probably learn what Gödel's proof actually says and address
that rather than what you think it says.
Gödel's G is a statement such that neither G nor ¬G can be proven.
That entails that there is *some* true statement which cannot be
proven. We do not know, however, whether that true statement is G or
¬G. But one of those two statements must be true.
André
If it was as simple as that there would not be a problem.
What is actually meant by "true and unprovable" is that it is true that
G is unprovable, yet it is not provable that G is unprovable.
The only possible way to know for sure that G is definitely unprovable
is a proof that G is unprovable; anything less than this is not definite.
Gödel: I know for sure that G is unprovable!
Pete: Can you prove that?
Gödel: No
Pete: Then your "knowledge" about G is mere presumption.
Whatever you're doing, it's unlikely that the code you've shown is
the right way to do it.
[assembly code snipped]
This is the interprocess communication protocol that I have
established for a slave UTM to request three different kinds of
operating system functions. The code for these three functions is
fully designed. I only need to code the "spawn process" code and then
the master UTM will be essentially complete.
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
On 2020-08-13 12:34, Keith Thompson wrote:
Whatever you're doing, it's unlikely that the code you've shown is
the right way to do it.
Also, even if we make assumptions about what this code is *intended* to
do based on the names given to the undefined functions, none of it has anything to do with determining whether something halts.
André
On 2020-08-13 11:12, olcott wrote:
On 8/13/2020 11:51 AM, André G. Isaak wrote:
On 2020-08-13 09:48, olcott wrote:
On 8/13/2020 3:56 AM, André G. Isaak wrote:
On 2020-08-12 21:39, olcott wrote:
If a proposition is true, you must have some basis to know that it
is true besides a wild guess or a hunch. Whatsoever definitive
basis that you have to show that a proposition is definitely true
is the proof of this proposition. THERE IS NO WAY OUT OF THIS
You are conflating truth with knowledge. They are not the same
thing. There are many propositions which must either be true or
false but which we don't know which. For example 'Tachyons exist.'
is either true or it is false since there are no things which both
exist and do not exist. But we do not know whether 'tachyons exist'
is true.
André
That Gödel's G can be known to be definitely true and known to be
definitely unprovable is just as impossible as determining the sum
of a pair of integers without using any numbers (integers are numbers).
You should probably learn what Gödel's proof actually says and
address that rather than what you think it says.
Gödel's G is a statement such that neither G nor ¬G can be proven.
That entails that there is *some* true statement which cannot be
proven. We do not know, however, whether that true statement is G or
¬G. But one of those two statements must be true.
André
If it was as simple as that there would not be a problem.
It is as simple as that.
What is actually meant by "true and unprovable" is that it is true
that G is unprovable, yet it is not provable that G is unprovable.
Where on earth do you get that idea from? Gödel proves that both G and
¬G are unprovable.
The only possible way to know for sure that G is definitely unprovable
is a proof that G is unprovable; anything less than this is not definite.
Gödel: I know for sure that G is unprovable!
Pete: Can you prove that?
Gödel: No
Except that Gödel's answer would be yes.
olcott <NoOne@NoWhere.com> writes:
[...]
[assembly code snipped]
This is the interprocess communication protocol that I have
established for a slave UTM to request three different kinds of
operating system functions. The code for these three functions is
fully designed. I only need to code the "spawn process" code and then
the master UTM will be essentially complete.
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
OK, let's go over this C program.
What you've written above is of course not a complete program.
Your functions AllocateHeap, SaveState, and RestoreState are
not declared, and as of C99 calling an undeclared function is a
constraint violation. The type name uint32_t is not visible unless
you have a #include directive for <stdint.h> (or <inttypes.h>).
(And the fact that you're using uint32_t, which was introduced in
C99, implies that you're not using C90, which did allow calls to
undeclared functions.)
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
This makes S1 point to a uint32_t object at address 16. That's
... odd. Converting a constant integer value to a pointer is
rarely meaningful, unless you're on some kind of embedded system
where you know the exact memory layout. I suppose if you intend to
run this code on your x86 simulator, you can guarantee that there
is addressable memory at address 16. Why 16? Is some known value
stored at that location by some other mechanism? But the following
lines make me think that's not what you intended.
AllocateHeap(&S1); // Allocates 16 uint32_t
I presume you have reasons not to use the standard malloc function.
If I understand the comment correctly, this is intended to allocate
16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
for 4-byte elements). If the argument is intended to specify the
number of uint32_t elements to allocate, why on Earth would you
pass a uint32_t** argument rather than just a uint32_t (or an int,
or a size_t)?
Can you describe what's stored at address 16, and why AllocateHeap
doesn't take a simple integer argument? And why does it not return a
result? (Or if it does, you discard it.) Allocation functions
typically return a pointer to the allocated memory.
I'm fairly sure I've misunderstood your code because I can only
see what you've written, not what you think it's supposed to mean.
Given the above, I won't try to guess how SaveState and RestoreState
are supposed to work. They both take arguments of type uint32_t**,
but it's hard to imagine what they'd do with them.
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
Whatever you're doing, it's unlikely that the code you've shown is
the right way to do it. Frankly, it looks like it was written by
someone who doesn't yet understand what "*" and "&" mean, and who has
just been adding arbitrary symbols to make the code look plausible.
(I would have said it looks like you added arbitrary symbols until
it compiled, which is a particularly dangerous approach in C,
but it doesn't even compile.)
Or am I missing something?
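For contrast with the code under review, the conventional shape Keith
is describing would look something like this (a sketch with
illustrative names, not olcott's actual API):
#include <stdint.h>
#include <stdlib.h>
/* Take the element count as a plain integer; hand the new block back
   as the return value, NULL on failure. */
static uint32_t *AllocateHeap(size_t count)
{
    return malloc(count * sizeof(uint32_t));
}
int main(void)
{
    uint32_t *S1 = AllocateHeap(16);   /* allocate 16 uint32_t */
    free(S1);
    return 0;
}
No integer-to-pointer casts, no pointer-to-pointer argument, and the
intent is visible in the signature.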
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
Since provability is an aspect of satisfiability, true and provable
cannot possibly diverge.
PO-provability may be an aspect of PO-satisfiability and PO-truth and
PO-provable may not diverge, but the sentence as you wrote it (with
words you don't yet understand) is just wrong.
If a proposition is true, you must have some basis to know that it is
true besides a wild guess or a hunch.
Every arithmetical formula is either true or false. Not every
arithmetical formula is provable or refutable. Deal with it.
The only possible way to correctly conclude that a WFF is true or
false is some kind of proof.
Do you see how that is not a reply to my point? Probably not.
If anyone concludes that a WFF is true or false without some kind of
proof, this is necessarily mere presumption.
If... So many things if... But fortunately the antecedent is false.
No one (apart from you) does that.
(Do you remember that, way back in 2012, you declared: "If God can
not solve the Halting Problem, then there is something wrong with the
problem"?)
Whatsoever definitive basis that you have to show that a proposition
is definitely true is the proof of this proposition.
In a sound system, if you have a proof, the proposition is true. No one
doubts that. But in the absence of a proof every proposition P
corresponds to one as yet unknown truth: either P or ~P.
Sure, that is correct. But that is not what is asserted by the Gödel
proof.
Yes it is. You've never read it, have you? You would fall at the first page.
The Gödel proof asserts that G is definitely true and definitely
unprovable.
The paper has an extra sting in the tail: that G must be true. But this
is a metamathematical deduction from outside PM.
You once thought it would be wise to learn what words like "true" mean
in a mathematical context, but that did not last long. You stumbled
when it turned out you did not know what a function was.
That is the same thing as calculating the sum of two
integers without using any numbers (integers are numbers).
Analogies like this are only useful for people like me to see just how
little you understand. They won't persuade anyone, but the fact that
you think this is an analogy is very revealing.
THERE IS NO WAY OUT OF THIS
Yes there is: education. You don't have to keep using words you don't
understand.
You briefly thought you might try this way out...
On 8/13/2020 1:34 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[...]
[assembly code snipped]
This is the interprocess communication protocol that I have
established for a slave UTM to request three different kinds of
operating system functions. The code for these three functions is
fully designed. I only need to code the "spawn process" code and then
the master UTM will be essentially complete.
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
OK, let's go over this C program.
What you've written above is of course not a complete program.
Your functions AllocateHeap, SaveState, and RestoreState are
not declared, and as of C99 calling an undeclared function is a
constraint violation. The type name uint32_t is not visible unless
you have a #include directive for <stdint.h> (or <inttypes.h>).
(And the fact that you're using uint32_t, which was introduced in
C99, implies that you're not using C90, which did allow calls to
undeclared functions.)
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
This makes S1 point to a uint32_t object at address 16. That's
... odd. Converting a constant integer value to a pointer is
rarely meaningful, unless you're on some kind of embedded system
where you know the exact memory layout. I suppose if you intend to
run this code on your x86 simulator, you can guarantee that there
is addressable memory at address 16. Why 16? Is some known value
stored at that location by some other mechanism? But the following
lines make me think that's not what you intended.
This is merely a short-cut way to send the requested size parameter to
AllocateHeap(). If we do this using normal "C" conventions it
generates additional machine code, increasing the complexity of the
assembly language execution trace.
AllocateHeap(&S1); // Allocates 16 uint32_t
I presume you have reasons not to use the standard malloc function.
Yes. The master UTM is its own self-contained operating system and all
memory is a single contiguous block as if it was an actual UTM tape.
If I understand the comment correctly, this is intended to allocate
16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
for 4-byte elements). If the argument is intended to specify the
number of uint32_t elements to allocate, why on Earth would you
pass a uint32_t** argument rather than just a uint32_t (or an int,
or a size_t)?
I must pass uint32_t** because the caller returns the address of the allocated memory block in this pointer.
Can you describe what's stored at address 16, and why AllocateHeap
doesn't take a simple integer argument? And why does it not return a
result? (Or if it does, you discard it.) Allocation functions
typically return a pointer to the allocated memory.
The 16 is not used as an address. It is used as the single integer
parameter. I did it this unconventional way to reduce the number of
machine language instructions to make the assembly language debug
trace easier to understand.
I'm fairly sure I've misunderstood your code because I can only
see what you've written, not what you think it's supposed to mean.
Given the above, I won't try to guess how SaveState and RestoreState
are supposed to work. They both take arguments of type uint32_t**,
but it's hard to imagine what they'd do with them.
SaveState(&S1); // Saves 16 32-bit registers
As it says, SaveState saves all of the 16 registers that an x86 machine
has. The convention required to make all of the underlying slave UTMs
work correctly is that they never define or use any global data. All
of their data is on their stack or in the heap. Every slave UTM that
is created with SpawnProcess() gets its own StackSpace from the
HeapSpace.
RestoreState(&S1); // Process context switch
Copies the saved registers back to the master UTM machine registers.
This causes a context switch between slave UTMs, AKA time slicing.
Whatever you're doing, it's unlikely that the code you've shown is
the right way to do it. Frankly, it looks like it was written by
someone who doesn't yet understand what "*" and "&" mean, and who has
just been adding arbitrary symbols to make the code look plausible.
(I would have said it looks like you added arbitrary symbols until
it compiled, which is a particularly dangerous approach in C,
but it doesn't even compile.)
Or am I missing something?
Yes, you are missing the fact that the "C" code is written to minimize
the complexity of the generated machine code. I might go the other way
with this and use regular "C" conventions, allowing the assembly
language code to have this additional complexity.
The standard code generated by a "C" compiler seems to have about
double the complexity of the simplest hand-written assembly
language. The simplest hand-written assembly language is very easy to
understand; the code generated by the "C" compiler, not so much.
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 1:34 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[...]
[assembly code snipped]
This is the interprocess communication protocol that I have
established for a slave UTM to request three different kinds of
operating system functions. The code for these three functions is
fully designed. I only need to code the "spawn process" code and then
the master UTM will be essentially complete.
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
OK, let's go over this C program.
What you've written above is of course not a complete program.
Your functions AllocateHeap, SaveState, and RestoreState are
not declared, and as of C99 calling an undeclared function is a
constraint violation. The type name uint32_t is not visible unless
you have a #include directive for <stdint.h> (or <inttypes.h>).
(And the fact that you're using uint32_t, which was introduced in
C99, implies that you're not using C90, which did allow calls to
undeclared functions.)
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
This makes S1 point to a uint32_t object at address 16. That's
... odd. Converting a constant integer value to a pointer is
rarely meaningful, unless you're on some kind of embedded system
where you know the exact memory layout. I suppose if you intend to
run this code on your x86 simulator, you can guarantee that there
is addressable memory at address 16. Why 16? Is some known value
stored at that location by some other mechanism? But the following
lines make me think that's not what you intended.
This is merely a short-cut way to send the requested size parameter to
AllocateHeap(). If we do this using normal "C" conventions it
generates additional machine code, increasing the complexity of the
assembly language execution trace.
If I understand you correctly, rather than passing an integer
argument and returning a pointer result, you've decided to pass a
single pointer-to-pointer argument and do some ugly type conversions,
because you think that passing a single argument rather than either
(a) passing two arguments or (b) passing an argument and returning
a result makes the code simpler. Is that what you're saying?
AllocateHeap(&S1); // Allocates 16 uint32_t
I presume you have reasons not to use the standard malloc function.
Yes. The master UTM is its own self-contained operating system and all
memory is a single contiguous block as if it was an actual UTM tape.
If I understand the comment correctly, this is intended to allocate
16 uint32_t objects (i.e., 64 contiguous 8-bit bytes, aligned
for 4-byte elements). If the argument is intended to specify the
number of uint32_t elements to allocate, why on Earth would you
pass a uint32_t** argument rather than just a uint32_t (or an int,
or a size_t)?
I must pass uint32_t** because the caller returns the address of the
allocated memory block in this pointer.
What do you mean by this? The caller is the main function, which
doesn't explicitly return anything (it implicitly returns 0 to the
calling environment).
Can you describe what's stored at address 16, and why AllocateHeap
doesn't take a simple integer argument? And why does it not return a
result? (Or if it does, you discard it.) Allocation functions
typically return a pointer to the allocated memory.
The 16 is not used as an address. It is used as the single integer
parameter. I did it this unconventional way to reduce the number of
machine language instructions to make the assembly language debug
trace easier to understand.
Does AllocateHeap internally convert its argument back from uint32_t**
to some integer type before using its value?
Your C code, interpreted as C, is horribly convoluted. Apparently you
think there's some benefit that makes this worthwhile.
I'm fairly sure I've misunderstood your code because I can only
see what you've written, not what you think it's supposed to mean.
Given the above, I won't try to guess how SaveState and RestoreState
are supposed to work. They both take arguments of type uint32_t**,
but it's hard to imagine what they'd do with them.
SaveState(&S1); // Saves 16 32-bit registers
As it says, SaveState saves all of the 16 registers that an x86 machine
has. The convention required to make all of the underlying slave UTMs
work correctly is that they never define or use any global data. All
of their data is on their stack or in the heap. Every slave UTM that
is created with SpawnProcess() gets its own StackSpace from the
HeapSpace.
Saves them where? Is the entire state stored in the 16 32-bit
registers?
RestoreState(&S1); // Process context switch
Copies the saved registers back to the master UTM machine registers.
This causes a context switch between slave UTMs, AKA time slicing.
Whatever you're doing, it's unlikely that the code you've shown is
the right way to do it. Frankly, it looks like it was written by
someone who doesn't yet understand what "*" and "&" mean, and who has
just been adding arbitrary symbols to make the code look plausible.
(I would have said it looks like you added arbitrary symbols until
it compiled, which is a particularly dangerous approach in C,
but it doesn't even compile.)
Or am I missing something?
Yes, you are missing the fact that the "C" code is written to minimize
the complexity of the generated machine code. I might go the other way
with this and use regular "C" conventions, allowing the assembly
language code to have this additional complexity.
The standard code generated by a "C" compiler seems to have about
double the complexity of the simplest hand-written assembly
language. The simplest hand-written assembly language is very easy to
understand; the code generated by the "C" compiler, not so much.
Why not write assembly code in the first place?
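Piecing together olcott's description (guesswork; the real function
bodies were never posted), the in/out convention he seems to mean would
have to look roughly like this:
#include <stdint.h>
#include <stdlib.h>
/* Hypothetical reconstruction: the caller smuggles the size in by
   storing it as a fake pointer value, and AllocateHeap overwrites
   that same variable with the address of the new block. */
static void AllocateHeap(uint32_t **pp)
{
    uintptr_t count = (uintptr_t)*pp;          /* recover the smuggled size */
    *pp = malloc(count * sizeof(uint32_t));    /* return the block in place */
}
int main(void)
{
    uint32_t *S1 = (uint32_t *)16;   /* 16 is a size, not an address */
    AllocateHeap(&S1);               /* S1 now points at 16 uint32_t */
    free(S1);
    return 0;
}
If this is right, it answers Keith's question in the affirmative: the
argument has to be converted back to an integer inside the callee,
which is precisely the convolution he is objecting to.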
On 8/13/2020 1:03 PM, André G. Isaak wrote:
On 2020-08-13 11:12, olcott wrote:
On 8/13/2020 11:51 AM, André G. Isaak wrote:
On 2020-08-13 09:48, olcott wrote:
On 8/13/2020 3:56 AM, André G. Isaak wrote:
On 2020-08-12 21:39, olcott wrote:
If a proposition is true, you must have some basis to know that it
is true besides a wild guess or a hunch. Whatsoever definitive
basis that you have to show that a proposition is definitely true
is the proof of this proposition. THERE IS NO WAY OUT OF THIS
You are conflating truth with knowledge. They are not the same
thing. There are many propositions which must either be true or
false but which we don't know which. For example 'Tachyons exist.'
is either true or it is false since there are no things which both
exist and do not exist. But we do not know whether 'tachyons
exist' is true.
André
That Gödel's G can be known to be definitely true and known to be
definitely unprovable is just as impossible as determining the sum
of a pair of integers without using any numbers (integers are
numbers).
You should probably learn what Gödel's proof actually says and
address that rather than what you think it says.
Gödel's G is a statement such that neither G nor ¬G can be proven.
That entails that there is *some* true statement which cannot be
proven. We do not know, however, whether that true statement is G or
¬G. But one of those two statements must be true.
André
If it was as simple as that there would not be a problem.
It is as simple as that.
What is actually meant by "true and unprovable" is that it is true
that G is unprovable, yet it is not provable that G is unprovable.
Where on earth do you get that idea from? Gödel proves that both G and
¬G are unprovable.
The only possible way to know for sure that G is definitely
unprovable is a proof that G is unprovable; anything less than this is
not definite.
Gödel: I know for sure that G is unprovable!
Pete: Can you prove that?
Gödel: No
Except that Gödel's answer would be yes.
Any formal proposition that only asserts its own unprovability is
unprovable for the same reason that the liar paradox is untrue.
Gödel used a very convoluted process to make it seem that his
proposition was not actually referring to an assertion about its own
unprovability; nonetheless his proposition was at least logically
equivalent to a proposition that asserts its own unprovability, and it
has a vacuous truth value for the same reason and in the same way that a
proposition that only asserts its own falsity has a vacuous truth value.
This sentence is true.
What is it true about?
It is not true about anything; it is simply true.
That is not the way that it works.
On 14/08/2020 13:59, Ben Bacarisse wrote:
[I wrote:]
Whoa! I'm not an "x86" expert, but every standard m/c code
that I am familiar with is perfectly capable of encoding any one of
the standard UTMs [and therefore any TM computation]. [...]
x86 + support chips + some OS, yes. x86 no.
In the early '70s, we had a PDP 11/05 with 8Kb of storage,
no disc, no OS, not much of anything bar a paper tape punch, paper
tape reader, and a teletype. That would have been enough for a UTM,
'tho I suspect the dept would not have appreciated the bill for the
reels of paper tape had we implemented it.
And, to be frank, the idea
that PO has designed a memory and device access mechanism and an
addressing scheme that is genuinely unbounded is not credible.
Well, yes, tho' people here have dropped enough hints. I
find it hard to imagine what he even thinks he might have that could
be interesting. The best I can make of it is that there are a whole
lot of PO-words that fit into a PO-theory of PO-computing in which
PO-halting and other PO-problems are PO-soluble. The alternative
is that he's a ridiculously successful troll. Take your pick.
On 14/08/2020 11:29, Ben Bacarisse wrote:
[...] x86 code can't represent any TM computation and even
an x86 simulator can't execute any x86 computation.
Whoa! I'm not an "x86" expert, but every standard m/c code
that I am familiar with is perfectly capable of encoding any one of
the standard UTMs [and therefore any TM computation]. All that is
needed beyond a quite simple FSM, such as that on the front cover of
Minsky's "Computation", is a mechanism for accessing arbitrarily much storage, eg by the device I mentioned a few days ago -- that when the [simulated] tape head reaches one or other end of actual storage, the computer prints a message "please remove current USB stick and insert
the next/previous one"*.
With a certain amount of buffering and a few obvious efficiency hacks, this process can even be reasonably practicable. For things
like "computable numbers" [another topic where PO's limited knowledge
of the theory is beginning to show], you would perhaps want several
USB sticks as otherwise you spend far too much time searching up and
down on one "tape" for items that are arbitrarily widely separated.
If you're going to object that my proposal requires human intervention, then I "merely" place the computer inside a robot-
controlled factory in which robots are responsible for feeding the
computer with USB sticks which they manufacture .... Somewhere
along the line, the situation becomes absurd, and misses the real
point of the theory.
_______
* You can't, of course, in general say "please insert stick 1234"
as the serial numbers may grow too large to be contained in the
accessible storage. But there is always a "next" and a "previous"
stick. Where you keep them all is another matter.
[43 lines deleted]
An abstract machine that has a tape head that can be advanced in 0 to
0xffffffff increments an unlimited number of times does specify a
model of computation that has access to unlimited memory. The
technical name for memory addressing based on the displacement from
the current memory address is relative addressing.
Any software engineer of ordinary skill in the art of x86 programming
would be able to leverage this Turing complete control flow to obtain
Turing complete access to data.
olcott <NoOne@NoWhere.com> writes:
[43 lines deleted]
An abstract machine that has a tape head that can be advanced in 0 to
0xffffffff increments an unlimited number of times does specify a
model of computation that has access to unlimited memory. The
technical name for memory addressing based on the displacement from
the current memory address is relative addressing.
Any software engineer of ordinary skill in the art of x86 programming
would be able to leverage this Turing complete control flow to obtain
Turing complete access to data.
I've lost count of how many times I've read this same text in multiple articles you've posted. There are probably minor changes in wording,
but I can't be bothered to check.
If you want people to read what you write, decide what you want to say
before you post, and post it just once.
On 8/14/2020 1:52 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[43 lines deleted]
An abstract machine that has a tape head that can be advanced in 0 to
0xffffffff increments an unlimited number of times does specify a
model of computation that has access to unlimited memory. The
technical name for memory addressing based on the displacement from
the current memory address is relative addressing.
Any software engineer of ordinary skill in the art of x86 programming
would be able to leverage this Turing complete control flow to obtain
Turing complete access to data.
I've lost count of how many times I've read this same text in multiple
articles you've posted. There are probably minor changes in wording,
but I can't be bothered to check.
If you want people to read what you write, decide what you want to say
before you post, and post it just once.
I kept posting it as its own new thread and Google Groups kept
appending it to a prior thread. If you notice, the one with new
material has a new version number. Version 40 is the current version.
olcott <NoOne@NoWhere.com> writes:
On 8/14/2020 1:52 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[43 lines deleted]
An abstract machine that has a tape head that can be advanced in 0 to
0xffffffff increments an unlimited number of times does specify a
model of computation that has access to unlimited memory. The
technical name for memory addressing based on the displacement from
the current memory address is relative addressing.
Any software engineer of ordinary skill in the art of x86 programming
would be able to leverage this Turing complete control flow to obtain
Turing complete access to data.
I've lost count of how many times I've read this same text in multiple
articles you've posted. There are probably minor changes in wording,
but I can't be bothered to check.
If you want people to read what you write, decide what you want to say
before you post, and post it just once.
I kept posting it as its own new thread and Google Groups kept
appending it to a prior thread. If you notice, the one with new
material has a new version number. Version 40 is the current version.
That doesn't affect what I wrote. I don't keep track of your version numbers. Every time you start a new thread, you still quote everything
that went before, so as far as I'm concerned it's just one very long
thread. There's no indication of what you've changed, and I can't be bothered to compare to previous versions.
Of course you'll continue to do what you want to do.
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 2:38 PM, Ben Bacarisse wrote:
<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
In a sound system, if you have a proof, the proposition is true. No one
doubts that. But in the absence of a proof every proposition P
corresponds to one as yet unknown truth: either P or ~P.
Sure, that is correct. But that is not what is asserted by the Gödel
proof.
Yes it is. You've never read it, have you? You would fall at the first page.
The Gödel proof asserts that G is definitely true and definitely
unprovable.
The paper has an extra sting in the tail: that G must be true. But this
is a metamathematical deduction from outside PM.
Ah, great, you are directly delving into the actual heart of it. The key
mistake is that true outside of PM does not count as true inside of
PM.
Nonsense. There is no sign that you want to know what "true" means
anymore so it would seem pointless to try to explain. Until you know,
it would be absurd for me to reply. I only know the conventional
meanings for words.
olcott <NoOne@NoWhere.com> writes:
[...]
[assembly code snipped]
This is the interprocess communication protocol that I have
established for a slave UTM to request three different kinds of
operating system functions. The code for these three functions is
fully designed. I only need to code the "spawn process" code and then
the master UTM will be essentially complete.
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
OK, let's go over this C program.
What you've written above is of course not a complete program.
Your functions AllocateHeap, SaveState, and RestoreState are
not declared, and as of C99 calling an undeclared function is a
constraint violation. The type name uint32_t is not visible unless
you have a #include directive for <stdint.h> (or <inttypes.h>).
(And the fact that you're using uint32_t, which was introduced in
C99, implies that you're not using C90, which did allow calls to
undeclared functions.)
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
This makes S1 point to a uint32_t object at address 16. That's
... odd. Converting a constant integer value to a pointer is
rarely meaningful, unless you're on some kind of embedded system
where you know the exact memory layout. I suppose if you intend to
run this code on your x86 simulator, you can guarantee that there
is addressable memory at address 16. Why 16? Is some known value
stored at that location by some other mechanism? But the following
lines make me think that's not what you intended.
AllocateHeap(&S1); // Allocates 16 uint32_t
olcott <NoOne@NoWhere.com> writes:
32-bit x86 function calls already use signed 32-bit relative addressing.
void Bar(int N)
{
Foo(5);
}
void Foo(int N)
{
Bar(6);
}
101: 55 push ebp
102: 8BEC mov ebp, esp
104: 6A05 push 0x5
106: E823000000 call 0x12e ; +23h bytes little endian
10b: 83C404 add esp, 0x4 ; of 00000023h
10e: 5D pop ebp
10f: C3 ret
12e: 55 push ebp
12f: 8BEC mov ebp, esp
131: 6A06 push 0x6
133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
138: 83C404 add esp, 0x4 ; of FFFFFFC9h
13b: 5D pop ebp
13c: C3 ret
The x86 architecture specifies a Turing complete model of computation
for its control flow operations:
Signed 32-bit relative addressing for function calls. (shown above).
Signed 32-bit relative addressing for unconditional branches.
Signed relative addressing for all conditional branches.
There are two notions knocking around that we must distinguish between.
There is an informal, rather casual, notion of Turing complete that
glosses over all the reasons why some notation or other is not really
Turing complete. Then there is the more formal demonstration that some notation really is Turing complete by demonstrating (often in general
terms) how that notation can simulate any Turing machine computation.
From the above, you seem to be going for the casual meaning because you
don't explain how you would deal with having, say, 87 trillion
functions. Just doing 32-bit relative calls won't work in that case.
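To put numbers on that: an E8 rel32 call encodes a signed 32-bit
displacement, so a single call can reach at most
  2^{31} - 1 = 2147483647 \text{ bytes} \approx 2 \text{ GiB}
in either direction. Even at an absurdly optimistic one byte per
function, 87 trillion functions span about 8.7 \times 10^{13} bytes
(roughly 79 TiB), tens of thousands of times beyond the reach of any
single relative call.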
every standard m/c code
that I am familiar with is perfectly capable of encoding any one of
the standard UTMs [and therefore any TM computation]. All that is
needed beyond a quite simple FSM, such as that on the front cover of Minsky's "Computation", is a mechanism for accessing arbitrarily much storage...
Data access is also based on relative addressing therefore: The x86
language specifies an abstract model of computation that is Turing
complete.
On Friday, August 14, 2020 at 3:08:28 PM UTC-7, olcott wrote:
On 8/14/2020 4:28 PM, David Kleinecke wrote:
On Friday, August 14, 2020 at 1:56:36 PM UTC-7, olcott wrote:
The assumption that every closed WFF maps to a
Boolean value is simply a false assumption.
As Godel proved almost a century ago.
No that is not what he proved.
He proved that formal systems are incomplete on the basis that they
cannot determine a Boolean value that is assumed to certainly exist.
And that differs from "it is false that "every closed WFF maps to a
Boolean value""? In what way?
That the only reason that formal systems cannot determine a Boolean
value is that no such value exists would not make them incomplete.
olcott <NoOne@NoWhere.com> writes:
On 8/14/2020 6:48 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
32-bit x86 function calls already use signed 32-bit relative addressing.
void Bar(int N)
{
Foo(5);
}
void Foo(int N)
{
Bar(6);
}
101: 55 push ebp
102: 8BEC mov ebp, esp
104: 6A05 push 0x5
106: E823000000 call 0x12e ; +23h bytes little endian
10b: 83C404 add esp, 0x4 ; of 00000023h
10e: 5D pop ebp
10f: C3 ret
12e: 55 push ebp
12f: 8BEC mov ebp, esp
131: 6A06 push 0x6
133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
138: 83C404 add esp, 0x4 ; of FFFFFFC9h
13b: 5D pop ebp
13c: C3 ret
The x86 architecture specifies a Turing complete model of computation
for its control flow operations:
Signed 32-bit relative addressing for function calls. (shown above).
Signed 32-bit relative addressing for unconditional branches.
Signed relative addressing for all conditional branches.
There are two notions knocking around that we must distinguish between.
There is an informal, rather casual, notion of Turing complete that
glosses over all the reasons why some notation or other is not really
Turing complete. Then there is the more formal demonstration that some
notation really is Turing complete by demonstrating (often in general
terms) how that notation can simulate any Turing machine computation.
From the above, you seem to be going for the casual meaning because you
don't explain how you would deal with having, say, 87 trillion
functions. Just doing 32-bit relative calls won't work in that case.
We don't have to actually have the hardware. We only really need the
abstract model of computation.
But you are declaring it to be x86 code. You can banish the hardware,
but x86 code embodies that hardware in the notation.
To specify 87 trillion functions merely requires a linked list with 87
trillion elements.
What does that look like in x86 code? And whatever answer you come up
with, be sure it extends further. There must be no finite limit. This
is a genuine question. I know you like to just ignore questions and
move on to say more wrong stuff, but please take a moment to prove me
wrong. How is this data structure implemented in x86 code?
Even if you can do it, picking an actual machine language was a silly
idea. High-level languages are usually defined without reference to hardware, so they have no such limits in the notation. You don't have
to do any of this work in, say, Haskell.
On 8/14/2020 6:34 AM, Andy Walker wrote:
every standard m/c code
that I am familiar with is perfectly capable of encoding any one of
the standard UTMs [and therefore any TM computation]. All that is
needed beyond a quite simple FSM, such as that on the front cover of
Minsky's "Computation", is a mechanism for accessing arbitrarily much
storage...
In V41 (x86 is Turing Complete)
I replace the last two sentences with this:
On 8/14/2020 2:32 PM, olcott wrote:
Data access is also based on relative addressing therefore: The x86
language specifies an abstract model of computation that is Turing
complete.
proving (to anyone knowing how easy it is to simulate a Turing Machine
such as Andy and me)
How do you choose whose claims you accept without proof?
On 14/08/2020 21:27, Ben Bacarisse wrote:
[I wrote:]
However, this is a moderately daft debate, and I don't really
want to pursue it any further. Unless you reply with something
outrageous [not your style!], I'll drop it here, and return to
watching from the sidelines while we wait for PO's magnum opus.
Andy Walker <anw@cuboid.co.uk> writes:
On 14/08/2020 21:27, Ben Bacarisse wrote:
[I wrote:]
In the early '70s, we had a PDP 11/05 with 8Kb of storage,
no disc, no OS, not much of anything bar a paper tape punch, paper
tape reader, and a teletype. That would have been enough for a UTM,
'tho I suspect the dept would not have appreciated the bill for the
reels of paper tape had we implemented it.
Nowadays we might say "support chips" but in those days it would be
"peripherals". The PDP 11 instruction set does not define a machine
that is universal and, if I'm going to be picky (I am!), you wrote the
OS -- just enough of one to drive those peripherals.
Um. Well, given the "Unibus" construction of the PDP 11/05,
the tape reader/punch consisted of setting a status byte, writing or
reading a data byte and dealing with an interrupt when the status
changed [or busy-waiting to the same effect]. None of this was in
any way protected from other users, indeed there were no other users
or other programs, and to describe half a dozen instructions as an
operating system is really stretching the definition:
" An operating system is system software that manages computer
" hardware, software resources, and provides common services for
" computer programs. " [Wiki]
In the case at hand, it wasn't "system" software, the hardware was
"managed" by direct access to its status and data bytes, there were
no software resources provided, and no common services. I rest my
case, m'lud.
Access to hardware can help because the hardware might have the right
sort of "unbounded" interface. Don't remember if that was the case for
the paper tape reader/writer but I think they often did have a stepping function.
This is why the only sane theoretical approach is to use a high-level
language.
Approach to ...?
To what PO appears to be trying to do. I don't think he's simulating a
UTM, he's writing a TM simulator that is supposed to be universal. Mind
you, as with so much PO stuff, the universality might be a red herring.
It's not at all clear that he is still making any claim that is
"interesting" (by which I mean "logically impossible") as you originally pointed out he was.
I don't recall ever seeing a HLL used to
demonstrate the *theory*.
I only meant the theoretical claim that some particular programming
notation is TM complete.
<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/14/2020 8:13 PM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
We don't have to actually have the hardware. We only really need the
abstract model of computation.
But you are declaring it to be x86 code. You can banish the hardware,
but x86 code embodies that hardware in the notation.
The abstract model of computation that the x86 language specifies is
Turing complete.
I am glad you believe that. I wonder why you worked so hard in the past
(and continue to do so below) to suggest you were working within finite limits.
To specify 87 trillion functions merely requires a linked list with 87 >>>> trillion elements.
What does that look like in x86 code? And whatever answer you come up
with, be sure it extends further. There must be no finite limit. This
is a genuine question. I know you like to just ignore questions and
move on to say more wrong stuff, but please take a moment to prove me
wrong. How is this data structure implemented in x86 code?
Any software engineer of ordinary skill in the art would be able to
create a linked list of memory elements of unlimited length on the
basis of a signed 32-bit relative jump instruction.
I asked what the data structure would look like. Jumps have nothing to
do with that. What does the data structure for an essentially unbounded linked list look like?
How do you choose whose claims you accept without proof?
That I directly know how easy it is to simulate a Turing machine
because I studied this one
http://www.lns.mit.edu/~dsw/turing/turing.html
That is not a Turing-complete implementation.
This all appears to be a rabbit hole down which there is nothing to be learned. No one doubts that something very like x86 code (as assembler,
not machine code) can be tweaked to remove limits and would therefore be Turing-complete. But you were at pains to point out how this was really
x86 code in every detail.
olcott <NoOne@NoWhere.com> writes:
On 8/15/2020 11:49 AM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
To specify 87 trillion functions merely requires a linked list with 87 >>>>>> trillion elements.
What does that look like in x86 code? And whatever answer you come up >>>>> with, be sure it extends further. There must be no finite limit. This >>>>> is a genuine question. I know you like to just ignore questions and >>>>> move on to say more wrong stuff, but please take a moment to prove me >>>>> wrong. How is this data structure implemented in x86 code?
Any software engineer of ordinary skill in the art would be able to
create a linked list of memory elements of unlimited length on the
basis of a signed 32-bit relative jump instruction.
I asked what the data structure would look like. Jumps have nothing to
do with that. What does the data structure for an essentially unbounded >>> linked list look like?
We have an unlimited number of memory address sized blocks of memory
where the beginning of each block has a relative jump into the prior
block and the end of each block has a relative jump into the next
block.
What do jumps have to do with it?
You are either totally lost or you are trying to construct a
distraction. Since it seems you can't describe the data structure, just
show the code that changes every 0 to a 1 (these can be any fixed size)
data in the list elements. This loop will make the data structure clear
by showing how the code moves from one element to the next, and how the
end of the list is detected.
How do you choose whose claims you accept without proof?
That I directly know how easy it is to simulate a Turing machine
because I studied this one
http://www.lns.mit.edu/~dsw/turing/turing.html
That is not a Turing-complete implementation.
http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
Here is a Turing Complete adaptation, that I derived two years ago:
All integers are specified as hexadecimal digits [0-9A-F] encoded as:
H. Each element of the quintuple is space delimited. One Turing
Machine Description language instruction per line:
The original input language is Turing complete. It's the implementation
that is not.
H+ HHHHHHHH HHHHHHHH H+ [RL]
States are arbitrary finite-length strings of hexadecimal digits.
Tape elements are unsigned 32-bit integers.
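A sketch of how one instruction of this description language might be
represented, with field names invented for illustration; the layout is
inferred from the grammar above.

#include <stdint.h>

/* One quintuple: H+ HHHHHHHH HHHHHHHH H+ [RL] */
struct quintuple {
    char    *state;       /* current state: hex string of any finite length */
    uint32_t read_sym;    /* tape symbol that selects this rule */
    uint32_t write_sym;   /* tape symbol written back */
    char    *next_state;  /* next state: hex string of any finite length */
    char     direction;   /* 'R' or 'L' */
};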
That was a pointless change from the point of view of the TM notation.
The original and your extended one are both Turing complete. (It's an odd
term for a notation that is simply a way of writing TMs.)
And if you make corresponding changes to the implementation (I don't
think you did) I very much suspect it is no more Turing complete than
the original.
You should stick with the casual meaning. No one will object to a bit
of hand-waving on this point (unless it is somehow central to whatever
it is you are claiming these days). The trouble starts when you
over-specify and state, as facts, things that need to be proven.
This all appears to be a rabbit hole down which there is nothing to be
learned. No one doubts that something very like x86 code (as assembler, >>> not machine code) can be tweaked to remove limits and would therefore be >>> Turing-complete. But you were at pains to point out how this was really >>> x86 code in every detail.
My halting problem research will be presented as a program that runs
on an x86 based UTM.
In what sense is this x86 "thing" U and in what sense is it a TM?
olcott <NoOne@NoWhere.com> writes:
<cut>
I have shown that the abstract model of computation specified by the
x86 language is Turing complete pertaining to access to unlimited
memory.
No, you have asserted it. In fact you've dodged requests to show it.
Write the loop I talked about. That will overcome the problem you have
in describing data structures.
Your jmp examples are just silly (EIP is fixed-width and it's no help
for data accesses).
It is common knowledge in the field that the requirements of simulating a
Turing Machine are trivial.
You have no idea what is common knowledge in this field. What certainly
is common is a lot of hand-waving about limits being arbitrary and so
not crucial. And, to be frank, that's pretty much all anyone cares
about. But the technical detail -- can some abstract notation really describe potentially unbounded access to storage -- is much less
commonly discussed.
I invite you to show that it's possible.
olcott <NoOne@NoWhere.com> writes:
On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
I have shown that the abstract model of computation specified by the
x86 language is Turing complete pertaining to access to unlimited
memory.
No, you have asserted it. In fact you've dodged requests to show it.
Write the loop I talked about. That will overcome the problem you have
in describing data structures.
Your jmp examples are just silly (EIP is fixed-width and it's no help
for data accesses).
Within the abstract model of computation specified by the x86 language
we would understand that jumping into the next 4GB block resets the EIP
to an offset within this block. Since this is not currently defined
within the current specification it would be a required adjustment to
the current definition. You have definitely made a valid point
here. There might even be other teeny tiny little adjustments that
would be required.
Yes, since jumps are of no use for the key data structure, you need the
same "teeny tiny" change to the data addressing modes and, hey presto,
you have access to unbounded storage implicitly defined in the abstract machine. The notation itself takes on the burden of there always being
a "next" data block. You can't do it in pure x86 code.
So now, since you can't use x86 addressing to access unbounded store,
why bother with the mess that is x86 code?
You could invent your own
abstract machine language with the few simple instructions you need. Or
you could use an existing one. Or you could use a high-level language
that has the right concepts already defined.
x86 code is the wrong choice. Taken literally it does not work. You
need to invent some new entirely abstract addressing mode for it, but you
still have all the complication of it being, in all other respects, an
actual machine language.
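A sketch of the alternative being suggested: a handful of abstract
operations over a tape that grows on demand, so that no address width
appears in the notation at all. The names are invented; in a real
implementation the heap is still finite, which is the usual, and
accepted, hand-wave.

#include <stdlib.h>

/* A doubly linked tape cell, allocated only when the head reaches it. */
struct cell {
    unsigned symbol;
    struct cell *left, *right;
};

/* Move the head one cell to the right, extending the tape as needed.
   No fixed-width address occurs anywhere in this interface. */
struct cell *step_right(struct cell *head) {
    if (head->right == NULL) {
        struct cell *c = calloc(1, sizeof *c);
        if (c == NULL)
            abort();  /* out of (physical) memory */
        c->left = head;
        head->right = c;
    }
    return head->right;
}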
olcott <NoOne@NoWhere.com> writes:
On 8/16/2020 2:13 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/16/2020 11:41 AM, Ben Bacarisse wrote:<cut>
Show me a loop that scans the tape replacing zeros with ones.
I have already shown you enough code that any coder would get it.
How curious, though, that you have not shown the simple loop I
requested. Is it a "trade secret"?
You are apparently not a coder.
You know, I don't think that will be the conclusion that other readers
draw!
I already have other readers of exceptional credibility that have
certified that my analysis is precisely correct:
The relative addressing of the x86 language does specify an abstract
model of computation with access to unlimited memory.
Who said that? I need to go and correct that post too.
Meanwhile, a simple unbounded loop is still not possible (or at least,
you refuse to post your secret code).
By the way, you can be right
without there being any such loop. There are many obscure routes to
being Turing complete, so you can just say that such a loop is not
possible without having to admit you are necessarily wrong.
If you disagree with this I will assume ignorance on your part rather
than dishonesty.
Assume away, but your track record over the years would have been
improved if you included a third option: that you might be wrong.
Remember you called bullshit on a post of mine reminding you of what you
had said? Remember that I was, according to you, obviously mad to say a
denumerable sequence can have a finite range? Both are recent cases where
considering that you might be wrong would have avoided you making these
sorts of accusation.
olcott <NoOne@NoWhere.com> writes:
On 8/16/2020 2:13 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/16/2020 11:41 AM, Ben Bacarisse wrote:<cut>
Show me a loop that scans the tape replacing zeros with ones.
I have already shown you enough code that any coder would get it.
How curious, though, that you have not shown the simple loop I
requested. Is it a "trade secret"?
You are apparently not a coder.
You know, I don't think that will be the conclusion that other readers
draw!
I already have other readers of exceptional credibility that have
certified that my analysis is precisely correct:
The relative addressing of the x86 language does specify an abstract
model of computation with access to unlimited memory.
If you disagree with this I will assume ignorance on your part rather
than dishonesty.
You've admitted being wrong before. You were wrong about Ben being a
liar, and you admitted it. You're wrong about Ben not being a coder.
And unless you show the concrete demonstration that Ben requested,
I'll just assume that you're wrong about that as well.
Show me a loop that scans the tape replacing zeros with ones.
I have already shown you enough code that any coder would get it.
Are you a coder? Are you able to "get it"? Can you show actual
code that scans the tape replacing zeros with ones? The point is
not whether Ben or I, or "any software engineer of ordinary skill
in the art of x86 programming", can do it. You're being asked to demonstrate, without handwaving, that *you* can do it.
On 2020-08-17 09:20, olcott wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an
abstract model of computation with access to unlimited memory really
is blatantly obvious.
The problem is that you have a poor track record with respect to
things you claim are 'blatantly obvious'.
André
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
They simply play head games. One person recently has argued against me
that knows that they don't even understand the underlying subject
matter. Their whole basis was ad hominem.
To prove unlimited memory access in this case merely requires
conventional coding. Ben's example surely is a great one.
If an actual program could be written that fills unlimited memory with
zeroes this would most unequivocally prove my point.
The question, though, was about creating an unbounded linked list using
x86 code, which you have claimed is possible.
André
Show me a loop that scans the tape replacing zeros with ones.
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>> model of computation with access to unlimited memory really is
blatantly obvious.
The problem is that you have a poor track record with respect to things
you claim are 'blatantly obvious'.
André
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school
kids know about computers.
Now we could build a Universal Turing Machine with even something far
less powerful than x86; some microcontroller with storage measured in
bytes. It just has to be hooked up to a tape transport mechanism with read/write abilities; then it's just a matter of giving it enough tape
for whatever problem we feed it.
But the x86 relative addressing mode isn't equivalent to that indefinite
tape mechanism, sorry.
They simply play head games.
Get a better head, then you can participate.
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an
abstract model of computation with access to unlimited memory
really is blatantly obvious.
The problem is that you have a poor track record with respect to
things you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am
right. They simply play head games.
I don't believe anyone here has argued against you while knowing that
you're right. You seem to have an extremely overinflated sense of how obviously correct your ideas are.
One person recently has argued
against me that knows that they don't even understand the underlying
subject matter. Their whole basis was ad hominem.
Was that referring to me? (I've acknowledged several times my lack of expertise in certain areas.)
To prove unlimited memory access in this case merely requires
conventional coding. Ben's example surely is a great one.
If an actual program could be written that fills unlimited memory with
zeroes this would most unequivocally prove my point.
What is preventing you from writing such a program and showing it to us?
On 8/17/2020 12:53 PM, Keith Thompson wrote:[...]
olcott <NoOne@NoWhere.com> writes:
To prove unlimited memory access in this case merely requires
conventional coding. Ben's example surely is a great one.
If an actual program could be written that fills unlimited memory with
zeroes this would most unequivocally prove my point.
What is preventing you from writing such a program and showing it to us?
The fact that the existence of such a program would be self-evident to
anyone truly understanding what I am saying tends to provide very
strong evidence that this would be a significant waste of my time.
If people simply don't believe this self-evident truth then they
probably won't believe it when all the details are fully elaborated.
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 12:53 PM, Keith Thompson wrote:[...]
olcott <NoOne@NoWhere.com> writes:
To prove unlimited memory access in this case merely requires
conventional coding. Ben's example surely is a great one.
If an actual program could be written that fills unlimited memory with >>>> zeroes this would most unequivocally prove my point.
What is preventing you from writing such a program and showing it to us?
The fact that the existence of such a program would be self-evident to
anyone truly understanding what I am saying tends to provide very
strong evidence that this would be a significant waste of my time.
If people simply don't believe this self-evident truth then they
probably won't believe it when all the details are fully elaborated.
I think the issue being disputed here is not just whether such a
program can be written. It's also whether you are competent to
write it. The small samples of C and/or C++ code that I've seen
you post here do not make me optimistic on that point.
Please please stop assuming that the only possible reasons someone
could disagree with you is a lack of understanding. *Even if
you're right*, telling people who disagree with you that they're
not capable of understanding is a waste of everyone's time.
Let's back up a bit. You're presenting some controversial ideas,
and you're discussing them with some very smart people, many of
whom are experts in relevant fields. Imagine that, at some point
in the future, you've managed to persuade at least one person that
you're right. (I presume that's your goal here. If not, please
clarify what your actual goal is.) Now think about how that can
have happened.
This:
Olcott: I can demonstrate this.
Someone else: Fine, please do so.
Olcott: No, it would be a waste of my time. It's obvious that
I can demonstrate it so I don't actually need to do
so. You wouldn't understand my demonstration anyway.
Someone else: Ah yes, you're right, I see it now.
is not a plausible path to your goal.
What is? How do you convince us? How do you convince *anyone*?
Is that even your goal?
(I acknowledge that I'm putting words in your mouth here, but the above captures the gist of how you appear to me, and probably to others.)
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>> model of computation with access to unlimited memory really is
blatantly obvious.
The problem is that you have a poor track record with respect to things
you claim are 'blatantly obvious'.
André
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school
kids know about computers.
The abstract model of computation specified by the x86 language already
has access to unlimited memory when we assume the implementation detail
that the underlying memory architecture of this abstract model is
organized as an unlimited sequence of adjacent 4GB blocks.
(1) signed 32-bit relative jumps
// Jumps to the first address of the next 4GB block
FFFFFFF0: E90B000000 ; signed 32-bit relative jumps
(2) signed 32-bit relative offsets
// Moves a 32-bit integer from the first address of the next 4GB block
xxxxxxxx: BBFFFFFFFF mov ebx, 0xffffffff
xxxxxxxx: 8B4301 mov eax, dword ptr [ebx+0x1]
x86 Instruction Set Reference
https://c9x.me/x86/
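What those two snippets actually compute can be checked directly,
assuming only what the thread's own reference states: EIP is a 32-bit
register, so additions to it are performed modulo 2**32. A sketch:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The jmp at FFFFFFF0 is five bytes long, so EIP holds
       0xFFFFFFF5 when the rel32 displacement 0x0B is added. */
    uint32_t eip = 0xFFFFFFF0u + 5u;
    uint32_t target = eip + 0x0Bu;               /* wraps modulo 2**32 */
    printf("target = %08X\n", (unsigned)target); /* prints 00000000 */
    return 0;
}

The jump lands back at the bottom of the same 4GB space, not at the
first address of a "next" block.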
olcott <NoOne@NoWhere.com> writes:
So now, since you can't use x86 addressing to access unbounded store,
why bother with the mess that x86 code? You could invent your own
abstract machine language with the few simple instructions you need. Or
you could use an existing one. Or you could use a high-level language
that has the right concepts already defined.
olcott <NoOne@NoWhere.com> writes:
<cut>
So you don't have the gumption to admit when you have been proved
wrong?
Me: Since it seems you can't describe the data structure, just show the
code that changes every 0 to a 1 (these can be any fixed size) data
in the list elements. This loop will make the data structure clear
by showing how the code moves from one element to the next, and how
the end of the list is detected.
You: <silence>
Still waiting...
Still waiting for you to admit you lied when you said you had an "actual Turing machine" pair "exactly and precisely the Peter Linz H and Ĥ" way
back in 2018.
Still waiting for any evidence at all that you have anything to post
other than empty boasts.
On 2020-08-21 08:31, olcott wrote:
On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
So you don't have the gumption to admit when you have been proved
wrong?
Me: Since it seems you can't describe the data structure, just show the
code that changes every 0 to a 1 (these can be any fixed size) data
in the list elements. This loop will make the data structure clear
by showing how the code moves from one element to the next, and how
the end of the list is detected.
So you are saying that you are incapable of this?
From where I stand, your unwillingness to do this would seem to
indicate that you are incapable of doing it.
If you are incapable of this then that would also seem to indicate
that you are incapable of evaluating this when it is completed.
How is he supposed to evaluate that which you have not provided?
On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
So you don't have the gumption to admit when you have been proved
wrong?
Me: Since it seems you can't describe the data structure, just show the
code that changes every 0 to a 1 (these can be any fixed size) data >> in the list elements. This loop will make the data structure clear
by showing how the code moves from one element to the next, and how >> the end of the list is detected.
So you are saying that you are incapable of this?
If you are incapable
of this then that would also seem to indicate that you are incapable of evaluating this when it is completed.
On 8/21/2020 1:17 PM, André G. Isaak wrote:
On 2020-08-21 08:31, olcott wrote:
On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
So you don't have the gumption to admit when you have been proved
wrong?
Me: Since it seems you can't describe the data structure, just show the >>>> code that changes every 0 to a 1 (these can be any fixed size) >>>> data
in the list elements. This loop will make the data structure >>>> clear
by showing how the code moves from one element to the next, and >>>> how
the end of the list is detected.
So you are saying that you are incapable of this?
From where I stand, your unwillingness to do this would seem to
indicate that you are incapable of doing it.
If you are incapable of this then that would also seem to indicate
that you are incapable of evaluating this when it is completed.
How is he supposed to evaluate that which you have not provided?
The part that I did provide that you disingenuously removed.
Your assertion that I did not provide this is verifiably false.
A tactic used to win a debate that shows bias against the truth can be reasonably construed as dishonest.
If you cannot see how these two key features of the x86 language
already provide access to unlimited memory within the abstract model of computation defined by this language then you would not be able to
understand any solution provided for your challenge.
Signed {8,16,32} bit Jumps relative to EIP and
Signed {8,16,32} offsets relative to Base Register
already specify access to unlimited memory.
An abstract machine having a tape head that can be advanced in 0 to 0xFFFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for memory addressing based on displacement from the current memory address
is relative addressing.
The abstract model of computation specified by the x86 language already
has access to unlimited memory when we assume the implementation detail
that the underlying memory architecture of this abstract model is
organized as an unlimited sequence of adjacent 4GB blocks.
The x86 language has control flow instructions having {8,16,32} bit
signed offsets to the current EIP (instruction pointer) value. It also
has data access instructions having {8,16,32} bit signed offsets to the
base register.
Signed {8,16,32} bit Jumps relative to EIP
; Jump to the first address of the next 4GB block
FFFFFFF0: E90B000000
Signed {8,16,32} offsets relative to Base Register
; Moves a 32-bit integer from the first address of the next 4GB block
e4: BBFFFFFFFF mov ebx, 0xffffffff
f5: 8B4301 mov eax, dword ptr [ebx+0x1]
Because both the control flow and data access instructions of the x86 language have relative addressing the x86 language specifies an abstract model of computation having access to unlimited memory.
On 2020-08-21 12:46, olcott wrote:
On 8/21/2020 1:17 PM, André G. Isaak wrote:
On 2020-08-21 08:31, olcott wrote:
On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
So you don't have the gumption to admit when you have been proved
wrong?
Me: Since it seems you can't describe the data structure, just show
the
code that changes every 0 to a 1 (these can be any fixed size) >>>>> data
in the list elements. This loop will make the data structure >>>>> clear
by showing how the code moves from one element to the next, >>>>> and how
the end of the list is detected.
So you are saying that you are incapable of this?
From where I stand, your unwillingness to do this would seem to
indicate that you are incapable of doing it.
If you are incapable of this then that would also seem to indicate
that you are incapable of evaluating this when it is completed.
How is he supposed to evaluate that which you have not provided?
The part that I did provide that you disingenuously removed.
Your assertion that I did not provide this is verifiably false.
I didn't include this part because you've already posted it around ten
times. Everyone following this has already seen it. But it doesn't
actually address Ben's question.
A tactic used to win a debate that shows bias against the truth can be
reasonably construed as dishonest.
If you cannot see how these two key features of the x86 language
already provide access to unlimited memory within the abstract model
of computation defined by this language then you would not be able to
understand any solution provided for your challenge.
Signed {8,16,32} bit Jumps relative to EIP and
Signed {8,16,32} offsets relative to Base Register
already specify access to unlimited memory.
An abstract machine having a tape head that can be advanced in 0 to
0xFFFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical name
for memory addressing based on displacement from the current memory
address is relative addressing.
The abstract model of computation specified by the x86 language
already has access to unlimited memory when we assume the
implementation detail that the underlying memory architecture of this
abstract model is organized as an unlimited sequence of adjacent 4GB
blocks.
The x86 language has control flow instructions having {8,16,32} bit
signed offsets to the current EIP (instruction pointer) value. It also
has data access instructions having {8,16,32} bit signed offsets to
the base register.
Signed {8,16,32} bit Jumps relative to EIP
; Jump to the first address of the next 4GB block
FFFFFFF0: E90B000000
Signed {8,16,32} offsets relative to Base Register
; Moves a 32-bit integer from the first address of the next 4GB block
e4: BBFFFFFFFF mov ebx, 0xffffffff
f5: 8B4301 mov eax, dword ptr [ebx+0x1]
So why not simply show how to use this to address Ben's question?
Provide a simple program that zeros out some data structure of an
arbitrary length.
Because both the control flow and data access instructions of the x86
language have relative addressing the x86 language specifies an
abstract model of computation having access to unlimited memory.
Relative addressing is always relative to *something*, whether it be the program counter, an index register, or whatever. Unless those registers
can hold a value of unbounded size, relative addressing doesn't give you access to unlimited memory.
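The same check works on the data side. The mov example quoted earlier
loads EBX with 0xFFFFFFFF and then addresses [ebx+0x1]; since the
effective address is computed in 32 bits, it too wraps. A sketch under
the same fixed-width assumption:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t ebx = 0xFFFFFFFFu;   /* mov ebx, 0xffffffff */
    uint32_t ea = ebx + 0x1u;     /* [ebx+0x1] wraps to zero */
    printf("effective address = %08X\n", (unsigned)ea);  /* 00000000 */
    return 0;
}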
olcott <NoOne@NoWhere.com> writes:
On 8/22/2020 8:04 PM, Ben Bacarisse wrote:<cut>
So don't prove me wrong! I don't mind.
But I wonder what you gain by not posting the x86 program (the one that
scans a linked list changing zeros to ones, but where the list is not in >>> any way bounded in size). It's odd that you have decided not to show
/anyone/ that you are right, because you think I am biased. Why give me >>> that power? The only loser in that strategy is you.
Its a pearls before swine thing.
As I say, you owe me nothing (though an apology for calling me a pig
would not go amiss). By all means, keep all your great discoveries
secret forever, but I don't see how that benefits you one iota.
Oddly, when I got bored of telling you you were wrong about the x86 and
stopped replying you complained about being ignored. It seems only being told
you are right is acceptable.
You consistently prove that you lack the required prerequisites:
As does anyone who knows the x86. The required prerequisite is to
accept your extended architecture as being called "x86".
No one disputes that suitable abstract instruction sets, unbounded
registers,
olcott <NoOne@NoWhere.com> writes:
<cut>
In other words you continue to refrain from agreeing that this is true:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.
The antecedent is false,
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/22/2020 8:04 PM, Ben Bacarisse wrote:<cut>
So don't prove me wrong! I don't mind.
But I wonder what you gain by not posting the x86 program (the one that >>>>> scans a linked list changing zeros to ones, but where the list is not in >>>>> any way bounded in size). It's odd that you have decided not to show >>>>> /anyone/ that you are right, because you think I am biased. Why give me >>>>> that power? The only loser in that strategy is you.
Its a pearls before swine thing.
As I say, you owe me nothing (though an apology for calling me a pig
would not go amiss). By all means, keep all your great discoveries
secret forever, but I don't see how that benefits you one iota.
Oddly, when I got bored of telling you you were wrong about the x86 and stopped >>> replying you complained about being ignored. It seems only being told
you are right is acceptable.
You consistently prove that you lack the required prerequisites:
As does anyone who knows the x86. The required prerequisite is to
accept your extended architecture as being called "x86".
No one disputes that suitable abstract instruction sets, unbounded
registers,
Yet you continue to fail to understand (or acknowledge) that the
32-bit wide fixed width registers defined by one of the two x86
abstract models of computation would by themselves provide access to
unlimited memory without having to actually change the abstract model of
computation defined by this x86 language at all.
I can not "understand" a falsehood. Why do you say "would" not "do"?
Maybe you really do know that the x86 does not provide access to theoretically unlimited memory, so you merely say that it "would"? Why
the subjunctive implying conditionality? You mean it would if it could?
It is the relative addressing of the x86 abstract model of computation
defined by the x86 language that provides memory access and control
flow access to an unlimited number of 4GB blocks of memory.
No. The x86 has no way to access unbounded memory.
It is only hardware implementations of this abstract model that place
any limit on memory access. These hardware limits are implementation
details that are not any aspect of the abstract model itself.
No, it is the design of the x86 that prevents it. Did you try to write
the loop I suggested?
This is a very definitive statement from me: it's not possible. In
fact, I may be wrong. There could be some corner of the specification
that would permit access to some unbounded data (like the PDP-11 paper
tape) so I am relying on other people to review what I write (and they
may all have given up). I know /you/ have not demonstrated it, but
The x86 language itself allows the instruction pointer and/or the data
pointer
There is no "data pointer" in this context. Relative addressing is RIP-relative only. You can (when not in IA-32 and compatibility mode)
use RIP-relative addressing to access data but your remark suggests it applies to registers other than the instruction pointer. Mind you, you
have not been clear if you are talking about 64 bit mode. All your
previous posts seemed to suggest IA-32.
to move forward in 1 to 0x7fffffff increments an unlimited number of
times from their current location.
To see why you are wrong (even about that) you need to read the details
about that addressing mode. But you don't do details.
On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>>> model of computation with access to unlimited memory really is
blatantly obvious.
The problem is that you have a poor track record with respect to things >>>> you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school
kids know about computers.
You made the rookie mistake of conflating the abstract model of
computation defined by the x86 language with its typical Intel
architecture physical implementation.
Physical implementation details are never a part of an abstract model.
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>> model of computation with access to unlimited memory really is
blatantly obvious.
The problem is that you have a poor track record with respect to things
you claim are 'blatantly obvious'.
André
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school
kids know about computers.
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>>>> model of computation with access to unlimited memory really is
blatantly obvious.
The problem is that you have a poor track record with respect to things >>>>> you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established
track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school
kids know about computers.
You made the rookie mistake of conflating the abstract model of
computation defined by the x86 language with its typical Intel
architecture physical implementation.
Physical implementation details are never a part of an abstract model.
You keep talking about the "x86 language" as something distinct from
the x86 physical architecture. Where is this "x86 language" defined?
Is there a published document that specifies it, independently of
the hardware implementation? Is there some specific document that
you're relying on for your assertions? (This is a serious question.)
On 8/23/2020 4:49 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>>>>> model of computation with access to unlimited memory really is
blatantly obvious.
The problem is that you have a poor track record with respect to things >>>>>> you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established >>>>> track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school >>>> kids know about computers.
You made the rookie mistake of conflating the abstract model of
computation defined by the x86 language with its typical Intel
architecture physical implementation.
Physical implementation details are never a part of an abstract model.
You keep talking about the "x86 language" as something distinct from
the x86 physical architecture. Where is this "x86 language" defined?
Is there a published document that specifies it, independently of
the hardware implementation? Is there some specific document that
you're relying on for your assertions? (This is a serious question.)
That is a very good question.
The documentation that directly pertains to how each individual
instruction is encoded as a sequence of bytes and the primary change
of state that executing the instruction is defined to cause specifies
the abstract model of computation that the x86 language defines.
Jumping to a machine address is defined to have a primary change of
state of changing the instruction pointer to the target address.
https://c9x.me/x86/html/file_module_x86_id_147.html
If the jump happens to be in the middle of the operating system or
somewhere outside of physical memory, these are implementation details
that are not defined by the abstract model of the x86 language itself.
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 4:49 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract >>>>>>>> model of computation with access to unlimited memory really is >>>>>>>> blatantly obvious.
The problem is that you have a poor track record with respect to things >>>>>>> you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established >>>>>> track record of arguing against me when they know that I am right.
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction; >>>>> it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school >>>>> kids know about computers.
You made the rookie mistake of conflating the abstract model of
computation defined by the x86 language with its typical Intel
architecture physical implementation.
Physical implementation details are never a part of an abstract model.
You keep talking about the "x86 language" as something distinct from
the x86 physical architecture. Where is this "x86 language" defined?
Is there a published document that specifies it, independently of
the hardware implementation? Is there some specific document that
you're relying on for your assertions? (This is a serious question.)
That is a very good question.
The documentation that directly pertains to how each individual
instruction is encoded as a sequence of bytes and the primary change
of state that executing the instruction is defined to cause specifies
the abstract model of computation that the x86 language defines.
Jumping to a machine address is defined to have a primary change of
state of changing the instruction pointer to the target address.
https://c9x.me/x86/html/file_module_x86_id_147.html
If the jump happens to be in the middle of the operating system or
somewhere outside of physical memory, these are implementation details
that are not defined by the abstract model of the x86 language itself.
c9x.me appears to be a personal web site owned by one person.
(I think I've found that person's name, but it's not particularly
easy to find so I won't mention it here.)
I would have thought that a definitive document defining the "x86
language" would have been published by Intel, or perhaps AMD, IEEE,
or some other organization. I have no particular reason to doubt
the accuracy of the web page you linked to, but it doesn't appear
to be definitive.
Do you have a link to a *definitive* description of x86 (either the architecture or what you call the "x86 language")? I'm looking for
something that I or someone else can look at to confirm or refute
your claims about infinite addressibility.
I rarely deal with assembly or machine language, and x86 isn't
the architecture I'm most familiar with, but my understanding
(supplemented by a quick Google search just now) is that the
architecture includes an "instruction pointer" or "IP", a register
that holds the memory address of the next instruction that will be
executed. Depending on the mode, the IP can be 16 bits ("ip"),
32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
part of the model of computation defined by the x86 architecture.
A jump instruction, whether it's absolute or relative, can only
specify one of 2**N possible targets, where N is no greater than 64.
A program running on an x86 in 32-bit mode could in principle execute
a trillion jump instructions, but those instructions cannot possibly
refer to more than 2**32 distinct target addresses.
It's entirely possible that I'm missing something.
What I'm looking for is (a) definitive documentation of the "x86
language" and (b) an explanation for why the fixed size of the
instruction pointer is not part of the abstract model of computation.
Now of course someone could define such an abstract model in which
jumps are *not* defined in terms of a fixed-width instruction
pointer, and that could provide a basis for accessing unbounded
storage. (Then again, defining a paper tape interface could do
the same thing more straightforwardly.)
But as far as I can tell, such an abstract model would necessarily
be inconsistent with the x86 architecture. My hunch is that your
assertion that the fixed width of the IP is not part of the abstract
model is arbitrary and not supported by any definitive document,
but I'm prepared to be proven wrong.
On 8/23/2020 7:08 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 4:49 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract
model of computation with access to unlimited memory really is >>>>>>>>> blatantly obvious.
The problem is that you have a poor track record with respect to things >>>>>>>> you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established >>>>>>> track record of arguing against me when they know that I am right. >>>>>>
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school >>>>>> kids know about computers.
You made the rookie mistake of conflating the abstract model of
computation defined by the x86 language with its typical Intel
architecture physical implementation.
Physical implementation details are never a part of an abstract model. >>>>
You keep talking about the "x86 language" as something distinct from
the x86 physical architecture. Where is this "x86 language" defined?
Is there a published document that specifies it, independently of
the hardware implementation? Is there some specific document that
you're relying on for your assertions? (This is a serious question.)
That is a very good question.
The documentation that directly pertains how each individual
instruction is encoded as a sequence of bytes and the primary change
of state that executing the instruction is defined to cause specifies
the abstract model of computation that the x86 language defines.
Jumping to a machine address is defined to have a primary change of
state of changing the instruction pointer to the target address.
https://c9x.me/x86/html/file_module_x86_id_147.html
If the jump happens to be in the middle of the operating system or
somewhere outside of physical memory, these are implementation details
that are not defined by the abstract model of the x86 language itself.
c9x.me appears to be a personal web site owned by one person.
(I think I've found that person's name, but it's not particularly
easy to find so I won't mention it here.)
I would have thought that a definitive document defining the "x86
language" would have been published by Intel, or perhaps AMD, IEEE,
or some other organization. I have no particular reason to doubt
the accuracy of the web page you linked to, but it doesn't appear
to be definitive.
Do you have a link to a *definitive* description of x86 (either the
architecture or what you call the "x86 language")? I'm looking for
something that I or someone else can look at to confirm or refute
your claims about infinite addressibility.
It is like the definitive guide to arithmetic: they all say exactly the
same thing. The crucial thing is where I drew the line between the
language and its implementation.
The x86 language is defined by its instructions:
(a) Exactly how each instruction is encoded as a sequence of bytes.
(b) The primary change to the machine state caused by executing the instruction. (AKA the most basic semantics of the instruction).
I rarely deal with assembly or machine language, and x86 isn't
the architecture I'm most familiar with, but my understanding
(supplemented by a quick Google search just now) is that the
architecture includes an "instruction pointer" or "IP", a register
that holds the memory address of the next instruction that will be
executed. Depending on the mode, the IP can be 16 bits ("ip"),
32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
part of the model of computation defined by the x86 architecture.
A jump instruction, whether it's absolute or relative, can only
specify one of 2**N possible targets, where N is no greater than 64.
A program running on an x86 in 32-bit mode could in principle execute
a trillion jump instructions, but those instructions cannot possibly
refer to more than 2**32 distinct target addresses.
Sure they can because each one of them can jump 0x7fffffff bytes
forward. This is documented on the link that I provided.
What happens when they exceed 0xffffffff is an implementation detail.
It's entirely possible that I'm missing something.
What I'm looking for is (a) definitive documentation of the "x86
language" and (b) an explanation for why the fixed size of the
instruction pointer is not part of the abstract model of computation.
Now of course someone could define such an abstract model in which
jumps are *not* defined in terms of a fixed-width instruction
pointer, and that could provide a basis for accessing unbounded
storage. (Then again, defining a paper tape interface could do
the same thing more straightforwardly.)
But as far as I can tell, such an abstract model would necessarily
be inconsistent with the x86 architecture. My hunch is that your
assertion that the fixed width of the IP is not part of the abstract
model is arbitrary and not supported by any definitive document,
but I'm prepared to be proven wrong.
Study the link that I provided.
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:08 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 4:49 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/17/2020 1:12 PM, Kaz Kylheku wrote:
You keep talking about the "x86 language" as something distinct from
the x86 physical architecture. Where is this "x86 language" defined?
Is there a published document that specifies it, independently of
On 2020-08-17, olcott <NoOne@NoWhere.com> wrote:
On 8/17/2020 1:44 AM, André G. Isaak wrote:
On 2020-08-16 17:17, olcott wrote:
That the relative addressing of the x86 language specifies an abstract
model of computation with access to unlimited memory really is >>>>>>>>>> blatantly obvious.
The problem is that you have a poor track record with respect to things
you claim are 'blatantly obvious'.
It really seems that some people in these forums have an established >>>>>>>> track record of arguing against me when they know that I am right. >>>>>>>
The Intel x86 instruction set defines a fixed address space.
That limitation is not simply an instance of an infinite abstraction;
it is the abstraction.
No addressing mode provides unlimited memory.
I.e. you're dead wrong here about something that some elementary school >>>>>>> kids know about computers.
You made the rookie mistake of conflating the abstract model of
computation defined by the x86 language with its typical Intel
architecture physical implementation.
Physical implementation details are never a part of an abstract model. >>>>>
the hardware implementation? Is there some specific document that
you're relying on for your assertions? (This is a serious question.) >>>>
That is a very good question.
The documentation that directly pertains to how each individual
instruction is encoded as a sequence of bytes and the primary change
of state that executing the instruction is defined to cause specifies
the abstract model of computation that the x86 language defines.
Jumping to a machine address is defined to have a primary change of
state of changing the instruction pointer to the target address.
https://c9x.me/x86/html/file_module_x86_id_147.html
If the jump happens to be in the middle of the operating system or
somewhere outside of physical memory, these are implementation details
that are not defined by the abstract model of the x86 language itself.
c9x.me appears to be a personal web site owned by one person.
(I think I've found that person's name, but it's not particularly
easy to find so I won't mention it here.)
I would have thought that a definitive document defining the "x86
language" would have been published by Intel, or perhaps AMD, IEEE,
or some other organization. I have no particular reason to doubt
the accuracy of the web page you linked to, but it doesn't appear
to be definitive.
Do you have a link to a *definitive* description of x86 (either the
architecture or what you call the "x86 language")? I'm looking for
something that I or someone else can look at to confirm or refute
your claims about infinite addressibility.
It is like the definitive guide to arithmetic: they all say exactly the
same thing. The crucial thing is where I drew the line between the
language and its implementation.
Right, the crucial thing is where *you* draw the line.
I haven't seen any x86 documentation referring to it as a "language".
There is a language (machine code) that runs on an x86 system, but it's
the system architecture itself that defines the behavior.
Surely you can provide a more definitive source of information than some guy's web site. (No disrespect to the author of that site.)
Why is the width of the EIP register an implementation detail?
Explain how you decided to draw that line.
The x86 language is defined by its instructions:
(a) Exactly how each instruction is encoded as a sequence of bytes.
(b) The primary change to the machine state caused by executing the
instruction. (AKA the most basic semantics of the instruction).
Only by its instructions? Not by, say, the sizes of its registers?
I rarely deal with assembly or machine language, and x86 isn't
the architecture I'm most familiar with, but my understanding
(supplemented by a quick Google search just now) is that the
architecture includes an "instruction pointer" or "IP", a register
that holds the memory address of the next instruction that will be
executed. Depending on the mode, the IP can be 16 bits ("ip"),
32 bits ("eip"), or 64 bits ("rip"). The IP, as I understand it, is
part of the model of computation defined by the x86 architecture.
A jump instruction, whether it's absolute or relative, can only
specify one of 2**N possible targets, where N is no greater than 64.
A program running on an x86 in 32-bit mode could in principle execute
a trillion jump instructions, but those instructions cannot possibly
refer to more than 2**32 distinct target addresses.
Sure they can because each one of them can jump 0x7fffffff bytes
forward. This is documented on the link that I provided.
Circular reasoning. You say it can do it because it can do it.
What happens when they exceed 0xffffffff is an implementation detail.
Hardly.
It's entirely possible that I'm missing something.
What I'm looking for is (a) definitive documentation of the "x86
language" and (b) an explanation for why the fixed size of the
instruction pointer is not part of the abstract model of computation.
Now of course someone could define such an abstract model in which
jumps are *not* defined in terms of a fixed-width instruction
pointer, and that could provide a basis for accessing unbounded
storage. (Then again, defining a paper tape interface could do
the same thing more straightforwardly.)
But as far as I can tell, such an abstract model would necessarily
be inconsistent with the x86 architecture. My hunch is that your
assertion that the fixed width of the IP is not part of the abstract
model is arbitrary and not supported by any definitive document,
but I'm prepared to be proven wrong.
Study the link that I provided.
From that link (and I'd still like to see something *definitive*):
A relative offset (rel8, rel16, or rel32) is generally specified
as a label in assembly code, but at the machine code level, it
is encoded as a signed 8-, 16-, or 32-bit immediate value. This
value is added to the value in the EIP register. (Here, the
EIP register contains the address of the instruction following
the JMP instruction). When using relative offsets, the opcode
(for short vs. near jumps) and the operand-size attribute (for
near relative jumps) determines the size of the target operand
(8, 16, or 32 bits).
The 8-, 16-, or 32-bit value is added to the value in the EIP register.
Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?
(I know you've already acknowledged that the relative offset can't
exceed 32 bits, so don't bother arguing about that.)
Again, I can imagine that there might be some abstract x86-based
architecture *with significant changes* that could support an
arbitrarily large address range. But it wouldn't be "the x86 language"
as you call it.
The abstract model of computation specified by the x86 language provides access to unlimited memory.
The x86 language is defined by its instructions:
(a) Exactly how each instruction is encoded as a sequence of bytes.
(b) The primary change to the machine state caused by executing the instruction. (AKA the most basic semantics of the instruction).
The syntax of this language specifies that relative jumps can be made to 0x7FFFFFFF bytes above the maximum value that the EIP register can hold.
When this syntax is implemented with the semantics that it specifies, then:
(a) Jumps could be made to memory location addresses that would not fit
in the EIP register. This entails that the EIP register would have to
roll over like an odometer.
(b) This higher memory would have to exist.
(c) This process could be repeated an unlimited number of times.
On 8/23/2020 9:57 PM, olcott wrote:
The abstract model of computation specified by the x86 language
provides access to unlimited memory.
The x86 language is defined by its instructions:
(a) Exactly how each instruction is encoded as a sequence of bytes.
(b) The primary change to the machine state caused by executing the
instruction. (AKA the most basic semantics of the instruction).
The syntax of this language specifies that relative jumps can be made
to 0x7FFFFFFF bytes above the maximum value that the EIP register can
hold.
When this syntax is implemented with the semantics that it specifies, then:
(a) Jumps could be made to memory location addresses that would not
fit in the EIP register. This entails that the EIP register would have
to roll over like an odometer.
(b) This higher memory would have to exist.
(c) This process could be repeated an unlimited number of times.
I see that the prior version lasted for less than 20 minutes. You
really shouldn't masturbate and type at the same time. It breaks your
limited ability to concentrate. You keep flipping crap out there
without a clue. Try thinking about what you are typing. You are a joke
- no one can take you seriously when you keep spinning out one mistake
laden email after another. I suppose those copyright symbols you
include are meant to protect your readers. If someone quoted you as a
serious source, they would be declared to be a nut case. So my serious
advice to you: get your hand out of your pants and think. Also don't
type until thought penetrates your skull.
On 8/23/2020 10:47 PM, Keith Thompson wrote:[...]
olcott <NoOne@NoWhere.com> writes:
Study the link that I provided.
From that link (and I'd still like to see something *definitive*):
A relative offset (rel8, rel16, or rel32) is generally specified
as a label in assembly code, but at the machine code level, it
is encoded as a signed 8-, 16-, or 32-bit immediate value. This
value is added to the value in the EIP register. (Here, the
EIP register contains the address of the instruction following
the JMP instruction). When using relative offsets, the opcode
(for short vs. near jumps) and the operand-size attribute (for
near relative jumps) determines the size of the target operand
(8, 16, or 32 bits).
The 8-, 16-, or 32-bit value is added to the value in the EIP register.
Do you claim that the EIP register can be infinitely wide? Do you claim
that it can hold an address whose value exceeds 2**64?
The syntax of the language specifies that the EIP would roll over like
an odometer and reference memory in the next adjacent 4GB block.
0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
The above arithmetic shows that it has to roll over like an odometer.
The semantics of a memory address that is 07FFFFFFF bytes above
0FFFFFFFF specifies that this address is reachable.
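The wraparound claim is mechanical and easy to check. A minimal C
sketch (an illustration, not from any post in this thread): unsigned
arithmetic in C is defined modulo 2**32 for a 32-bit type, which is
exactly the odometer behavior under discussion.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* EIP is a 32-bit register, so address arithmetic wraps mod 2**32. */
        uint32_t eip = 0xFFFFFFFFu;   /* highest 32-bit address */
        uint32_t rel = 0x7FFFFFFFu;   /* maximum positive rel32 displacement */
        uint32_t target = eip + rel;  /* unsigned overflow wraps, odometer-style */
        printf("%08X + %08X = %08X (mod 2**32)\n", eip, rel, target);
        /* prints: FFFFFFFF + 7FFFFFFF = 7FFFFFFE (mod 2**32) */
        return 0;
    }

Note that the same arithmetic also shows that the wrapped result stays
inside the 32-bit address space rather than reaching any higher memory.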
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 10:47 PM, Keith Thompson wrote:[...]
olcott <NoOne@NoWhere.com> writes:
Study the link that I provided.
From that link (and I'd still like to see something *definitive*):
A relative offset (rel8, rel16, or rel32) is generally specified
as a label in assembly code, but at the machine code level, it
is encoded as a signed 8-, 16-, or 32-bit immediate value. This
value is added to the value in the EIP register. (Here, the
EIP register contains the address of the instruction following
the JMP instruction). When using relative offsets, the opcode
(for short vs. near jumps) and the operand-size attribute (for
near relative jumps) determines the size of the target operand
(8, 16, or 32 bits).
The 8-, 16-, or 32-bit value is added to the value in the EIP register.
Do you claim that the EIP register can be infinitely wide? Do you claim >>> that it can hold an address whose value exceeds 2**64?
The syntax of the language specifies that the EIP would roll over like
an odometer and reference memory in the next adjacent 4GB block.
0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
The above arithmetic shows that it has to roll over like an odometer.
The semantics of a memory address that is 07FFFFFFF bytes above
0FFFFFFFF specifies that this address is reachable.
Explain to me how the "syntax of the language" specifies that.
Cite a source that says that's what it means.
I already cited the source. It is the same source.
Why does it *have to* roll over like an odometer?
That is how fixed-width arithmetic works.
If I have an actual x86 chip where it *doesn't* roll over and address
17FFFFFFE *isn't* accessible, does that chip violate the "x86 language"?
The hardware implementation overrides the syntax of the language.
On 8/24/2020 12:47 AM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 10:47 PM, Keith Thompson wrote:[...]
olcott <NoOne@NoWhere.com> writes:
Study the link that I provided.
From that link (and I'd still like to see something *definitive*):
A relative offset (rel8, rel16, or rel32) is generally specified >>>> as a label in assembly code, but at the machine code level, it
is encoded as a signed 8-, 16-, or 32-bit immediate value. This
value is added to the value in the EIP register. (Here, the
EIP register contains the address of the instruction following
the JMP instruction). When using relative offsets, the opcode
(for short vs. near jumps) and the operand-size attribute (for
near relative jumps) determines the size of the target operand
(8, 16, or 32 bits).
The 8-, 16-, or 32-bit value is added to the value in the EIP register. >>>> Do you claim that the EIP register can be infinitely wide? Do you claim >>>> that it can hold an address whose value exceeds 2**64?
The syntax of the language specifies that the EIP would roll over like
an odometer and reference memory in the next adjacent 4GB block.
0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
The above arithmetic shows that it has to roll over like an odometer.
The semantics of a memory address that is 07FFFFFFF bytes above
0FFFFFFFF specifies that this address is reachable.
Explain to me how the "syntax of the language" specifies that.
; This specifies a jump 2GB above the maximum value of EIP
fffffff0: E9FFFFFF7F
; Since it specifies a jump 2GB above the maximum value of EIP
; then according to the syntax of the language this memory exists.
Cite a source that says that's what it means.
I already cited the source. It is the same source.
Why does it *have to* roll over like an odometer?
That is how fixed-width arithmetic works.
fffffff0 + 5 + 7FFFFFFF = 7FFFFFF4
If I have an actual x86 chip where it *doesn't* roll over and address
17FFFFFFE *isn't* accessible, does that chip violate the "x86 language"?
The hardware implementation overrides the syntax of the language.
Here is the architecture that the x86 language specifies:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks, and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block, then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.
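For concreteness, here is a small C sketch (mine; it assumes only the
E9 rel32 encoding quoted from the Intel material earlier) that decodes
the five-byte jump discussed above and computes its target exactly the
way a 32-bit EIP forces it to be computed.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The bytes of "jmp rel32" placed at address 0xFFFFFFF0:
           E9 FF FF FF 7F  (rel32 = 0x7FFFFFFF, little-endian) */
        const uint8_t insn[5] = { 0xE9, 0xFF, 0xFF, 0xFF, 0x7F };
        uint32_t addr = 0xFFFFFFF0u;   /* address of the jump itself */
        int32_t rel32 = (int32_t)((uint32_t)insn[1]
                      | ((uint32_t)insn[2] << 8)
                      | ((uint32_t)insn[3] << 16)
                      | ((uint32_t)insn[4] << 24));
        /* EIP holds the address of the next instruction: addr + 5. */
        uint32_t target = addr + 5u + (uint32_t)rel32;  /* wraps mod 2**32 */
        printf("target = %08X\n", target);  /* prints: target = 7FFFFFF4 */
        return 0;
    }

The computed target, 7FFFFFF4, is an ordinary address within the same
2**32 space; nothing in the calculation produces an address that the
32-bit EIP cannot hold.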
olcott <NoOne@NoWhere.com> writes:
On 8/24/2020 12:47 AM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 10:47 PM, Keith Thompson wrote:[...]
olcott <NoOne@NoWhere.com> writes:
Study the link that I provided.
From that link (and I'd still like to see something *definitive*): >>>>>
A relative offset (rel8, rel16, or rel32) is generally specified >>>>> as a label in assembly code, but at the machine code level, it >>>>> is encoded as a signed 8-, 16-, or 32-bit immediate value. This >>>>> value is added to the value in the EIP register. (Here, the
EIP register contains the address of the instruction following >>>>> the JMP instruction). When using relative offsets, the opcode >>>>> (for short vs. near jumps) and the operand-size attribute (for >>>>> near relative jumps) determines the size of the target operand >>>>> (8, 16, or 32 bits).
The 8-, 16-, or 32-bit value is added to the value in the EIP register. >>>>> Do you claim that the EIP register can be infinitely wide? Do you claim >>>>> that it can hold an address whose value exceeds 2**64?
The syntax of the language specifies that the EIP would roll over like >>>> an odometer and reference memory in the next adjacent 4GB block.
0FFFFFFFFh + 07FFFFFFFh = 07FFFFFFEh
The above arithmetic shows that it has to roll over like an odometer.
The semantics of a memory address that is 07FFFFFFF bytes above
0FFFFFFFF specifies that this address is reachable.
Explain to me how the "syntax of the language" specifies that.
; This specifies a jump 2GB above the maximum value of EIP
fffffff0: E9FFFFFF7F
On 8/24/2020 12:15 AM, Jeff Barnett wrote:
On 8/23/2020 9:57 PM, olcott wrote:
The abstract model of computation specified by the x86 language
provides access to unlimited memory.
The x86 language is defined by its instructions:
(a) Exactly how each instruction is encoded as a sequence of bytes.
(b) The primary change to the machine state caused by executing the
instruction. (AKA the most basic semantics of the instruction).
The syntax of this language specifies that relative jumps can be made
to 0x7FFFFFFF bytes above the maximum value that the EIP register can
hold.
When this syntax is implemented with the semantics that it specifies, then: >>>
(a) Jumps could be made to memory location addresses that would not
fit in the EIP register. This entails that the EIP register would
have to roll over like an odometer.
(b) This higher memory would have to exist.
(c) This process could be repeated an unlimited number of times.
I see that the prior version lasted for less than 20 minutes. You
really shouldn't masturbate and type at the same time. It breaks your
limited ability to concentrate. You keep flipping crap out there
without a clue. Try thinking about what you are typing. You are a joke
- no one can take you seriously when you keep spinning out one mistake
laden email after another. I suppose those copyright symbols you
include are meant to protect your readers. If someone quoted you as a
serious source, they would be declared to be a nut case. So my serious
advice to you: get your hand out of your pants and think. Also don't
type until thought penetrates your skull.
Those that get pleasure out of denigrating others prove how tiny their
soul is. I am pleased to see that you actually really understand some of
this stuff.
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks, and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block, then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.
On 8/23/2020 11:35 PM, olcott wrote:
On 8/24/2020 12:15 AM, Jeff Barnett wrote:
On 8/23/2020 9:57 PM, olcott wrote:
The abstract model of computation specified by the x86 language
provides access to unlimited memory.
The x86 language is defined by its instructions:
(a) Exactly how each instruction is encoded as a sequence of bytes.
(b) The primary change to the machine state caused by executing the
instruction. (AKA the most basic semantics of the instruction).
The syntax of this language specifies that relative jumps can be
made to 0x7FFFFFFF bytes above the maximum value that the EIP
register can hold.
When this syntax is implemented with the semantics that it specifies,
then:
(a) Jumps could be made to memory location addresses that would not
fit in the EIP register. This entails that the EIP register would
have to roll over like an odometer.
(b) This higher memory would have to exist.
(c) This process could be repeated an unlimited number of times.
I see that the prior version lasted for less than 20 minutes. You
really shouldn't masturbate and type at the same time. It breaks your
limited ability to concentrate. You keep flipping crap out there
without a clue. Try thinking about what you are typing. You are a
joke - no one can take you seriously when you keep spinning out one
mistake laden email after another. I suppose those copyright symbols
you include are meant to protect your readers. If someone quoted you
as a serious source, they would be declared to be a nut case. So my
serious advice to you: get your hand out of your pants and think.
Also don't type until thought penetrates your skull.
Those that get pleasure out of denigrating others prove how tiny their
soul is. I am pleased to see that you actually really understand some
of this stuff.
If you did, these threads would be far fewer and far shorter. You type
and assert without knowledge or thought. That's just a form of mental
masturbation. You are so certain and so wrong so often that I'm sure
records have been set by you for display of total ignorance of a
subject in which you wish to excel. Whether a troll or an ignoramus
makes little difference; if the former, you are starting to believe
your nonsense.
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.
If all absolute addresses refer to the 'current block', how exactly does
one identify which block is 'current'? AFAIK the x86 does not have a
'current block pointer'.
André
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.
If all absolute addresses refer to the 'current block', how exactly
does one identify which block is 'current'? AFAIK the x86 does not
have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block.
There are an unlimited number of 4GB blocks before and after the current
4GB block.
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather
obvious that the x86 language exactly as it currently exists
specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how exactly
does one identify which block is 'current'? AFAIK the x86 does not
have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block.
There are an unlimited number of 4GB blocks before and after the
current 4GB block.
So how do I know which of those blocks the EIP is pointing at? Talking
about the 'current' 4GB block is meaningless unless you have some way
of differentiating between these blocks.
What changes in the system when it moves from one of these 4GB blocks
to the next or previous block?
André
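To make the question concrete, here is a minimal C sketch (entirely
hypothetical; the "block" field models state that no documented x86
register provides, which is exactly what is being asked about) of what
the blocks-of-4GB picture would need.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical machine state for the "unlimited 4GB blocks" idea.
       The abstract model needs "block" to be unbounded; uint64_t is
       used here only so the sketch compiles. */
    struct tape_position {
        uint64_t block;   /* which 4GB block is current */
        uint32_t eip;     /* offset within the current block */
    };

    /* Apply a signed 32-bit relative jump, carrying into the block index. */
    static void rel_jump(struct tape_position *p, int32_t rel) {
        uint64_t flat = ((uint64_t)p->block << 32) | p->eip;
        flat += (uint64_t)(int64_t)rel;   /* may cross a 4GB boundary */
        p->block = flat >> 32;
        p->eip   = (uint32_t)flat;
    }

    int main(void) {
        struct tape_position p = { 0, 0xFFFFFFF5u };
        rel_jump(&p, 0x7FFFFFFF);
        printf("block=%llu eip=%08X\n",
               (unsigned long long)p.block, p.eip);  /* block=1 eip=7FFFFFF4 */
        return 0;
    }

The carry out of the 32-bit offset has to be recorded somewhere; in
this sketch that somewhere is the extra "block" field, a register the
x86 architecture does not define.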
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 4:14 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 10:42 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
In other words you continue to refrain from agreeing that this is true: >>>>>>
If the implementation detail of the memory architecture of the x86 >>>>>> language was an unlimited sequence of contiguous 4GB blocks and the >>>>>> absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather obvious >>>>>> that the x86 language exactly as it currently exists specifies access >>>>>> to unlimited memory.
The antecedent is false,
You and I both know that implementation details are not a part of an
abstract model of computation itself.
We do.
You and I both know that the capability of moving a tape head forward
relative to its current location an unlimited number of times would
define access to unlimited storage.
Provided you can read and write the data, and go back as well as
forward. But given your disliking of details, that's quibbling.
Those extra details are required.
That the abstract model of computation defined by the x86 language
allows its instruction pointer and its data pointer to move forward
relative to their current location an unlimited number of times also
very obviously defines access to unlimited storage.
No, we don't know that. The architecture of the x86 is very clear about >>> what relative addresses are and how they are calculated. RIP is a
fixed-width register.
-- You and I both know that implementation details are not a part of an
-- abstract model of computation itself.
The Intel architecture of the x86 is a physical implementation
detail that is not any part of the abstract model of computation
defined by the x86 language.
Don't be silly. That registers have a fixed width is not an
implementation detail.
(the alternative was to admit you can't)
You and I also both know that when the physical hardware
implementations of this abstract model place limits on the access to
storage that these limits are not a part of the abstract model itself.
Yes, we do.
Good.
Thus we can conclude that the abstract model of computation defined by >>>> the x86 language is fully Turing complete pertaining the requirement
that a Turing machine has access to unlimited memory.
No we can't conclude that.
You might not understand the abstract model of computation defined by
the x86 language well enough to conclude that.
Write the loop I suggested and I will admit I was wrong.
Alternatively,
Writing an x86 program that would fill an unlimited amount of memory
with zeroes requires understanding the details of how the abstract
model of computation defined by the x86 language overcomes memory
limits with relative addressing.
Until you understand this you won't be able to understand the program.
That is a cowardly answer and I struggle to believe that you are being
sincere, because it is such a childish excuse. The truth is you know
you can't do as you boast. It's as plain as day from the description
of how the x86 architecture works.
But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call
you out on it. Have fun.
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:38 PM, Ben Bacarisse wrote:<cut>
But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call
you out on it. Have fun.
I don't disrespect you. I disrespect your disrespecting me.
The code is completed, it is only 65 bytes of x86 machine code.
Another empty boast -- still no code. You can't seem to stop doing it.
Or maybe repeating the excuse for not posting code -- that it would be "pearls before swine" -- would not look good next to your claim that you don't disrespect me!
The code begins @ machine address 0x100. Fills the 256 bytes preceding
it with “@@@@”. Dynamically modifies its own first instruction to
refer to its new code address. Copies itself to a new address that is
exactly 0x100 bytes above its current code address and then does a
relative jump to this new address.
Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
worded description. Despite the poor wording, this is clearly not the
code you said you could produce (it's not in any way a linked list) but that's OK. You say enough for me to know that even this more limited
loop does not meet your objective. Post the code and you can find out
why it can only access a finite amount of memory. Don't post the code and
you can boast away to your heart's content. It's up to you.
I had the emulator terminate its execution after the first four full
cycles. I output the first 1024 bytes of memory before and after
execution and output a complete execution trace of the entire run.
This is 23 pages long.
Wow! You wrote code and got 23 pages of output. You must be right.
I'll call the editor of the JACM right away.
According to the meaning of the syntax of the x86 language this
program would continue to operate after it completed its execution @
machine address: FFFFFF00.
By studiously hiding the code you can avoid anyone being able to be
specific, but assuming a reasonable interpretation of your non-code
waffle, the jumps will always be to addresses within the current code segment. I'm pretty sure there are other problems with it (hint: you
talk of EIP and not RIP so data access is also a problem) but your
hiding the code protects it from more specific analysis.
The only reason that it would not continue to operate would be that
the physical implementation of the abstract model did not implement
the semantics specified by the language syntax.
No. For more details, post the code.
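A minimal C simulation (mine; it models only the address arithmetic of
the loop described above, not the undisclosed 65 bytes) shows why a
32-bit instruction pointer makes the copy-and-jump walk periodic rather
than unbounded.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The described loop: code at "addr" copies itself to addr + 0x100
           and jumps there.  Under a 32-bit EIP the address arithmetic is
           modulo 2**32, so the walk must revisit its starting point. */
        uint32_t addr = 0x100u;
        uint64_t steps = 0;
        do {
            addr += 0x100u;   /* relative jump forward, wraps mod 2**32 */
            steps++;
        } while (addr != 0x100u);
        printf("revisited the start after %llu hops\n",
               (unsigned long long)steps);  /* 16777216 = 2**32 / 0x100 */
        return 0;
    }

After 2**32 / 0x100 = 16,777,216 hops the loop is back where it began,
which is the finite-memory point being made here.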
On 8/24/2020 7:59 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:38 PM, Ben Bacarisse wrote:<cut>
But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call
you out on it. Have fun.
I don't disrespect you. I disrespect your disrespecting me.
The code is completed, it is only 65 bytes of x86 machine code.
Another empty boast -- still no code. You can't seem to stop doing it.
Or maybe repeating the excuse for not posting code -- that it would be
"pearls before swine" -- would not look good next to your claim that you
don't disrespect me!
The code begins @ machine address 0x100. Fills the 256 bytes preceding
it with “@@@@”. Dynamically modifies its own first instruction to
refer to its new code address. Copies itself to a new address that is
exactly 0x100 bytes above its current code address and then does a
relative jump to this new address.
Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
worded description. Despite the poor wording, this is clearly not the
code you said you could produce (it's not in any way a linked list) but
that's OK. You say enough for me to know that even this more limited
loop does not meet your objective. Post the code and you can find out
why it can only access a finite amount of memory. Don't post the code and
you can boast away to your heart's content. It's up to you.
I had the emulator terminate its execution after the first four full
cycles. I output the first 1024 bytes of memory before and after
execution and output a complete execution trace of the entire run.
This is 23 pages long.
Wow! You wrote code and got 23 pages of output. You must be right.
I'll call the editor of the JACM right away.
According to the meaning of the syntax of the x86 language this
program would continue to operate after it completed its execution @
machine address: FFFFFF00.
By studiously hiding the code you can avoid anyone being able to be
specific, but assuming a reasonable interpretation of your non-code
waffle, the jumps will always be to addresses within the current code
segment. I'm pretty sure there are other problems with it (hint: you
talk of EIP and not RIP so data access is also a problem) but your
hiding the code protects it from more specific analysis.
The only reason that it would not continue to operate would be that
the physical implementation of the abstract model did not implement
the semantics specified by the language syntax.
No. For more details, post the code.
You have not yet shown that you understand the subject matter well
enough that you would not baselessly denigrate it. You have shown that
you are capable of understanding the subject matter well enough by your reference to RIP relative addressing.
I do have to admit that your idea of creating code such as this was a
great one. By fully executing the code it is conclusively proven that
the unlimited memory access provided by the x86 model of computation is
fully operational for actual coding and not merely a hypothetical
theoretical construct that does not work in practice.
RIP relative addressing is much simpler: data can be read and written
directly relative to the instruction pointer. It is much easier to see
that RIP relative addressing more directly provides an abstract model
of computation with unlimited memory access.
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather
obvious that the x86 language exactly as it currently exists
specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how exactly
does one identify which block is 'current'? AFAIK the x86 does not
have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block.
There are an unlimited number of 4GB blocks before and after the
current 4GB block.
So how do I know which of those blocks the EIP is pointing at? Talking
about the 'current' 4GB block is meaningless unless you have some way
of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current location
of the Turing machine tape head.
What changes in the system when it moves from one of these 4GB blocks
to the next or previous block?
André
A relative jump into a next 4GB memory block or a prior 4GB memory block
only changes the value of the EIP register. Relative memory addressing
allows code and data to be copied between 4GB boundaries.
On 8/24/2020 9:30 PM, André G. Isaak wrote:
On 2020-08-24 15:02, olcott wrote:
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the
x86 language was an unlimited sequence of contiguous 4GB blocks
and the absolute addressing modes of the x86 language only
referred to addresses within the current 4GB block then it
becomes rather obvious that the x86 language exactly as it
currently exists specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how
exactly does one identify which block is 'current'? AFAIK the x86
does not have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block.
There are an unlimited number of 4GB blocks before and after the
current 4GB block.
So how do I know which of those blocks the EIP is pointing at?
Talking about the 'current' 4GB block is meaningless unless you have
some way of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current
location of the Turing machine tape head.
I have absolutely no idea how to interpret that analogy.
--- So how do I know which of those blocks the EIP is pointing at?
Just as a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Where you are is
defined only relative to where you have been before.
If you made a Turing Machine that counted to infinity and stored the
result, the relative index of your current location would eventually
wrap across 4GB boundaries.
We could use the typical Turing Machine convention of delimiting the
numeric digit strings with a space character. Whether we used base 10
ASCII digits or a base 256 number system we will eventually have a
googolplex of 4GB blocks filled with the representation of a single
integer value.
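The digit-string convention is easy to make concrete. A short C sketch
(an illustration only, not anyone's posted code): incrementing a
counter stored as ASCII text, where the representation grows instead of
wrapping a fixed-width register.

    #include <stdio.h>
    #include <string.h>

    /* Increment a decimal number stored as an ASCII digit string,
       the way a Turing machine would: rightmost digit first, carrying
       leftward, growing the string by one cell when every digit is 9. */
    static void increment(char *digits) {
        size_t i = strlen(digits);
        while (i-- > 0) {
            if (digits[i] != '9') { digits[i]++; return; }
            digits[i] = '0';   /* carry and keep moving left */
        }
        /* every digit was 9: prepend a 1 */
        memmove(digits + 1, digits, strlen(digits) + 1);
        digits[0] = '1';
    }

    int main(void) {
        char tape[32] = "999";   /* room enough for this demo */
        increment(tape);
        printf("%s\n", tape);    /* prints: 1000 */
        return 0;
    }

Nothing in the increment depends on a fixed-width register; the only
requirement is that the storage itself can keep growing.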
RIP relative addressing of 64-bit architectures physically implements
this abstract model of computation directly.
The abstract model itself
specifies that an unlimited number of 0x7FFFFFFF jumps can be made to
higher memory. When one of these jumps exceeds installed memory a
hardware fault occurs, and hardware faults are not part of any
abstract model.
On 2020-08-24 15:02, olcott wrote:
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the x86 >>>>>> language was an unlimited sequence of contiguous 4GB blocks and
the absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather
obvious that the x86 language exactly as it currently exists
specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how exactly
does one identify which block is 'current'? AFAIK the x86 does not
have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block.
There are an unlimited number of 4GB blocks before and after the
current 4GB block.
So how do I know which of those blocks the EIP is pointing at?
Talking about the 'current' 4GB block is meaningless unless you have
some way of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current
location of the Turing machine tape head.
I have absolutely no idea how to interpret that analogy.
olcott <NoOne@NoWhere.com> writes:
On 8/24/2020 7:59 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:38 PM, Ben Bacarisse wrote:<cut>
But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call >>>>> you out on it. Have fun.
I don't disrespect you. I disrespect your disrespecting me.
The code is completed, it is only 65 bytes of x86 machine code.
Another empty boast -- still no code. You can't seem to stop doing it.
Or maybe repeating the excuse for not posting code -- that it would be
"pearls before swine" -- would not look good next to your claim that you >>> don't disrespect me!
The code begins @ machine address 0x100. Fills the 256 bytes preceding >>>> it with “@@@@”. Dynamically modifies its own first instruction to
refer to its new code address. Copies itself to a new address that is
exactly 0x100 bytes above its current code address and then does a
relative jump to this new address.
Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
worded description. Despite the poor wording, this is clearly not the
code you said you could produce (it's not in any way a linked list) but
that's OK. You say enough for me to know that even this more limited
loop does not meet your objective. Post the code and you can find out
why it can only access a finite amount of memory. Don't post the code and
you can boast away to your heart's content. It's up to you.
I had the emulator terminate its execution after the first four full
cycles. I output the first 1024 bytes of memory before and after
execution and output a complete execution trace of the entire run. >>>> This is 23 pages long.
Wow! You wrote code and got 23 pages of output. You must be right.
I'll call the editor of the JACM right away.
According to the meaning of the syntax of the x86 language this
program would continue to operate after it completed its execution @
machine address: FFFFFF00.
By studiously hiding the code you can avoid anyone being able to be
specific, but assuming a reasonable interpretation of your non-code
waffle, the jumps will always be to addresses within the current code
segment. I'm pretty sure there are other problems with it (hint: you
talk of EIP and not RIP so data access is also a problem) but your
hiding the code protects it from more specific analysis.
The only reason that it would not continue to operate would be that
the physical implementation of the abstract model did not implement
the semantics specified by the language syntax.
No. For more details, post the code.
You have not yet shown that you understand the subject matter well
enough that you would not baselessly denigrate it.
Oh dear. Why are you afraid of baseless criticism? I'm not.
No, you don't want to show the code because you know its behaviour has
finite limits imposed by the silly choice of architecture.
Anyway, I see it's business as usual at crack HQ: boast, boast and boast
some more, but never post anything technical.
Certainly not the
impossible TMs that you lied about having in 2018. And now not even a
short loop in x86 code that has been your main excuse for not posting
the TMs you didn't have.
But why choose this to be wrong about? Everyone accepts that a single
conceptual shift (such as providing unbounded registers) will make an x86-like notation Turing complete. And if you have to make that shift,
you could just as well use a very much simpler architecture making
whatever you want to do with it much clearer. And surely designing and emulating such a thing would enable you to delay having to post any
evidence of your non-existent H and H^ just as much as whatever it is
you are doing with the x86?
At least you are managing to deflect attention away from the big lie and
the big question you will never answer -- what was it you really had
when you claimed to have those two impossible Turing machines in Dec
2018. So well done on that front.
On 8/25/2020 9:56 AM, André G. Isaak wrote:
On 2020-08-25 08:34, olcott wrote:
On 8/24/2020 9:30 PM, André G. Isaak wrote:
On 2020-08-24 15:02, olcott wrote:
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the >>>>>>>>> x86 language was an unlimited sequence of contiguous 4GB blocks >>>>>>>>> and the absolute addressing modes of the x86 language only
referred to addresses within the current 4GB block then it
becomes rather obvious that the x86 language exactly as it
currently exists specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how
exactly does one identify which block is 'current'? AFAIK the
x86 does not have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block. >>>>>>
There are an unlimited number of 4GB blocks before and after the >>>>>>> current 4GB block.
So how do I know which of those blocks the EIP is pointing at?
Talking about the 'current' 4GB block is meaningless unless you
have some way of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current
location of the Turing machine tape head.
I have absolutely no idea how to interpret that analogy.
--- So how do I know which of those blocks the EIP is pointing at?
Just as a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Where you are is
defined only relative to where you have been before.
There's no analog to a Turing Machine head in the x86. The x86 uses
random access memory unlike a Turing Machine. So whatever it is you
are trying to claim here is entirely unclear.
So say I have access to one of your modified x86s which has been
running for some unspecified amount of time and I can examine any
aspect of it. How do I determine *which* 4GB block the contents of the
EIP is referring to?
You have no way of knowing that and no need to know that.
The values in the current 4GB block tell you everything that you need
to know. One of the things that you might need to know is that the
result of a computation spans however many 4GB blocks until a space
character is encountered. Such a TM could count to infinity and store
the result as space-delimited ASCII digit strings.
If you made a Turing Machine that counted to infinity and stored the
result, the relative index of your current location would eventually
wrap across 4GB boundaries.
We could use the typical Turing Machine convention of delimiting the
numeric digit strings with a space character. Whether we used base 10
ASCII digits or a base 256 number system we will eventually have a
googolplex of 4GB blocks filled with the representation of a single
integer value.
RIP relative addressing of 64-bit architectures physically implements
this abstract model of computation directly.
No it doesn't. RIP is a 64-bit register which imposes an upper limit
on the number of memory locations which can be addressed (and AFAIK no
x86 chip actually has a 64-bit address bus so the real limit will be
considerably less).
An abstract machine having a tape head that can be advanced in 0 to
0x7FFFFFFF increments an unlimited number of times specifies a model of
computation that has access to unlimited memory. The technical name for
memory addressing based on displacement from the current memory address
is relative addressing.
RIP relative addressing directly implements this in hardware. The
abstract model of computation specified by RIP relative addressing
allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When a
jump is made outside of the bounds of installed memory a hardware fault occurs. Hardware faults are not part of any abstract model.
That RIP relative addressing happens to be currently implemented on
a 64-bit architecture is an implementation detail that is not a part of
the abstract model of computation itself.
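A minimal C sketch (mine) of the RIP-relative calculation as publicly
documented: the signed 32-bit displacement is sign-extended and added
to the 64-bit RIP, so the sum is still taken modulo 2**64 rather than
being unbounded.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* RIP-relative addressing: sign-extend disp32, add to 64-bit RIP. */
        uint64_t rip    = 0xFFFFFFFFFFFFFFF0ull; /* near the top of the space */
        int32_t  disp32 = 0x7FFFFFFF;            /* maximum positive displacement */
        uint64_t target = rip + (uint64_t)(int64_t)disp32;
        printf("target = %016llX\n", (unsigned long long)target);
        /* prints: target = 000000007FFFFFEF -- wrapped at 2**64, odometer-style */
        return 0;
    }

The register is wider, but the arithmetic is the same fixed-width
arithmetic as before, just at 64 bits instead of 32.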
olcott <NoOne@NoWhere.com> writes:
[...]
The abstract model of computation defined by RIP relative addressing
clearly specifies access to unlimited memory. Your first step in
seeing that I am correct is understanding RIP relative addressing.
Clearly? How is that clear?
As far as I can tell, all you can reasonably say is that relative addressing doesn't specify a limit. If you try to do a relative addressing
operation that goes beyond the range of addresses the fixed-width
instruction pointer can hold, there is as far as I know no specification
in the x86 architecture of what happens. (Or perhaps there is such a specification. If so, perhaps you can find it in some document.)
That's not at all the same thing as specifying unlimited memory.
Why would a specification for a real-world CPU specify unlimited memory
when no actual hardware has unlimited memory?
Can you cite some x86 document that *actually says* anything about
access to unlimited memory? I'm asking for something other than your
inference of unlimited memory from the lack of a specified limit.
I understand it would be convenient *for you* if the "x86 language"
specified access to unlimited memory. That doesn't imply that it does.
BTW, this would all be irrelevant if you simply stated that, for your purposes, you were working with an extended x86 that allows for
unlimited memory.
On 2020-08-25 08:34, olcott wrote:
On 8/24/2020 9:30 PM, André G. Isaak wrote:
On 2020-08-24 15:02, olcott wrote:
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the >>>>>>>> x86 language was an unlimited sequence of contiguous 4GB blocks >>>>>>>> and the absolute addressing modes of the x86 language only
referred to addresses within the current 4GB block then it
becomes rather obvious that the x86 language exactly as it
currently exists specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how
exactly does one identify which block is 'current'? AFAIK the x86 >>>>>>> does not have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB block. >>>>>
There are an unlimited number of 4GB blocks before and after the
current 4GB block.
So how do I know which of those blocks the EIP is pointing at?
Talking about the 'current' 4GB block is meaningless unless you
have some way of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current
location of the Turing machine tape head.
I have absolutely no idea how to interpret that analogy.
--- So how do I know which of those blocks the EIP is pointing at?
Just as a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Where you are is
defined only relative to where you have been before.
There's no analog to a Turing Machine head in the x86. The x86 uses
random access memory unlike a Turing Machine. So whatever it is you are trying to claim here is entirely unclear.
So say I have access to one of your modified x86s which has been running
for some unspecified amount of time and I can examine any aspect of it.
How do I determine *which* 4GB block the contents of the EIP is
referring to?
If you made a Turing Machine that counted to infinity and stored the
result, the relative index of your current location would eventually
wrap across 4GB boundaries.
We could use the typical Turing Machine convention of delimiting the
numeric digit strings with a space character. Whether we used base 10
ASCII digits or a base 256 number system we will eventually have a
googolplex of 4GB blocks filled with the representation of a single
integer value.
RIP relative addressing of 64-bit architectures physically implements
this abstract model of computation directly.
No it doesn't. RIP is a 64-bit register which imposes an upper limit on
the number of memory locations which can be addressed (and AFAIK no x86
chip actually has a 64-bit address bus so the real limit will be
considerably less).
The abstract model itself specifies that an unlimited number of
0x7FFFFFFF jumps can be made to higher memory. When one of these jumps
exceeds installed memory a
Only if your relative addressing is done in something wider than
32 bits. Relative addressing simply adds an offset to the EIP. Since
this arithmetic is performed in 32 bits, it can never get to this
hypothetical next block. It would wrap around to the beginning of the
current block (except I still have no idea what 'current block' means).
André
The abstract model of computation defined by RIP relative addressing
clearly specifies access to unlimited memory. Your first step in
seeing that I am correct is understanding RIP relative addressing.
On 2020-08-25 09:34, olcott wrote:
On 8/25/2020 9:56 AM, André G. Isaak wrote:
On 2020-08-25 08:34, olcott wrote:
On 8/24/2020 9:30 PM, André G. Isaak wrote:
On 2020-08-24 15:02, olcott wrote:
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of the >>>>>>>>>> x86 language was an unlimited sequence of contiguous 4GB
blocks and the absolute addressing modes of the x86 language >>>>>>>>>> only referred to addresses within the current 4GB block then >>>>>>>>>> it becomes rather obvious that the x86 language exactly as it >>>>>>>>>> currently exists specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how
exactly does one identify which block is 'current'? AFAIK the >>>>>>>>> x86 does not have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head. >>>>>>>> The EIP points to the current byte address in the current 4GB
block.
There are an unlimited number of 4GB blocks before and after the >>>>>>>> current 4GB block.
So how do I know which of those blocks the EIP is pointing at?
Talking about the 'current' 4GB block is meaningless unless you
have some way of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current
location of the Turing machine tape head.
I have absolutely no idea how to interpret that analogy.
--- So how do I know which of those blocks the EIP is pointing at?
Just as a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Where you are is
defined only relative to where you have been before.
There's no analog to a Turing Machine head in the x86. The x86 uses
random access memory unlike a Turing Machine. So whatever it is you
are trying to claim here is entirely unclear.
So say I have access to one of your modified x86s which has been
running for some unspecified amount of time and I can examine any
aspect of it. How do I determine *which* 4GB block the contents of
the EIP is referring to?
You have no way of knowing that and no need to know that.
If there is no way of knowing this, then how does this hypothetical
computer know which block it is working with?
The values in the current 4GB tell you everything that you need to
know. One of the things that you might need to know is that the result
of a computation spans however many 4GB blocks until space character
is encountered. Such a TM could count to infinity and store the result
as space delimited
Don't call this a TM. You are not describing a TM, you are describing
some vaguely defined modification of x86.
Your goal here seemed to be to somehow demonstrate that the x86 architecture
is Turing Complete (which it isn't), or that you could pretend there was
some Turing Complete 'language' based on x86 assembly language. But
being written in a language which is Turing Complete doesn't make
something into a Turing Machine.
And since your whole reason for doing this was, as far as I can tell, to somehow justify your misuse of the term 'Turing Machine', then Turing Completeness isn't even the property you want. A language is Turing
Complete if it is capable of performing any computation that a Turing
Machine can perform. But if you want to claim that there is some Turing Machine that corresponds to your still-not-presented code, then you
would need to show that every computation which can be performed by an
X86 can be performed by a Turing Machine. That's not Turing
Completeness. That's something else.
And it still wouldn't make your code a Turing Machine.
ASCII digit strings.
If you made a Turing Machine that counted to infinity and stored the
result, the relative index of your current location would eventually
wrap across 4GB boundaries.
We could use the typical Turing Machine convention of delimiting the
numeric digit strings with a space character. Whether we used base
10 ASCII digits or a base 256 number system we will eventually have
a googolplex of 4GB blocks filled with the representation of a
single integer value.
RIP relative addressing of 64-bit architectures physically
implements this abstract model of computation directly.
No it doesn't. RIP is a 64-bit register which imposes an upper limit
on the number of memory locations which can be addressed (and AFAIK
no x86 chip actually has a 64-bit address bus so the real limit will
be considerably less).
An abstract machine having a tape head that can be advanced in 0 to
0x7FFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical name
for memory addressing based on displacement from the current memory
address is relative addressing.
Relative addressing is always relative to something. If it is relative
to the current block, you need a way of specifying which block is the
current block,
You also need a way of specifying which block is the next block.
And it should be noted that relative addresses are translated into
absolute addresses whenever memory is actually accessed. So all you're
doing here is imagining a system in which there are unbounded address
registers which you don't talk about that somehow pick out each 4GB
block. Then you try to pretend this unbounded address doesn't exist by invoking all sorts of needless relative offsets. Wouldn't it be much
simpler to simply state you are talking about some hypothetical system
which is like the x86 except it has no bounds on addressing?
RIP relative addressing directly implements this in hardware. The
RIP implements nothing of the sort. RIP is simply a 64-bit instruction pointer register.
abstract model of computation specified by RIP relative addressing
allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
a jump is made outside of the bounds of installed memory a hardware
fault occurs. Hardware faults are not part of any abstract model.
That RIP relative addressing happens to be currently implemented
on a 64-bit architecture is an implementation detail that is not a
part of
Um. The 'R' in RIP specifies a 64-bit register. That's what 'R' means.
There is no x86 architecture that is defined in some register-width independent fashion.
André
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 10:34 AM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[...]
The abstract model of computation defined by RIP relative addressing
clearly specifies access to unlimited memory. Your first step in
seeing that I am correct is understanding RIP relative addressing.
Clearly? How is that clear?
An abstract machine having a tape head that can be advanced in 0 to
0x7FFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical name
for memory addressing based on displacement from the current memory
address is relative addressing.
RIP relative addressing directly implements this in hardware. The
abstract model of computation specified by RIP relative addressing
allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
a jump is made outside of the bounds of installed memory a hardware
fault occurs. Hardware faults are not part of any abstract model.
That that RIP relative addressing happens to be currently implemented
on a 64-bit architecture is an implementation detail that is not a
part of the abstract model of computation itself.
Yes, I read all that the first N times you posted it. I'm not
failing to understand in a way that would be solved by repetition.
I just disagree.
As far as I can tell, all you can reasonably say is that relative addressing >>> doesn't specify a limit. If you try to do a relative addressing
operation that goes beyond the range of addresses the fixed-width
instruction pointer can hold, there is as far as I know no specification >>> in the x86 architecture of what happens. (Or perhaps there is such a
specification. If so, perhaps you can find it in some document.)
To implement the semantics specified by the x86 language would require
memory organized as an unlimited sequence of contiguous 4GB
blocks. Absolute addressing modes of the 86 language would then only
refer to addresses within the current 4GB block. The concept of a
current 4GB block would be analogous to the current location of a
Turing Machine tape head.
That's not at all the same thing as specifying unlimited memory.
The abstract model of computation specified by the language does
define unlimited memory. Only the physical implementations of this
abstract model place limits on memory access.
Why would a specification for a real-world CPU specify unlimited memory
when no actual hardware has unlimited memory?
RIP relative addressing is a key example of this. Code written for the
RIP relative addressing model will run exactly the same on every
memory architecture that supports it.
If we suddenly have a technical breakthrough such that googolplex-sized
RAM chips can be produced, RIP relative addressing machine code
would run from this RAM unmodified.
Can you cite some x86 document that *actually says* anything about
access to unlimited memory? I'm asking for something other than your
inference of unlimited memory from the lack of a specified limit.
Look into RIP relative addressing as a simpler way of understanding
the same thing that I am saying.
I understand it would be convenient *for you* if the "x86 language"
specified access to unlimited memory. That doesn't imply that it does.
It will make my Halting Problem refutation provided as x86 code have
much higher credibility and complete applicability to actual Turing
Machines.
BTW, this would all be irrelevant if you simply stated that, for your
purposes, you were working with an extended x86 that allows for
unlimited memory.
Not really, not at all. As long as it is understood that the abstract
model of computation defined by the x86 language is fully Turing
Complete, physical implementations of this model that are not fully
Turing complete would still be understood to be totally applicable.
We might for example create a higher level language defined in terms
of 32-bit signed RIP relative addressing that is itself 100% Turing
Complete and then translate this higher level language into specific
hardware platforms.
Any such program needing less than 4GB RAM would run on existing
32-bit platforms. Any such program needing less than 1.0 TB of RAM
would run on existing 64-bit platforms.
The key advantage of this is that theory of computation concepts can
be empirically validated concretely using a much higher level
abstraction. The "C" programming language could be mapped to such a
32-bit signed RIP relative addressing model such that Turing complete
machines could be written directly in "C".
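Here is a minimal C sketch of such a hypothetical system (a toy machine
of my own invention; nothing in it is x86): the single conceptual shift
is a position whose storage grows on demand instead of wrapping a
fixed-width register.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A toy "relative-only" machine.  Its position is bounded here by
       size_t; in the abstraction it would be unbounded. */
    struct toy_machine {
        unsigned char *mem;   /* storage, grown on demand */
        size_t         size;
        size_t         pos;
    };

    static void move_relative(struct toy_machine *m, size_t delta) {
        m->pos += delta;
        if (m->pos >= m->size) {          /* grow instead of wrapping */
            size_t newsize = m->pos + 1;
            unsigned char *grown = realloc(m->mem, newsize);
            if (!grown) { perror("realloc"); exit(1); }
            memset(grown + m->size, 0, newsize - m->size);
            m->mem  = grown;
            m->size = newsize;
        }
    }

    int main(void) {
        struct toy_machine m = { calloc(1, 1), 1, 0 };
        if (!m.mem) { perror("calloc"); return 1; }
        for (int i = 0; i < 5; i++) {
            move_relative(&m, 0x1000);  /* small hops for the demo; a rel32
                                           hop could be up to 0x7FFFFFFF */
            m.mem[m.pos] = '@';         /* write at the new position */
        }
        printf("pos = %zu, size = %zu\n", m.pos, m.size);
        free(m.mem);
        return 0;
    }

The point of the sketch is only that the growth rule is the added
ingredient; it is not something the fixed-width x86 registers supply.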
Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have asked you several times to cite some document that states that same conclusion and you have failed to do so, I conclude that you cannot.
From https://c9x.me/x86/html/file_module_x86_id_147.html :
A relative offset (rel8, rel16, or rel32) is generally specified as
a label in assembly code, but at the machine code level, it is
encoded as a signed 8-, 16-, or 32-bit immediate value. This value
is added to the value in the EIP register. (Here, the EIP register
contains the address of the instruction following the JMP
instruction).
The EIP register is 32 bits, and even RIP is only 64. Fixed register
widths are part of the x86 architecture. The source you cite contradicts
your claim.
Now if you want to assume an x86-like architecture that can access
unlimited memory, that's fine.
On 8/25/2020 10:34 AM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[...]
The abstract model of computation defined by RIP relative addressing
clearly specifies access to unlimited memory. Your first step in
seeing that I am correct is understanding RIP relative addressing.
Clearly? How is that clear?
An abstract machine having a tape head that can be advanced in 0 to 0x7FFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical name
for memory addressing based on displacement from the current memory
address is relative addressing.
RIP relative addressing directly implements this in hardware. The
abstract model of computation specified by RIP relative addressing
allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
a jump is made outside of the bounds of installed memory a hardware
fault occurs. Hardware faults are not part of any abstract model.
That that RIP relative addressing happens to be currently implemented
on a 64-bit architecture is an implementation detail that is not a
part of the abstract model of computation itself.
As far as I can tell, all you can reasonably say is that relative addressing
doesn't specify a limit. If you try to do a relative addressing
operation that goes beyond the range of addresses the fixed-width
instruction pointer can hold, there is as far as I know no specification
in the x86 architecture of what happens. (Or perhaps there is such a
specification. If so, perhaps you can find it in some document.)
To implement the semantics specified by the x86 language would require
memory organized as an unlimited sequence of contiguous 4GB
blocks. Absolute addressing modes of the x86 language would then only
refer to addresses within the current 4GB block. The concept of a
current 4GB block would be analogous to the current location of a
Turing Machine tape head.
That's not at all the same thing as specifying unlimited memory.
The abstract model of computation specified by the language does
define unlimited memory. Only the physical implementations of this
abstract model place limits on memory access.
Why would a specification for a real-world CPU specify unlimited memory
when no actual hardware has unlimited memory?
RIP relative addressing is a key example of this. Code written for the
RIP relative addressing model will run exactly the same on every
memory architecture that supports it.
If we suddenly have a technical breakthrough such that
googolplex-sized RAM chips can be produced, RIP relative addressing
machine code would run from this RAM unmodified.
Can you cite some x86 document that *actually says* anything about
access to unlimited memory? I'm asking for something other than your
inference of unlimited memory from the lack of a specified limit.
Look into RIP relative addressing as a simpler way of understanding
the same thing that I am saying.
I understand it would be convenient *for you* if the "x86 language"
specified access to unlimited memory. That doesn't imply that it does.
[...]
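For what it's worth, the "unlimited sequence of contiguous 4GB blocks"
model that this exchange keeps circling can be sketched in C. This is
an editor's illustration under that model's own assumptions (all names
are invented, and a small block size stands in for 4GB so the sketch
actually runs). Note that the blocks carry no index, matching the
tape-head analogy, and that allocation fails once physical memory runs
out, which is the other side's objection:

  #include <stdlib.h>

  #define BLOCK_SIZE 4096    /* 1ULL << 32 in the model being argued for */

  struct block {
      struct block *prev, *next;       /* neighbours only; no block index */
      unsigned char mem[BLOCK_SIZE];
  };

  struct head {                        /* analogue of a TM tape head */
      struct block *blk;               /* the "current 4GB block"    */
      long off;                        /* offset within that block   */
  };

  /* Move the head by a signed displacement, allocating neighbouring
     blocks on demand.  The tape is unbounded in the model, but calloc
     fails once real memory is exhausted. */
  static int head_move(struct head *h, long rel)
  {
      long pos = h->off + rel;
      while (pos >= BLOCK_SIZE) {
          if (!h->blk->next) {
              h->blk->next = calloc(1, sizeof *h->blk);
              if (!h->blk->next) return -1;    /* the hardware limit */
              h->blk->next->prev = h->blk;
          }
          h->blk = h->blk->next;
          pos -= BLOCK_SIZE;
      }
      while (pos < 0) {
          if (!h->blk->prev) {
              h->blk->prev = calloc(1, sizeof *h->blk);
              if (!h->blk->prev) return -1;
              h->blk->prev->next = h->blk;
          }
          h->blk = h->blk->prev;
          pos += BLOCK_SIZE;
      }
      h->off = pos;
      return 0;
  }

  int main(void)
  {
      struct head h = { calloc(1, sizeof(struct block)), 0 };
      if (!h.blk) return 1;
      head_move(&h, 3 * BLOCK_SIZE + 7);   /* crosses three block bounds */
      return h.off == 7 ? 0 : 1;
  }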
On 8/25/2020 10:59 AM, André G. Isaak wrote:
On 2020-08-25 09:34, olcott wrote:
On 8/25/2020 9:56 AM, André G. Isaak wrote:
On 2020-08-25 08:34, olcott wrote:
On 8/24/2020 9:30 PM, André G. Isaak wrote:
On 2020-08-24 15:02, olcott wrote:
On 8/24/2020 2:23 PM, André G. Isaak wrote:
On 2020-08-24 12:26, olcott wrote:
On 8/24/2020 12:40 PM, André G. Isaak wrote:
On 2020-08-24 00:52, olcott wrote:
If the implementation detail of the memory architecture of
the x86 language was an unlimited sequence of contiguous 4GB
blocks and the absolute addressing modes of the x86 language
only referred to addresses within the current 4GB block then
it becomes rather obvious that the x86 language exactly as it
currently exists specifies access to unlimited memory.
If all absolute addresses refer to the 'current block', how
exactly does one identify which block is 'current'? AFAIK the
x86 does not have a 'current block pointer'.
André
The idea is that it is exactly like a Turing Machine tape head.
The EIP points to the current byte address in the current 4GB
block.
There are an unlimited number of 4GB blocks before and after
the current 4GB block.
So how do I know which of those blocks the EIP is pointing at?
Talking about the 'current' 4GB block is meaningless unless you
have some way of differentiating between these blocks.
The current 4GB block is exactly the same thing as the current
location of the Turing machine tape head.
I have absolutely no idea how to interpret that analogy.
So how do I know which of those blocks the EIP is pointing at?
Like a Turing Machine tape head has no uniquely defined index
identifier, neither does the sequence of 4GB blocks. Where you
are is only relative to where you have been before.
There's no analog to a Turing Machine head in the x86. The x86 uses
random access memory unlike a Turing Machine. So whatever it is you
are trying to claim here is entirely unclear.
So say I have access to one of your modified x86s which has been
running for some unspecified amount of time and I can examine any
aspect of it. How do I determine *which* 4GB block the contents of
the EIP is referring to?
You have no way of knowing that and no need to know that.
If there is no way of knowing this, then how does this hypothetical
computer know which block it is working with?
The values in the current 4GB block tell you everything that you need
to know. One of the things that you might need to know is that the
result of a computation spans however many 4GB blocks until a space
character is encountered. Such a TM could count to infinity and store
the result as space delimited
Don't call this a TM. You are not describing a TM, you are describing
some vaguely defined modification of x86.
Your goal here seemed to be to somehow demonstrate that x86
architecture is Turing Complete (which it isn't), or that you could
pretend there was some Turing Complete 'language' based on x86
assembly language. But being written in a language which is Turing
Complete doesn't make something into a Turing Machine.
And since your whole reason for doing this was, as far as I can tell,
to somehow justify your misuse of the term 'Turing Machine', then
Turing Completeness isn't even the property you want. A language is
Turing Complete if it is capable of performing any computation that a
Turing Machine can perform. But if you want to claim that there is
some Turing Machine that corresponds to your still-not-presented code,
then you would need to show that every computation which can be
performed by an X86 can be performed by a Turing Machine. That's not
Turing Completeness. That's something else.
And it still wouldn't make your code a Turing Machine.
ASCII digit strings.
If you made a Turing Machine that counted to infinity and stored
the result, the relative index of your current location would
eventually wrap across 4GB boundaries.
We could use the typical Turing Machine convention of delimiting
the numeric digit strings with a space character. Whether we used
base 10 ASCII digits or a base 256 number system, we would eventually
have a googolplex of 4GB blocks filled with the representation of a
single integer value.
RIP relative addressing of 64-bit architectures physically
implements this abstract model of computation directly.
No it doesn't. RIP is a 64-bit register which imposes an upper limit
on the number of memory locations which can be addressed (and AFAIK
no x86 chip actually has a 64-bit address bus so the real limit will
be considerably less).
An abstract machine having a tape head that can be advanced in 0 to
0x7FFFFFFF increments an unlimited number of times specifies a model
of computation that has access to unlimited memory. The technical
name for memory addressing based on displacement from the current
memory address is relative addressing.
Relative addressing is always relative to something. If it is relative
to the current block, you need a way of specifying which block is the
current block,
You also need a way of specifying which block is the next block.
And it should be noted that relative addresses are translated into
absolute addresses whenever memory is actually accessed. So all you're
doing here is imagining a system in which there are unbounded address
registers which you don't talk about that somehow pick out each 4GB
block. Then you try to pretend this unbounded address doesn't exist by
invoking all sorts of needless relative offsets. Wouldn't it be much
simpler to simply state you are talking about some hypothetical system
which is like the x86 except it has no bounds on addressing?
RIP relative addressing directly implements this in hardware. The
RIP implements nothing of the sort. RIP is simply a 64-bit instruction
pointer register.
abstract model of computation specified by RIP relative addressing
allows an unlimited number of 0x7FFFFFFF jumps to higher memory. When
a jump is made outside of the bounds of installed memory a hardware
fault occurs. Hardware faults are not part of any abstract model.
That RIP relative addressing happens to be currently implemented
on a 64-bit architecture is an implementation detail that is not a
part of
Um. The 'R' in RIP specifies a 64-bit register. That's what 'R' means.
There is no x86 architecture that is defined in some register-width
independent fashion.
André
"C" is not Turing Complete, but it could be: If we mapped the "C"
programming language to the abstract model of computation of 32-bit
signed RIP relative addressing that is implemented in the 64-bit machine architecture then "C" would have Turing Complete access to unlimited
memory.
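The space-delimited ASCII digit-string convention discussed in the
exchange above is easy to make concrete. A minimal C sketch follows
(an editor's illustration; the growable buffer stands in for the
unbounded tape, and its realloc failure is exactly the physical limit
under dispute):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      size_t cap = 64, len = 0;
      char *tape = malloc(cap);
      char num[32];
      if (!tape) return 1;
      for (unsigned n = 0; n < 5; n++) {    /* "to infinity" in the model */
          int k = snprintf(num, sizeof num, "%u ", n);
          if (len + (size_t)k + 1 > cap) {
              char *t = realloc(tape, cap * 2);
              if (!t) { free(tape); return 1; }    /* the hardware limit */
              tape = t;
              cap *= 2;
          }
          memcpy(tape + len, num, (size_t)k + 1);  /* digits + delimiter */
          len += (size_t)k;
      }
      printf("%s\n", tape);    /* prints: 0 1 2 3 4 */
      free(tape);
      return 0;
  }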
On 8/25/2020 12:28 PM, Keith Thompson wrote:
Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
conclusion and you have failed to do so, I conclude that you cannot.
http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF
page 25-192
E9 cd JMP rel32 1 Jump near, displacement relative to next
instruction
I said it is common knowledge and every source agrees that it is a
verified fact.
On 2020-08-25 11:51, olcott wrote:
On 8/25/2020 12:28 PM, Keith Thompson wrote:
Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
conclusion and you have failed to do so, I conclude that you cannot.
http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF
page 25-192
E9 cd JMP rel32 1 Jump near, displacement relative to next
instruction
p25-192 of that document describes the jump instruction. It most
definitely *does not* claim that this instruction gives access to
unlimited memory.
I said it is common knowledge and every source agrees that it is a
verified fact.
The above one certainly does not. If you think otherwise, please provide
an actual quote from the above which talks about access to unlimited
memory.
André
On 2020-08-25 11:00, olcott wrote:
[...]
"C" is not Turing Complete, but it could be: If we mapped the "C"
programming language to the abstract model of computation of 32-bit
signed RIP relative addressing that is implemented in the 64-bit
machine architecture then "C" would have Turing Complete access to
unlimited memory.
Why are you suddenly talking about C?
Before you were talking about x86.
Is this a change of topic? If so, why are you giving it as a reply to
my post?
And why on earth would C be related to RIP relative addressing? C,
unlike x86 assembly, is not tied to any specific processor. It makes
no assumptions about which registers or addressing modes exist.
André
On 8/25/2020 12:28 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 10:34 AM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
[...]
Yes, I read all that the first N times you posted it. I'm not
failing to understand in a way that would be solved by repetition.
I just disagree.
Valid disagreement always has a basis provided.
[...]
Your interpretation of somebody's web page describing the x86 JMP
instruction is that it implies access to unlimited memory. Since I have
asked you several times to cite some document that states that same
conclusion and you have failed to do so, I conclude that you cannot.
http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF
page 25-192
E9 cd JMP rel32 1 Jump near, displacement relative to next
instruction
I said it is common knowledge and every source agrees that it is a
verified fact.
This is a 1995 manual, thus 32-bit architecture.
https://en.wikipedia.org/wiki/List_of_Intel_microprocessors#Original_Pentium
[...]
The EIP register is 64 bits. That's part of the x86 architecture. The
source you cite contradicts your claim.
You are confusing the 32-bit EIP register with the 64-bit RIP register.
Now if you want to assume an x86-like architecture that can access
unlimited memory, that's fine.
olcott <NoOne@NoWhere.com> writes:
[...]
Valid disagreement always has a basis provided.
I disagree.
[...]
http://datasheets.chipdb.org/Intel/x86/Pentium/24143004.PDF
Great, that's actually a document from Intel. I wonder why you couldn't
have cited that earlier when I asked you to.
page 25-192
E9 cd JMP rel32 1 Jump near, displacement relative to next
instruction
IF instruction = relative JMP (* i.e. operand is rel8, rel16, or rel32 *)
THEN
    EIP ← EIP + rel8/16/32;
    IF OperandSize = 16
    THEN EIP ← EIP AND 0000FFFFH;
    FI;
FI;
A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
contradicts your claim.
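Rendered as C (an editor's paraphrase of the pseudocode just quoted,
not Intel's own code), the operation makes the fixed width explicit:

  #include <stdint.h>

  /* C paraphrase of the quoted operation section.  The displacement
     is sign-extended and added, and the result is stored back into a
     fixed-width EIP, so no sequence of relative JMPs can leave the
     2**32 address space. */
  uint32_t relative_jmp(uint32_t eip, int32_t rel, int operand_size)
  {
      eip += (uint32_t)rel;            /* EIP <- EIP + rel8/16/32 */
      if (operand_size == 16)
          eip &= 0x0000FFFFu;          /* EIP <- EIP AND 0000FFFFH */
      return eip;                      /* wraps modulo 2**32 */
  }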
I said it is common knowledge and every source agrees that it is a
verified fact.
No source that you've cited says that x86 can access unlimited memory.
This one in particular seems clear that a relative JMP cannot do so.
Is that a good enough basis for valid disagreement?
[...]
You are confusing the 32-bit EIP register with the 64-bit RIP register.
Quite possibly. In any case, the instruction pointer has a fixed
width (whatever it is) and can only point to one of a finite number
of memory locations.
Now if you want to assume an x86-like architecture that can access
unlimited memory, that's fine.
On 8/25/2020 1:30 PM, André G. Isaak wrote:
[...]
I said it is common knowledge and every source agrees that it is a
verified fact.
The above one certainly does not. If you think otherwise, please
provide an actual quote from the above which talks about access to
unlimited memory.
The syntax of the language specifies this:
Jumps to an address that is 0x7FFFFFFF greater than 0xFFFFFFF5:
fffffff0: e9 ff ff ff 7f    jmp 0xfffffff5 + 0x7fffffff
fffffff5: 90                nop
In any case the abstract model of computation defined by RIP relative
addressing much more directly provides access to unlimited memory.
An abstract model of computation having all memory access for both
code and data specified as signed 32-bit offsets relative to the
instruction pointer would define Turing complete access to memory when
the size of the instruction pointer is unspecified.
The "C" programming language could be mapped to such an abstract model
thus allowing Turing complete programs to be written directly in "C".
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
<cut>
But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call
you out on it. Have fun.
I don't disrespect you. I disrespect your disrespecting me.
The code is complete; it is only 65 bytes of x86 machine code.
Another empty boast -- still no code. You can't seem to stop doing it.
Or maybe repeating the excuse for not posting code -- that it would be
"pearls before swine" -- would not look good next to your claim that
you don't disrespect me!
The code begins @ machine address 0x100. It fills the 256 bytes
preceding it with "@@@@", dynamically modifies its own first
instruction to refer to its new code address, copies itself to a new
address that is exactly 0x100 bytes above its current code address,
and then does a relative jump to this new address.
Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
worded description. Despite the poor wording, this is clearly not the
code you said you could produce (it's not in any way a linked list)
but that's OK. You say enough for me to know that even this more
limited loop does not meet your objective. Post the code and you can
find out why it can only access a finite amount of memory. Don't post
the code and you can boast away to your heart's content. It's up to
you.
I had the emulator terminate its execution after the first four full
cycles. I output the first 1024 bytes of memory before and after
execution, and output a complete execution trace of the entire run.
This is 23 pages long.
Wow! You wrote code and got 23 pages of output. You must be right.
I'll call the editor of the JACM right away.
According to the meaning of the syntax of the x86 language this
program would continue to operate after it completed its execution @
machine address: FFFFFF00.
By studiously hiding the code you can avoid anyone being able to be
specific, but assuming a reasonable interpretation of your non-code
waffle, the jumps will always be to addresses within the current code
segment. I'm pretty sure there are other problems with it (hint: you
talk of EIP and not RIP so data access is also a problem) but your
hiding the code protects it from more specific analysis.
The only reason that it would not continue to operate would be that
the physical implementation of the abstract model did not implement
the semantics specified by the language syntax.
No. For more details, post the code.
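Since the 65-byte program itself was never posted, here is an editor's
sketch in C of the behaviour described above, emulated rather than run
as real machine code (all names and sizes are invented for
illustration). The emulated memory is finite, so the relocation loop
necessarily stops; a larger MEM_SIZE only postpones that, which is the
point being argued:

  #include <stdio.h>
  #include <string.h>

  #define MEM_SIZE  0x1000
  #define CODE_SIZE 65
  #define STRIDE    0x100

  int main(void)
  {
      static unsigned char mem[MEM_SIZE];
      size_t ip = 0x100;                   /* code begins @ 0x100       */
      memset(mem, '@', ip);                /* fill the 256 bytes below  */
      memset(mem + ip, 0x90, CODE_SIZE);   /* stand-in "code" (NOPs)    */

      /* Copy the code 0x100 bytes higher and "jump" there, repeatedly,
         until the next copy would run off the end of memory. */
      while (ip + STRIDE + CODE_SIZE <= MEM_SIZE) {
          memmove(mem + ip + STRIDE, mem + ip, CODE_SIZE);
          ip += STRIDE;                    /* the relative jump upward  */
      }
      printf("halted: next copy would leave the %u-byte memory at ip=0x%zx\n",
             (unsigned)MEM_SIZE, ip);
      return 0;
  }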
On 8/25/2020 2:29 PM, Keith Thompson wrote:
[...]
I have switched gears now.
I have a much more effective approach.
To make the abstract model of computation implied by RIP relative
addressing code have Turing complete memory access for both code and
data we merely fail to assume any specific size of the instruction
pointer.
The abstract model of computation itself does not assume that the
RIP register is 64 bits. Details such as this are not actually
required for instruction pointer relative addressing.
Because we know that the generic instruction pointer relative
addressing model of computation is Turing complete, any "C" code that
could be translated into this model would also be Turing complete.
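The "switched gears" model is straightforward to state in code. A
minimal C sketch (an editor's illustration; IP_T is an invented name):
the same rel32 stepping logic, parameterized over the
instruction-pointer type. Whatever fixed-width type is substituted
bounds the reachable addresses, and C offers no genuinely unbounded
integer to substitute, which is the crux of the disagreement:

  #include <stdint.h>

  /* Pick the instruction-pointer width here; the relative-offset
     logic below does not otherwise care. */
  typedef uint64_t IP_T;               /* try uint32_t, uint64_t, ... */

  /* One rel32 step: sign-extend the 32-bit displacement and add it.
     The result wraps at the width of IP_T, whatever that is. */
  IP_T step(IP_T ip, int32_t rel32)
  {
      return ip + (IP_T)(int64_t)rel32;
  }

Under this parameterization, "the model is Turing complete" is a claim
about an IP_T that no fixed-width choice actually realizes.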
olcott <NoOne@NoWhere.com> writes:
[...]
The syntax of the language specifies this:
Jumps to an address that is 0x7FFFFFFF greater than 0xFFFFFFF5:
fffffff0: e9 ff ff ff 7f    jmp 0xfffffff5 + 0x7fffffff
fffffff5: 90                nop
The syntax of the language specifies no such thing. You're talking
about semantics.
[...]
You've decided, with no particular basis, which parts of the x86
architecture are part of your "abstract model" and which are not.
You accept that offsets are 32 bits, but you ignore the fact that
the instruction pointer is no more than 64 bits.
If you want to call that *your* abstract model, that's fine. Your
assertion that the x86 language/architecture/whatever implies access
to unlimited memory is disproven by the very links you've posted.
Please find and post a link to a definitive x86 description that
actually *says* that it can access unlimited memory. Something that
merely doesn't specify a limit does not count. You've claimed
repeatedly that this is "common knowledge and every source agrees
that it is a verified fact". If so, surely you can find *something*
that actually says so.
The "C" programming language could be mapped to such an abstract model
thus allowing Turing complete programs to be written directly in "C".
So Turing completeness is an attribute of programs? I thought it was
an attribute of languages and similar entities. Are you talking about
a program that implements some Turing-complete system?
olcott <NoOne@NoWhere.com> writes:
[...]
I have switched gears now.
I have a much more effective approach.
To make the abstract model of computation implied by RIP relative
addressing code have Turing complete memory access for both code and
data we merely fail to assume any specific size of the instruction
pointer.
The abstract model of computation itself does not assume that the
RIP register is 64 bits. Details such as this are not actually
required for instruction pointer relative addressing.
Because we know that the generic instruction pointer relative
addressing model of computation is Turing complete, any "C" code that
could be translated into this model would also be Turing complete.
That *almost* sounds like an admission that you were wrong.
Is it? If so, will you acknowledge the time and effort I and others
have spent convincing you of that? Do you finally realize that the
sources you've cited in support of your claims actually contradict them?
Will you acknowledge that what you call "The abstract model of
computation" is not something that follows from the x86 architecture,
but that you've invented yourself?
On 8/25/2020 2:54 PM, Keith Thompson wrote:
[...]
olcott <NoOne@NoWhere.com> writes:
[...]
That *almost* sounds like an admission that you were wrong.
Is it? If so, will you acknowledge the time and effort I and others
have spent convincing you of that? Do you finally realize that the
sources you've cited in support of your claims actually contradict them?
I am not wrong. The only reason that anyone thought that I was wrong
is that they did not draw the line between the abstract model and its
physical implementation correctly. They may persist in doing that
intentionally because they really like to say that I am wrong.
Will you acknowledge that what you call "The abstract model of
computation" is not something that follows from the x86 architecture,
but that you've invented yourself?
It is exactly as I have always said it:
The defined byte sequence of the machine code and the primary change
of machine state that the instruction is defined to accomplish.
Jumps to an address that is 0x7FFFFFFF greater than 0xFFFFFFF5:
fffffff0: e9 ff ff ff 7f    jmp 0xfffffff5 + 0x7fffffff
fffffff5: 90                nop
The above machine code is syntactically correct and says that it will
jump to machine address 0xfffffff5 + 0x7fffffff in the 32-bit model.
When it does not actually jump to this address, it is a failure of
the hardware to implement what the machine language clearly
specifies.
If the hardware did jump to 0xfffffff5 + 0x7fffffff in the 32-bit
model then there is no reason why it could not do this again and
again.
The abstract machine that the 32-bit model of the x86 language
specifies really does have unlimited memory access.
olcott <NoOne@NoWhere.com> writes:
[...]
I am not wrong. The only reason that anyone thought that I was wrong
is that they did not divide the line between the abstract model and
its physical implementation correctly. They may persist in doing that
intentionally because they really like to say that I am wrong.
And what makes the way you drew that line correct?
Will you acknowledge that what you call "The abstract model of
computation" is not something that follows from the x86 architecture,
but that you've invented yourself?
It is exactly as I have always said it:
The defined byte sequence of the machine code and the primary change
of machine state that the instruction is defined to accomplish.
Jumps to an address that is 0x7FFFFFFF greater than 0xFFFFFFF5
fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
fffffff5: 90 nop
The above machine code is syntactically correct and says that it will
jump to machine address: 0xfffffff5 + 0x7fffffff in the 32-bit model.
When it does not actually jump to this address it is a failure of the
hardware to implement what the machine language clearly specifies.
No, the link you posted makes it clear that the effect of a JMP
instruction is defined by its effect on the instruction pointer
register, which has a fixed size and cannot address more than 2**N
memory locations.
If the hardware did jump to 0xfffffff5 + 0x7fffffff in the 32-bit
model then there is no reason why it could not do this again and
again.
The abstract machine that the 32-bit model of the x86 language
specifies really does have unlimited memory access.
And anyone to whom this isn't obvious just isn't intelligent enough to understand your wisdom, and your decision that the 32-bit limit on
relative jumps is part of the abstract machine while the fixed width of
the instruction pointer is merely an implementation detail isn't at all arbitrary, even though you can't cite a reference to anyone who actually agrees with you.
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 2:29 PM, Keith Thompson wrote:
<cut>
A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
contradicts your claim.
I have switched gears now.
I have a much more effective approach.
To make the abstract model of computation implied by RIP relative
addressing code have Turing complete memory access for both code and
data we merely fail to assume any specific size of the instruction
pointer.
Finally! (Bravo Keith.) You will never admit that this change of gears
means you were wrong, but at least there is now no need for you to keep
saying the same wrong thing over and over again.
I was not wrong when I defined the line of demarcation between the
abstract model and the physical implementation where I did.
That you refuse to draw this line of demarcation where I specified
does not make me wrong.
I never said that drawing the line where you did was wrong. What you
were wrong about was your claim that your choice of where to draw the
line was forced by the "x86 language" that you never fully defined.
You defined (or tried to define) an abstract model based on the x86.
Nothing wrong with that. You defined some aspects of the x86
definition as part of the abstraction and others as implementation
details. Nothing wrong with that either.
But you insisted that the
specific line between abstraction and implementation details would
be obvious to anyone who read the references you cited -- when in
fact those references made it clear that the instruction pointer
has a fixed width of no more than 64 bits, and that jumping to an
address exceeding 2**64 could not make sense given the published
x86 specifications.
It's great that you're no longer making that claim, but I'm annoyed
that I spent so much time and effort arguing with you about it,
you've effectively changed your argument to be consistent with what
I've been saying, and you pretend that that never happened.
Now, what was it you had when you falsely claimed to have Linz's H and H^?
That's the only interesting part of this whole sorry mess -- your claim
to have something impossible. That unbounded register machines are
Turing complete, unlike the x86, is just computability 101, despite it
taking you weeks to get there.
A lie requires an intentional falsehood. Human communication generally
allows some subjective leeway of interpretation. I have always
construed everything computationally identical to a Turing machine as
one-and-the-same thing as a Turing Machine. That you do not define it
this way does not make me a liar.
Now that I have provided a way to directly map the "C" programming
language to an abstract model of computation that is Turing complete
there is a direct bridge to show that programs specified in the "C"
programming language specify Turing complete virtual machines.
On 8/25/2020 7:07 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 2:29 PM, Keith Thompson wrote:
<cut>
A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
contradicts your claim.
I have switched gears now.
I have a much more effective approach.
To make the abstract model of computation implied by RIP relative
addressing code have Turing complete memory access for both code and
data we merely fail to assume any specific size of the instruction
pointer.
Finally! (Bravo Keith.) You will never admit that this change of gears
means you were wrong, but at least there is now no need for you to keep
saying the same wrong thing over and over again.
I was not wrong when I defined the line of demarcation between the
abstract model and the physical implementation where I did.
That you refuse to draw this line of demarcation where I specified
does not make me wrong.
I never said that drawing the line where you did was wrong. What you
were wrong about was your claim that your choice of where to draw the
line was forced by the "x86 language" that you never fully defined.
I defined exactly how to interpret the hundreds of pages of existing
documentation and provided a link.
You defined (or tried to define) an abstract model based on the x86.
Nothing wrong with that. You defined some aspects of the x86
definition as part of the abstraction and others as implementation
details. Nothing wrong with that either.
OK
But you insisted that the
specific line between abstraction and implementation details would
be obvious to anyone who read the references you cited -- when in
fact those references made it clear that the instruction pointer
has a fixed width of no more than 64 bits, and that jumping to an
address exceeding 2**64 could not make sense given the published
x86 specifications.
You got these basic facts wrong: it is 32 bits, not 64 bits. I don't
think any machine on Earth has 2^64 bytes of RAM.
When I drew the line of demarcation at the functional effect that the
instruction was designed to achieve, not including any of the details of
how it achieves this functional effect, I am still correct that the
32-bit x86 language does specify unlimited memory access.
When you say that this simply will not work in terms of how the
processor is defined to achieve those functional effects you step
outside of the line of demarcation that I drew.
fffffff0: e9ffffff7f jmp 0xfffffff5 + 0x7fffffff
fffffff5: 90 nop
That says that it will jump to memory address: 0xfffffff5 + 0x7fffffff
That it does not actually do this is an implementation detail that is
outside of the scope of the abstraction.
That it cannot do this because the 32-bit EIP cannot hold a memory
address of that size is also an implementation detail that is outside of
the scope of the line of demarcation that I drew.
If we define the abstract model of computation of the x86 language only
by the functional results that its instructions were designed to achieve
then this abstract model does specify unlimited memory access.
If we also include how the x86 architecture achieves these functional
results then the x86 language does not specify unlimited memory access.
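The two definitions differ in exactly one parameter: the width assumed
for the instruction pointer. A hedged C sketch of that line of
demarcation (ip_t and jmp_rel32 are invented names; this is a toy step
function, not Intel's definition):

#include <stdio.h>
#include <stdint.h>

/* Choosing ip_t is choosing the line of demarcation: uint32_t models
   the concrete EIP and wraps; a wider type models an instruction
   pointer of unspecified size and does not. Hypothetical sketch. */
typedef uint64_t ip_t;  /* swap in uint32_t to model the real EIP */

static ip_t jmp_rel32(ip_t next_ip, int32_t disp) {
    /* jmp rel32: next instruction address plus signed displacement */
    return next_ip + (ip_t)(int64_t)disp;
}

int main(void) {
    printf("target: 0x%llx\n",
           (unsigned long long)jmp_rel32(0xfffffff5u, 0x7fffffff));
    return 0;
}

With ip_t defined as uint32_t the target wraps to 0x7ffffff4; with the
wider type it reaches 0x17ffffff4. Nothing else in the sketch changes.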
On 8/25/2020 8:07 PM, olcott wrote:[...]
If we define the abstract model of computation of the x86 language
only by the functional results that its instructions were designed
to achieve then this abstract model does specify unlimited memory
access.
If we also include how the x86 architecture achieves these
functional results then the x86 language does not specify unlimited
memory access.
I will take the lack of response as an indication that I finally found
words clear enough to prove this point.
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 8:07 PM, olcott wrote:[...]
If we define the abstract model of computation of the x86 language
only by the functional results that its instructions were designed
to achieve then this abstract model does specify unlimited memory
access.
If we also include how the x86 architecture achieves these
functional results then the x86 language does not specify unlimited
memory access.
I will take the lack of response as an indication that I finally found
words clear enough to prove this point.
You can take the lack of response as anything you like. You'd be
wrong, but I'm ok with that.
In fact I didn't respond because I didn't think there would be any point
in doing so. You have not proven anything. Since you've now "switched gears", I decided not to spend any more time on it.
olcott <NoOne@NoWhere.com> writes:
On 8/26/2020 5:40 PM, Keith Thompson wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 8:07 PM, olcott wrote:[...]
If we define the abstract model of computation of the x86 language
only by the functional results that its instructions were designed
to achieve then this abstract model does specify unlimited memory
access.
If we also include how the x86 architecture achieves these
functional results then the x86 language does not specify unlimited
memory access.
I will take the lack of response as an indication that I finally found
words clear enough to prove this point.
You can take the lack of response as anything you like. You'd be
wrong, but I'm ok with that.
In fact I didn't respond because I didn't think there would be any point
in doing so. You have not proven anything. Since you've now "switched
gears", I decided not to spend any more time on it.
So you either implicitly acknowledge that there is nothing more to say
about it because I did prove my point and you know it.
Nope.
Or the distinction between the functional definition of the x86
language (what each instruction was designed to achieve and nothing
more) and alternative definitions that include how the functional
result is achieved is still too subtle for you to grasp.
Nope.
On 8/28/2020 10:39 AM, André G. Isaak wrote:
On 2020-08-28 09:12, olcott wrote:
On 8/25/2020 10:12 PM, olcott wrote:
On 8/25/2020 9:37 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
<cut>
Now, what was it you had when you falsely claimed to have Linz's H
and H^?
Note: unanswered.
That's the only interesting part of this whole sorry mess -- your claim
to have something impossible. That unbounded register machines are
Turing complete, unlike the x86, is just computability 101, despite it
taking you weeks to get there.
A lie requires an intentional falsehood. Human communication generally
allows some subjective leeway of interpretation. I have always
construed everything computationally identical to a Turing machine as
one-and-the-same thing as a Turing Machine.
No, you did not always do that. That excuse came later. At the time
you made every effort to persuade anyone reading your posts that you
meant a Turing machine as defined by the book you kept referencing
(and, for that matter, by everyone else):
"The key thing is the H and Ĥ are 100% fully encoded as actual Turing
machines."
"it is exactly and precisely the Peter Linz H and Ĥ"
Yes and the halting decider aspect of this was fully encoded as
pseudo-code essentially identical to "C". The inessential non-halt
deciding aspect of this was not encoded.
That you do not define it
this way does not make me a liar.
Indeed. You started out either deluded or wrong. What makes you a liar
is the deception that followed your deluded post of Dec 2018. On Jan
4th you had only "a pair of virtual machines functionally identical to H
and Ĥ" and by 12th Jan 2019 you claimed to have had only "an algorithm"
So you don't understand that an algorithm and a virtual machine are
one-and-the-same thing?
in Dec 2018 and that it was not encoded until "about a week ago"
It was encoded the very same night that my little dog died.
(i.e. some time after having "fully encoded actual Turing machines").
A "C" program that is computationally identical to a Turing machine
implementing the same algorithm is a fully encoded virtual machine.
Every post altering the story a little but there was never any admission
that you did not have what you had claimed to have. Even to this day
you have not admitted that you did not have "exactly and precisely the
Peter Linz H and Ĥ ... fully encoded as actual Turing machines".
I only had the key halt deciding aspect fully encoded.
If you think it is an exaggeration to say that I had it fully
encoded on the basis that I did not have the conventional ordinary
skill in the art aspects encoded, I would say that you are being
unreasonable.
If you say that I did not have a Turing machine because I did not
have a five-tuple machine I say that you and I define our terms
differently.
Any code that is functionally identical to a Turing machine is for
all practical purposes one-and-the-same as a Turing machine.
The story has continued to unravel, including the more recent idea that
pretty much anything can be called a Turing machine, but the clinching
evidence is that you refuse, to this day, to say what you really had
when you made that deluded post in Dec 2018. I invite you, yet again, to
come clean. What did you really have on that day?
I had the halt deciding aspect of H_Hat correctly deciding halting
on itself fully encoded in pseudo-code that was essentially
identical to "C". No other aspect of H_Hat was encoded at all. It
was not a five-tuple machine it was a snippet of "C". Since I was
doing it with pen and paper and I only had two pages of paper some
of the syntactical requirements of "C" may have been skipped.
Only because the snippet of "C" equivalent code contained every
single detail of exactly how H_Hat would correctly decide halting on
itself and all the remaining details of converting this snippet of
code into a complete virtual machine required no more than ordinary
skill-of-the-art of software engineering can my claim be construed as
reasonable and not a wild exaggeration.
So why not just post that snippet?
André
In its current state it would not be accurately reviewed.
Only by seeing this snippet in the context of its execution in an x86
based UTM will it be able to be accurately reviewed. Since I created one
single instance of H_Hat correctly deciding halting on itself I created
two whole classes of such instances. This update is enormously more
robust than my 2018-12-13 7:00 PM work.
It has consistently been the case that all of my work posted on these
forums has been dismissed out-of-hand until I address every single
detail of such a dismissal, proving point-by-point, item-by-item,
detail-by-detail that every possible dismissal is baseless.
Normally people never provide any details whatsoever of why my work is
dismissed. I have to very rigorously challenge them to provide a basis
or no basis whatsoever is provided. In the final analysis the ultimate
"reason" that my work is dismissed is that people really really don't
believe that I am correct. They can never pin down an exhaustive set of
complete reasoning supporting this belief.
To say this in other words, all of the dismissal of my work has been
based entirely on my lack of credibility; it has never been on the basis
of pointing out actual errors in the essence of my reasoning.
With my halting problem work the code totally speaks for itself. My incompleteness theorem work requires a complete paradigm shift to be understood.
On 2020-08-28 10:25, olcott wrote:
On 8/28/2020 10:39 AM, André G. Isaak wrote:
On 2020-08-28 09:12, olcott wrote:
On 8/25/2020 10:12 PM, olcott wrote:
On 8/25/2020 9:37 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/25/2020 5:35 PM, Ben Bacarisse wrote:
<cut>
Now, what was it you had when you falsely claimed to have Linz's H
and H^?
Note: unanswered.
That's the only interesting part of this whole sorry mess -- your claim
to have something impossible. That unbounded register machines are
Turing complete, unlike the x86, is just computability 101, despite it
taking you weeks to get there.
A lie requires an intentional falsehood. Human communication generally
allows some subjective leeway of interpretation. I have always
construed everything computationally identical to a Turing machine as
one-and-the-same thing as a Turing Machine.
No, you did not always do that. That excuse came later. At the time
you made every effort to persuade anyone reading your posts that you
meant a Turing machine as defined by the book you kept referencing
(and, for that matter, by everyone else):
"The key thing is the H and Ĥ are 100% fully encoded as actual Turing
machines."
"it is exactly and precisely the Peter Linz H and Ĥ"
Yes and the halting decider aspect of this was fully encoded as
pseudo-code essentially identical to "C". The inessential non-halt
deciding aspect of this was not encoded.
That you do not define it
this way does not make me a liar.
Indeed. You started out either deluded or wrong. What makes you a liar
is the deception that followed your deluded post of Dec 2018. On Jan
4th you had only "a pair of virtual machines functionally identical to H
and Ĥ" and by 12th Jan 2019 you claimed to have had only "an algorithm"
So you don't understand that an algorithm and a virtual machine are
one-and-the-same thing?
in Dec 2018 and that it was not encoded until "about a week ago"
It was encoded the very same night that my little dog died.
(i.e. some time after having "fully encoded actual Turing machines").
A "C" program that is computationally identical to a Turing machine
implementing the same algorithm is a fully encoded virtual machine.
Every post altering the story a little but there was never any admission
that you did not have what you had claimed to have. Even to this day
you have not admitted that you did not have "exactly and precisely the
Peter Linz H and Ĥ ... fully encoded as actual Turing machines".
I only had the key halt deciding aspect fully encoded.
If you think it is an exaggeration to say that I had it fully
encoded on the basis that I did not have the conventional ordinary
skill in the art aspects encoded, I would say that you are being
unreasonable.
If you say that I did not have a Turing machine because I did not
have a five-tuple machine I say that you and I define our terms
differently.
Any code that is functionally identical to a Turing machine is for
all practical purposes one-and-the-same as a Turing machine.
The story has continued to unravel, including the more recent idea that
pretty much anything can be called a Turing machine, but the clinching
evidence is that you refuse, to this day, to say what you really had
when you made that deluded post in Dec 2018. I invite you, yet again, to
come clean. What did you really have on that day?
I had the halt deciding aspect of H_Hat correctly deciding halting
on itself fully encoded in pseudo-code that was essentially
identical to "C". No other aspect of H_Hat was encoded at all. It
was not a five-tuple machine it was a snippet of "C". Since I was
doing it with pen and paper and I only had two pages of paper some
of the syntactical requirements of "C" may have been skipped.
Only because the snippet of "C" equivalent code contained every
single detail of exactly how H_Hat would correctly decide halting on
itself and all the remaining details of converting this snippet of
code into a complete virtual machine required no more than ordinary
skill-of-the-art of software engineering can my claim be construed
as reasonable and not a wild exaggeration.
So why not just post that snippet?
André
In its current state it would not be accurately reviewed.
If the snippet really does contain "every single detail of exactly how
H_Hat would correctly decide halting on itself", then how could it not
be accurately reviewed in its current state?
Only by seeing this snippet in the context of its execution in an x86
based UTM will it be able to be accurately reviewed. Since I created one
Once again, you need to stop using the term 'UTM' when clearly you do
not mean UTM. A UTM is a Turing Machine. Turing Machines are described
by a septuple as has been previously discussed. What you have (or claim
to have) is an x86-based computer program.
Why would your snippet require such a program to be accurately reviewed?
Or at all, for that matter?
André
single instance of H_Hat correctly deciding halting on itself I
created two whole classes of such instances. This update is enormously
more robust than my 2018-12-13 7:00 PM work.
It has consistently been the case that all of my work posted on these
forums has been dismissed out-of-hand until I address every single
detail of such a dismissal, proving point-by-point, item-by-item,
detail-by-detail that every possible dismissal is baseless.
Normally people never provide any details whatsoever of why my work
is dismissed. I have to very rigorously challenge them to provide a
basis or no basis whatsoever is provided. In the final analysis the
ultimate "reason" that my work is dismissed is that people really
really don't believe that I am correct. They can never pin down an
exhaustive set of complete reasoning supporting this belief.
To say this in other words, all of the dismissal of my work has been
based entirely on my lack of credibility; it has never been on the
basis of pointing out actual errors in the essence of my reasoning.
With my halting problem work the code totally speaks for itself. My
incompleteness theorem work requires a complete paradigm shift to be
understood.
On 2020-08-28 11:50, olcott wrote:
On 8/28/2020 12:19 PM, André G. Isaak wrote:
On 2020-08-28 11:08, olcott wrote:
Any virtual machine that is computationally identical to a Turing
machine is for all practical purposes an actual Turing machine. Now
that I have shown how the abstract machine defined by the x86
language has access to unlimited memory I have shown that this
abstract machine is computationally identical to a Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
Before you claimed to have shown that x86 was Turing Complete. (You
didn't, but that was your claim).
But above you are now claiming that it is Turing Equivalent. Those
are not the same thing.
If you could show that x86 was Turing Equivalent, then you could
argue that for every x86 program, a corresponding Turing Machine must
exist. This, however, does not follow from x86 being Turing Complete.
Turing completeness
A computational system that can compute every Turing-computable
function is called Turing-complete (or Turing-powerful).
Alternatively, such a system is one that can simulate a universal
Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every function
it can compute is also Turing-computable; i.e., it computes precisely
the same class of functions as do Turing machines. Alternatively, a
Turing-equivalent system is one that can simulate, and be simulated
by, a universal Turing machine.
Within the Church-Turing thesis we get essentially that a Turing
machine can compute any damn thing that can possibly be computed.
There are various equivalent formulations of the
Church-Turing thesis. A common one is that every
effective computation can be carried out by a
Turing machine. https://plato.stanford.edu/entries/church-turing/
A Turing complete language is defined to compute this same set.
This makes the conventional term of the art: "Turing equivalence"
redundant, and thus "Turing completeness" and "Turing equivalence" are
merely different words for the exact same thing.
https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
You seem to be missing my point.
It is possible that Turing Complete and Turing Equivalent have the same extension, but they mean different things.
I am pointing out that you
are being sloppy with your terminology, just as you are being sloppy
with your use of terms like 'Turing Machine' and UTM.
You claim that you want to eventually publish your work. Even if it
turns out that you have something interesting to show, no journal is
going to take you seriously if your presentation is so rife with misuse
of terminology since it won't be clear what you are trying to say.
When people here point out your terminological abuse, they are trying to
be *helpful*. They are pointing out that you need to ensure that what
you say is what you actually mean, and that the only way to actually
achieve this is to master the terminology of the field. When errors in terminology are pointed out, you are far better off to acknowledge those errors and correct your usage than to go to absurd lengths to try to
justify your redefinition of accepted terms.
André
And even if you could demonstrate the existence of a Turing Machine
which performs the same computation as your x86 program, that
wouldn't make your x86 program a Turing Machine.
If you want to be understood, why not invest the time in learning
existing terminology and then using it correctly (by which I mean the
same way the rest of the world uses it)? Then people wouldn't
constantly argue with you over all those things which you consider to
be pesky details.
André
On 8/28/2020 11:34 PM, André G. Isaak wrote:
On 2020-08-28 11:50, olcott wrote:
On 8/28/2020 12:19 PM, André G. Isaak wrote:
On 2020-08-28 11:08, olcott wrote:
Any virtual machine that is computationally identical to a Turing
machine is for all practical purposes an actual Turing machine. Now
that I have shown how the abstract machine defined by the x86
language has access to unlimited memory I have shown that this
abstract machine is computationally identical to a Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
Before you claimed to have shown that x86 was Turing Complete. (You
didn't, but that was your claim).
But above you are now claiming that it is Turing Equivalent. Those
are not the same thing.
If you could show that x86 was Turing Equivalent, then you could
argue that for every x86 program, a corresponding Turing Machine
must exist. This, however, does not follow from x86 being Turing
Complete.
Turing completeness
A computational system that can compute every Turing-computable
function is called Turing-complete (or Turing-powerful).
Alternatively, such a system is one that can simulate a universal
Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every
function it can compute is also Turing-computable; i.e., it computes
precisely the same class of functions as do Turing machines.
Alternatively, a Turing-equivalent system is one that can simulate,
and be simulated by, a universal Turing machine.
Within the Church-Turing thesis we get essentially that a Turing
machine can compute any damn thing that can possibly be computed.
There are various equivalent formulations of the
Church-Turing thesis. A common one is that every
effective computation can be carried out by a
Turing machine. https://plato.stanford.edu/entries/church-turing/
A Turing complete language is defined to compute this same set.
This makes the conventional term of the art: "Turing equivalence"
redundant, and thus "Turing completeness" and "Turing equivalence"
are merely different words for the exact same thing.
https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
You seem to be missing my point.
It is possible that Turing Complete and Turing Equivalent have the
same extension, but they mean different things.
So a Turing machine can accept the entire set of recursively enumerable
languages and another machine can accept this same entire set of
recursively enumerable languages and somehow through the magic of
ambiguity one set is larger than the other?
On 8/29/2020 8:44 AM, André G. Isaak wrote:
On 2020-08-29 07:27, olcott wrote:
On 8/28/2020 11:34 PM, André G. Isaak wrote:
On 2020-08-28 11:50, olcott wrote:
On 8/28/2020 12:19 PM, André G. Isaak wrote:
On 2020-08-28 11:08, olcott wrote:
Any virtual machine that is computationally identical to a Turing
machine is for all practical purposes an actual Turing machine.
Now that I have shown how the abstract machine defined by the x86
language has access to unlimited memory I have shown that this
abstract machine is computationally identical to a Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
Before you claimed to have shown that x86 was Turing Complete.
(You didn't, but that was your claim).
But above you are now claiming that it is Turing Equivalent. Those
are not the same thing.
If you could show that x86 was Turing Equivalent, then you could
argue that for every x86 program, a corresponding Turing Machine
must exist. This, however, does not follow from x86 being Turing
Complete.
Turing completeness
A computational system that can compute every Turing-computable
function is called Turing-complete (or Turing-powerful).
Alternatively, such a system is one that can simulate a universal
Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every
function it can compute is also Turing-computable; i.e., it
computes precisely the same class of functions as do Turing
machines. Alternatively, a Turing-equivalent system is one that can
simulate, and be simulated by, a universal Turing machine.
Within the Church-Turing thesis we get essentially that a Turing
machine can compute any damn thing that can possibly be computed.
There are various equivalent formulations of the
Church-Turing thesis. A common one is that every
effective computation can be carried out by a
Turing machine. https://plato.stanford.edu/entries/church-turing/
A Turing complete language is defined to compute this same set.
This makes the conventional term of the art: "Turing equivalence"
redundant, and thus "Turing completeness" and "Turing equivalence"
are merely different words for the exact same thing.
https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
You seem to be missing my point.
It is possible that Turing Complete and Turing Equivalent have the
same extension, but they mean different things.
So a Turing machine can accept the entire set of recursively
enumerable languages and another machine can accept this same entire
set of recursively enumerable languages and somehow through the
magic of ambiguity one set is larger than the other?
Something is Turing Equivalent if the set of problems it can
compute is identical to that of a Turing Machine.
Something is Turing Complete if the set of problems it can compute is
identical to *or greater than* that of a Turing Machine.
I knew that it said that, yet within the Church-Turing thesis this is
impossible.
If it could be shown that everything that can be computed can be
computed by a Turing Machine, then those two would have the same
extension. But this is *not* something that has been proven.
If you somehow managed to prove that your (still undefined) 'x86
language' was Turing complete, if your program happened to be in that
'greater than' portion of Turing Complete, then there would be no
corresponding Turing Machine.
I'm not saying this is likely. I am pointing out that you are using
your terms incorrectly.
If we start with Church-Turing as a "given" then the terms themselves
are incorrect.
And even if there is a Turing Machine which does the same computation
as your program, that still doesn't make your program a Turing Machine.
Two machines are functionally identical if these two machines compute
the exact same set of functions and no more.
On 2020-08-29 07:27, olcott wrote:
On 8/28/2020 11:34 PM, André G. Isaak wrote:
On 2020-08-28 11:50, olcott wrote:
On 8/28/2020 12:19 PM, André G. Isaak wrote:
On 2020-08-28 11:08, olcott wrote:
Any virtual machine that is computationally identical to a Turing
machine is for all practical purposes an actual Turing machine.
Now that I have shown how the abstract machine defined by the x86
language has access to unlimited memory I have shown that this
abstract machine is computationally identical to a Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
Before you claimed to have shown that x86 was Turing Complete. (You
didn't, but that was your claim).
But above you are now claiming that it is Turing Equivalent. Those
are not the same thing.
If you could show that x86 was Turing Equivalent, then you could
argue that for every x86 program, a corresponding Turing Machine
must exist. This, however, does not follow from x86 being Turing
Complete.
Turing completeness
A computational system that can compute every Turing-computable
function is called Turing-complete (or Turing-powerful).
Alternatively, such a system is one that can simulate a universal
Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every
function it can compute is also Turing-computable; i.e., it computes
precisely the same class of functions as do Turing machines.
Alternatively, a Turing-equivalent system is one that can simulate,
and be simulated by, a universal Turing machine.
Within the Church-Turing thesis we get essentially that a Turing
machine can compute any damn thing that can possibly be computed.
There are various equivalent formulations of the
Church-Turing thesis. A common one is that every
effective computation can be carried out by a
Turing machine. https://plato.stanford.edu/entries/church-turing/
A Turing complete language is defined to compute this same set.
This makes the conventional term of the art: "Turing equivalence"
redundant, and thus "Turing completeness" and "Turing equivalence"
are merely different words for the exact same thing.
https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
You seem to be missing my point.
It is possible that Turing Complete and Turing Equivalent have the
same extension, but they mean different things.
So a Turing machine can accept the entire set of recursively
enumerable languages and another machine can accept this same entire
set of recursively enumerable languages and somehow through the magic
of ambiguity one set is larger than the other?
Something is Turing Equivalent if the set of problems it can compute
is identical to that of a Turing Machine.
Something is Turing Complete if the set of problems it can compute is identical to *or greater than* that of a Turing Machine.
If it could be shown that everything that can be computed can be
computed by a Turing Machine, then those two would have the same
extension. But this is *not* something that has been proven.
If you somehow managed to prove that your (still undefined) 'x86
language' was Turing complete, if your program happened to be in that 'greater than' portion of Turing Complete, then there would be no corresponding Turing Machine.
I'm not saying this is likely. I am pointing out that you are using your terms incorrectly.
And even if there is a Turing Machine which does the same computation as
your program, that still doesn't make your program a Turing Machine.
Again, you are using the terms incorrectly. If you don't want to deal
with Turing Machines, then talk about C programs or x86 programs, not
Turing Machines. A program that decided halting in C would be no less interesting (and no less impossible), so why do you insist on calling it
a Turing Machine?
André
On 2020-08-29 08:26, olcott wrote:
On 8/29/2020 8:44 AM, André G. Isaak wrote:
On 2020-08-29 07:27, olcott wrote:
On 8/28/2020 11:34 PM, André G. Isaak wrote:
On 2020-08-28 11:50, olcott wrote:
On 8/28/2020 12:19 PM, André G. Isaak wrote:
On 2020-08-28 11:08, olcott wrote:
Any virtual machine that is computationally identical to a
Turing machine is for all practical purposes an actual Turing
machine. Now that I have shown how the abstract machine defined
by the x86 language has access to unlimited memory I have shown
that this abstract machine is computationally identical to a
Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
Before you claimed to have shown that x86 was Turing Complete.
(You didn't, but that was your claim).
But above you are now claiming that it is Turing Equivalent.
Those are not the same thing.
If you could show that x86 was Turing Equivalent, then you could
argue that for every x86 program, a corresponding Turing Machine
must exist. This, however, does not follow from x86 being Turing
Complete.
Turing completeness
A computational system that can compute every Turing-computable
function is called Turing-complete (or Turing-powerful).
Alternatively, such a system is one that can simulate a universal
Turing machine.
Turing equivalence
A Turing-complete system is called Turing-equivalent if every
function it can compute is also Turing-computable; i.e., it
computes precisely the same class of functions as do Turing
machines. Alternatively, a Turing-equivalent system is one that
can simulate, and be simulated by, a universal Turing machine.
Within the Church-Turing thesis we get essentially that a Turing
machine can compute any damn thing that can possibly be computed.
There are various equivalent formulations of the
Church-Turing thesis. A common one is that every
effective computation can be carried out by a
Turing machine. https://plato.stanford.edu/entries/church-turing/
A Turing complete language is defined to compute this same set.
This makes the conventional term of the art: "Turing equivalence"
redundant, and thus "Turing completeness" and "Turing equivalence"
are merely different words for the exact same thing.
https://en.wikipedia.org/wiki/Turing_completeness#Formal_definitions
You seem to be missing my point.
It is possible that Turing Complete and Turing Equivalent have the
same extension, but they mean different things.
So a Turing machine can accept the entire set of recursively
enumerable languages and another machine can accept this same entire
set of recursively enumerable languages and somehow through the
magic of ambiguity one set is larger than the other?
Something is Turing Equivalent if the set of problems it can
compute is identical to that of a Turing Machine.
Something is Turing Complete if the set of problems it can compute is
identical to *or greater than* that of a Turing Machine.
I knew that it said that, yet within the Church-Turing thesis this is
impossible.
Again, you are missing my point.
The Church-Turing thesis is a *conjecture*. It is not something that has
been proven.
And even if it were proven, it still makes sense to use the term which
more accurately captures what you mean to say. Just because two terms
have the same extension doesn't make them interchangeable.
If I want to claim that something moves at the speed of light, I would
claim that v = c. I would not claim that v >= c and then justify this on
the grounds that it isn't actually possible to travel faster than light, therefore the two mean the same thing. They don't mean the same thing.
If it could be shown that everything that can be computed can be
computed by a Turing Machine, then those two would have the same
extension. But this is *not* something that has been proven.
If you somehow managed to prove that your (still undefined) 'x86
language' was Turing complete, if your program happened to be in that
'greater than' portion of Turing Complete, then there would be no
corresponding Turing Machine.
I'm not saying this is likely. I am pointing out that you are using
your terms incorrectly.
If we start with Church-Turing as a "given" then the terms themselves
are incorrect.
And even if there is a Turing Machine which does the same computation
as your program, that still doesn't make your program a Turing Machine.
Two machines are functionally identical if these two machines compute
the exact same set of functions and no more.
Just because two things compute the same function doesn't make them the
same thing.
The following two programs might be considered to be functionally
equivalent:
#include <stdio.h>
int main(void) {
    printf("hello, world!\n");
}
PROGRAM HELLO;
BEGIN
    WRITELN('hello, world!');
END.
That doesn't mean one can refer to the second as a C program, nor the
first as a Pascal program. And neither can be referred to as a Turing
Machine.
I don't understand your resistance to simply using terminology in a way
that conforms with how everyone else uses it. Is it your goal to be
misunderstood and/or to deliberately confuse?
André
olcott <NoOne@NoWhere.com> writes:
<cut>
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
I am not sure what this conditional, hedged and vague remark is intended
to say, but the C programming language (without the standard IO library)
is not Turing complete[1]. I suspect it isn't Turing complete even
/with/ the standard IO library, but I'd have to think about that. It
all comes down to what parts of the language are bounded by definition.
[1] I am not 100% sure of this. I have seen no proof, but I can't see
any way round the bounded nature of the language. C appears to be too
low-level to be formally Turing complete, but I'd love to find out that
I'm wrong about this.
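The boundedness Ben alludes to can be made concrete: sizeof(void *) is
a fixed constant for any given C implementation, so a program can
distinguish at most 2^(CHAR_BIT * sizeof(void *)) pointer values, which
is a finite store. A minimal illustration in standard C (this example
is mine, not from the thread):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The number of distinct pointer values is bounded above by
       2^(CHAR_BIT * sizeof(void *)), a compile-time constant.
       This fixed bound on the addressable store is the crux of the
       "C is not formally Turing complete" argument. */
    printf("bits per pointer: %zu\n",
           (size_t)CHAR_BIT * sizeof(void *));
    return 0;
}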
On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
I am not sure what this conditional, hedged and vague remark is intended
to say, but the C programming language (without the standard IO library)
is not Turing complete[1]. I suspect it isn't Turing complete even
/with/ the standard IO library, but I'd have to think about that. It
all comes down to what parts of the language are bounded by definition.
[1] I am not 100% sure of this. I have seen no proof, but I can't see
any way round the bounded nature of the language. C appears to be too
low-level to be formally Turing complete, but I'd love to find out that
I'm wrong about this.
When the "C" programming language is mapped to this abstract model of
computation it becomes functionally identical to a Turing Machine if
the following abstract model is functionally identical to a Turing
Machine:
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
Within each 4GB block this is a concrete 4GB RAM machine.
"C" already perfectly maps to a 4GB RAM machine.
We augment this concrete model, transforming it into an abstract
model, by simply hypothesizing an unlimited number of adjacent 4GB
blocks.
If necessary it can always drop into pure Turing machine mode and
essentially have a tape head that can move forward or backward in
increments of 1 to 0x7FFFFFFF bytes. In pure Turing machine mode it only
has unconditional jumps and conditional jumps.
Truly proving beyond all possible doubt that "C" maps to a machine
that is functionally identical to a Turing machine would most readily
be accomplished by mapping "C" to the above machine when this machine
is in pure Turing machine mode.
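As a sketch of what an 'unlimited number of adjacent 4GB blocks' could
look like in code (my illustration under stated assumptions, not code
from the cited paper; the block size is shrunk for readability):

#include <stdlib.h>

/* An unbounded tape built from fixed-size blocks, in the spirit of
   the "adjacent 4GB blocks" idea. A block is allocated only when the
   head first reaches it, so the tape grows without a fixed bound,
   up to the limits of the host allocator, which is exactly where the
   formal Turing-completeness question bites. */
#define BLOCK_SIZE 4096

struct block {
    unsigned char cell[BLOCK_SIZE];
    struct block *prev, *next;
};

static struct block *new_block(void)
{
    struct block *b = calloc(1, sizeof *b);
    if (!b) abort();                /* out of real memory */
    return b;
}

/* Move the head one cell to the right, allocating a fresh block when
   it crosses a block boundary. */
static void move_right(struct block **b, size_t *i)
{
    if (++*i == BLOCK_SIZE) {
        if (!(*b)->next) {
            (*b)->next = new_block();
            (*b)->next->prev = *b;
        }
        *b = (*b)->next;
        *i = 0;
    }
}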
I figured out how to easily make a linked list that would keep track
of 2GB of information about every single atom in the whole universe,
updated every second since the beginning of time, using ordinary
x86-64 RIP-relative addressing.
What may be impossible even for a Turing machine is to make a truly
unlimited linked list that can have any of its pointers updated as new
data is inserted in sort order.
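For what it's worth, a sorted insert into a singly linked list
rewrites exactly one existing pointer, so the pointer-update part is
unproblematic for any model that has dynamically created cells. A
minimal sketch in standard C (my example):

#include <stdlib.h>

struct node { int key; struct node *next; };

/* Insert key into an ascending list. Exactly one existing pointer,
   either *head itself or some node's next field, is rewritten. */
static void insert_sorted(struct node **head, int key)
{
    struct node *n = malloc(sizeof *n);
    if (!n) abort();
    n->key = key;
    while (*head && (*head)->key < key)
        head = &(*head)->next;
    n->next = *head;
    *head = n;
}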
olcott <NoOne@NoWhere.com> writes:
On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model >>>> of computation that is functionally identical to a Turing machine.
I am not sure what this conditional, hedged and vague remark is intended >>> to say, but the C programming language (without the standard IO library) >>> is not Turing complete[1]. I suspect it isn't Turing complete even
/with/ the standard IO library, but I'd have to think about that. It
all comes down to what parts of the language are bounded by definition.
[1] I am not 100% sure of this. I have seen no proof, but I can't see
any way round the bounded nature of the language. C appears to be too
low-level to be formally Turing complete, but I'd love to find out that
I'm wrong about this.
When the "C" programming language is mapped to this abstract model of
computation it becomes functionally identical to a Turing Machine if
the following abstract model is functionally identical to a Turing
Machine:
No. This is a technical question that can't be resolved with words
like "mapped". Any non-Turing complete model such as a pushdown
automaton
can be "mapped" to a Turing machine (which is certainly Turing
complete).
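Ben's point in one concrete example: a pushdown automaton recognizes
the context-free language a^n b^n, and that recognizer can trivially
be 'mapped' to (written in) a Turing-complete host, yet the PDA model
itself remains strictly weaker. A hedged C sketch of such a recognizer
(my example, not anything from the thread):

#include <stdio.h>

/* Recognize { a^n b^n : n >= 0 }, the textbook language a pushdown
   automaton can accept and a finite automaton cannot. Compiling it
   for a PC does not promote the PDA model to Turing completeness;
   a model is judged by what it computes, not by what hosts it. */
static int accepts(const char *s)
{
    long depth = 0;             /* plays the role of the PDA's stack */
    while (*s == 'a') { depth++; s++; }
    while (*s == 'b' && depth > 0) { depth--; s++; }
    return depth == 0 && *s == '\0';
}

int main(void)
{
    printf("%d %d %d\n", accepts("aabb"), accepts("aab"), accepts(""));
    return 0;                   /* prints: 1 0 1 */
}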
olcott <NoOne@NoWhere.com> writes:
On 8/29/2020 2:03 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
I am not sure what this conditional, hedged and vague remark is intended
to say, but the C programming language (without the standard IO library)
is not Turing complete[1]. I suspect it isn't Turing complete even
/with/ the standard IO library, but I'd have to think about that. It
all comes down to what parts of the language are bounded by definition.
[1] I am not 100% sure of this. I have seen no proof, but I can't see
any way round the bounded nature of the language. C appears to be too
low-level to be formally Turing complete, but I'd love to find out that
I'm wrong about this.
When the "C" programming language is mapped to this abstract model of
computation it becomes functionally identical to a Turing Machine if
the following abstract model is functionally identical to a Turing
Machine:
No. This is a technical question that can't be resolved with words like
"mapped". Any non-Turing complete model such as a pushdown automaton
can be "mapped" to a Turing machine (which is certainly Turing
complete).
You made that point.
Obviously, but did you understand it?
We map the "C" programming language to the tiny subset of the x64
programming language and then validate that this "C" language can
recognize the set of recursively enumerable languages.
I don't think you know what you are talking about. Sorry, I'll
re-phrase that. I am sure you don't know what you are talking about.
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
When a computer program written in "C" is construed to be functionally
identical to its Turing machine equivalent, this "C" program is (for
the purpose of studying the limits of algorithmic computation)
one-and-the-same-thing as a Turing Machine.
The strong version of the Sapir–Whorf hypothesis says that language
determines thought and that linguistic categories limit and determine
cognitive categories.
https://en.wikipedia.org/wiki/Linguistic_relativity
Although linguists currently reject the strong version of the
Sapir–Whorf hypothesis as it applies to ordinary natural language
communication, I have found that the strong version does indeed apply
to
how sets of conventional terms-of-the-art apply to some technical
subject matter.
Some of the basic concepts at the foundation of logic, mathematics and
computer science are pieced together incoherently yet their error is
utterly inexpressible within the conventional terms of the art in these
fields.
There are errors at the foundation of logic, mathematics and computer
science that cannot be discerned because they have been defined
out-of-existence. The only way to see these errors requires the
paradigm shift of reconstructing the essential semantic meanings of
these conventional terms of the art from scratch.
On 2020-08-29 09:29, olcott wrote:
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
https://www.researchgate.net/publication/343838609_The_x86_language_has_Turing_Complete_memory_access
When a computer program written in "C" is construed to be functionally
identical to its Turing machine equivalent, this "C" program is (for
the purpose of studying the limits of algorithmic computation)
one-and-the-same-thing as a Turing Machine.
To be blunt, the position you are taking here is patently ridiculous.
The fact that I can drive from 9th St Station to Hoboken doesn't make a
car one-and-the-same-thing as a PATH train.
One thousand pennies may be equivalent to a ten-dollar bill, but that
doesn't make them the same thing.
If you use 'Turing Machine' to refer to pretty much anything that can
do something a Turing Machine can do, then how do you refer to a Turing
Machine?
André
olcott <NoOne@NoWhere.com> writes:
I think that the only higher level construct required to make "C"
truly Turing complete is the capability to create an unlimited number
of totally unlimited linked lists.
You need much less than this, but that would do it.
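One standard example of 'much less': two unbounded stacks suffice to
simulate a Turing machine tape, with the head moved by popping one
stack and pushing the other. A hedged C sketch of that construction
(my example, not Ben's):

#include <stdlib.h>

/* Two stacks simulate a tape: "left" holds the cells to the left of
   the head (nearest on top) and "right" holds the head cell and
   everything beyond it. */
struct cell { int sym; struct cell *below; };

static void push(struct cell **s, int sym)
{
    struct cell *c = malloc(sizeof *c);
    if (!c) abort();
    c->sym = sym;
    c->below = *s;
    *s = c;
}

static int pop(struct cell **s)
{
    if (!*s) return 0;          /* blank tape beyond the used part */
    struct cell *c = *s;
    int sym = c->sym;
    *s = c->below;
    free(c);
    return sym;
}

/* Move the head one cell to the left: the new head symbol comes off
   the left stack and lands on top of the right stack. */
static void move_left(struct cell **left, struct cell **right)
{
    push(right, pop(left));
}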
On 2020-08-29 09:12, olcott wrote:
The strong version of the Sapir–Whorf hypothesis says that language
determines thought and that linguistic categories limit and determine
cognitive categories.
https://en.wikipedia.org/wiki/Linguistic_relativity
Although linguists currently reject the strong version of the
Sapir–Whorf hypothesis as it applies to ordinary natural language
communication, I have found that the strong version does indeed apply
to how sets of conventional terms-of-the-art apply to some technical
subject matter.
Some of the basic concepts at the foundation of logic, mathematics and
computer science are pieced together incoherently yet their error is
utterly inexpressible within the conventional terms of the art in
these fields.
There are errors at the foundation of logic, mathematics and computer
science that cannot be discerned because they have been defined
out-of-existence. The only way to see these errors requires the
paradigm shift of reconstructing the essential semantic meanings of
these conventional terms of the art from scratch.
This is yet another patently ridiculous notion.
First of all, it's the Whorf Hypothesis. Calling it the Sapir-Whorf
Hypothesis is a dead give-away that you are not familiar with it.
That's what you get from trying to learn everything from Wikipedia
articles (or worse, from George Orwell).
The Whorf Hypothesis concerns the relation between thought and the
grammatical categories of people's *native* language. Formal languages
don't have native speakers. Nor do they have anything analogous to the
types of grammatical categories with which Whorf was concerned.
Absolutely nothing precludes you from proposing your own terms if you
find existing ones inadequate *provided that you precisely define
those terms*.
What you can't do is simply use existing terms to mean entirely
different things from their conventional meanings and expect to be
understood.
André
olcott <NoOne@NoWhere.com> writes:
On 8/29/2020 8:35 PM, Ben Bacarisse wrote:
<cut>
olcott <NoOne@NoWhere.com> writes:
The Turing complete stuff would be machine code that the "C" does not
know about or care about. If "C" needs unlimited integers it makes
them out of ASCII digits.
You are confused. "Implementing" a non-Turing complete language by
using a Turing complete notation (even a Turing machine itself) does
not make the original language Turing complete.
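As an aside, the 'unlimited integers out of ASCII digits' idea is
itself routine; what it cannot escape is the allocator eventually
failing. A minimal sketch of incrementing a growable decimal string
(standard C, my example, not code from the thread):

#include <stdlib.h>
#include <string.h>

/* Increment a decimal number stored as an ASCII string, e.g.
   "199" becomes "200" and "999" becomes "1000" (reallocated one
   byte wider). Unbounded in principle, but only while realloc
   keeps succeeding; that is the hidden bound. */
static char *increment(char *num)
{
    size_t len = strlen(num);
    for (size_t i = len; i-- > 0; ) {
        if (num[i] != '9') { num[i]++; return num; }
        num[i] = '0';
    }
    /* All digits were 9: grow by one and prepend a 1. */
    char *wider = realloc(num, len + 2);
    if (!wider) abort();
    memmove(wider + 1, wider, len + 1);
    wider[0] = '1';
    return wider;
}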
You already agreed that it does.
You are having trouble understanding what people (it's not just me)
are saying to you. I'm not bothered if you don't know what I am
saying, but
I am happy to answer questions about it if you want to find out.
Obviously it's simpler for me if you continue as you are.
On 8/29/2020 5:53 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
I think that the only higher level construct required to make "C"
truly Turing complete is the capability to create an unlimited number
of totally unlimited linked lists.
You need much less than this, but that would do it.
Would you like some help to understand how this stuff works?
Not interested?