On 8/12/2020 6:39 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 4:51 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 10:04 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
The brand new concept of Turing Complete computations, as opposed to
and contrasted with Turing Complete languages and Turing Complete
machines, provides for this.
What are you babbling about? What new concept? Do you mean you've just
heard about it? This is a settled topic. The details need more care
than you are willing to take, but the equivalence of various notions of
"effective procedure" is not in question here.
So it is commonly understood that a language or abstract machine can
be considered Turing complete for a subset of problems?
Can compute a subset of problems, yes. There's no need to add another
misuse of the technical phrase "Turing complete". Misusing the term
does not create a "brand new concept", it just incorrectly describes a
conventional one.
I really can't stress enough the value that education can have in cases
like this. There are many books out there that might enable you to know
what others already know.
I want to establish the theoretical foundation that an x86 based
refutation of the conventional halting problem counter-examples would
immediately and directly apply to Turing Machines.
I can't think of a better way to be sure no one takes you seriously.
Since (details aside) all programming notations are just as good for
this purpose,
OK great then everyone simply assumes Church-Turing.
One thing that other mere programming notations are not good for is
providing an actual execution trace.
int main()
{
uint32_t* S1;
S1 = (uint32_t*) 16; // size param passed to AllocateHeap
AllocateHeap(&S1); // Allocates 16 uint32_t
SaveState(&S1); // Saves 16 32-bit registers
RestoreState(&S1); // Process context switch
__asm hlt
}
On 8/12/2020 7:21 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 5:00 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
I want to establish the theoretical foundation that an x86 based
On 8/12/2020 3:36 PM, Ben Bacarisse wrote:<cut>
You babbled. Throwing terms around that you don't understand is not a
good look for anyone. That whole post was a dog's dinner.
Let me break it down for you using more precise terms:
No. I know what you are saying because it's not at all complex, unusual
or new. Your response stems from the same problem that leads you to think
people are mad or lying or playing mind games. Sometimes people think
you are babbling, not because they don't understand, but because you are
tossing a word salad.
A precise degree of partial Turing completeness can be defined as:
The subset of Turing-computable functions that are computable on an
abstract machine or programming language.
The second sentence is fine. It's standard fare in computability
theory, though we go a bit further by not limiting ourselves to Turing
computable functions.
The babbling comes from using phrases like Turing completeness
inappropriately. The latest, "partial Turing completeness", is of
course an oxymoron. Were you aiming for some archaic rhetorical effect?
refutation of the conventional halting problem counter-examples would
immediately and directly apply to Turing Machines.
<screeching of brakes> Reality check: what does what you call "partial
Turing completeness" have to do with your claim? Are you planning to
demonstrate something about a computational model that can't compute all
the functions a Turing machine can? Do you plan to extrapolate that up
to Turing computability?
I want to explicitly account for the cases that may be computable in
finite memory that are not computable in unlimited memory. It was
proposed that such cases can possibly exist.
To do this it must be universally understood that the computability of
an x86 based machine is identical to the computability of a Turing
machine on a subset of functions.
Two years ago, when you were still lying about having a Turing machine,
people (certainly me but I think others too) counselled you to use
anything but a Turing machine because, due to computational equivalence,
there was no need for the mess that is an actual Turing machine.
Apparently some people believe that there is a difference between
Turing complete and Turing equivalent
https://scienceblogs.com/goodmath/2007/01/05/turing-equivalent-vs-turing-co
But you, being you, have managed to pick the worst possible alternative.
I am not sure that this is an accident. A pile of x86 is a great place
to hide.
Not in this case. I did it this way to make my solution as easy as
possible to fully understand. I could have buried it in a few billion
Turing Machine interpreter instructions where simply accessing the
memory location @ 0xffff takes 65,000 steps.
It boils down to this: If Turing, Gödel and Tarski are correct then
truth itself is fundamentally broken.
Since truth itself cannot possibly be broken human understanding of
the nature of truth is necessarily incorrect.
Since provability is an aspect of satisfiability true and provable
cannot possibly diverge.
On 8/12/2020 10:25 PM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
It boils down to this: If Turing, Gödel and Tarski are correct then
truth itself is fundamentally broken.
You mean mathematical truth is not what you always thought it was.
Since truth itself cannot possibly be broken human understanding of
the nature of truth is necessarily incorrect.
Mathematics does not bend to your pronouncements. Mathematical truth is
what it is, and your declaring it broken has no effect at all.
Since provability is an aspect of satisfiability true and provable
cannot possibly diverge.
PO-provability may be an aspect of PO-satisfiability and PO-truth and
PO-provable may not diverge, but the sentence as you wrote it (with
words you don't yet understand) is just wrong.
If a proposition is true you must have some basis to know that it is
true besides a wild guess or a hunch.
What-so-ever definitive basis that you have to show that a proposition
is definitely true is the proof of this proposition.
THERE IS NO WAY OUT OF THIS
On 8/13/2020 5:50 AM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
Since provability is an aspect of satisfiability true and provable
cannot possibly diverge.
PO-provability may be an aspect of PO-satisfiability and PO-truth and
PO-provable may not diverge, but the sentence as you wrote it (with
words you don't yet understand) is just wrong.
If a proposition is true you must have some basis to know that it is
true besides a wild guess or a hunch.
Every arithmetical formula is either true or false. Not every
arithmetical formula is provable or refutable. Deal with it.
The only possible way to correctly conclude that a WFF is true or
false is some kind of proof.
If anyone concludes that a WFF is true or false without some kind of
proof this is necessarily mere presumption.
What-so-ever definitive basis that you have to show that a proposition
is definitely true is the proof of this proposition.
In a sound system, if you have a proof, the proposition is true. No one
doubts that. But in the absence of a proof every proposition P
corresponds to one as yet unknown truth: either P or ~P.
Sure, that is correct, but that is not what is asserted by the Gödel
proof.
The Gödel proof asserts that G is definitely true and definitely
unprovable.
That is the same thing as calculating the sum of two
integers without using any numbers (integers are numbers).
THERE IS NO WAY OUT OF THIS
Yes there is: education. You don't have to keep using words you don't
understand.
The only thing that I can share is details about my x86 based UTM.
I will not share any details about my halt decider until a very long
time after I post the x86 based UTM to a code repository and have made
all of the changes suggested by peer review.
On 8/13/2020 1:34 PM, Keith Thompson wrote:<cut>
I presume you have reasons not to use the standard malloc function.
Yes. The master UTM is its own self-contained operating system and all
memory is a single contiguous block as if it was an actual UTM tape.
On 8/13/2020 2:38 PM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
In a sound system, if you have a proof, the proposition is true. No one
doubts that. But in the absence of a proof every proposition P
corresponds to one as yet unknown truth: either P or ~P.
Sure, that is correct, but that is not what is asserted by the Gödel
proof.
Yes it is. You've never read it, have you? You would fall at the first page.
The Gödel proof asserts that G is definitely true and definitely
unprovable.
The paper has an extra sting in the tail: that G must be true. But this
is a metamathematical deduction from outside PM.
Ah great you are directly delving into the actual heart of it. The key mistake is that true outside of PM does not count as true inside of
PM.
You once thought it would be wise to learn what words like "true" mean
in a mathematical context, but that did not last long. You stumbled
when it turned out you did not know what a function was.
We can get back to that. I still need to know the conventional
meanings of the related set of terms.
On 8/13/2020 5:17 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 2:38 PM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
In a sound system, if you have a proof, the proposition is true. No one
doubts that. But in the absence of a proof every proposition P
corresponds to one as yet unknown truth: either P or ~P.
Sure, that is correct, but that is not what is asserted by the Gödel
proof.
Yes it is. You've never read it, have you? You would fall at the first page.
The Gödel proof asserts that G is definitely true and definitely
unprovable.
The paper has an extra sting in the tail: that G must be true. But this
is a metamathematical deduction from outside PM.
Ah great you are directly delving into the actual heart of it. The key
mistake is that true outside of PM does not count as true inside of
PM.
Nonsense. There is no sign that you want to know what "true" means
anymore so it would seem pointless to try to explain. Until you know,
it would be absurd for me to reply. I only know the conventional
meanings for words.
The problem is that the conventional terms of the art from which the
formal notion of truth is composed:
{interpretation, satisfaction, logical consequence, decidability}
(please let me know if I missed any)
suffer from the Sapir–Whorf hypothesis, thus making the correct nature
of analytical truth inexpressible.
https://en.wikipedia.org/wiki/Linguistic_relativity
It seems that the key error of the conventional notion of truth is the concept of decidability. The assumption that every closed WFF maps to
a Boolean value is simply a false assumption.
32-bit x86 function calls already use signed 32-bit relative addressing.
void Bar(int N)
{
Foo(5);
}
void Foo(int N)
{
Bar(6);
}
101: 55 push ebp
102: 8BEC mov ebp, esp
104: 6A05 push 0x5
106: E823000000 call 0x12e ; +23h bytes little endian
10b: 83C404 add esp, 0x4 ; of 00000023h
10e: 5D pop ebp
10f: C3 ret
12e: 55 push ebp
12f: 8BEC mov ebp, esp
131: 6A06 push 0x6
133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
138: 83C404 add esp, 0x4 ; of FFFFFFC9h
13b: 5D pop ebp
13c: C3 ret
The x86 architecture specifies a Turing complete model of computation
for its control flow operations:
Signed 32-bit relative addressing for function calls. (shown above).
Signed 32-bit relative addressing for unconditional branches.
Signed relative addressing for all conditional branches.
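As a hedged illustration: the displacements in the listing above can be checked arithmetically. A near call (opcode E8) is five bytes long, and its rel32 operand is the target address minus the address of the instruction that follows the call. The function name below is mine, not part of the original post:

```c
#include <stdint.h>

/* rel32 operand of a 5-byte near call (opcode E8):
   target minus the address of the instruction after the call.
   Unsigned 32-bit wraparound yields the two's-complement
   encoding for backward calls. */
uint32_t call_rel32(uint32_t call_addr, uint32_t target)
{
    return target - (call_addr + 5u);
}
```

For the listing above, call_rel32(0x106, 0x12e) gives 0x00000023 and call_rel32(0x133, 0x101) gives 0xFFFFFFC9, matching the encoded bytes.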
It can also be understood (although more difficult) that the above
Turing complete model of computation applied to the control flow of
the x86 architecture can be leveraged to provide access to unlimited
data.
I won't provide the details of this to avoid getting into endless
debate with people that do not understand that they do not understand.
I am going to go with the code that is the easiest to understand at the "C" level.
uint32_t* master_state;
master_state = AllocateHeap(16);
On 8/14/2020 6:48 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
32-bit x86 function calls already use signed 32-bit relative addressing.
void Bar(int N)
{
Foo(5);
}
void Foo(int N)
{
Bar(6);
}
101: 55 push ebp
102: 8BEC mov ebp, esp
104: 6A05 push 0x5
106: E823000000 call 0x12e ; +23h bytes little endian
10b: 83C404 add esp, 0x4 ; of 00000023h
10e: 5D pop ebp
10f: C3 ret
12e: 55 push ebp
12f: 8BEC mov ebp, esp
131: 6A06 push 0x6
133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
138: 83C404 add esp, 0x4 ; of FFFFFFC9h
13b: 5D pop ebp
13c: C3 ret
The x86 architecture specifies a Turing complete model of computation
for its control flow operations:
Signed 32-bit relative addressing for function calls. (shown above).
Signed 32-bit relative addressing for unconditional branches.
Signed relative addressing for all conditional branches.
There are two notions knocking around that we must distinguish between.
There is an informal, rather casual, notion of Turing complete that
glosses over all the reasons why some notation or other is not really
Turing complete. Then there is the more formal demonstration that some
notation really is Turing complete by demonstrating (often in general
terms) how that notation can simulate any Turing machine computation.
From the above, you seem to be going for the casual meaning because you
don't explain how you would deal with having, say, 87 trillion
functions. Just doing 32-bit relative calls won't work in that case.
We don't have to actually have the hardware. We only really need the
abstract model of computation.
To specify 87 trillion functions merely requires a linked list with 87 trillion elements.
On 8/14/2020 6:34 AM, Andy Walker wrote:
every standard m/c code
that I am familiar with is perfectly capable of encoding any one of
the standard UTMs [and therefore any TM computation]. All that is
needed beyond a quite simple FSM, such as that on the front cover of
Minsky's "Computation", is a mechanism for accessing arbitrarily much
storage...
In V41 (x86 is Turing Complete)
I replace the last two sentences with this:
On 8/14/2020 2:32 PM, olcott wrote:
Data access is also based on relative addressing therefore: The x86
language specifies an abstract model of computation that is Turing
complete.
proving (to anyone knowing how easy it is to simulate a Turing Machine
such as Andy and me)
On 8/14/2020 8:13 PM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
We don't have to actually have the hardware. We only really need the
abstract model of computation.
But you are declaring it to be x86 code. You can banish the hardware,
but x86 code embodies that hardware in the notation.
The abstract model of computation that the x86 language specifies is
Turing complete.
To specify 87 trillion functions merely requires a linked list with 87
trillion elements.
What does that look like in x86 code? And whatever answer you come up
with, be sure it extends further. There must be no finite limit. This
is a genuine question. I know you like to just ignore questions and
move on to say more wrong stuff, but please take a moment to prove me
wrong. How is this data structure implemented in x86 code?
Any software engineer of ordinary skill in the art would be able to
create a linked list of memory elements of unlimited length on the
basis of a signed 32-bit relative jump instruction.
How do you choose whose claims you accept without proof?
That I directly know how easy it is to simulate a Turing machine
because I studied this one
http://www.lns.mit.edu/~dsw/turing/turing.html
I can now say that the abstract model of computation specified by the
x86 language is Turing complete on the basis that this model can
easily simulate this quintuple based Turing machine:
http://www.lns.mit.edu/~dsw/turing/turing.html
and relative addressing for control flow and data access provides the
abstract model for unlimited memory access.
On 8/15/2020 11:49 AM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
To specify 87 trillion functions merely requires a linked list with 87
trillion elements.
What does that look like in x86 code? And whatever answer you come up
with, be sure it extends further. There must be no finite limit. This
is a genuine question. I know you like to just ignore questions and
move on to say more wrong stuff, but please take a moment to prove me
wrong. How is this data structure implemented in x86 code?
Any software engineer of ordinary skill in the art would be able to
create a linked list of memory elements of unlimited length on the
basis of a signed 32-bit relative jump instruction.
I asked what the data structure would look like. Jumps have nothing to
do with that. What does the data structure for an essentially unbounded
linked list look like?
We have an unlimited number of memory address sized blocks of memory
where the beginning of each block has a relative jump into the prior
block and the end of each block has a relative jump into the next
block.
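A sketch only, with ordinary C pointers standing in for the relative jumps at each end of a block; the type names and the block size are my illustrative assumptions, not part of the x86 scheme being debated:

```c
#include <stdint.h>
#include <stddef.h>

#define BLOCK_WORDS 1024   /* arbitrary illustrative block size */

/* Each block links to its predecessor and successor, mirroring the
   "relative jump into the prior block" at its beginning and the
   "relative jump into the next block" at its end. */
typedef struct Block {
    struct Block *prev;        /* NULL for the first block */
    uint32_t data[BLOCK_WORDS];
    struct Block *next;        /* NULL until a further block is added */
} Block;
```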
How do you choose whose claims you accept without proof?
That I directly know how easy it is to simulate a Turing machine
because I studied this one
http://www.lns.mit.edu/~dsw/turing/turing.html
That is not a Turing-complete implementation.
http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
Here is a Turing Complete adaptation, that I derived two years ago:
All integers are specified as hexadecimal digits [0-9A-F] encoded as:
H. Each element of the quintuple is space delimited. One Turing
Machine Description language instruction per line:
H+ HHHHHHHH HHHHHHHH H+ [RL]
States are of arbitrary finite length strings of hexadecimal digits.
Tape elements are unsigned 32-bit integers.
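As a hedged illustration of that line format, a minimal C parser for one instruction of the description language; the struct and function names are mine, and the bounded state buffers are a simplification of the "arbitrary finite length" strings:

```c
#include <stdio.h>

/* One quintuple of the description language:
   state  read-symbol  write-symbol  next-state  direction
   Symbols are unsigned 32-bit integers written in hex; the
   direction is R or L. */
typedef struct {
    char state[64];        /* bounded here only for the sketch */
    unsigned int read_sym;
    unsigned int write_sym;
    char next_state[64];
    char direction;        /* 'R' or 'L' */
} Quintuple;

/* Returns 1 on a well-formed line, 0 otherwise. */
int parse_quintuple(const char *line, Quintuple *q)
{
    return sscanf(line, "%63s %8x %8x %63s %c",
                  q->state, &q->read_sym, &q->write_sym,
                  q->next_state, &q->direction) == 5;
}
```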
This all appears to be a rabbit hole down which there is nothing to be
learned. No one doubts that something very like x86 code (as assembler,
not machine code) can be tweaked to remove limits and would therefore be
Turing-complete. But you were at pains to point out how this was really
x86 code in every detail.
My halting problem research will be presented as a program that runs
on an x86 based UTM.
On 8/15/2020 4:34 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/15/2020 11:49 AM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
To specify 87 trillion functions merely requires a linked list with 87
trillion elements.
What does that look like in x86 code? And whatever answer you come up
with, be sure it extends further. There must be no finite limit. This
is a genuine question. I know you like to just ignore questions and
move on to say more wrong stuff, but please take a moment to prove me
wrong. How is this data structure implemented in x86 code?
Any software engineer of ordinary skill in the art would be able to
create a linked list of memory elements of unlimited length on the
basis of a signed 32-bit relative jump instruction.
I asked what the data structure would look like. Jumps have nothing to
do with that. What does the data structure for an essentially unbounded
linked list look like?
We have an unlimited number of memory address sized blocks of memory
where the beginning of each block has a relative jump into the prior
block and the end of each block has a relative jump into the next
block.
What do jumps have to do with it?
You are either totally lost or you are trying to construct a
distraction. Since it seems you can't describe the data structure, just
show the code that changes every 0 to a 1 (these can be any fixed size)
data in the list elements. This loop will make the data structure clear
by showing how the code moves from one element to the next, and how the
end of the list is detected.
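For what it is worth, over an ordinary heap-style linked list (my assumption; not the jump-linked layout described above) the requested loop is a few lines of C:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct Node {
    uint32_t value;
    struct Node *next;   /* NULL marks the end of the list */
} Node;

/* Visit every element in order, replacing each 0 with 1.
   The end of the list is detected by the NULL next pointer. */
void zeros_to_ones(Node *head)
{
    for (Node *p = head; p != NULL; p = p->next) {
        if (p->value == 0u)
            p->value = 1u;
    }
}
```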
How do you choose whose claims you accept without proof?
That I directly know how easy it is to simulate a Turing machine
because I studied this one
http://www.lns.mit.edu/~dsw/turing/turing.html
That is not a Turing-complete implementation.
http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
Here is a Turing Complete adaptation, that I derived two years ago:
All integers are specified as hexadecimal digits [0-9A-F] encoded as:
H. Each element of the quintuple is space delimited. One Turing
Machine Description language instruction per line:
The original input language is Turing complete. It's the implementation
that is not.
H+ HHHHHHHH HHHHHHHH H+ [RL]
States are of arbitrary finite length strings of hexadecimal digits.
Tape elements are unsigned 32-bit integers.
That was a pointless change from the point of view of the TM notation.
The original and your extended one are both Turing complete. (It's an
odd term for a notation that is simply a way of writing TMs.)
He had a very small finite number of states.
And if you make corresponding changes to the implementation (I don't
think you did) I very much suspect it is no more Turing complete than
the original.
My redesign obviously has an unlimited number of states.
My halting problem research will be presented as a program that runs
on an x86 based UTM.
In what sense is this x86 "thing" U and in what sense is it a TM?
This is apparently over your head.
On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
I have shown that the abstract model of computation specified by the
x86 language is Turing complete pertaining to access to unlimited
memory.
No, you have asserted it. In fact you've dodged requests to show it.
Write the loop I talked about. That will overcome the problem you have
in describing data structures.
Your jmp examples are just silly (EIP is fixed-width and it's no help
for data accesses).
Within the abstract model of computation specified by the x86 language
we would understand that jumping into the next 4GB block resets the EIP
to an offset within this block. Since this is not currently defined
within the current specification it would be a required adjustment to
the current definition. You have definitely made a valid point
here. There might even be other teeny tiny little adjustments that
would be required.
It is common knowledge in the field that requirements of simulating a
Turing Machine are trivial.
You have no idea what is common knowledge in this field. What certainly
is common is a lot of hand-waving about limits being arbitrary and so
not crucial. And, to be frank, that's pretty much all anyone cares
about. But the technical detail -- can some abstract notation really
describe potentially unbounded access to storage -- is much less
commonly discussed.
I invite you to show that it's possible.
I already did, although your caveat was totally apt.
I would probably prefer to use my unstated assumption that the EIP is
only relative to the current 4GB memory block because this lets me
keep the x86 language essentially as it is.
This seems to indicate that there is some sort of rigorous formal
definition of UTM:
On 8/16/2020 7:23 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
I have shown that the abstract model of computation specified by the
x86 language is Turing complete pertaining to access to unlimited
memory.
No, you have asserted it. In fact you've dodged requests to show it.
Write the loop I talked about. That will overcome the problem you have
in describing data structures.
Your jmp examples are just silly (EIP is fixed-width and it's no help
for data accesses).
Within the abstract model of computation specified by the x86 language
we would understand that jumping into the next 4GB block resets the EIP
to an offset within this block. Since this is not currently defined
within the current specification it would be a required adjustment to
the current definition. You have definitely made a valid point
here. There might even be other teeny tiny little adjustments that
would be required.
Yes, since jumps are of no use for the key data structure, you need the
Jumps <are> moving the tape head.
same "teeny tiny" change to the data addressing modes and, hey presto,
Data addressing modes are already relative.
you have access to unbounded storage implicitly defined in the abstract
machine. The notation itself takes on the burden of there always being
a "next" data block. You can't do it in pure x86 code.
It can certainly be done in pure x86 code. Current x86 hardware does
not interpret the x86 code this way.
On 8/16/2020 2:52 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/16/2020 2:13 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/16/2020 11:41 AM, Ben Bacarisse wrote:<cut>
Show me a loop that scans the tape replacing zeros with ones.
I have already shown you enough code that any coder would get it.
How curious, though, that you have not shown the simple loop I
requested. Is it a "trade secret"?
You are apparently not a coder.
You know, I don't think that will be the conclusion that other readers
draw!
I already have other readers of exceptional credibility that have
certified that my analysis is precisely correct:
The relative addressing of the x86 language does specify an abstract
model of computation with access to unlimited memory.
Who said that? I need to go and correct that post too.
Meanwhile, a simple unbounded loop is still not possible (or at least,
you refuse to post your secret code).
It is self-evident to any software engineer and incomprehensible to
people without sufficient software engineering background.
Kaz can probably tell you how it is done and you may not understand
him either, yet you would believe him.
This is the simplest explanation of the process that I can make:
If we reduce your problem to filling all of unlimited memory with
zeroes we simply have the code keep moving itself forward as it fills
the memory area before it with zeroes. To minimize the number of moves
it always moves itself exactly as far as it can reach with its
relative addressing.
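A bounded C analogue of that description: the finite buffer and the `reach` parameter stand in for "unlimited memory" and for the span of relative addressing, and both are my illustrative assumptions:

```c
#include <string.h>
#include <stddef.h>
#include <stdint.h>

/* Zero the region in windows of at most `reach` bytes, advancing the
   working position past each window, mimicking code that keeps moving
   itself forward by the span of its relative addressing. */
void fill_forward(uint8_t *base, size_t total, size_t reach)
{
    size_t pos = 0;
    while (pos < total) {
        size_t n = total - pos;
        if (n > reach)
            n = reach;
        memset(base + pos, 0, n);   /* fill the area ahead */
        pos += n;                   /* "move" forward */
    }
}
```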
On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
So you don't have the gumption to admit when you have been proved
wrong?
Me: Since it seems you can't describe the data structure, just show the
code that changes every 0 to a 1 (these can be any fixed size) data
in the list elements. This loop will make the data structure clear
by showing how the code moves from one element to the next, and how
the end of the list is detected.
So you are saying that you are incapable of this?
On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/22/2020 8:04 PM, Ben Bacarisse wrote:<cut>
So don't prove me wrong! I don't mind.
But I wonder what you gain by not posting the x86 program (the one
that scans a linked list changing zeros to ones, but where the list is
not in any way bounded in size). It's odd that you have decided not to
show /anyone/ that you are right, because you think I am biased. Why
give me that power? The only loser in that strategy is you.
It's a pearls before swine thing.
As I say, you owe me nothing (though an apology for calling me a pig
would not go amiss). By all means, keep all your great discoveries
secret forever, but I don't see how that benefits you one iota.
Oddly, when I got bored telling you you were wrong about the x86 and stopped
replying you complained about being ignored. It seems only being told
you are right is acceptable.
You consistently prove that you lack the required prerequisites:
As does anyone who knows the x86. The required prerequisite is to
accept your extended architecture as being called "x86".
No one disputes that suitable abstract instruction sets, unbounded
registers,
Yet you continue to fail to understand (or acknowledge) that the
32-bit wide fixed-width registers defined by one of the two x86
abstract models of computation would by themselves provide access to
unlimited memory without having to actually change the abstract model
of computation defined by this x86 language at all.
It is the relative addressing of the x86 abstract model of computation defined by the x86 language that provides memory access and control
flow access to an unlimited number of 4GB blocks of memory.
It is only hardware implementations of this abstract model that place
any limit on memory access. These hardware limits are implementation
details that are not any aspect of the abstract model itself.
The x86 language itself allows the instruction pointer and/or the data
pointer to move forward in 1 to 0x7fffffff increments an unlimited
number of times from their current location.
On 8/23/2020 10:42 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
In other words you continue to refrain from agreeing that this is true:
If the implementation detail of the memory architecture of the x86
language was an unlimited sequence of contiguous 4GB blocks and the
absolute addressing modes of the x86 language only referred to
addresses within the current 4GB block then it becomes rather obvious
that the x86 language exactly as it currently exists specifies access
to unlimited memory.
The antecedent is false.
You and I both know that implementation details are not a part of an
abstract model of computation itself.
You and I both know that the capability of moving a tape head forward relative to its current location an unlimited number of times would
define access to unlimited storage.
That the abstract model of computation defined by the x86 language
allows its instruction pointer and its data pointer to move forward
relative to their current location an unlimited number of times also
very obviously defines access to unlimited storage.
You and I also both know that when the physical hardware
implementations of this abstract model place limits on the access to
storage that these limits are not a part of the abstract model itself.
Thus we can conclude that the abstract model of computation defined by
the x86 language is fully Turing complete pertaining the requirement
that a Turing machine has access to unlimited memory.
On 8/23/2020 11:51 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/22/2020 8:04 PM, Ben Bacarisse wrote:<cut>
So don't prove me wrong! I don't mind.
But I wonder what you gain by not posting the x86 program (the one
that scans a linked list changing zeros to ones, but where the list is
not in any way bounded in size). It's odd that you have decided not to
show /anyone/ that you are right, because you think I am biased. Why
give me that power? The only loser in that strategy is you.
It's a pearls before swine thing.
As I say, you owe me nothing (though an apology for calling me a pig
would not go amiss). By all means, keep all your great discoveries
secret forever, but I don't see how that benefits you one iota.
Oddly, when I got bored telling you you were wrong about the x86 and
stopped replying you complained about being ignored. It seems only
being told you are right is acceptable.
You consistently prove that you lack the required prerequisites:
As does anyone who knows the x86. The required prerequisite is to
accept your extended architecture as being called "x86".
No one disputes that suitable abstract instruction sets, unbounded
registers,
Yet you continue to fail to understand (or acknowledge) that the
32-bit fixed-width registers defined by one of the two x86 abstract
models of computation would by themselves provide access to unlimited
memory, without having to change the abstract model of computation
defined by this x86 language at all.
I cannot "understand" a falsehood. Why do you say "would" not "do"?
Maybe you really do know that the x86 does not provide access to
theoretically unlimited memory, so you merely say that it "would"? Why
the subjunctive implying conditionality? You mean it would if it could?
It is the relative addressing of the x86 abstract model of computation
defined by the x86 language that provides memory access and control
flow access to an unlimited number of 4GB blocks of memory.
No. The x86 has no way to access unbounded memory.
It is only hardware implementations of this abstract model that place
any limit on memory access. These hardware limits are implementation
details that are not any aspect of the abstract model itself.
No, it is the design of the x86 that prevents it. Did you try to write
the loop I suggested?
This is a very definitive statement from me: it's not possible. In
fact, I may be wrong. There could be some corner of the specification
that would permit access to some unbounded data (like the PDP-11 paper
tape), so I am relying on other people to review what I write (and they
may all have given up). I know /you/ have not demonstrated it, but
The x86 language itself allows the instruction pointer and/or the data
pointer
There is no "data pointer" in this context. Relative addressing is
RIP-relative only. You can (when not in IA-32 and compatibility mode)
use RIP-relative addressing to access data but your remark suggests it
applies to registers other than the instruction pointer. Mind you, you
have not been clear if you are talking about 64 bit mode. All your
previous posts seemed to suggest IA-32.
to move forward in increments of 1 to 0x7fffffff an unlimited number of
times from their current location.
To see why you are wrong (even about that) you need to read the details
about that addressing mode. But you don't do details.
Ah great you have finally provided some evidence that you can
understand these things.
Here is a detail that you missed.
On 8/23/2020 7:38 PM, Ben Bacarisse wrote:<cut>
But suit yourself. As I said, boasting and insulting is the main
pleasure for a crank like you, and you won't stop just because I call
you out on it. Have fun.
I don't disrespect you. I disrespect your disrespecting me.
The code is completed, it is only 65 bytes of x86 machine code.
The code begins @ machine address 0x100. Fills the 256 bytes preceding
it with "@@@@". Dynamically modifies its own first instruction to
refer to its new code address. Copies itself to a new address that is
exactly 0x100 bytes above its current code address and then does a
relative jump to this new address.
I had the emulator terminate its execution after the first four full
cycles. I output the first 1024 bytes of memory before and after
execution, along with a complete trace of the entire execution.
This is 23 pages long.
According to the meaning of the syntax of the x86 language this
program would continue to operate after it completed its execution @
machine address: FFFFFF00.
The only reason that it would not continue to operate would be that
the physical implementation of the abstract model did not implement
the semantics specified by the language syntax.
When you understand RIP relative addressing better you will more
directly see how it makes the instruction pointer equivalent to a
Turing machine tape head.
RIP-relative addressing is much simpler: data can be read and written
directly relative to the instruction pointer. This makes it easier to
see that RIP-relative addressing directly provides an abstract model
of computation with unlimited memory access.
On 8/25/2020 2:29 PM, Keith Thompson wrote:<cut>
<cut>A relative JMP is defined in terms of its effect on EIP, which is
specified in that document as a 32-bit register. This directly
contradicts your claim.
I have switched gears now.
I have a much more effective approach.
To make the abstract model of computation implied by RIP-relative
addressing have Turing complete memory access for both code and data,
we merely decline to assume any specific size for the instruction
pointer.
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
I am not sure what this conditional, hedged and vague remark is intended
to say, but the C programming language (without the standard IO library)
is not Turing complete[1]. I suspect it isn't Turing complete even
/with/ the standard IO library, but I'd have to think about that. It
all comes down to what parts of the language are bounded by definition.
[1] I am not 100% sure of this. I have seen no proof, but I can't see
any way round the bounded nature of the language. C appears to be too
low-level to be formally Turing complete, but I'd love to find out that
I'm wrong about this.
When the "C" programming language is mapped to this abstract model of computation it becomes functionally identical to a Turing Machine if
the following abstract model is functionally identical to a Turing
Machine:
On 8/29/2020 2:03 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
<cut>
The "C" programming language can be construed to be functionally
identical to a Turing machine if it can be mapped to an abstract model
of computation that is functionally identical to a Turing machine.
I am not sure what this conditional, hedged and vague remark is intended
to say, but the C programming language (without the standard IO library)
is not Turing complete[1]. I suspect it isn't Turing complete even
/with/ the standard IO library, but I'd have to think about that. It
all comes down to what parts of the language are bounded by definition.
[1] I am not 100% sure of this. I have seen no proof, but I can't see
any way round the bounded nature of the language. C appears to be too
low-level to be formally Turing complete, but I'd love to find out that
I'm wrong about this.
When the "C" programming language is mapped to this abstract model of
computation it becomes functionally identical to a Turing Machine if
the following abstract model is functionally identical to a Turing
Machine:
No. This is a technical question that can't be resolved with words like
"mapped". Any non-Turing complete model such as a pushdown automaton
can be "mapped" to a Turing machine (which is certainly Turing
complete).
You made that point.
We map the "C" programming language to a tiny subset of the x64
programming language and then validate that this "C" language can
recognize the set of recursively enumerable languages.
I think that the only higher level construct required to make "C"
truly Turing complete is the capability to create an unlimited number
of totally unlimited linked lists.
I have almost all of the details of how to do that figured out.
Can you see how adding this to "C" would make "C" Turing complete?
On 8/30/2020 9:38 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 8/29/2020 8:35 PM, Ben Bacarisse wrote:<cut>
olcott <NoOne@NoWhere.com> writes:
The Turing complete stuff would be machine code that the "C" does not
know about or care about. If "C" needs unlimited integers it makes
them out of ASCII digits.
You are confused. "Implementing" a non-Turing complete language by
using a Turing complete notation (even a Turing machine itself) does not
make the original language Turing complete.
You already agreed that it does.
You are having trouble understanding what people (it's not just me) are
saying to you. I'm not bothered if you don't know what I am saying, but
I am happy to answer questions about it if you want to find out.
Obviously it's simpler for me if you continue as you are.
On 8/29/2020 5:53 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
I think that the only higher level construct required to make "C"
truly Turing complete is the capability to create an unlimited number
of totally unlimited linked lists.
You need much less than this, but that would do it.
Would you like some help to understand how this stuff works?
Not interested?
I have defined two terms:
(a) Functionally equivalent machines / computations.
(b) Identical computations
My intention is that functionally equivalent machines are exactly the
same thing as Turing equivalent machines.