On 2021-09-17 20:40, olcott wrote:
Why do theory of computation problems require pure functions?
Because the theory of computation isn't about C or x86. It isn't about computer programs at all, nor about modern computers. Just because 'computation' and 'computer' share the same root doesn't mean they deal
with the same set of phenomena. Remember that the theory of computation developed before there even were digital computers or programming
languages.
Computational theory is concerned with describing *mathematical*
functions, and all mathematical functions are pure functions. A
computation is a method for determining the value of the mathematical function f(x) for a given x. A computable function is one for which it
is possible to construct a TM which, given an input string representing
x, will determine the value of f(x).
You can write computer programs which perform computations, but the
majority of computer programs don't.
If you really want to learn about the theory of computation, I'd suggest
that you forget about C or x86 assembly or any sort of computer language altogether and focus on Turing Machines.
And don't try to mentally translate anything you read about Turing
Machines into computer languages. Don't think about loop counters, or
calls, or machine addresses, because all of these things pertain to
computers and computer programming rather than computations and they
will get in the way of you actually understanding either Turing Machines
or the theory of computation more generally.
André
On 9/18/21 11:37 AM, olcott wrote:
On 9/18/2021 10:25 AM, Richard Damon wrote:
On 9/18/21 11:19 AM, olcott wrote:
On 9/18/2021 10:13 AM, Richard Damon wrote:
On 9/18/21 10:49 AM, olcott wrote:
On 9/18/2021 9:43 AM, Richard Damon wrote:
On 9/18/21 9:54 AM, olcott wrote:
On 9/17/2021 11:50 PM, André G. Isaak wrote:
On 2021-09-17 20:40, olcott wrote:
[André's message of 2021-09-17 20:40, quoted in full above]
I only need to know this for one reason.
When my halt decider is examining the behavior of its input and the input calls this halt decider in infinite recursion, it cannot keep track of the behavior of this input without having a single pseudo-static variable that is directly embedded in its machine code. This pseudo-static variable stores the ongoing execution trace. When the variable is an ordinary local variable it loses all of its data between each recursive simulation.
It still seems to be a pure function of its input in that
(1) the function return values are identical for identical arguments
I think the key issue is that if you want to talk about the plain behavior of C / x86 code, then you need to talk about the ACTUAL behavior of that code, which means that the call to H within the simulated machine needs to be seen as a call to H, and NOT some logical transformation.
If you want to do the transformation, you need to actually PROVE that this transform is legitimate under the conditions you want to make it, and that needs a very FORMAL argument based on accepted truths and principles. Your 'argument' based on H not affecting P until after it makes its decision so it can ignore its effect, is one of those arguments that just seems 'obvious' but you haven't actually proved it.
You don't seem to understand the nature of how to prove something like this.
'Obvious' things can be wrong. It seems obvious that the order you add up a series shouldn't affect the result, even if the number of terms is infinite, but there are simple proofs showing that for SOME infinite sums the order you take the terms can make a difference, despite it seeming contrary to the nature of addition (infinities break a lot of common sense).
A key point about your 'static' variable problem.
There is only a single question here, not the myriad of questions that you refer to.
Does my use of a single static variable that holds the ongoing
execution
trace by itself necessarily completely disqualify my system from
applying to the halting problem or not?
a) Yes
b) No
Error of the Complex Question, just like the question of "Have you
stopped lying about your Halt Decider?".
It isn't the existence of a static variable itself that might
disqualify
your system for applying to the halting problem,
André disagrees so one of you must be wrong.
Then take his answer and go away as you admit defeat.
That is not how categorically exhaustive reasoning works. Categorically
exhaustive reasoning eliminates the possibility of the error of
omission. It results in limited subject domain omniscience.
It only stops when
(1) A solution is found.
(2) No solution exists within omniscience.
But your question, as I explained, was not categorically exhaustive.
Have you stopped lying about your Halt Decider?
but does the existence of that variable, and the way that your program uses it, mean that H is not an actual Computation? The existence of the variable opens the door to it being disqualified, but the actual disqualification needs more information.
If EVERY call to H(P,I) for a given P and I returns the exact same
answer every time, then H is still a computation.
The key point here is that when H decides on the call H(P,P) and in that simulation of P(P) there is a call to H(P,P), then the 'correct' answer for H needs to take into account that this simulated H WILL do exactly the same thing as the H that is doing the simulation. So if the simulating H is going to answer Non-Halting, the behavior of the simulated machine will be based on that simulated H also returning that same answer. Yes, H is not going to have simulated to that point of the simulated machine's execution, so the simulating H will not be able to see that result, but that IS the behavior of the machine that it is simulating, and thus affects what is the 'Right' answer for H to give.
If H really WAS a real simulator, then there is no issue with the static variable, as only one copy of H is actually executing; all the other copies are just simulations, so the one variable holding the execution trace holds the execution trace of the simulation that it is doing, and there is no issue.
The only way that the issue you describe becomes a real issue is if H doesn't ACTUALLY simulate/debug-step through copies of itself, but applies some sort of 'transform' to the execution, and you need to have some very good proof that this transform is actually valid, and that the resulting H, which passes data to embedded copies of itself, and thus knows stuff of its environment beyond what it is supposed to know, is actually the equivalent of the decider that doesn't do any of those 'illegal' operations. My guess is that your H is actually NOT the equivalent of such an actual computation and does actually behave differently depending on execution environment, and thus fails to meet the basic requirements.
Remember, if the H inside H^ doesn't behave exactly identically to the H that is deciding on that H^, then you aren't working on the right definitions of the system.
This seems to be one of your fundamental tripping points, and is one of the issues with doing proofs like this in a non-Turing-Machine environment (like C/x86 or even RASP): 'code fragments' can be non-computations and thus not the equivalent of a Turing Machine.
On 2021-09-18 14:55, olcott wrote:
On 9/18/2021 3:45 PM, Richard Damon wrote:
There's something horrendously confused about this.
On 9/18/21 4:17 PM, olcott wrote:
On 9/18/2021 2:57 PM, Richard Damon wrote:
On 9/18/21 3:39 PM, olcott wrote:
On 9/18/2021 2:24 PM, Richard Damon wrote:
On 9/18/21 1:08 PM, olcott wrote:
On 9/18/2021 11:52 AM, André G. Isaak wrote:
On 2021-09-18 10:39, olcott wrote:
On 9/18/2021 11:10 AM, André G. Isaak wrote:
On 2021-09-18 09:17, olcott wrote:
On 9/18/2021 10:04 AM, André G. Isaak wrote:
On 2021-09-18 07:57, olcott wrote:
I either must stick with C/x86 code or write a RASP machine; the only way that my key insights can possibly be fully understood is to have every single detail as fully operational code such that we can simply see what it does and thus not merely imagine what it would do.
Why do you insist on continuously repeating this rather ridiculous claim?
When working with Turing Machines one doesn't need to 'merely imagine what it would do.' One can see *exactly* what it does.
André
When H is defined as a simulating halt decider then H correctly decides the halt status of ⟨Ĥ⟩ ⟨Ĥ⟩.
And this statement relates to my post how exactly?
André
How can [One can see *exactly* what it does] show that my
claim that
simulating halt decider H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does not correctly
decide the halt status of ⟨Ĥ⟩ ⟨Ĥ⟩ ?
I made no claims whatsoever in my post about your H.
This is exactly what I mean by dishonest dodge AKA the strawman error.
When we examine the Peter Linz H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ there is nothing in the Peter Linz text where [One can see *exactly* what it does] when we assume that the Peter Linz H/Ĥ are based on simulating halt deciders.
In the Linz text, we are not given the details of what the machines do inside for their processing, but we know EXACTLY how they behave at an input/output level.
H is given a defined input, a properly encoded version of H^ (<H^>), and is defined to end, based on whatever algorithm is inside H, at either H.qy or H.qn.
He then defines a machine H^ that is based on a nearly exact copy of H, with just a minor change to the qy state to make it non-terminal but loop, and then adds a duplication of the input, so H^ can take as an input <H^> and then get to the copy of the code of H with <H^> <H^> on the tape, which is by definition the encoding of the machine H^(<H^>).
Since it is identical code with identical input, by the property of being a Computation, if H(<H^>,<H^>) will end at H.qn, then H^ will end at H^.qn (and then halt), and if H ends at H.qy then H^ will get to H^.qy (and then loop forever).
Yes, Linz doesn't describe how H is designed to get from H.q0 to H.qn or H.qy, and that is intentional, as that is specified totally by the design of H, and since NOTHING has been assumed about it, no possible H has been excluded from the reasoning.
Thus, even your simulating halt decider case fits in his model.
Once we know what the provided H will do when given this input, we can see what the machine that this input represents will do, and if H said it would halt, it loops, and if H says it won't halt, it halts; thus any answer H gives will be wrong.
The only other possibility is that H never gets to either H.qn or H.qy, in which case it has also failed to give the right answer.
That is factually incorrect.
H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ correctly transitions to H.qy.
When did you ever claim that H (not H1) ever said that H^(H^) was halting?
You can't use H1, as H^ is built on H; if you want to use H1, you need to build the H1^ that calls it to test.
Maybe you still don't know what that line means.
The Linz text cannot possibly get into sufficient detail to show how this is not true, simply because it is true.
Since your described algorithm for H says that H will declare this input to be non-halting, you are lying to say that this H goes to the terminal state qy.
The actual behavior of machine H is determined by the algorithm embedded in it. That algorithm says that <H^> <H^> represents a non-halting computation, therefore H will go to H.qn.
Linz says that the H that gives the RIGHT answer for an input that is Halting will go to H.qy, but your H doesn't go there because it is not right.
'Give the right answer' is NOT a valid algorithm for a Turing Machine.
When I refer to H1/H I am referring to my C/x86 code.
When I refer to H/Ĥ I am referring to the Peter Linz templates.
You can't seem to keep this straight.
Which means you have something MAJORLY wrong with your argument.
H1 and H in your C/x86 code are two copies of your decider.
Linz H is the correct halting decider that you claim your H is equivalent to.
Not any more. Ever since I created H1 that does not have pathological
self-reference (two weeks ago?) retaining H that does have
pathological self-reference, my H(P,P) is equivalent to the Linz Ĥ.qx
⟨Ĥ⟩.
My H1(P,P) is equivalent to the Linz H ⟨Ĥ⟩ ⟨Ĥ⟩.
Previously I had no idea how two different halt deciders would coordinate between themselves, so I did not discuss the Linz H ⟨Ĥ⟩ ⟨Ĥ⟩.
To the extent that I can actually follow your claims given that you keep changing the names of things, here is my current understanding of what
you have:
You have created a 'Halt Decider' H.
Alongside that, you also have a 'confounding case' P which you claim is derived from your H in the same way that Linz derives H_Hat from H (It
isn't actually, but let's not worry about that for now).
You also have a second Halt Decider H1 which is the same as H except
that it 'does not have pathological self-reference'. You've never shown
*any* code for this H1, but my assumption is that it is simply a copy of
H, but since H1 resides at a distinct machine address from H, if H is
called from within H1 it won't be seen as a recursive call.
The problem here is that Linz's proof asserts that for any putative halt decider H, we can construct a new Turing Machine H_Hat which H will be
unable to decide.
Your H cannot correctly decide H(P, P). You seem to acknowledge this.
Your H1 according to you *can* correctly decide H1(P, P).
But your P isn't derived from your H1. It is derived from your H.
Alongside your H1 there will be a P1 such that H1(P1, P1) will not be correctly decided, so you still aren't overcoming Linz.
Linz doesn't claim that it is impossible for the halting status of H_Hat(H_Hat) to be decided. He claims only that it cannot correctly be decided by H.
Basically (subject to seeing your actual code), you have a machine H
which cannot correctly decide whether P(P) halts but which can correctly decide whether P1(P1) halts. You also have a machine H1 which can
correctly decide whether P(P) halts but which cannot correctly decide
whether P1(P1) halts. So neither of these can possibly count as a
universal halt decider since each has a case it cannot decide. Neither
your H nor your H1 can decide the corresponding case described by the
Linz Proof.
André
On 9/18/21 6:58 PM, olcott wrote:
On 9/18/2021 5:55 PM, André G. Isaak wrote:
On 2021-09-18 16:32, olcott wrote:
On 9/18/2021 5:28 PM, André G. Isaak wrote:
On 2021-09-18 16:15, olcott wrote:
On 9/18/2021 5:08 PM, André G. Isaak wrote:
Basically (subject to seeing your actual code), you have a machine H which cannot correctly decide whether P(P) halts but which can correctly decide whether P1(P1) halts. You also have a machine H1 which can correctly decide whether P(P) halts but which cannot correctly decide whether P1(P1) halts. So neither of these can possibly count as a universal halt decider since each has a case it cannot decide. Neither your H nor your H1 can decide the corresponding case described by the Linz Proof.
André
By using the H1/H pair we have a universal halt decider.
Whenever they don't provide the same result then one of them is wrong.
And since you have no idea *which* one of them is wrong, how exactly is it that you have a 'universal halt decider'?
André
I just defeated Rice's theorem, the other details are for another day.
How exactly have you managed to do that?
I already told you. I wish you would pay attention.
int main()
{
    if (H1((u32)P, (u32)P) != H((u32)P, (u32)P))
        OutputString("Pathological self-reference error!");
}
Which itself proves nothing.
What is the precise Semantic Property that you are deciding on.
Also, Rice's theorem requires a Decider to give the answer, so an if in
main isn't the decider that disproves Rice.
Define your property precisely enough that others can verify it.
Define a decider (maybe call it R) that decides that property on a
machine given to it.
My first guess is that you mean R to be:
u32 R(u32 P) {
    return H1(P,P) != H(P,P);
}
but now you have to specify what semantic property of P it is detecting.
On 9/20/21 9:33 PM, olcott wrote:
On 9/20/2021 8:28 PM, Richard Damon wrote:
On 9/20/21 7:33 PM, olcott wrote:
On 9/20/2021 5:06 PM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
On 9/20/2021 5:00 AM, Ben Bacarisse wrote:
olcott <NoOne@NoWhere.com> writes:
The two machines are already fully operational.
H1 is merely a copy of H.
It's deceptive to call your secret C functions "machines". It hints at the formality and clarity of Turing machines, but a TM that is a copy of another will have exactly the same transfer function (i.e. it will never compute a different result).
Unless one of them has a pathological self-reference relationship with its input and the other one does not.
No. I stated a fact (about TMs) that has no exceptions. Your secret C code is another matter, of course. It's just the deception of calling your code a "machine" that I was objecting to. It gives your hidden code an air of formality that it lacks.
This is easily proven in a way that you are incapable of understanding.
void P(u32 x)
{
    if (H(x, x))
        HERE: goto HERE;
}

int main()
{
    Output("Input_Halts = ", H((u32)P, (u32)P));
    Output("Input_Halts = ", H1((u32)P, (u32)P));
}
Line 1 of main() specifies a different computation than line 2 of main() even though the source code of H1 is a copy of H.
Right, and since it is a DIFFERENT computation, it doesn't count that it can decide P.
It is only the computation that P (really H^) is built on that matters.
Since H and H1 are different computations, H1 doesn't count.
They are only different because of the pathological self reference
(self-contradiction) error. All inputs not having this error relative to
their halt decider derive identical results for both H and H1.
There is NO 'pathological self reference error' for Turing machines,
because they CAN'T self reference, only provide copies.
olcott <NoOne@NoWhere.com> writes:
It doesn't matter what this difference is called, it can even be
called "late for dinner". That this difference exists and causes the
behavior to vary is what needs to be known.
Either H.q0 <H^><H^> and H^.qx <H^><H^> both transition to their
respective rejecting states, or H.q0 <H^><H^> transitions to H.qy and
H^.qx <H^><H^> does not halt. This difference in behaviour is of no
help to you.
Feel free to quote me. I won't deny having said this next week. If I
find I'm wrong about it, I'll say so. That's what an honest person
does. Whether anything /you/ say can be trusted will depend on how you respond to other recent posts.