On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that claim
is erroneous.
You are also working with a Strawman, because you can't understand the
actual statement G, so even if you were right about the statement you
are talking about, you would still be wrong about the actual statement.
The ACTUAL G has no "Self-Reference" in it, so can't be
"Self-Contradictory".
You are just proving how ignorant you are of logic.
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that
claim is erroneous.
When G asserts its own unprovability in F:
Any proof of G in F requires a sequence of inference steps in F that
prove that they themselves do not exist in F.
This is precisely analogous to you proving that you yourself never
existed.
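For reference, the standard textbook construction behind this exchange
can be sketched as follows (the usual presentation of Gödel's first
incompleteness theorem, not either poster's own notation):

```latex
% Diagonal lemma: for the arithmetized provability predicate Prov_F
% there is a sentence G such that F proves
%   G <-> ~Prov_F(#G)      (#G = the Goedel number of G)
%
% The unprovability argument is carried out ABOUT F, in the metatheory:
% if F |- G, then F |- Prov_F(#G) (for sufficiently strong F),
% hence F |- ~G, making F inconsistent.  Contrapositively:
\[
  F \vdash \bigl(G \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G \urcorner)\bigr)
  \quad\text{and}\quad
  \mathrm{Con}(F) \;\Rightarrow\; F \nvdash G .
\]
```

Note the division of labor: the equivalence is proved inside F, while
the unprovability claim is proved about F.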
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that
claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F that
prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true but
not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
On 4/22/23 5:36 PM, olcott wrote:
So, you don't understand how to prove that something is "True in F" by
doing the steps in Meta-F.
Just shows you are ignorant.
Too bad you are going to die in such disgrace.
All you need to do is show that there exists a (possibly infinite) set
of steps from the truth makers in F, using the rules of F, to G. You
don't need to actually DO this in F, if you have a system that knows
about F.
Your mind is just too small.
On 4/22/23 6:10 PM, olcott wrote:
I just showed you how Tarski proved that the Liar Paradox expressed in
his theory is true in his meta-theory.
No, he didn't, he showed that *IF* a certain assumption was true, then
the Liar's paradox would be true, thus that assumption can not be true.
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference
steps in F that prove that they themselves do not exist in F.
Right, you can't prove, in F, that G is true, but you can prove, in
Meta-F, that G is true in F, and that G is unprovable in F, which is
what is required.
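The disputed reading of Tarski can be made precise; his argument is a
reductio carried out in the metatheory (a sketch in standard notation,
not either poster's formalism):

```latex
% Assume, for reductio, that a truth predicate True(x) is definable
% in the theory and satisfies the T-schema for every sentence S:
%   True(#S) <-> S
% The diagonal lemma yields a "Liar" sentence L with L <-> ~True(#L).
% Chaining the two biconditionals gives a contradiction:
\[
  \mathrm{True}(\ulcorner L \urcorner)
  \;\leftrightarrow\; L
  \;\leftrightarrow\; \neg\mathrm{True}(\ulcorner L \urcorner)
\]
% so the assumption is discharged: truth for the theory's language is
% not definable inside the theory (Tarski's undefinability theorem).
```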
You are just showing that your mind can't handle the basics of logic, or truth.
It sounds like you are too stupid to learn, and that you have
intentionally hamstrung yourself to avoid being "polluted" by
"rote-learning", so your ignorance is self-inflicted.
If you won't even try to learn the basics, you have just condemned
yourself into being a pathological liar because you just don't know any
better.
You and I can see both THAT G cannot be proved in F and WHY G cannot be
proved in F. G cannot be proved in F for the same pathological
self-reference(Olcott 2004) reason that the Liar Paradox cannot be
proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand classic
argument forms.
On 4/22/2023 5:22 PM, Richard Damon wrote:
When one level of indirect reference is applied to the Liar Paradox it becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
When G asserts its own unprovability in F it cannot be proved in F
because this requires a sequence of inference steps in F that prove that
they themselves do not exist.
Meta-F merely removes the self-contradiction the same way Tarski's
Meta-theory removed the self-contradiction.
It may seem that way to someone that learns things by rote and mistakes
this for actual understanding of exactly how all of the elements of a
proof fit together coherently or fail to do so.
I do at this point need to understand model theory very thoroughly.
Learning the details of these things could have boxed me into a corner
prior to my philosophical investigation of seeing how the key elements
fail to fit together coherently.
It is true that the set of analytical truth is simply a set of semantic
tautologies. It is true that formal systems grounded in this foundation
cannot be incomplete nor have any expressions of language that are
undecidable. Now that I have this foundation I have a way to see exactly
how the concepts of math diverge from correct reasoning.
It is not that I do not understand, it is that I can directly see where
and how formal mathematical systems diverge from correct reasoning.
Because you are a learned-by-rote person you make sure to never examine whether or not any aspect of math diverges from correct reasoning, you
simply assume that math is the gospel even when it contradicts itself.
On 4/22/23 6:49 PM, olcott wrote:
But since you are discussing Formal Logic, you need to use the rules of Formal logic.
The other way to say it is that your "Correct Reasoning" diverges from
the accepted and proven system of Formal Logic.
Nope, I know that with logic, if you follow the rules, you will get the correct answer by the rules.
If you break the rules, you have no idea where you will go.
As I have told you before, if you want to see what your "Correct
Reasoning" can do as a replacement logic system, you need to start at
the BEGINNING, and see what it gets.
To just try to change things at the end is just PROOF that your "Correct Reasoning" has to not be based on any real principles of logic.
Since it is clear that you want to change some of the basics of how
logic works, you are not allowed to just use ANY of classical logic
until you actually show what part of it is still usable under your
system and what changes happen.
Considering your current status, I would start working hard on that
right away, as with your current reputation, once you go, NO ONE is
going to want to look at your ideas, because you have done such a good
job showing that you don't understand how things work.
I haven't been able to get out of you exactly what you want to do with
your "Correct Reasoning", and until you show a heart to actually try to
do something constructive with it, and not just use it as an excuse for
bad logic, I don't care what it might be able to do, because, frankly, I don't think you have the intellect to come up with something like that.
But go ahead and prove me wrong, write an actual paper on the basics of
your "Correct Reasoning" and show how it actually works, and compare it
to "Classical Logic" and show what is different. Then maybe you can
start to work on showing it can actually do something useful.
On 4/22/2023 6:19 PM, Richard Damon wrote:
I have never been talking about formal logic. I have always been talking about the philosophical foundations of correct reasoning.
It is correct reasoning in the absolute sense that I refer to.
If anyone has the opinion that arithmetic does not exist they are
incorrect in the absolute sense of the word: "incorrect".
In other words you never ever spend any time on making sure that these
rules fit together coherently.
Meaningless gobbledy-gook until you actually define what you mean.
The foundation of correct reasoning is that the entire body of
analytical truth is a set of semantic tautologies.
This means that all correct inference always requires determining the
semantic consequence of expressions of language. This semantic
consequence can be specified syntactically, and indeed must be
represented syntactically to be computable.
Whenever an expression of language is derived as the semantic
consequence of other expressions of language we have valid inference.
The semantic consequence must be specified syntactically so that it can
be computed or examined in formal systems.
Just like in sound deductive inference: when the premises are known to
be true and the reasoning is valid (a semantic consequence), the
conclusion is necessarily true.
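The claim that a semantic consequence must be specified syntactically
to be computable can be illustrated with a proof assistant, where
entailment is checked by purely syntactic term manipulation (Lean 4
syntax; an illustration only, not the poster's proposed system):

```lean
-- Modus ponens: the semantic consequence {p → q, p} ⊨ q is mirrored
-- by a syntactic operation (applying the proof of p → q to the proof
-- of p), which is exactly what makes mechanical checking possible.
example (p q : Prop) (hpq : p → q) (hp : p) : q := hpq hp
```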
My reputation on one very important group has risen to quite credible.
Until we establish the foundation of correct reasoning in terms of a consistent and complete True(L,X) all AI systems will be anchored in the shifting sands of opinions.
The most important aspect of the tiny little foundation of a formal
system that I already specified immediately above is self-evident:
True(L,X) can be defined and incompleteness is impossible.
People that spend 99.99% of their attention on trying to show errors in
what I say rather than paying any attention to understanding what I say
might not notice these dead obvious things.
On 4/22/23 7:57 PM, olcott wrote:
No, you have been talking about theories DEEP in formal logic. You can't
talk about the "errors" in those theories without being in formal logic.
IF you think you can somehow talk about the foundations, while working
in the penthouse, you have just confirmed that you do not understand how
ANY form of logic works.
PERIOD.
The other way to say it is that your "Correct Reasoning" diverges
from the accepted and proven system of Formal Logic.
It is correct reasoning in the absolute sense that I refer to.
If anyone has the opinion that arithmetic does not exist they are
incorrect in the absolute sense of the word: "incorrect".
IF you reject the logic that a theory is based on, you need to reject
the logic system, NOT the theory.
You are just showing that you have wasted your LIFE because you don't
understand how to work logic.
Because you are a learned-by-rote person you make sure to never examine
whether or not any aspect of math diverges from correct reasoning; you
simply assume that math is the gospel even when it contradicts itself.
Nope, I know that with logic, if you follow the rules, you will get
the correct answer by the rules.
If you break the rules, you have no idea where you will go.
In other words you never ever spend any time on making sure that these
rules fit together coherently.
The rules work together just fine.
YOU don't like some of the results, but they work just fine for most of
the field.
You are just PROVING that you have no idea how to actually discuss a
new foundation for logic, likely because you are incapable of actually
coming up with a consistent basis for working logic.
Meaningless gobbledy-gook until you actually define what you mean and
As I have told you before, if you want to see what your "Correct
Reasoning" can do as a replacement logic system, you need to start at the
BEGINNING, and see what it gets.
The foundation of correct reasoning is that the entire body of
analytical truth is a set of semantic tautologies.
This means that all correct inference always requires determining the
semantic consequence of expressions of language. This semantic
consequence can be specified syntactically, and indeed must be
represented syntactically to be computable.
spell out the actual rules that need to be followed.
Note, "Computability" is actually a fairly late in the process concept.
You first need to show that your logic can actually do something useful.
To just try to change things at the end is just PROOF that your
"Correct Reasoning" cannot be based on any real principles of logic.
Since it is clear that you want to change some of the basics of how
logic works, you are not allowed to just use ANY of classical logic
until you actually show what part of it is still usable under your
system and what changes happen.
Whenever an expression of language is derived as the semantic
consequence of other expressions of language we have valid inference.
And, are you using the "classical" definition of "semantic" (which makes
this sentence somewhat circular) or do you mean something based on the
concept you sometimes use of "the meaning of the words"?
The semantic consequence must be specified syntactically so that it can
be computed or examined in formal systems.
Just like in sound deductive inference, when the premises are known to be
true and the reasoning is valid (a semantic consequence), then the
conclusion is necessarily true.
So, what is the difference in your system from classical Formal Logic?
The most important aspect of the tiny little foundation of a formal
system that I already specified immediately above is self-evident:
True(L,X) can be defined and incompleteness is impossible.
I don't think your system is anywhere near established far enough for you
to say that.
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
So, you don't understand how to prove that something is "True in F"
by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox expressed
in his theory is true in his meta-theory.
No, he didn't, he showed that *IF* a certain assumption was true, then
the Liar's paradox would be true, thus that assumption cannot be true.
When one level of indirect reference is applied to the Liar Paradox it
becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
Considering your current status, I would start working hard on that
right away, as with your current reputation, once you go, NO ONE is
going to want to look at your ideas, because you have done such a good
job showing that you don't understand how things work.
I haven't been able to get out of you exactly what you want to do with
your "Correct Reasoning", and until you show a heart to actually try to
do something constructive with it, and not just use it as an excuse for
bad logic, I don't care what it might be able to do, because, frankly, I don't think you have the intellect to come up with something like that.
But go ahead and prove me wrong, write an actual paper on the basics of
your "Correct Reasoning" and show how it actually works, and compare it
to "Classical Logic" and show what is different. Then maybe you can
start to work on showing it can actually do something useful.
Just shows you are ignorant.
Too bad you are going to die in such disgrace.
All you need to do is show that there exists a (possibly infinite)
set of steps from the truth makers in F, using the rules of F, to
G. You don't need to actually DO this in F, if you have a system
that knows about F.
Your mind is just too small.
On 4/22/2023 7:27 PM, Richard Damon wrote:
On 4/22/23 7:57 PM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
And, are you using the "classical" definition of "semantic" (which
makes this sentence somewhat circular) or do you mean something based
on the concept you sometimes use of "the meaning of the words"?
*Principle of explosion*
An alternate argument for the principle stems from model theory. A
sentence P is a semantic consequence of a set of sentences Γ only if
every model of Γ is a model of P. However, there is no model of the
contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
https://en.wikipedia.org/wiki/Principle_of_explosion
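The quoted model-theoretic argument can be checked by brute force. A minimal Python sketch (my own illustration, not part of either poster's text), assuming classical two-valued models over just the atoms P and Q:

```python
from itertools import product

# Enumerate every classical truth assignment over the atoms P and Q.
atoms = ("P", "Q")
models = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=2)]

def satisfies_contradiction(m):
    """Does model m satisfy the premise set (P ∧ ¬P)?"""
    return m["P"] and not m["P"]

# Semantic consequence: every model of the premises is a model of Q.
models_of_premises = [m for m in models if satisfies_contradiction(m)]
q_is_consequence = all(m["Q"] for m in models_of_premises)

print(models_of_premises)   # [] -- the contradiction has no models
print(q_is_consequence)     # True -- but only vacuously (all() over an empty list)
```

The consequence holds only because `all()` over an empty list of models is vacuously true, which is exactly the point disputed next.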
Vacuous truth does not count as truth.
All variables must be quantified
"all cell phones in the room are turned off" will be true when no cell
phones are in the room.
∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
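The contrast between the vacuously true universal reading and the existential reading can be sketched directly (an illustrative toy with a hypothetical empty list of phones):

```python
# No cell phones are in the room: the domain of quantification is empty.
cell_phones_in_room = []

# Universal reading: "all cell phones in the room are turned off".
universal = all(phone["off"] for phone in cell_phones_in_room)

# Existential reading: "there is a cell phone in the room that is turned off".
existential = any(phone["off"] for phone in cell_phones_in_room)

print(universal)    # True  -- holds vacuously over the empty domain
print(existential)  # False -- no such phone exists
```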
The semantic consequence must be specified syntactically so that it can
be computed or examined in formal systems.
Just like in sound deductive inference, when the premises are known to be
true and the reasoning is valid (a semantic consequence), then the
conclusion is necessarily true.
So, what is the difference in your system from classical Formal Logic?
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE // POE abolished
(P ∧ ¬P) ⊨□ FALSE // POE abolished
⇒ and → symbols are replaced by ⊨□
The sets that the variables range over must be defined
all variables must be quantified
// x is a semantic consequence of its premises in L
Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)
// x is a semantic consequence of the axioms of L
True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)
*The above is all that I know right now*
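As a purely illustrative sketch of how such a True(L,x) predicate might be computed: here the ⊨□ relation is modeled as a hand-written finite table of semantic-consequence facts, and True(L,x) as membership in the deductive closure of the axioms. Every name and fact below is a made-up toy, not an established system:

```python
# Toy language L: axioms plus a finite table of "premises |=□ conclusion" facts.
AXIOMS = {"p", "p -> q"}
CONSEQUENCE = {
    frozenset({"p", "p -> q"}): "q",       # modus-ponens-like fact
    frozenset({"q"}): "q or r",            # weakening-like fact
}

def true_in_L(x, axioms=AXIOMS):
    """True(L, x): x is a semantic consequence of the axioms of L."""
    known = set(axioms)
    changed = True
    while changed:                         # compute the deductive closure
        changed = False
        for premises, conclusion in CONSEQUENCE.items():
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return x in known

print(true_in_L("q"))        # True: q follows from the axioms
print(true_in_L("q or r"))   # True: via the derived q
print(true_in_L("r"))        # False: not a consequence, hence untrue in L
```

In this cartoon every sentence of L is decidably either a consequence of the axioms or not, which is what the "incompleteness is impossible" claim amounts to; whether that scales past a finite hand-written table is exactly what is in dispute.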
The most important aspect of the tiny little foundation of a formal
system that I already specified immediately above is self-evident:
True(L,X) can be defined and incompleteness is impossible.
I don't think your system is anywhere near established far enough for
you to say that.
Try and show exceptions to this rule and I will fill in any gaps that
you find.
G asserts its own unprovability in F
The reason that G cannot be proved in F is that this requires a
sequence of inference steps in F that proves no such sequence
of inference steps exists in F.
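A cartoon of the claimed situation, NOT Gödel's actual arithmetized construction: a toy derivability check over a hypothetical finite system, where G's asserted content is taken to be simply its own non-derivability:

```python
from itertools import product

# A tiny toy "formal system": sentences are strings, the axioms are given,
# and the single inference rule concatenates two derivable sentences.
AXIOMS = {"a", "b"}

def derivable(max_steps=3):
    """Enumerate every sentence derivable within max_steps rule applications."""
    derived = set(AXIOMS)
    for _ in range(max_steps):
        derived |= {x + y for x, y in product(derived, derived)}
    return derived

THEOREMS = derivable()

# G's *content* is the claim "G is not derivable in this system".
G = "G"                        # G itself is just another sentence string
g_is_provable = G in THEOREMS
g_is_true = not g_is_provable  # by the meaning G asserts about itself

print(g_is_provable, g_is_true)  # False True: unprovable here, hence (as asserted) true
```

The toy only shows the gap between "derivable" and "what the sentence asserts"; it does not settle whose reading of Gödel's G is right.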
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
Proving that G is true in F requires a sequence of inference steps that
prove that they themselves don't exist.
You might be bright enough to understand that is self-contradictory.
On 4/24/23 12:13 PM, olcott wrote:
On 4/24/2023 10:58 AM, olcott wrote:
On 4/22/2023 7:27 PM, Richard Damon wrote:
On 4/22/23 7:57 PM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G asserts its own unprovability in F
The reason that G cannot be proved in F is that this requires a
sequence of inference steps in F that proves no such sequence
of inference steps exists in F.
∄sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ G)
So, you don't understand the difference between the INFINITE set of
sequence steps that show that G is True, and the FINITE number of steps
that need to be shown to make G provable.
You are just showing you don't understand what you talking about and
just spouting word (or symbol) salad.
You are proving you are an IDIOT.
On 4/24/23 11:25 AM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F that
prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
So, you don't understand how to prove that something is "True in F" by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox
expressed in his theory is true in his meta-theory.
No, he didn't; he showed that *IF* a certain assumption was true,
then the Liar's Paradox would be true, thus that assumption can not
be true.
When one level of indirect reference is applied to the Liar Paradox it becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
Your
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference
steps in F that prove that they themselves do not exist in F.
Right, you can't prove, in F, that G is true, but you can prove, in
Meta-F, that G is true in F, and that G is unprovable in F, which
is what is required.
When G asserts its own unprovability in F it cannot be proved in F
because this requires a sequence of inference steps in F that prove
that
they themselves do not exist.
Meta-F merely removes the self-contradiction the same way Tarski's
Meta-
theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of
logic, or truth.
It may seem that way to someone that learns things by rote and mistakes
this for actual understanding of exactly how all of the elements of a
proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have
intentionally hamstrung yourself to avoid being "polluted" by
"rote-learning", so your ignorance is self-inflicted.
If you won't even try to learn the basics, you have just condemned
yourself into being a pathological liar because you just don't know any
better.
I do at this point need to understand model theory very thoroughly.
Learning the details of these things could have boxed me into a corner
prior to my philosophical investigation of seeing how the key elements
fail to fit together coherently.
It is true that the set of analytical truth is simply a set of semantic
tautologies. It is true that formal systems grounded in this foundation
cannot be incomplete nor have any expressions of language that are
undecidable. Now that I have this foundation I have a way to see exactly
how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G cannot be
proved in F. G cannot be proved in F for the same pathological
self-reference(Olcott 2004) reason that the Liar Paradox cannot be
proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand classic
argument forms.
It is not that I do not understand, it is that I can directly see
where and how formal mathematical systems diverge from correct
reasoning.
But since you are discussing Formal Logic, you need to use the rules
of Formal logic.
The other way to say it is that your "Correct Reasoning" diverges
from the accepted and proven system of Formal Logic.
In classical logic, intuitionistic logic and similar logical systems,
the principle of explosion
ex falso [sequitur] quodlibet,
'from falsehood, anything [follows]'
ex contradictione [sequitur] quodlibet,
'from contradiction, anything [follows]')
Right: if a logic system can prove a contradiction, then out of that contradiction you can prove anything.
https://en.wikipedia.org/wiki/Principle_of_explosion
∴ FALSE ⊢ Donald Trump is the Christ
∴ FALSE ⊢ Donald Trump is Satan
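For reference, the textbook derivation that licenses such conclusions (the proof of explosion via disjunctive syllogism) runs:

```latex
\begin{array}{lll}
1. & P \land \lnot P & \text{premise (the contradiction)}\\
2. & P & \text{from 1, conjunction elimination}\\
3. & P \lor Q & \text{from 2, disjunction introduction ($Q$ arbitrary)}\\
4. & \lnot P & \text{from 1, conjunction elimination}\\
5. & Q & \text{from 3 and 4, disjunctive syllogism}
\end{array}
```

Rejecting the conclusion therefore means rejecting at least one of these inference rules, not just the conclusion itself.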
Which isn't what was being talked about.
You clearly don't understand how the principle of explosion works, which
isn't surprising considering how many misconceptions you have about how
logic works.
Right now, I would say you are too ignorant of the basics of logic to be
able to explain, even in basic terms, how it works; you have shown
yourself to be that stupid.
*Correction abolishing the POE nonsense*
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
So, FULLY define what you mean by that.
Because you are a learned-by-rote person you make sure to never examine
whether or not any aspect of math diverges from correct reasoning; you
simply assume that math is the gospel even when it contradicts itself.
Nope, I know that with logic, if you follow the rules, you will get
the correct answer by the rules.
Then you must agree that Trump is the Christ and Trump is Satan; both of
those were derived from correct logic.
If you break the rules, you have no idea where you will go.
As I have told you before, if you want to see what your "Correct
Reasoning" can do as a replacement logic system, you need to start at
the BEGINNING, and see what it gets.
I would be happy to talk this through with you.
The beginning is that for valid inference, an expression X of a
language L must be a semantic consequence of its premises in L.
And what do you mean by "semantic"
because, conventional logic defines semantic consequence as the
conclusion must be true if the premise is true.
You seem to mean something different, but haven't explained what you mean
by that.
sound inference expression X of language L must be a semantic
consequence of the axioms of L.
For formal systems such as FOL the semantics is mostly the meaning of
the logic symbols.
These two logic symbols are abolished ⇒ → and replaced with this:
Semantic Necessity operator: ⊨□
Why do you need to abolish those symbols?
You do understand that the
statement A -> B is equivalent to asserting that (~A | B) is
ALWAYS TRUE (which might be part of your problem, as you don't seem to
understand the categorical meaning of ALL and NO), so either you need
to outlaw the negation operator, or the or operator, to do this.
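The equivalence claimed here is easy to check exhaustively; a minimal sketch (the function name is illustrative, not from any post):

```python
from itertools import product

def material_conditional(a: bool, b: bool) -> bool:
    """A -> B: false only when A is true and B is false."""
    return not (a and not b)

# Exhaustive truth-table check that A -> B matches (~A | B) on all rows.
for a, b in product([False, True], repeat=2):
    assert material_conditional(a, b) == ((not a) or b)

# The rows with a false antecedent are the "vacuously true" ones.
assert material_conditional(False, False)
assert material_conditional(False, True)
assert not material_conditional(True, False)
```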
Again, what does "Semantic Necessity" operator mean?
Note, one issue with your use of symbols: so many of the symbols can
have slightly different meanings based on the context and system you
are working in.
To just try to change things at the end is just PROOF that your
"Correct Reasoning" has to not be based on any real principles of logic.
No, logic must be based on correct reasoning; any logic that proves Donald
Trump is the Christ is incorrect reasoning, thus the POE is abolished.
You CAN'T abolish the Principle of Explosion unless you greatly restrict
the power of your logic.
These two logic symbols are abolished ⇒ → and replaced with this:
Semantic Necessity operator: ⊨□
Explosions have been abolished
Nope.
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
Again DEFINE this operator, and the words you use to define it.
Since it is clear that you want to change some of the basics of how
logic works, you are not allowed to just use ANY of classical logic
until you actually show what part of it is still usable under your
system and what changes happen.
Yes lets apply my ideas to FOL. I have already sketched out many
details.
Go ahead, try to fully define your ideas.
Remember, until you get to supporting the Higher Order Logics, you can't
get to the incompleteness, as that has been only established for systems
with second order logic, which is also needed for the needed properties
of the whole numbers. First Order Peano Arithmetic might be complete,
but can't be proved (within itself) to be consistent. Second Order
Peano Arithmetic (which adds the principle of Induction) IS incomplete
as it supports enough of the natural numbers to support Godel's proof.
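For concreteness, the standard difference between the two induction principles referred to here is:

```latex
\text{First-order PA: an axiom \emph{schema}, one instance per formula } \varphi:\\
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n{+}1))\bigr) \rightarrow \forall n\,\varphi(n)\\[1ex]
\text{Second-order PA: a single axiom quantifying over all sets } X \text{ of numbers}:\\
\forall X\,\Bigl(\bigl(0 \in X \land \forall n\,(n \in X \rightarrow n{+}1 \in X)\bigr) \rightarrow \forall n\,(n \in X)\Bigr)
```

The first-order schema covers only properties definable by a formula of the language; the second-order axiom quantifies over all sets of numbers at once.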
Considering your current status, I would start working hard on that
right away, as with your current reputation, once you go, NO ONE is
going to want to look at your ideas, because you have done such a
good job showing that you don't understand how things work.
I haven't been able to get out of you exactly what you want to do
with your "Correct Reasoning", and until you show a heart to actually
try to do something constructive with it, and not just use it as an
excuse for bad logic, I don't care what it might be able to do,
because, frankly, I don't think you have the intellect to come up
with something like that.
I showed how the POE is easily abolished.
Nope.
I showed how Provable(L,x) and True(L,x) are defined.
Note clearly: for instance, a statement x is provable or True in a
SYSTEM/THEORY (depending on your terminology) and NOT dependent on some
other statement in the system, as your definition seemed to imply. You
don't "Prove" something based on a statement, but in a System/Theory.
But go ahead and prove me wrong, write an actual paper on the basics
of your "Correct Reasoning" and show how it actually works, and
compare it to "Classical Logic" and show what is different. Then
maybe you can start to work on showing it can actually do something
useful.
I need a dialogue to vet aspects of my ideas.
The key thing that I have not yet filled in is how to specify the
semantics of every FOL expression.
This semantics seems fully specified:
∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊨□ (n+1 > m))
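The quoted entailment can at least be spot-checked mechanically over a finite initial segment of ℕ (a finite check is evidence, not a proof; the general fact follows from n+1 > n > m and the transitivity of >). The function name below is illustrative:

```python
def spot_check(bound: int) -> bool:
    """Check that (n > m) implies (n + 1 > m) for all n, m below bound."""
    return all(n + 1 > m
               for n in range(bound)
               for m in range(bound)
               if n > m)

# No counterexample exists below the chosen bound.
assert spot_check(50)
```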
Nope, you need to actually FULLY DEFINE what you mean by your symbols;
you can't just rely on referring to classical meaning since you clearly
disagree with some of the classical meanings.
You seem to have some disjoint ideas, but seem to be unable to come up
with a cohesive whole. You use words that you don't seem to be able to actually fully define.
Since you are trying to reject some of the basics of classical logic,
you need to FULLY define how your logic works. Name ALL the basic
operations that you allow. Do you allow "Not", "Or", "And", "Equals",
etc.? What are your rules for logical inference? How do you ACTUALLY
prove a statement given a set of "Truthmakers"?
Remember, if you want to reject classical logic, you can't use it to
define your system.
On 4/24/23 10:40 AM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F >>>>>>>> that
prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true >>>>>>> but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
So, you don't understand how to prove that something is "True in F"
by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox expressed
in his theory is true in his meta-theory.
No, he didn't; he showed that *IF* a certain assumption was true, then
the Liar's Paradox would be true, thus that assumption can not be true.
Your
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference
steps in F that prove that they themselves do not exist in F.
Right, you can't prove, in F, that G is true, but you can prove, in
Meta-F, that G is true in F, and that G is unprovable in F, which is
what is required.
You are just showing that your mind can't handle the basics of logic, or
Proving that G is true in F requires a sequence of inference steps that
prove that they themselves don't exist.
You might be bright enough to understand that is self-contradictory.
Except that G is proved in Meta-F to be "True in F".
With a finite number of steps in Meta-F, we can prove that the infinite number of steps in F exist and are true.
In particular, in F, we need to check every number individually to see if
it satisfies the relationship, and we have no shortcut to make this
operation finite, so we can't prove it. But in Meta-F, we know something
about the relationship, and are able to prove that no number can satisfy
the relationship, and do so in a finite number of steps.
Thus, we can prove in Meta-F that G must be true in F.
The sequence of steps in F is infinite, so not a proof in F.
In fact, in Meta-F we are also able to prove that there CAN'T be a
finite sequence set of steps that prove G true in F.
Thus, with logic in Meta-F, we can prove that, G is True in F and can
not be proven in F.
You just don't seem to understand how Meta-Logic works. And, it turns
out, that meta-logic is a very important tool for proving things, so
this is one of your Kryptonites.
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 12:13 PM, olcott wrote:
On 4/24/2023 10:58 AM, olcott wrote:
On 4/22/2023 7:27 PM, Richard Damon wrote:
On 4/22/23 7:57 PM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of
self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps >>>>>>>>>>>>>> in F that
prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement >>>>>>>>>>>>> is true but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>> This sentence is not true: "This sentence is not true" is true. >>>>>>>>>>>>
So, you don't understand how to prove that something is "True >>>>>>>>>>> in F" by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox
expressed in his theory is true in his meta-theory.
No, he didn't; he showed that *IF* a certain assumption was
true, then the Liar's Paradox would be true, thus that
assumption can not be true.
When one level of indirect reference is applied to the Liar
Paradox it
becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>>
Your
Right, you can't prove, in F, that G is true, but you can
We can do the same thing when G asserts its own unprovability >>>>>>>>>> in F.
G cannot be proved in F because this requires a sequence of >>>>>>>>>> inference
steps in F that prove that they themselves do not exist in F. >>>>>>>>>
prove, in Meta-F, that G is true in F, and that G is unprovable >>>>>>>>> in F, which is what is required.
When G asserts its own unprovability in F it cannot be proved in F >>>>>>>> because this requires a sequence of inference steps in F that
prove that
they themselves do not exist.
Meta-F merely removes the self-contradiction the same way
Tarski's Meta-
theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of >>>>>>>>> logic, or truth.
It may seem that way to someone that learns things by rote and >>>>>>>> mistakes
this for actual understanding of exactly how all of the elements >>>>>>>> of a
proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have
intentionally hamstrung yourself to avoid being "polluted" by
"rote-learning", so your ignorance is self-inflicted.
If you won't even try to learn the basics, you have just
condemned yourself into being a pathological liar because you
just don't know any better.
I do at this point need to understand model theory very thoroughly. >>>>>>>>
Learning the details of these things could have boxed me into a >>>>>>>> corner
prior to my philosophical investigation of seeing how the key
elements
fail to fit together coherently.
It is true that the set of analytical truth is simply a set of >>>>>>>> semantic
tautologies. It is true that formal systems grounded in this
foundation
cannot be incomplete nor have any expressions of language that are >>>>>>>> undecidable. Now that I have this foundation I have a way to see >>>>>>>> exactly
how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G >>>>>>>>>> cannot be
proved in F. G cannot be proved in F for the same pathological >>>>>>>>>> self-reference(Olcott 2004) reason that the Liar Paradox
cannot be proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand
classic argument forms.
It is not that I do not understand, it is that I can directly
see where and how formal mathematical systems diverge from
correct reasoning.
But since you are discussing Formal Logic, you need to use the
rules of Formal logic.
I have never been talking about formal logic. I have always been
talking
about the philosophical foundations of correct reasoning.
No, you have been talking about theories DEEP in formal logic. You
can't talk about the "errors" in those theories without being in
formal logic.
IF you think you can somehow talk about the foundations, while
working in the penthouse, you have just confirmed that you do not
understand how ANY form of logic works.
PERIOD.
The other way to say it is that your "Correct Reasoning" diverges >>>>>>> from the accepted and proven system of Formal Logic.
It is correct reasoning in the absolute sense that I refer to.
If anyone has the opinion that arithmetic does not exist they are
incorrect in the absolute sense of the word: "incorrect".
IF you reject the logic that a theory is based on, you need to
reject the logic system, NOT the theory.
You are just showing that you have wasted your LIFE because you
don't understand how to work logic.
Because you are a learned-by-rote person you make sure to never >>>>>>>> examine
whether or not any aspect of math diverges from correct
reasoning, you
simply assume that math is the gospel even when it contradicts >>>>>>>> itself.
Nope, I know that with logic, if you follow the rules, you will
get the correct answer by the rules.
If you break the rules, you have no idea where you will go.
In other words you never ever spend any time on making sure that
these
rules fit together coherently.
The rules work together just fine.
YOU don't like some of the results, but they work just fine for
most of the field.
You are just PROVING that you have no idea how to actually discuss
a new foundation for logic, likely because you are incapable of
actually coming up with a consistent basis for working logic.
Meaningless gobbledygook until you actually define what you mean
As I have told you before, if you want to see what your "Correct
Reasoning" can do as a replacement logic system, you need to
start at the
BEGINNING, and see what it gets.
The foundation of correct reasoning is that the entire body of
analytical truth is a set of semantic tautologies.
This means that all correct inference always requires determining the >>>>>> semantic consequence of expressions of language. This semantic
consequence can be specified syntactically, and indeed must be
represented syntactically to be computable
and spell out the actual rules that need to be followed.
Note, "Computability" is actually a fairly late-in-the-process
concept. You first need to show that your logic can actually do
something useful
And, are you using the "classical" definition of "semantic" (which
To just try to change things at the end is just PROOF that your
"Correct Reasoning" has to not be based on any real principles of >>>>>>> logic.
Since it is clear that you want to change some of the basics of
how logic works, you are not allowed to just use ANY of classical >>>>>>> logic until you actually show what part of it is still usable
under your system and what changes happen.
Whenever an expression of language is derived as the semantic
consequence of other expressions of language we have valid inference. >>>>>
makes this sentence somewhat circular) or do you mean something
based on the concept you sometimes use of "the meaning of the words".
*Principle of explosion*
An alternate argument for the principle stems from model theory. A
sentence P is a semantic consequence of a set of sentences Γ only if
every model of Γ is a model of P. However, there is no model of the
contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
https://en.wikipedia.org/wiki/Principle_of_explosion
Vacuous truth does not count as truth.
All variables must be quantified
"all cell phones in the room are turned off" will be true when no
cell phones are in the room.
∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
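The contrast drawn here shows up directly in code: a universally quantified claim over an empty domain is vacuously true, while the existential reading demands a witness. A minimal sketch (the `cell_phones_in_room` list is hypothetical):

```python
# An empty room: no cell phones at all.
cell_phones_in_room = []

# "All cell phones in the room are turned off" -- vacuously True,
# because an empty domain contains no counterexample.
assert all(phone["off"] for phone in cell_phones_in_room)

# The existential reading requires at least one witness, so it is False.
assert not any(phone["off"] for phone in cell_phones_in_room)

# With one phone that is still on, the universal claim becomes False.
cell_phones_in_room = [{"off": False}]
assert not all(phone["off"] for phone in cell_phones_in_room)
```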
The semantic consequence must be specified syntactically so that
it can
be computed or examined in formal systems.
Just like in sound deductive inference when the premises are known >>>>>> to be
true, and the reasoning valid (a semantic consequence) then the
conclusion is necessarily true.
So, what is the difference in your system from classical Formal Logic? >>>>>
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE // POE abolished
(P ∧ ¬P) ⊨□ FALSE // POE abolished
⇒ and → symbols are replaced by ⊨□
The sets that the variables range over must be defined
all variables must be quantified
// x is a semantic consequence of its premises in L
Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)
// x is a semantic consequence of the axioms of L
True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)
*The above is all that I know right now*
The most important aspect of the tiny little foundation of a formal >>>>>> system that I already specified immediately above is self-evident: >>>>>> True(L,X) can be defined and incompleteness is impossible.
I don't think your system is anywhere near established far enough for
you to say that.
Try and show exceptions to this rule and I will fill in any gaps that
you find.
G asserts its own unprovability in F
The reason that G cannot be proved in F is that this requires a
sequence of inference steps in F that proves no such sequence
of inference steps exists in F:
∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ ∄sequence_of_inference_steps ⊆ F)
So, you don't understand the difference between the INFINITE set of
inference steps that show that G is True, and the FINITE number of
steps that need to be shown to make G provable.
The experts seem to believe that unless a proof can be transformed into
a finite sequence of steps it is no actual proof at all. Try and cite a source that says otherwise.
We can imagine an Oracle machine that can complete these proofs in the
same sort of way that we can imagine a magic fairy that waves a magic
wand.
You are just showing you don't understand what you are talking about and
are just spouting word (or symbol) salad.
You are proving you are an IDIOT.
I am seeing these things at a deeper philosophical level than you are. I
know that is hard to believe.
You are so sure that I must be wrong that you don't bother to understand
what I am saying.
It seem that the time has come for me to spend the little time that it
takes to understand the technical details of Gödel's proof.
I am estimating that I have a very good understanding of the preface to
the proof, and the SEP article should provide this.
https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf
On 4/24/23 11:28 PM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 12:13 PM, olcott wrote:
On 4/24/2023 10:58 AM, olcott wrote:
On 4/22/2023 7:27 PM, Richard Damon wrote:
On 4/22/23 7:57 PM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of
self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>> steps in F that
prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement >>>>>>>>>>>>>> is true but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>> This sentence is not true: "This sentence is not true" is >>>>>>>>>>>>> true.
So, you don't understand how to prove that something is >>>>>>>>>>>> "True in F" by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>> expressed in his theory is true in his meta-theory.
No, he didn't; he showed that *IF* a certain assumption was
true, then the Liar's Paradox would be true, thus that
assumption can not be true.
When one level of indirect reference is applied to the Liar
Paradox it
becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>>>
Your
Right, you can't prove, in F, that G is true, but you can
We can do the same thing when G asserts its own unprovability >>>>>>>>>>> in F.
G cannot be proved in F because this requires a sequence of >>>>>>>>>>> inference
steps in F that prove that they themselves do not exist in F. >>>>>>>>>>
prove, in Meta-F, that G is true in F, and that G is
unprovable in F, which is what is required.
When G asserts its own unprovability in F it cannot be proved in F >>>>>>>>> because this requires a sequence of inference steps in F that >>>>>>>>> prove that
they themselves do not exist.
Meta-F merely removes the self-contradiction the same way
Tarski's Meta-
theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of >>>>>>>>>> logic, or truth.
It may seem that way to someone that learns things by rote and >>>>>>>>> mistakes
this for actual understanding of exactly how all of the
elements of a
proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have
intentionally hamstrung yourself to avoid being "polluted" by
"rote-learning", so your ignorance is self-inflicted.
If you won't even try to learn the basics, you have just
condemned yourself into being a pathological liar because you
just don't know any better.
I do at this point need to understand model theory very
thoroughly.
Learning the details of these things could have boxed me into a >>>>>>>>> corner
prior to my philosophical investigation of seeing how the key >>>>>>>>> elements
fail to fit together coherently.
It is true that the set of analytical truth is simply a set of >>>>>>>>> semantic
tautologies. It is true that formal systems grounded in this >>>>>>>>> foundation
cannot be incomplete nor have any expressions of language that are >>>>>>>>> undecidable. Now that I have this foundation I have a way to >>>>>>>>> see exactly
how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G >>>>>>>>>>> cannot be
proved in F. G cannot be proved in F for the same
pathological self-reference(Olcott 2004) reason that the Liar >>>>>>>>>>> Paradox cannot be proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand
classic argument forms.
It is not that I do not understand, it is that I can directly >>>>>>>>> see where and how formal mathematical systems diverge from
correct reasoning.
But since you are discussing Formal Logic, you need to use the >>>>>>>> rules of Formal logic.
I have never been talking about formal logic. I have always been >>>>>>> talking
about the philosophical foundations of correct reasoning.
No, you have been talking about theories DEEP in formal logic. You
can't talk about the "errors" in those theories without being in
formal logic.
IF you think you can somehow talk about the foundations, while
working in the penthouse, you have just confirmed that you do not
understand how ANY form of logic works.
PERIOD.
The other way to say it is that your "Correct Reasoning"
diverges from the accepted and proven system of Formal Logic.
It is correct reasoning in the absolute sense that I refer to.
If anyone has the opinion that arithmetic does not exist they are >>>>>>> incorrect in the absolute sense of the word: "incorrect".
IF you reject the logic that a theory is based on, you need to
reject the logic system, NOT the theory.
You are just showing that you have wasted your LIFE because you
don't understand how to work logic.
Because you are a learned-by-rote person you make sure to never >>>>>>>>> examine
whether or not any aspect of math diverges from correct
reasoning, you
simply assume that math is the gospel even when it contradicts >>>>>>>>> itself.
Nope, I know that with logic, if you follow the rules, you will >>>>>>>> get the correct answer by the rules.
If you break the rules, you have no idea where you will go.
In other words you never ever spend any time on making sure that >>>>>>> these
rules fit together coherently.
The rules work together just fine.
YOU don't like some of the results, but they work just fine for
most of the field.
You are just PROVING that you have no idea how to actually discuss
a new foundation for logic, likely because you are incapable of
actually coming up with a consistent basis for working logic.
Meaningless gobbledygook until you actually define what you mean
As I have told you before, if you want to see what your "Correct
Reasoning" can do as a replacement logic system, you need to
start at the
BEGINNING, and see what it gets.
The foundation of correct reasoning is that the entire body of
analytical truth is a set of semantic tautologies.
This means that all correct inference always requires determining >>>>>>> the
semantic consequence of expressions of language. This semantic
consequence can be specified syntactically, and indeed must be
represented syntactically to be computable
and spell out the actual rules that need to be followed.
Note, "Computability" is actually a fairly late-in-the-process
concept. You first need to show that your logic can actually do
something useful
To just try to change things at the end is just PROOF that your >>>>>>>> "Correct Reasoning" has to not be based on any real principles >>>>>>>> of logic.
Since it is clear that you want to change some of the basics of >>>>>>>> how logic works, you are not allowed to just use ANY of
classical logic until you actually show what part of it is still >>>>>>>> usable under your system and what changes happen.
Whenever an expression of language is derived as the semantic
consequence of other expressions of language we have valid
inference.
And, are you using the "classical" definition of "semantic" (which
makes this sentence somewhat circular) or do you mean something
based on the concept you sometimes use of "the meaning of the
words".
*Principle of explosion*
An alternate argument for the principle stems from model theory. A sentence P is a semantic consequence of a set of sentences Γ only if every model of Γ is a model of P. However, there is no model of the contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P) that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
https://en.wikipedia.org/wiki/Principle_of_explosion
Vacuous truth does not count as truth.
All variables must be quantified
"all cell phones in the room are turned off" will be true when no
cell phones are in the room.
∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
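The disputed vacuous-truth behavior is easy to exhibit in any language with quantifier-like operations over collections. A minimal Python sketch (the `cell_phones_in_room` list is a hypothetical illustration, not anything from the thread):

```python
# "All cell phones in the room are turned off" is vacuously true
# when the room contains no phones: all() over an empty collection
# is True. The existential reading (any) fails instead.
cell_phones_in_room = []  # hypothetical: no phones present

universal = all(phone["off"] for phone in cell_phones_in_room)
existential = any(phone["off"] for phone in cell_phones_in_room)

print(universal)    # True  -- vacuous truth
print(existential)  # False
```

This is exactly the split the two formulas above are arguing over: the universal reading is vacuously satisfied, the existential reading is not.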
The semantic consequence must be specified syntactically so that it can be computed or examined in formal systems.
Just like in sound deductive inference when the premises are
known to be
true, and the reasoning valid (a semantic consequence) then the
conclusion is necessarily true.
So, what is the difference in your system from classical Formal
Logic?
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE // POE abolished
(P ∧ ¬P) ⊨□ FALSE // POE abolished
⇒ and → symbols are replaced by ⊨□
The sets that the variables range over must be defined
all variables must be quantified
// x is a semantic consequence of its premises in L
Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)
// x is a semantic consequence of the axioms of L
True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)
*The above is all that I know right now*
The most important aspect of the tiny little foundation of a formal system that I already specified immediately above is self-evident: True(L,X) can be defined and incompleteness is impossible.
I don't think your system is anywhere near established far enough for you to say that.
Try and show exceptions to this rule and I will fill in any gaps that you find.
G asserts its own unprovability in F
The reason that G cannot be proved in F is that this requires a
sequence of inference steps in F that proves no such sequence
of inference steps exists in F.
∄ sequence_of_inference_steps ⊆ F
So, you don't understand the difference between the INFINITE set of inference steps that show that G is True, and the FINITE number of steps that need to be shown to make G provable.
The experts seem to believe that unless a proof can be transformed into
a finite sequence of steps it is no actual proof at all. Try and cite a
source that says otherwise.
Why? Because I agree with that. A Proof needs to be done in a finite number of steps.
The question is why the infinite number of steps in F that makes G true
don't count for making it true.
Yes, you can't write that out to KNOW it to be true, but that is the difference between knowledge and fact.
We can imagine an Oracle machine that can complete these proofs in the
same sort of way that we can imagine a magic fairy that waves a magic
wand.
You are just showing you don't understand what you are talking about and are just spouting word (or symbol) salad.
You are proving you are an IDIOT.
I am seeing these things at a deeper philosophical level than you are.
I know that is hard to believe.
But not according to the rules of the system you are talking about.
You don't get to change the rules on a system.
You are so sure that I must be wrong that you don't bother to understand
what I am saying.
No, I understand what you are saying and see where you are WRONG.
It seems that the time has come for me to spend the little time that it takes to understand the technical details of Gödel's proof.
I am estimating that I have a very good understanding of the preface to the proof, and the SEP article should provide this.
https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf
On 4/25/23 12:03 AM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 11:25 AM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of
self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F that prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
So, you don't understand how to prove that something is "True in F" by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox
expressed in his theory is true in his meta-theory.
No, he didn't, he showed that *IF* a certain assumption was true, then the Liar's paradox would be true, thus that assumption can not be true.
When one level of indirect reference is applied to the Liar Paradox it becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
Your
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference steps in F that prove that they themselves do not exist in F.
Right, you can't prove, in F, that G is true, but you can prove, in Meta-F, that G is true in F, and that G is unprovable in F, which is what is required.
When G asserts its own unprovability in F it cannot be proved in F because this requires a sequence of inference steps in F that prove that they themselves do not exist.
Meta-F merely removes the self-contradiction the same way Tarski's Meta-theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of logic, or truth.
It may seem that way to someone that learns things by rote and mistakes this for actual understanding of exactly how all of the elements of a proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have intentionally hamstrung yourself to avoid being "polluted" by "rote-learning" so you are just self-inflicted ignorant.
If you won't even try to learn the basics, you have just condemned yourself into being a pathological liar because you just don't know any better.
I do at this point need to understand model theory very thoroughly.
Learning the details of these things could have boxed me into a corner prior to my philosophical investigation of seeing how the key elements fail to fit together coherently.
It is true that the set of analytical truth is simply a set of semantic tautologies. It is true that formal systems grounded in this foundation cannot be incomplete nor have any expressions of language that are undecidable. Now that I have this foundation I have a way to see exactly how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G cannot be proved in F. G cannot be proved in F for the same pathological self-reference(Olcott 2004) reason that the Liar Paradox cannot be proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand classic argument forms.
It is not that I do not understand, it is that I can directly see
where and how formal mathematical systems diverge from correct
reasoning.
But since you are discussing Formal Logic, you need to use the
rules of Formal logic.
The other way to say it is that your "Correct Reasoning" diverges
from the accepted and proven system of Formal Logic.
In classical logic, intuitionistic logic and similar logical
systems, the principle of explosion
ex falso [sequitur] quodlibet,
'from falsehood, anything [follows]'
ex contradictione [sequitur] quodlibet,
'from contradiction, anything [follows]')
Right, if a logic system can prove a contradiction, then out of that contradiction you can prove anything
https://en.wikipedia.org/wiki/Principle_of_explosion
∴ FALSE ⊢ Donald Trump is the Christ
∴ FALSE ⊢ Donald Trump is Satan
Which isn't what was being talked about.
You clearly don't understand how the principle of explosion works,
which isn't surprising considering how many misconceptions you have
about how logic works.
ex falso [sequitur] quodlibet,'from falsehood, anything [follows]'
∴ FALSE ⊢ Donald Trump is the Christ
But you are using the wrong symbol
False -> Donald Trump is the Christ
Is the statement that this is implying.
You seem to have a confusion between the implication operator and the
proves operator.
Right now, I would say you are too ignorant of the basics of logic to be able to explain, even in basic terms, how it works; you have shown yourself to be that stupid.
*Correction abolishing the POE nonsense*
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
So, FULLY define what you mean by that.
The two logic symbols already say semantic necessity, model theory may
have screwed up the idea of semantics by allowing vacuous truth.
I must become a master expert of at least basic model theory.
So, you don't understand what it means to DEFINE something.
I guess your theory is dead then.
By your examples, your logical necessity operator can only establish a falsehood. Seems about right for the arguments you have been making.
Because you are a learned-by-rote person you make sure to never examine whether or not any aspect of math diverges from correct reasoning, you simply assume that math is the gospel even when it contradicts itself.
Nope, I know that with logic, if you follow the rules, you will get
the correct answer by the rules.
Then you must agree that Trump is the Christ and Trump is Satan; both of those were derived from correct logic.
If you break the rules, you have no idea where you will go.
As I have told you before, if you want to see what your "Correct Reasoning" can do as a replacement logic system, you need to start at the BEGINNING, and see what it gets.
I would be happy to talk this through with you.
The beginning is that for valid inference, an expression X of language L must be a semantic consequence of its premises in L
And what do you mean by "semantic"
What does meaning mean?
The premise that the Moon is made from green cheese ⊨□ The Moon is made from cheese.
All of the conventional logic symbols retain their original meaning.
Variables are quantified and of a specific type.
Meaning postulates can axiomatise meaning.
So, you have no "Formal Logic" since you are allowing the addition of
new "axioms" based on "meaning" (which you admit you can't define).
The connection between elements of the proof must be at least as good as
relevance logic.
So, your logic system is WEAKER than standard logic. Have you gone back to the formal proofs that establish fields like Computability Theory and see what still remains after the requirement of relevance logic?
because, conventional logic defines semantic consequence as the
conclusion must be true if the premise is true.
You seem to mean something different, but haven't explained what you mean by that.
You never heard of ordinary sound deductive inference?
Yes, I have, but you aren't using it. For instance, you allow a logical conclusion to be made from a false premise.
You seem to want to remove
parts of the logic, but can't actually define what you mean.
For sound inference, an expression X of language L must be a semantic consequence of the axioms of L.
For formal systems such as FOL the semantics is mostly the meaning
of the logic symbols.
These two logic symbols are abolished ⇒ → and replaced with this:
Semantic Necessity operator: ⊨□
Why do you need to abolish those symbols?
They seem to lead to the principle of explosion.
No, they are often used in the proof, but the explosion comes from the mere ability to assert simple logic.
Allowing the following sort of logic is enough:
It is True that A
Therefore it is True that A | B
and
It is True that A | B
It is False that A
Therefore B must be True.
You can build the principle of explosion from simple logic like that, so unless you eliminate the "and" and the "or" predicate, you get the
principle of explosion.
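The two-step construction just described (disjunction introduction, then disjunctive syllogism) can be checked mechanically. A minimal sketch in Python, using brute-force truth tables to stand in for semantic entailment (the helper `entails` is illustrative, not any standard library function):

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force semantic entailment over propositional valuations:
    every valuation satisfying all premises must satisfy the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Disjunction introduction (A, therefore A | B) plus disjunctive
# syllogism (A | B and not-A, therefore B) yield explosion: the
# contradictory set {A, not A} entails an arbitrary, unrelated B,
# because no valuation satisfies both premises.
premises = [lambda e: e["A"], lambda e: not e["A"]]
conclusion = lambda e: e["B"]
print(entails(premises, conclusion, ["A", "B"]))  # True: explosion
```

Note this is the same vacuous-satisfaction phenomenon as in the Wikipedia model-theory argument quoted earlier: no valuation models the contradictory premises, so the entailment check never finds a counterexample.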
You do understand that the statement that A -> B is equivalent to the asserting of (~A | B) is ALWAYS TRUE (which might be part of your problem, as you don't seem to understand the categorical meaning of ALL and NO), so either you need to outlaw the negation operator, or the or operator to do this.
Again, what does "Semantic Necessity" operator mean?
A ⊨□ B the meaning of B is an aspect of the meaning of A.
So, you seem to be saying that you will not be able to prove the Pythagorean theorem, since the conclusion doesn't have a "meaning" that is an aspect of the "meaning" of the conditions.
Note, one issue with your use of symbols, so many of the symbols can have slightly different meanings based on the context and system you are working in.
I don't see this; can you provide examples?
I am stipulating standard meanings.
WHICH standard meaning.
That is your problem, you don't seem to know enough to understand that
there are shades of meaning in things.
To just try to change things at the end is just PROOF that your
"Correct Reasoning" has to not be based on any real principles of
logic.
No, logic must be based on correct reasoning; any logic that proves Donald Trump is the Christ is incorrect reasoning, thus the POE is abolished.
You CAN'T abolish the Principle of Explosion unless you greatly
restrict the power of your logic.
My two axioms abolish it neatly. All that I am getting rid of is
incompleteness and undecidability and I am gaining a universal True(L,x)
predicate.
Nope, you don't understand how the Principle of Explosion works.
No AXIOMS can affect it,
as it comes out of a couple of simple logical
rules.
These two logic symbols are abolished ⇒ → and replaced with this:
Semantic Necessity operator: ⊨□
Explosions have been abolished
Nope.
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
Again DEFINE this operator, and the words you use to define it.
The semantic meaning of B is necessitated by the semantic meaning of A.
If I have a dog then I have an animal because a dog is an animal.
So you seem to be limited to categorical logic only. As I have pointed
out, this means you can't prove the Pythagorean theorem, since the
conclusion isn't "semantically" related to the premises.
Since it is clear that you want to change some of the basics of how
logic works, you are not allowed to just use ANY of classical logic
until you actually show what part of it is still usable under your
system and what changes happen.
Yes lets apply my ideas to FOL. I have already sketched out many
details.
Go ahead, try to fully define your ideas.
Remember, until you get to supporting the Higher Order Logics, you
can't get to the incompleteness, as that has been only established
for systems
I have always been talking about HOL in terms of MTT
Which doesn't work.
with second order logic, which is also needed for the needed
properties of the whole numbers. First Order Peano Arithmetic might
be complete, but can't be proved (within itself) to be consistent.
Second Order Peano Arithmetic (which adds the principle of
Induction) IS incomplete as it supports enough of the natural numbers
to support Godel's proof.
Considering your current status, I would start working hard on that
right away, as with your current reputation, once you go, NO ONE is
going to want to look at your ideas, because you have done such a
good job showing that you don't understand how things work.
I haven't been able to get out of you exactly what you want to do
with your "Correct Reasoning", and until you show a heart to
actually try to do something constructive with it, and not just use
it as an excuse for bad logic, I don't care what it might be able
to do, because, frankly, I don't think you have the intellect to
come up with something like that.
I showed how the POE is easily abolished.
Nope.
If axioms stipulate that explosion cannot occur then it cannot occur.
Nope. Such an axiom just makes your system inconsistent and exploded.
Remember, you never NEED to use a given axiom, so adding an axiom can't
keep you from showing something.
I showed how Provable(L,x) and True(L,x) are defined.
Not clearly. For instance, a statement x is provable or True in a SYSTEM/THEORY (depending on your terminology) and NOT dependent on some other statement in the system, as your definition seemed to imply. You don't "Prove" something based on a statement, but in a System/Theory.
F ⊢ A is used to express (in the meta-level) that A
is derivable in F, that is, that there is a proof of
A in F, or, in other words, that A is a theorem of F.
https://plato.stanford.edu/entries/goedel-incompleteness/
Right, but you have shown examples where you "L" above was a STATEMENT,
not a FIELD/THEORY.
You also don't understand that the difference between Provable and True
is that Provable requires a finite series of steps, but True can be
satisfied by an infinite series of steps.
But go ahead and prove me wrong, write an actual paper on the
basics of your "Correct Reasoning" and show how it actually works,
and compare it to "Classical Logic" and show what is different.
Then maybe you can start to work on showing it can actually do
something useful.
I need a dialogue to vet aspects of my ideas.
The key thing that I have not yet filled in is how to specify the
semantics of every FOL expression.
This semantics seems fully specified:
∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊨□ (n+1 > m))
Nope, you need to actually FULLY DEFINE what you mean by your
symbols, you can't just rely on referring to classical meaning since
you clearly disagree with some of the classical meanings.
in the above case I can switch to conventional symbols without losing
anything ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊢ (n+1 > m))
Except that is a domain error, as the ⊢ operator needs a field/theory as its left operand.
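The quantified schema ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊨□ (n+1 > m)) discussed above can at least be spot-checked mechanically over a finite sample; a minimal sketch (a sanity check over a sample range, not a proof):

```python
# Finite spot-check of (n > m) entailing (n + 1 > m)
# over a sample of naturals; a sanity check, not a proof.
N = 50
holds = all(n + 1 > m
            for n in range(N)
            for m in range(N)
            if n > m)
print(holds)  # True
```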
Implication does a poor job of if-then
p---q---(p ⇒ q)---(if p then q)
T---T------T------------T
T---F------F------------F
F---T------T------------undefined
F---F------T------------undefined
Not undefined at all.
From falsity, anything follows.
The statement "If false then B" makes no assertion at all for this case. Think of your programming languages: a false condition in an if statement ignores the conditional statements after it.
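The analogy to programming can be made concrete. A minimal Python sketch of the classical truth table, where `implies` is an illustrative helper for material implication (not part of the standard library):

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is defined as (not p) or q."""
    return (not p) or q

# The full truth table; note the two false-antecedent rows are True,
# just as a false condition in an if-statement asserts nothing.
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:5} -> {q!s:5} = {implies(p, q)}")
```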
You seem to have some disjoint ideas, but seem to be unable to come
up with a cohesive whole. You use words that you don't seem to be
able to actually fully define.
Since you are trying to reject some of the basics of classical logic,
you need to FULLY define how your logic works. Name ALL the basic
operation that you allow. Do you allow "Not", "Or", "And", "Equals",
etc. What are your rules for logical inference. How do you ACTUALLY
prove a statement given a set of "Truthmakers".
All of the details that I provided are all of the detail that I know
right now.
Which is your problem. You seem incapable of understanding how the changes you want to make affect the whole system, because you don't know it well enough.
Remember, if you want to reject classical logic, you can't use it to
define your system.
I can take it as a basis and add and subtract things from it
And if you change it at all, you need to go back and see what all the
effects are. You can't assume the tree remains the same if you change
its roots.
On 4/25/2023 6:56 AM, Richard Damon wrote:
On 4/25/23 12:03 AM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 11:25 AM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of
self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F that prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
So, you don't understand how to prove that something is "True in F" by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox
expressed in his theory is true in his meta-theory.
No, he didn't, he showed that *IF* a certain assumption was true, then the Liar's paradox would be true, thus that assumption can not be true.
When one level of indirect reference is applied to the Liar Paradox it becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
Your
Right, you can't prove, in F, that G is true, but you can prove, in Meta-F, that G is true in F, and that G is unprovable in F, which is what is required.
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference steps in F that prove that they themselves do not exist in F.
When G asserts its own unprovability in F it cannot be proved in F because this requires a sequence of inference steps in F that prove that they themselves do not exist.
Meta-F merely removes the self-contradiction the same way Tarski's Meta-theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of logic, or truth.
It may seem that way to someone that learns things by rote and mistakes this for actual understanding of exactly how all of the elements of a proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have intentionally hamstrung yourself to avoid being "polluted" by "rote-learning" so you are just self-inflicted ignorant.
If you won't even try to learn the basics, you have just condemned yourself into being a pathological liar because you just don't know any better.
I do at this point need to understand model theory very thoroughly.
Learning the details of these things could have boxed me into a corner prior to my philosophical investigation of seeing how the key elements fail to fit together coherently.
It is true that the set of analytical truth is simply a set of semantic tautologies. It is true that formal systems grounded in this foundation cannot be incomplete nor have any expressions of language that are undecidable. Now that I have this foundation I have a way to see exactly how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G cannot be proved in F. G cannot be proved in F for the same pathological self-reference(Olcott 2004) reason that the Liar Paradox cannot be proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand classic argument forms.
It is not that I do not understand, it is that I can directly see >>>>>>> where and how formal mathematical systems diverge from correct
reasoning.
But since you are discussing Formal Logic, you need to use the
rules of Formal logic.
The other way to say it is that your "Correct Reasoning" diverges
from the accepted and proven system of Formal Logic.
In classical logic, intuitionistic logic and similar logical
systems, the principle of explosion
ex falso [sequitur] quodlibet,
'from falsehood, anything [follows]'
ex contradictione [sequitur] quodlibet,
'from contradiction, anything [follows]')
Right, if a logic system can prove a contradiction, then out of that contradiction you can prove anything
https://en.wikipedia.org/wiki/Principle_of_explosion
∴ FALSE ⊢ Donald Trump is the Christ
∴ FALSE ⊢ Donald Trump is Satan
Which isn't what was being talked about.
You clearly don't understand how the principle of explosion works,
which isn't surprising considering how many misconceptions you have
about how logic works.
ex falso [sequitur] quodlibet,'from falsehood, anything [follows]'
∴ FALSE ⊢ Donald Trump is the Christ
But you are using the wrong symbol
'from falsehood, anything [follows]'
FALSE Proves that Donald Trump is the Christ
It is jack ass nonsense like this that proves the principle of explosion is nothing but a kludge.
False -> Donald Trump is the Christ
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE // POE abolished
(P ∧ ¬P) ⊨□ FALSE // POE abolished
From False only False follows.
From Contradiction only False follows.
Is the statement that this is implying.
I reject implication and replace it with this
Semantic Necessity operator: ⊨□
Or this Archimedes Plutonium's:
If--> then
T --> T = T
T --> F = F
F --> T = U (unknown or uncertain)
F --> F = U (unknown or uncertain)
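The quoted three-valued "if--> then" table can be sketched directly. This models the table exactly as stated above; it is not a standard logic, and the helper name `ap_implies` is hypothetical:

```python
# The quoted three-valued "if --> then" table, taken as stated:
# a false antecedent yields U (unknown) rather than True.
T, F, U = "T", "F", "U"

def ap_implies(p, q):
    if p == T:
        return T if q == T else F
    return U  # false antecedent: unknown/uncertain

for p in (T, F):
    for q in (T, F):
        print(f"{p} --> {q} = {ap_implies(p, q)}")
```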
You seem to have a confusion between the implication operator and the
proves operator.
I meant the stronger meaning. The best thing to use might be:
Archimedes Plutonium's: If--> then (see above).
The whole idea is to formalize the notion of correct reasoning and use
this model to correct the issues with formal logic.
Right now, I would say you are too ignorant of the basics of logic to be able to explain, even in basic terms, how it works; you have shown yourself to be that stupid.
*Correction abolishing the POE nonsense*
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
So, FULLY define what you mean by that.
The two logic symbols already say semantic necessity, model theory may
have screwed up the idea of semantics by allowing vacuous truth.
I must become a master expert of at least basic model theory.
So, you don't understand what it means to DEFINE something.
I guess your theory is dead then.
Vacuous truth is eliminated by requiring every variable to be quantified.
By your examples, your logical necessity operator can only establish a falsehood. Seems about right for the arguments you have been making.
Try and explain what you mean by that.
The big picture of what I am doing is defining the foundation of the formalization of correct reasoning.
I am doing this on the basis of existing systems, then adding, removing
or changing things as needed to conform the system to correct reasoning.
Because you are a learned-by-rote person you make sure to never
examine
whether or not any aspect of math diverges from correct
reasoning, you
simply assume that math is the gospel even when it contradicts
itself.
Nope, I know that with logic, if you follow the rules, you will
get the correct answer by the rules.
Then you must agree that Trump is the Christ and Trump is Satan; both of those were derived from correct logic.
If you break the rules, you have no idea where you will go.
As I have told you before, if you want to see what your "Correct Reasoning" can do as a replacement logic system, you need to start at the BEGINNING, and see what it gets.
I would be happy to talk this through with you.
The beginning is that for valid inference, an expression X of language L must be a semantic consequence of its premises in L
And what do you mean by "semantic"
What does meaning mean?
The premise that the Moon is made from green cheese ⊨□ The Moon is made from cheese.
All of the conventional logic symbols retain their original meaning.
Variables are quantified and of a specific type.
Meaning postulates can axiomatise meaning.
So, you have no "Formal Logic" since you are allowing the addition of
new "axioms" based on "meaning" (which you admit you can't define).
When I am redefining current systems so that they conform to correct reasoning I make minimal changes to existing notions.
The connection between elements of the proof must be at least as good as relevance logic.
So, your logic system is WEAKER than standard logic. Have you gone back to the formal proofs that establish fields like Computability Theory and see what still remains after the requirement of relevance logic?
No it is not and you cannot show that it is.
because, conventional logic defines semantic consequence as the
conclusion must be true if the premise is true.
You seem to mean something different, but haven't explained what you mean by that.
You never heard of ordinary sound deductive inference?
Yes, I have, but you aren't using it. For instance, you allow a logical conclusion to be made from a false premise.
False does Derive False, Please try to back up all of your assertions
with reasoning. For statements like the one above you need a time
stamped quote of exactly what I said.
You seem to want to remove parts of the logic, but can't actually
define what you mean.
I have defined many key aspects many times: True/False/Non Sequitur
abolishes incompleteness and undefinability while maintaining consistency.
For sound inference, an expression X of language L must be a semantic consequence of the axioms of L.
For formal systems such as FOL the semantics is mostly the meaning
of the logic symbols.
These two logic symbols are abolished ⇒ → and replaced with this:
Semantic Necessity operator: ⊨□
Why do you need to abolish those symbols?
They seem to lead to the principle of explosion.
No, they are often used in the proof, but the explosion comes from the mere ability to assert simple logic.
Allowing the following sort of logic is enough:
It is True that A
Therefore it is True that A | B
and
It is True that A | B
It is False that A
Therefore B must be True.
You can build the principle of explosion from simple logic like that,
so unless you eliminate the "and" and the "or" predicate, you get the
principle of explosion.
Show me how and I will point out how it is fixed.
You do understand that the statement that A -> B is equivalent to the asserting of (~A | B) is ALWAYS TRUE (which might be part of your problem, as you don't seem to understand the categorical meaning of ALL and NO), so either you need to outlaw the negation operator, or the or operator to do this.
Again, what does "Semantic Necessity" operator mean?
A ⊨□ B the meaning of B is an aspect of the meaning of A.
So, you seem to be saying that you will not be able to prove the Pythagorean theorem, since the conclusion doesn't have a "meaning" that is an aspect of the "meaning" of the conditions.
It has plenty of geometric meaning.
Note, one issue with your use of symbols, so many of the symbols can have slightly different meanings based on the context and system you are working in.
I don't see this; can you provide examples?
I am stipulating standard meanings.
WHICH standard meaning.
All of the logic symbols have their standard meaning.
That is your problem, you don't seem to know enough to understand that
there are shades of meaning in things.
To just try to change things at the end is just PROOF that your
"Correct Reasoning" has to not be based on any real principles of
logic.
No, logic must be based on correct reasoning; any logic that proves Donald Trump is the Christ is incorrect reasoning, thus the POE is abolished.
You CAN'T abolish the Principle of Explosion unless you greatly
restrict the power of your logic.
My two axioms abolish it neatly. All that I am getting rid of is incompleteness and undecidability and I am gaining a universal True(L,x) predicate.
Nope, you don't understand how the Principle of Explosion works.
These are stipulated
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
No AXIOMS can affect it,
That sounds ridiculous to me. Can you show what you mean?
as it comes out of a couple of simple logical rules.
These two logic symbols are abolished: ⇒ and →, and are replaced with this:
Semantic Necessity operator: ⊨□
Explosions have been abolished
Nope.
FALSE ⊨□ FALSE
(P ∧ ¬P) ⊨□ FALSE
Again DEFINE this operator, and the words you use to define it.
The semantic meaning of B is necessitated by the semantic meaning of A.
If I have a dog then I have an animal because a dog is an animal.
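On the (assumed) categorical reading of ⊨□ suggested by the dog/animal example, the operator behaves like set inclusion between the extensions of the two terms. A hypothetical sketch; the sets and the helper name are illustrations only:

```python
# Hypothetical reading of "A |=□ B" as category subsumption:
# the extension of A is contained in the extension of B.
animals = {"dog", "cat", "sparrow"}
dogs = {"dog"}

def semantically_necessitates(a, b):
    """A |=□ B under the assumed categorical reading: every A is a B."""
    return a <= b   # set inclusion

assert semantically_necessitates(dogs, animals)       # dog -> animal
assert not semantically_necessitates(animals, dogs)   # animal -/-> dog
```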
So you seem to be limited to categorical logic only. As I have pointed
out, this means you can't prove the Pythagorean theorem, since the
conclusion isn't "semantically" related to the premises.
I am simply using that as a concrete starting point to show one example
of how it works.
Since it is clear that you want to change some of the basics of
how logic works, you are not allowed to just use ANY of classical
logic until you actually show what part of it is still usable
under your system and what changes happen.
Yes, let's apply my ideas to FOL. I have already sketched out many
details.
Go ahead, try to fully define your ideas.
Remember, until you get to supporting the Higher Order Logics, you
can't get to the incompleteness, as that has been only established
for systems
I have always been talking about HOL in terms of MTT
Which doesn't work.
MTT does work. The earlier version translated even very complex logic expressions into the equivalent directed graph.
I think that the current version only does a parse tree.
On 4/26/23 2:07 AM, olcott wrote:
On 4/25/2023 6:56 AM, Richard Damon wrote:
On 4/25/23 12:03 AM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 11:25 AM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that claim
is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F that
prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true but
not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
So, you don't understand how to prove that something is "True in F" by
doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox expressed in
his theory is true in his meta-theory.
No, he didn't, he showed that *IF* a certain assumption was true, then
the Liar's paradox would be true, thus that assumption can not be true.
When one level of indirect reference is applied to the Liar Paradox it
becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
Your
Right, you can't prove, in F, that G is true, but you can
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference
steps in F that prove that they themselves do not exist in F.
prove, in Meta-F, that G is true in F, and that G is unprovable in F,
which is what is required.
When G asserts its own unprovability in F it cannot be proved in F
because this requires a sequence of inference steps in F that prove that
they themselves do not exist.
Meta-F merely removes the self-contradiction the same way Tarski's
Meta-theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of logic, or
truth.
It may seem that way to someone that learns things by rote and mistakes
this for actual understanding of exactly how all of the elements of a
proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have
intentionally hamstrung yourself to avoid being "polluted" by
"rote-learning", so you are just self-inflicted ignorant.
If you won't even try to learn the basics, you have just condemned
yourself into being a pathological liar because you just don't know any
better.
I do at this point need to understand model theory very thoroughly.
Learning the details of these things could have boxed me into a corner
prior to my philosophical investigation of seeing how the key elements
fail to fit together coherently.
It is true that the set of analytical truth is simply a set of semantic
tautologies. It is true that formal systems grounded in this foundation
cannot be incomplete nor have any expressions of language that are
undecidable. Now that I have this foundation I have a way to see exactly
how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G cannot be
proved in F. G cannot be proved in F for the same pathological
self-reference(Olcott 2004) reason that the Liar Paradox cannot be
proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand classic
argument forms.
It is not that I do not understand, it is that I can directly see where
and how formal mathematical systems diverge from correct reasoning.
But since you are discussing Formal Logic, you need to use the rules of
Formal logic.
The other way to say it is that your "Correct Reasoning" diverges from
the accepted and proven system of Formal Logic.
In classical logic, intuitionistic logic and similar logical systems,
the principle of explosion
(ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]';
ex contradictione [sequitur] quodlibet, 'from contradiction, anything
[follows]')
Right, if a logic system can prove a contradiction, then out of that
contradiction you can prove anything
https://en.wikipedia.org/wiki/Principle_of_explosion
∴ FALSE ⊢ Donald Trump is the Christ
∴ FALSE ⊢ Donald Trump is Satan
Which isn't what was being talked about.
You clearly don't understand how the principle of explosion works, which
isn't surprising considering how many misconceptions you have about how
logic works.
ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]'
∴ FALSE ⊢ Donald Trump is the Christ
But you are using the wrong symbol
'from falsehood, anything [follows]'
FALSE Proves that Donald Trump is the Christ
That isn't what the statement actually means, so you are just stupid.
It is jackass nonsense like this that proves the principle of explosion
is nothing but a kludge.
Right, false doesn't PROVE anything, but implies anything,
On 4/26/23 12:38 AM, olcott wrote:
On 4/25/2023 6:56 AM, Richard Damon wrote:
On 4/24/23 11:28 PM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 12:13 PM, olcott wrote:
On 4/24/2023 10:58 AM, olcott wrote:
On 4/22/2023 7:27 PM, Richard Damon wrote:
∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ ∄sequence_of_inference_steps ⊆ F)
On 4/22/23 7:57 PM, olcott wrote:
But since you are discussing Formal Logic, you need to use the >>>>>>>>>> rules of Formal logic.
I have never been talking about formal logic. I have always been talking
about the philosophical foundations of correct reasoning.
No, you have been talking about theories DEEP in formal logic.
You can't talk about the "errors" in those theories without being
in formal logic.
If you think you can somehow talk about the foundations while working in
the penthouse, you have just confirmed that you do not understand how
ANY form of logic works.
PERIOD.
The other way to say it is that your "Correct Reasoning" diverges from
the accepted and proven system of Formal Logic.
It is correct reasoning in the absolute sense that I refer to.
If anyone has the opinion that arithmetic does not exist they are
incorrect in the absolute sense of the word: "incorrect".
IF you reject the logic that a theory is based on, you need to reject
the logic system, NOT the theory.
You are just showing that you have wasted your LIFE because you don't
understand how to work logic.
Because you are a learned-by-rote person you make sure to never examine
whether or not any aspect of math diverges from correct reasoning; you
simply assume that math is the gospel even when it contradicts itself.
Nope, I know that with logic, if you follow the rules, you will get the
correct answer by the rules.
If you break the rules, you have no idea where you will go.
In other words you never ever spend any time on making sure that these
rules fit together coherently.
The rules work together just fine.
YOU don't like some of the results, but they work just fine for most of
the field.
You are just PROVING that you have no idea how to actually discuss a new
foundation for logic, likely because you are incapable of actually
coming up with a consistent basis for working logic.
Meaningless gobbledy-gook until you actually define what you
As I have told you before, if you want to see what your "Correct
Reasoning" can do as a replacement logic system, you need to start at
the BEGINNING, and see what it gets.
The foundation of correct reasoning is that the entire body of
analytical truth is a set of semantic tautologies.
This means that all correct inference always requires determining the
semantic consequence of expressions of language. This semantic
consequence can be specified syntactically, and indeed must be
represented syntactically to be computable.
mean and spell out the actual rules that need to be followed.
Note, "Computability" is actually a fairly late-in-the-process concept.
You first need to show that your logic can actually do something useful.
Whenever an expression of language is derived as the semantic
consequence of other expressions of language we have valid inference.
And, are you using the "classical" definition of "semantic" (which makes
this sentence somewhat circular) or do you mean something based on the
concept you sometimes use of "the meaning of the words"?
*Principle of explosion*
An alternate argument for the principle stems from model theory. A
sentence P is a semantic consequence of a set of sentences Γ only if
every model of Γ is a model of P. However, there is no model of the
contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
https://en.wikipedia.org/wiki/Principle_of_explosion
Vacuous truth does not count as truth.
All variables must be quantified.
"all cell phones in the room are turned off" will be true when no cell
phones are in the room.
∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
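The cell-phone example is easy to check concretely: the universal reading is vacuously true over an empty room, while the existential reading proposed above is false. A small sketch:

```python
# An empty room: the universal reading is vacuously true,
# the existential reading is false.
phones_in_room = []   # no cell phones present

# "All cell phones in the room are turned off" -- classical reading:
universal = all(phone["off"] for phone in phones_in_room)
# "There exists a cell phone in the room, and it is turned off":
existential = any(phone["off"] for phone in phones_in_room)

assert universal is True      # holds vacuously: nothing to check
assert existential is False   # fails: no phone exists at all
```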
The semantic consequence must be specified syntactically so that it can
be computed or examined in formal systems.
Just like in sound deductive inference, when the premises are known to
be true and the reasoning is valid (a semantic consequence), then the
conclusion is necessarily true.
So, what is the difference in your system from classical Formal Logic?
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE // POE abolished
(P ∧ ¬P) ⊨□ FALSE // POE abolished
⇒ and → symbols are replaced by ⊨□
The sets that the variables range over must be defined
all variables must be quantified
// x is a semantic consequence of its premises P in L
Provable(P,x) ≡ x ∈ L ∧ P ⊆ L ∧ (P ⊨□ x)
// x is a semantic consequence of the axioms of L
True(L,x) ≡ x ∈ L ∧ (Axioms(L) ⊨□ x)
*The above is all that I know right now*
The most important aspect of the tiny little foundation of a formal
system that I already specified immediately above is self-evident:
True(L,X) can be defined and incompleteness is impossible.
I don't think your system is anywhere near established far enough for
you to say that.
Try and show exceptions to this rule and I will fill in any gaps that
you find.
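As one attempt to probe the Provable/True definitions above for gaps, here is a toy Python model that reads ⊨□ as classical semantic consequence over a two-variable propositional language. Every name here (models, necessitates, axioms) is a hypothetical illustration, not an established system:

```python
# Toy sketch: "True(L, x)" as semantic entailment from the axioms of a
# tiny propositional language over variables P and Q.
from itertools import product

VARS = ["P", "Q"]

def models(formulas):
    """All assignments over VARS satisfying every formula in the set."""
    return [env for vals in product([False, True], repeat=len(VARS))
            for env in [dict(zip(VARS, vals))]
            if all(f(env) for f in formulas)]

def necessitates(premises, x):
    """premises |=□ x, read here as classical semantic consequence."""
    return all(x(m) for m in models(premises))

axioms = [lambda m: m["P"]]   # stipulate P as the sole axiom of L
true_in_L = lambda x: necessitates(axioms, x)

assert true_in_L(lambda m: m["P"])             # the axiom itself
assert true_in_L(lambda m: m["P"] or m["Q"])   # a consequence
assert not true_in_L(lambda m: m["Q"])         # Q is not settled by the axioms
```

Note that under this purely classical reading a contradictory premise set has no models, so `necessitates` becomes vacuously true again; matching the FALSE ⊨□ FALSE stipulation would require treating the empty model set non-classically.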
G asserts its own unprovability in F
The reason that G cannot be proved in F is that this requires a
sequence of inference steps in F that proves no such sequence
of inference steps exists in F.
So, you don't understand the difference between the INFINITE sequence of
steps that shows that G is True, and the FINITE number of steps that
need to be shown to make G provable.
The experts seem to believe that unless a proof can be transformed into
a finite sequence of steps it is no actual proof at all. Try and cite a
source that says otherwise.
Why? Because I agree with that. A Proof needs to be done in a finite
number of steps.
The question is why the infinite number of steps in F that makes G
true don't count for making it true.
Yes, you can't write that out to KNOW it to be true, but that is the
difference between knowledge and fact.
Infinite proofs are not allowed because they can't possibly ever occur.
We can imagine an oracle machine that can complete these proofs in the
same sort of way that we can imagine a magic fairy that waves a magic
wand.
You are just showing you don't understand what you are talking about and
are just spouting word (or symbol) salad.
You are proving you are an IDIOT.
I am seeing these things at a deeper philosophical level than you
are. I know that is hard to believe.
But not according to the rules of the system you are talking about.
You don't get to change the rules on a system.
YES I DO !!!
My whole purpose is to provide the *correct reasoning* foundation such
that formal systems can be defined without undecidability,
undefinability, or inconsistency.
No, to change the rules you have to go back to the beginning.
On 4/26/2023 7:07 AM, Richard Damon wrote:
"P, ¬P ⊢ Q For any statements P and Q, if P and not-P are both true,
then it logically follows that Q is true."
https://en.wikipedia.org/wiki/Principle_of_explosion#Symbolic_representation
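The cited sequent P, ¬P ⊢ Q can also be checked mechanically; a minimal Lean 4 sketch using only core definitions:

```lean
-- From P and ¬P, any Q follows (the principle of explosion).
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```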
On 4/26/23 10:41 PM, olcott wrote:
So, you don't understand what you are reading.
FALSE itself isn't proving anything.
On 4/26/23 10:47 PM, olcott wrote:
On 4/26/2023 7:07 AM, Richard Damon wrote:
On 4/26/23 12:38 AM, olcott wrote:
On 4/25/2023 6:56 AM, Richard Damon wrote:
On 4/24/23 11:28 PM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 12:13 PM, olcott wrote:
On 4/24/2023 10:58 AM, olcott wrote:
On 4/22/2023 7:27 PM, Richard Damon wrote:∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ >>>>>>>> ∄sequence_of_inference_steps ⊆ F)
On 4/22/23 7:57 PM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:No, you have been talking about theorys DEEP in formal logic. >>>>>>>>>> You can't talk about the "errors" in those theories, with
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:
No, he didn't, he showed that *IF* a certain assumption was true, then the Liar's paradox would be true, thus that assumption can not be true.
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
On 4/16/2023 6:16 AM, Richard Damon wrote:
On 4/15/23 10:54 PM, olcott wrote:
G is unprovable because it is self-contradictory, making it erroneous.
Since you don't understand the meaning of self-contradictory, that claim is erroneous.
When G asserts its own unprovability in F:
But Godel's G doesn't do that.
Any proof of G in F requires a sequence of inference steps in F that prove that they themselves do not exist in F.
It is of course impossible to prove in F that a statement is true but not provable in F.
You don't need to do the proof in F,
To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.
So, you don't understand how to prove that something is "True in F" by doing the steps in Meta-F.
I just showed you how Tarski proved that the Liar Paradox expressed in his theory is true in his meta-theory.
When one level of indirect reference is applied to the Liar Paradox it becomes actually true. There was no "if".
This sentence is not true: "This sentence is not true" <IS> TRUE.
Your
We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference steps in F that prove that they themselves do not exist in F.
Right, you can't prove, in F, that G is true, but you can prove, in Meta-F, that G is true in F, and that G is unprovable in F, which is what is required.
When G asserts its own unprovability in F it cannot be proved in F because this requires a sequence of inference steps in F that prove that they themselves do not exist.
Meta-F merely removes the self-contradiction the same way Tarski's meta-theory removed the self-contradiction.
You are just showing that your mind can't handle the basics of logic, or truth.
It may seem that way to someone that learns things by rote and mistakes this for actual understanding of exactly how all of the elements of a proof fit together coherently or fail to do so.
It sounds like you are too stupid to learn, and that you have intentionally hamstrung yourself to avoid being "polluted" by "rote-learning" so you are just self-inflicted ignorant.
If you won't even try to learn the basics, you have just condemned yourself into being a pathological liar because you just don't know any better.
I do at this point need to understand model theory very thoroughly.
Learning the details of these things could have boxed me into a corner prior to my philosophical investigation of seeing how the key elements fail to fit together coherently.
It is true that the set of analytical truth is simply a set of semantic tautologies. It is true that formal systems grounded in this foundation cannot be incomplete nor have any expressions of language that are undecidable. Now that I have this foundation I have a way to see exactly how the concepts of math diverge from correct reasoning.
You and I can see both THAT G cannot be proved in F and WHY G cannot be proved in F. G cannot be proved in F for the same pathological self-reference(Olcott 2004) reason that the Liar Paradox cannot be proved in Tarski's theory.
Which he didn't do, but you are too stupid to understand classic argument forms.
It is not that I do not understand, it is that I can directly see where and how formal mathematical systems diverge from correct reasoning.
But since you are discussing Formal Logic, you need to use the rules of Formal logic.
I have never been talking about formal logic. I have always been talking about the philosophical foundations of correct reasoning.
being in formal logic.
IF you think you can somehow talk about the foundations, while working in the penthouse, you have just confirmed that you do not understand how ANY form of logic works.
PERIOD.
The other way to say it is that your "Correct Reasoning" diverges from the accepted and proven system of Formal Logic.
It is correct reasoning in the absolute sense that I refer to.
If anyone has the opinion that arithmetic does not exist they are incorrect in the absolute sense of the word: "incorrect".
IF you reject the logic that a theory is based on, you need to reject the logic system, NOT the theory.
You are just showing that you have wasted your LIFE because you don't understand how to work logic.
Because you are a learned-by-rote person you make sure to never examine whether or not any aspect of math diverges from correct reasoning, you simply assume that math is the gospel even when it contradicts itself.
Nope, I know that with logic, if you follow the rules, you will get the correct answer by the rules.
If you break the rules, you have no idea where you will go.
In other words you never ever spend any time on making sure that these rules fit together coherently.
The rules work together just fine.
YOU don't like some of the results, but they work just fine for most of the field.
You are just PROVING that you have no idea how to actually discuss a new foundation for logic, likely because you are incapable of actually coming up with a consistent basis for working logic.
Meaningless gobbledygook until you actually define what you mean and spell out the actual rules that need to be followed.
As I have told you before, if you want to see what your "Correct Reasoning" can do as a replacement logic system, you need to start at the BEGINNING, and see what it gets.
The foundation of correct reasoning is that the entire body of analytical truth is a set of semantic tautologies.
This means that all correct inference always requires determining the semantic consequence of expressions of language. This semantic consequence can be specified syntactically, and indeed must be represented syntactically to be computable
Note, "Computability" is actually a fairly late in the process concept. You first need to show that your logic can actually do something useful
To just try to change things at the end is just PROOF that your "Correct Reasoning" has to not be based on any real principles of logic.
Since it is clear that you want to change some of the basics of how logic works, you are not allowed to just use ANY of classical logic until you actually show what part of it is still usable under your system and what changes happen.
Whenever an expression of language is derived as the semantic consequence of other expressions of language we have valid inference.
And, are you using the "classical" definition of "semantic" (which makes this sentence somewhat circular) or do you mean something based on the concept you sometimes use of "the meaning of the words".
*Principle of explosion*
An alternate argument for the principle stems from model theory. A sentence P is a semantic consequence of a set of sentences Γ only if every model of Γ is a model of P. However, there is no model of the contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P) that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
https://en.wikipedia.org/wiki/Principle_of_explosion
Vacuous truth does not count as truth.
All variables must be quantified
"all cell phones in the room are turned off" will be true when no cell phones are in the room.
∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
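The disagreement over vacuous truth can be made concrete. In the standard classical reading (which the ∃ formula above departs from), a universal claim over an empty domain is true. A short Python sketch, with a hypothetical `cell_phones_in_room` list of my own, shows both readings:

```python
# Classical semantics: a universally quantified sentence over an empty
# domain is vacuously true, because there is no counterexample.
cell_phones_in_room = []  # no phones were brought into the room

# "All cell phones in the room are turned off."
universal = all(phone["off"] for phone in cell_phones_in_room)

# The existential reading ∃cp (in_room(cp) ∧ off(cp)) instead demands
# a witness, so it is false over the empty domain.
existential = any(phone["off"] for phone in cell_phones_in_room)

print(universal)    # True  (vacuously)
print(existential)  # False (no witness)
```

Replacing the universal with an existential, as the formula above does, changes the sentence's meaning rather than repairing it.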
The semantic consequence must be specified syntactically so that it can be computed or examined in formal systems.
Just like in sound deductive inference when the premises are known to be true, and the reasoning valid (a semantic consequence) then the conclusion is necessarily true.
So, what is the difference in your system from classical
Formal Logic?
Semantic Necessity operator: ⊨□
FALSE ⊨□ FALSE // POE abolished
(P ∧ ¬P) ⊨□ FALSE // POE abolished
⇒ and → symbols are replaced by ⊨□
The sets that the variables range over must be defined
all variables must be quantified
// x is a semantic consequence of its premises in L
Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)
// x is a semantic consequence of the axioms of L
True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)
*The above is all that I know right now*
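Since ⊨□ is not given a formal definition above, the closest standard notion is ordinary semantic consequence: Γ ⊨ x iff every model of Γ is a model of x. A small Python sketch of that classical reading (the function and formula names are my own illustration) also shows why explosion holds vacuously under it, which is the point the model-theory quote above makes:

```python
from itertools import product

# Formulas are functions from a valuation (dict of atom -> bool) to bool.
def entails(premises, conclusion, atoms):
    """Classical semantic consequence: every model of the premises
    is also a model of the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in premises) and not conclusion(v):
            return False  # countermodel found
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
not_p = lambda v: not v["p"]

assert entails([p], p, ["p"])              # trivial consequence
assert not entails([p], q, ["p", "q"])     # q does not follow from p
assert entails([p, not_p], q, ["p", "q"])  # explosion holds vacuously
```

A system that abolishes the last line would need a different, explicitly stated definition of consequence.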
The most important aspect of the tiny little foundation of a formal system that I already specified immediately above is self-evident:
True(L,X) can be defined and incompleteness is impossible.
I don't think your system is anywhere near established far enough for you to say that.
Try and show exceptions to this rule and I will fill in any gaps that you find.
G asserts its own unprovability in F
The reason that G cannot be proved in F is that this requires a sequence of inference steps in F that proves no such sequence of inference steps exists in F.
So, you don't understand the difference between the INFINITE set of sequence steps that show that G is True, and the FINITE number of steps that need to be shown to make G provable.
The experts seem to believe that unless a proof can be transformed into a finite sequence of steps it is no actual proof at all. Try and cite a source that says otherwise.
Why? Because I agree with that. A Proof needs to be done in a finite number of steps.
The question is why the infinite number of steps in F that makes G true don't count for making it true.
Yes, you can't write that out to KNOW it to be true, but that is the difference between knowledge and fact.
Infinite proofs are not allowed: because they can't possibly ever occur.
We can imagine an Oracle machine that can complete these proofs in the same sort of way that we can imagine a magic fairy that waves a magic wand.
You are just showing you don't understand what you are talking about and are just spouting word (or symbol) salad.
You are proving you are an IDIOT.
I am seeing these things at a deeper philosophical level than you
are. I know that is hard to believe.
But not according to the rules of the system you are talking about.
You don't get to change the rules on a system.
YES I DO !!!
My whole purpose is to provide the *correct reasoning* foundation such that formal systems can be defined without undecidability, undefinability, or inconsistency.
No, to change the rules you have to go back to the beginning.
Non_Sequitur(G) ↔ ∃φ ((T ⊬ φ) ∧ (T ⊬ ¬φ))
No, Non_Sequitur: Most of what Peter Olcott says.
You still can't change the rules without going back to the beginning,
The standard definition of mathematical incompleteness:
Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))
Requires formal systems to do the logically impossible:
to prove self-contradictory expressions of language.
So formal systems are "incomplete" in the same sense that we determine
that a baker that cannot bake a proper angel food cake using ordinary
red house bricks as the only ingredient lacks sufficient baking skill.
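The quoted definition of incompleteness can be illustrated with a toy example that involves no self-contradictory sentence at all. In this propositional sketch (my own illustration, with semantic consequence standing in for ⊢), the theory T = {p} over atoms p and q decides neither q nor ¬q, so it is incomplete by the definition even though q is an ordinary contingent sentence:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical semantic consequence over the given atoms."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in premises) and not conclusion(v):
            return False
    return True

# Theory T = {p}; the sentence q is contingent, not self-contradictory.
T = [lambda v: v["p"]]
q = lambda v: v["q"]
not_q = lambda v: not v["q"]

assert not entails(T, q, ["p", "q"])      # T does not settle q
assert not entails(T, not_q, ["p", "q"])  # T does not settle ¬q either
```

So on the standard reading, incompleteness needs only an undecided contingent sentence, not a proof of a self-contradiction.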
On 4/27/23 9:09 PM, olcott wrote:
On 4/27/2023 6:19 AM, Richard Damon wrote:
On 4/26/23 10:41 PM, olcott wrote:
On 4/26/2023 7:07 AM, Richard Damon wrote:
On 4/26/23 2:07 AM, olcott wrote:
On 4/25/2023 6:56 AM, Richard Damon wrote:
On 4/25/23 12:03 AM, olcott wrote:
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 11:25 AM, olcott wrote:
On 4/22/2023 6:19 PM, Richard Damon wrote:
On 4/22/23 6:49 PM, olcott wrote:
On 4/22/2023 5:22 PM, Richard Damon wrote:
On 4/22/23 6:10 PM, olcott wrote:
On 4/22/2023 4:54 PM, Richard Damon wrote:No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>>>>> true, then the Liar's paradox would be true, thus that >>>>>>>>>>>>> assumption can not be true.
On 4/22/23 5:36 PM, olcott wrote:
On 4/22/2023 4:27 PM, Richard Damon wrote:
On 4/22/23 5:08 PM, olcott wrote:
'from falsehood, anything [follows]'
Is not saying that a FALSE antecedent implies any consequent.
That is EXACTLY what it is saying. That a false premise can be said to
imply any consequent, since that implication only holds if the premise
is actually true.
You are just not understanding what the words actually mean, because you
are ignorant by choice.
On 4/27/23 9:07 PM, olcott wrote:
The standard definition of mathematical incompleteness:
Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))
Remember, that if φ ∈ F then φ has a defined truth value in F, it is either True or False, and thus can't be the "Liar's Paradox".
Requires formal systems to do the logically impossible:
to prove self-contradictory expressions of language.
Nope. You don't understand what the words mean.
Remember, incompleteness only happens if a TRUE statement can't be
proven, an actual "self-contradictory" statement won't be true.
(or is FALSE and can't be proven to be false, which is basically the
same thing).
So formal systems are "incomplete" in the same sense that we determine
that a baker that cannot bake a proper angel food cake using ordinary
red house bricks as the only ingredient lacks sufficient baking skill.
Nope, because they are only incomplete if a statement that is TRUE
can't be proven.
You just don't understand that,
On 4/27/2023 9:41 PM, Richard Damon wrote:
On 4/27/23 9:09 PM, olcott wrote:
'from falsehood, anything [follows]'
Is not saying that a FALSE antecedent implies any consequent.
That is EXACTLY what it is saying. That a false premise can be said to
imply any consequent, since that implication only holds if the premise
is actually true.
You are just not understanding what the words actually mean, because
you are ignorant by choice.
https://proofwiki.org/wiki/Rule_of_Explosion
Sequent Form ⊥ ⊢ ϕ
https://en.wikipedia.org/wiki/List_of_logic_symbols
⊥ falsum, ⊢ proves ϕ this logic sentence
On 4/28/23 12:23 AM, olcott wrote:
On 4/27/2023 9:41 PM, Richard Damon wrote:
On 4/27/23 9:07 PM, olcott wrote:
The standard definition of mathematical incompleteness:
Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))
Remember, that if φ ∈ F then φ has a defined truth value in F, it is either True or False, and thus can't be the "Liar's Paradox".
How about this one?
Incomplete(T) ↔ ∃φ ∈ WFF(F) ((T ⊬ φ) ∧ (T ⊬ ¬φ))
Your use of terms that you do not define doesn't help you.
Incomplete means, in actual words, that there exists a true statement in
F that can not be proven in F, or similarly a False statement in F that
can not be disproven (proven to be false).
Incompleteness is NOT about statements that meet the "syntax" of F, but
might not actually be Truthbearers. Of course you can't prove or refute
a non-truthbearer (at best you might be able to show it is a non-truthbearer).
Trying to use ANY other definition that isn't actually equivalent is
just proof that you don't understand the rules of logic and have fallen
to a strawman.
Requires formal systems to do the logically impossible:
to prove self-contradictory expressions of language.
Nope. You don't understand what the words mean.
That part I have correct, and Gödel acknowledged the
self-contradictory expressions ... can likewise be used for a similar
undecidability proof...
And none of those are directly about G in F.
...14 Every epistemological antinomy can likewise be used for a
similar undecidability proof...
(Gödel 1931:40)
Yep, you can use the FORM of any epistemological antinomy, converting it from a statement about its own truth to being about its own provability, to get a similar proof.
THIS SECOND SENTENCE IS THE LIAR PARADOX YOU FOOL
Antinomy
...term often used in logic and epistemology, when describing a
paradox or unresolvable contradiction.
https://www.newworldencyclopedia.org/entry/Antinomy
Remember, incompleteness only happens if a TRUE statement can't be
proven, an actual "self-contradictory" statement won't be true.
You have that incorrectly too.
When G asserts its own unprovability in F the proof of G in F requires
a sequence of inference steps in F that prove that they themselves do
not exist.
But G DOESN'T "asserts its own unprovability in F", and G is not proven
"in F".
Your statement just shows that you don't understand the proof and are totally missing that almost all of the paper is written from the aspect of Meta-F.
Gödel’s Theorem, as a simple corollary of Proposition VI (p. 57) is
frequently called, proves that there are arithmetical propositions
which are undecidable (i.e. neither provable nor disprovable) within
their arithmetical system, and the proof proceeds by actually
specifying such a proposition, namely the proposition g expressed by
the formula to which “17 Gen r” refers (p. 58). g is an arithmetical
proposition; but the proposition that g is undecidable within the
system is not an arithmetical proposition, since it is concerned with
provability within an arithmetical system, and this is a
meta-arithmetical and not an arithmetical notion. Gödel’s Theorem is
thus a result which belongs not to mathematics but to metamathematics,
the name given by Hilbert to the study of rigorous proof in
mathematics and symbolic logic
https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf
Yes, Hilbert had similar errors in logic, which he, I believe, eventually realized. Yes, much of Godel's proof could be described as "meta-mathematics", but that meta- shows that IN MATHEMATICS ITSELF, there exist propositions that are true but can not be proven within mathematics. Thus, mathematics meets the requirements to be called "incomplete"
(or is FALSE and can't be proven to be false, which is basically the
same thing).
So formal systems are "incomplete" in the same sense that we determine that a baker that cannot bake a proper angel food cake using ordinary red house bricks as the only ingredient lacks sufficient baking skill.
Nope, because they are only incomplete if a statement that is TRUE
can't be proven.
The liar paradox, which is self-contradictory when applied to itself, is not self-contradictory when applied to another different instance of itself.
So, you don't understand what the "Liar Paradox" actually is. It is, and only is, a statement that asserts ITS OWN falsehood. It can't refer to a "different instance" as either that other isn't "itself", so referring to it makes this statement not the liar's paradox, or it actually IS itself, and thus must have the same truth value. You don't seem to understand the fundamental rule that if a copy of the statement is considered to be "the same statement", then all those copies must, by definition, have the same truth value.
This sentence is not true: "This sentence is not true"
The inner one is neither true nor false.
The outer one is true because the inner one is neither true nor false.
But isn't the liar's paradox.
Again, you show you don't understand the actual meaning of the words.
When G asserts its own unprovability in F, the proof of G in F requires
a sequence of inference steps in F that prove that they themselves do
not exist. Metamathematics can see that G cannot be proved in F.
Except that Godel's G doesn't "assert its own unprovability in F", nor is
"G proved in F" (in fact, Godel shows such a proof is impossible).
metamathematics can PROVE that Godel's statement G is TRUE IN
MATHEMATICS, but not provable there.
You don't seem to understand how meta-systems work and how they can
actually prove things about the system they are a meta- for.
If your "Correct Reasoning" can't handle meta-logic, then you aren't
going to be able to prove much in it, as most major proofs actually use
meta-logic.
You just don't understand that.
On 4/28/2023 6:40 AM, Richard Damon wrote:
On 4/28/23 12:23 AM, olcott wrote:
On 4/27/2023 9:41 PM, Richard Damon wrote:
On 4/27/23 9:07 PM, olcott wrote:
The standard definition of mathematical incompleteness:
Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))
Remember, that if φ ∈ F then φ has a defined truth value in F; it is
either True or False, and thus can't be the "Liar's Paradox".
How about this one?
Incomplete(T) ↔ ∃φ ∈ WFF(F) ((T ⊬ φ) ∧ (T ⊬ ¬φ))
Your use of terms that you do not define doesn't help you.
Incomplete means, in actual words, that there exists a true statement
in F that can not be proven in F, or similarly a False statement in F
that can not be disproven (proven to be false).
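That plain-words reading can be written as a small formal sketch. The following is an illustrative Lean abstraction only, not Gödel's actual arithmetization; `Formula`, `Provable`, and `neg` are hypothetical stand-ins:

```lean
-- Hedged sketch: "incomplete" as a property of an abstract provability
-- predicate. A theory is incomplete when some formula is neither
-- provable nor refutable (its negation is also unprovable).
-- All names here are illustrative stand-ins.
def Incomplete (Formula : Type) (Provable : Formula → Prop)
    (neg : Formula → Formula) : Prop :=
  ∃ φ : Formula, ¬ Provable φ ∧ ¬ Provable (neg φ)
```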
I don't think that there is any source that says it is a true statement
in F.
Incompleteness is NOT about statements that meet the "syntax" of F,
but might not actually be Truthbearers. Of course you can't prove or
refute a non-truthbearer (at best you might be able to show it is a
non-truthbearer).
"Kurt Gödel's incompleteness theorem demonstrates that mathematics
contains true statements that cannot be proved. His proof achieves
this by constructing paradoxical mathematical statements. To see how
the proof works, begin by considering the liar's paradox: "This
statement is false." This statement is true if and only if it is
false, and therefore it is neither true nor false.
Now let's consider "This statement is unprovable." If it is provable,
then we are proving a falsehood, which is extremely unpleasant and is
generally assumed to be impossible. The only alternative left is that
this statement is unprovable. Therefore, it is in fact both true and
unprovable. Our system of reasoning is incomplete, because some truths
are unprovable."
https://www.scientificamerican.com/article/what-is-goumldels-proof/
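The self-reference in "This statement is unprovable" does not require a literal pointer to itself; it can be mimicked by the diagonal (quining) trick of applying a template to a quoted copy of itself. A minimal sketch in Python, where `diagonalize` and `template` are illustrative names, not anything from Gödel's paper:

```python
# Hedged sketch of the diagonal construction: a sentence that ends up
# describing its own full text without any literal self-reference,
# built by substituting a quoted copy of a template into itself.
def diagonalize(template: str) -> str:
    """Return the template applied to its own quotation."""
    return template % repr(template)

template = 'The sentence built from the template %s refers to itself.'
sentence = diagonalize(template)

# The result contains a quoted copy of the template inside itself,
# analogous to how G encodes its own Goedel number.
print(sentence)
```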
Trying to use ANY other definition that isn't actually equivalent is
just proof that you don't understand the rules of logic and have
fallen for a strawman.
Requires formal systems to do the logically impossible:
to prove self-contradictory expressions of language.
Nope. You don't understand what the words mean.
That part I have correct, and Gödel acknowledged that
self-contradictory expressions "... can likewise be used for a similar
undecidability proof..."
And none of those are directly about G in F.
...14 Every epistemological antinomy can likewise be used for a
similar undecidability proof...
(Gödel 1931:40)
Yep, you can use the FORM of any epistemological antinomy, converting
it from a statement about its own truth to being about its own
provability, to get a similar proof.
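The form-conversion just described can be sketched in standard fixed-point notation; this is a hedged summary of the diagonal lemma, with ⌜·⌝ denoting Gödel numbering:

```latex
% Liar form: a sentence asserting its own untruth.
% Goedel form: a sentence asserting its own unprovability in F.
\begin{align*}
  L &\leftrightarrow \lnot \mathrm{True}(\ulcorner L \urcorner) \\
  G &\leftrightarrow \lnot \mathrm{Prov}_F(\ulcorner G \urcorner)
\end{align*}
```

The point of the conversion is that, unlike True, Prov_F is arithmetically definable, which is what lets the second fixed point exist inside F.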
G asserts its own unprovability in F
(a formal system with its own provability predicate)
THIS SECOND SENTENCE IS THE LIAR PARADOX YOU FOOL
Antinomy
...term often used in logic and epistemology, when describing a
paradox or unresolvable contradiction.
https://www.newworldencyclopedia.org/entry/Antinomy
Remember, incompleteness only happens if a TRUE statement can't be
proven; an actual "self-contradictory" statement won't be true.
You have that incorrectly too.
When G asserts its own unprovability in F the proof of G in F
requires a sequence of inference steps in F that prove that they
themselves do not exist.
But G DOESN'T "assert its own unprovability in F", and G is not
proven "in F".
Your statement just shows that you don't understand the proof and are
totally missing that almost all of the paper is written from the
aspect of Meta-F.
Gödel’s Theorem, as a simple corollary of Proposition VI (p. 57) is
frequently called, proves that there are arithmetical propositions
which are undecidable (i.e. neither provable nor disprovable) within
their arithmetical system, and the proof proceeds by actually
specifying such a proposition, namely the proposition g expressed by
the formula to which “17 Gen r” refers (p. 58). g is an arithmetical
proposition; but the proposition that g is undecidable within the
system is not an arithmetical proposition, since it is concerned with
provability within an arithmetical system, and this is a
meta-arithmetical and not an arithmetical notion. Gödel’s Theorem is
thus a result which belongs not to mathematics but to
metamathematics, the name given by Hilbert to the study of rigorous
proof in mathematics and symbolic logic
https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf
Yes, Hilbert had similar errors in logic, which he, I believe,
eventually realized. Yes, much of Godel's proof could be described as
"meta-mathematics", but that meta- shows that IN MATHEMATICS ITSELF,
there exist propositions that are true but can not be proven within
mathematics. Thus, mathematics meets the requirements to be called
"incomplete"
(or is FALSE and can't be proven to be false, which is basically the
same thing).
So formal systems are "incomplete" in the same sense that we determine
that a baker that cannot bake a proper angel food cake using ordinary
red house bricks as the only ingredient lacks sufficient baking skill.
Nope, because they are only incomplete if a statement that is TRUE
can't be proven.
The liar paradox, which is self-contradictory when applied to itself, is
not self-contradictory when applied to another, different instance of itself.
So, you don't understand what the "Liar Paradox" actually is. It is,
and only is, a statement that asserts ITS OWN falsehood. It can't
refer to a "different instance": either that other isn't "itself",
so referring to it makes this statement not the liar's paradox, or it
actually IS itself, and thus must have the same truth value. You don't
seem to understand the fundamental rule that if a copy of the
statement is considered to be "the same statement", then all those
copies must, by definition, have the same truth value.
This sentence is not true: "This sentence is not true"
Because you speak in Falsehoods and lies. Maybe you don't understand.
The inner one is neither true nor false.
The outer one is true because the inner one is neither true nor false.
But isn't the liar's paradox.
Again, you show you don't understand the actual meaning of the words.
Rather than paying attention you merely glance at what I say and say
that I don't understand.
When G asserts its own unprovability in F, the proof of G in F
requires a sequence of inference steps in F that prove that they
themselves do not exist. Metamathematics can see that G cannot be
proved in F.
Except that Godel's G doesn't "assert its own unprovability in F", nor is
"G proved in F" (in fact, Godel shows such a proof is impossible).
I AM NOT TALKING ABOUT EXACTLY WHAT HE SAYS ABOUT HIS PROOF
I AM TALKING ABOUT WHAT HE SAYS IS AN EQUIVALENT PROOF
...14 Every epistemological antinomy can likewise be used for a
similar undecidability proof... (Gödel 1931:39-41)
THEREFORE WHEN G ASSERTS ITS OWN UNPROVABILITY IN F
(and F is powerful enough to have its own provability predicate)
THEN THIS EPISTEMOLOGICAL ANTINOMY MEETS HIS EQUIVALENCE SPEC
WE CAN SEE THAT IT IS AN {epistemological antinomy} BECAUSE
THE PROOF OF G IN F REQUIRES A SEQUENCE OF INFERENCE STEPS
IN F THAT PROVE THAT THEY THEMSELVES DO NOT EXIST.
metamathematics can PROVE that Godel's statement G is TRUE IN
MATHEMATICS, but not provable there.
You don't seem to understand how meta-systems work and how they can
actually prove things about the system they are a meta- for.
IT IS NOT MY LACK OF KNOWLEDGE OF LOGIC IT IS YOUR LACK OF KNOWLEDGE
OF EPISTEMOLOGY
If your "Correct Reasoning" can't handle meta-logic, then you aren't
going to be able to prove much in it, as most major proofs actually
use meta-logic.
You just don't understand that.