• Re: A proof of G in F

    From olcott@21:1/5 to Richard Damon on Sat Apr 22 16:08:36 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of self-contradictory, that claim
    is erroneous.


    When G asserts its own unprovability in F:

    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    This is precisely analogous to you proving that you yourself never
    existed.
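
    For reference, the standard construction behind the sentence being argued
    about, stated in the usual notation (background only, not a quote from
    either poster): for any F that is recursively axiomatized and strong
    enough for arithmetic, the diagonal lemma yields an arithmetical sentence
    G such that

        F ⊢ G ↔ ¬Prov_F(⌜G⌝)

    where Prov_F is the arithmetized provability predicate of F and ⌜G⌝ is the
    numeral for G's Godel number. G itself is an ordinary sentence of
    arithmetic; whatever "self-reference" there is lives in this provable
    equivalence, not in any indexical occurring inside G.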


    You are also working with a Strawman, because you can't understand the
    actual statement G, so even if you were right about the statement you
    are talking about, you would still be wrong about the actual statement.

    The ACTUAL G has no "Self-Reference" in it, so can't be
    "Self-Contradictory".

    You are just proving how ignorant you are of logic.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sat Apr 22 17:27:11 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true but
    not provable in F.

    You don't need to do the proof in F. Godel shows how to construct a
    Meta-F system from F that allows construction of a proof IN META-F that
    shows that his G is True in F, and not provable in F.
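
    For reference, the meta-level result being appealed to here, in its
    standard form (background, not either poster's wording):

        If F is a consistent, recursively axiomatized theory containing
        enough arithmetic, then F does not prove G; and if F is ω-consistent,
        F does not prove ¬G either (Rosser later weakened ω-consistency to
        plain consistency for a modified sentence). Both facts are
        established in the meta-theory, not inside F.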


    This is precisely analogous to you proving that you yourself never
    existed.

    Nope, you are just proving you can't do your strawman, because you have
    straw for brains.

    You start from an incorrect assertion, that G actually asserts its own unprovability, and then continue to make errors in asserting that this
    is proven in F itself.

    This just shows you fundamentally don't understand how the logic works,
    and that you are too stupid to be able to be taught it.

  • From Richard Damon@21:1/5 to olcott on Sat Apr 22 17:54:30 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.

    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true but
    not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.
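
    For reference, the level distinction this exchange keeps circling, in its
    standard form (background, not either poster's wording): if a language L
    could define its own truth predicate True_L, the diagonal construction
    would give a sentence λ with

        λ ↔ ¬True_L(⌜λ⌝)

    provable in L, which together with the T-schema instance True_L(⌜λ⌝) ↔ λ
    is contradictory. Tarski's conclusion is that True_L is definable only
    one level up, in a meta-language for L. Whether the quoted move above is
    a legitimate use of that meta-level predicate or a "cheap trick" is
    exactly what the rest of the thread disputes.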



    So, you don't understand how to prove that something is "True in F" by
    doing the steps in Meta-F.

    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.
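
    A concrete way to read the finite/infinite distinction here (standard
    background, assuming F is consistent; not a claim by either poster): G is
    equivalent to a sentence of the form ∀n φ(n) with φ primitive recursive,
    and for each particular k, F proves φ(k). What F lacks is one finite
    derivation of the universal statement itself. So

        provable in F = a single finite chain of inference steps inside F
        true in F     = every numerical instance holds, a fact that can be
                        surveyed, and proved, only from the meta-theory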

    Your mind is just too small.

  • From olcott@21:1/5 to Richard Damon on Sat Apr 22 16:36:45 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.

    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true but
    not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Sat Apr 22 17:10:44 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it
    erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true but
    not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F" by
    doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed in
    his theory is true in his meta-theory.

    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.

    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.

    Your mind is just too small.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Sat Apr 22 17:49:04 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it
    erroneous.


    Since you don't understand the meaning of self-contradictory,
    that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed in
    his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.


    When one level of indirect reference is applied to the Liar Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of logic, or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself to being a pathological liar because you just don't know any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see where
    and how formal mathematical systems diverge from correct reasoning.

    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.

    Your mind is just too small.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sat Apr 22 18:22:34 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it
    erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F" by
    doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed in
    his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.

    You are just showing that your mind can't handle the basics of logic, or
    truth.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself to being a pathological liar because you just don't know any
    better.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.

    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.

    Your mind is just too small.


  • From Richard Damon@21:1/5 to olcott on Sat Apr 22 19:19:27 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it
    erroneous.


    Since you don't understand the meaning of self-contradictory,
    that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed
    in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.


    When one level of indirect reference is applied to the Liar Paradox it becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's
    Meta-theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of logic,
    or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself to being a pathological liar because you just don't know any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see where
    and how formal mathematical systems diverge from correct reasoning.

    But since you are discussing Formal Logic, you need to use the rules of
    Formal logic.

    The other way to say it is that your "Correct Reasoning" diverges from
    the accepted and proven system of Formal Logic.


    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will get the
    correct answer by the rules.

    If you break the rules, you have no idea where you will go.

    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at the
    BEGINNING, and see what it gets.

    To just try to change things at the end is just PROOF that your "Correct Reasoning" has to not be based on any real principles of logic.

    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.

    Considering your current status, I would start working hard on that
    right away, as with your current reputation, once you go, NO ONE is
    going to want to look at your ideas, because you have done such a good
    job showing that you don't understand how things work.

    I haven't been able to get out of you exactly what you want to do with
    your "Correct Reasoning", and until you show a heart to actually try to
    do something constructive with it, and not just use it as an excuse for
    bad logic, I don't care what it might be able to do, because, frankly, I
    don't think you have the intellect to come up with something like that.

    But go ahead and prove me wrong, write an actual paper on the basics of
    your "Correct Reasoning" and show how it actually works, and compare it
    to "Classical Logic" and show what is different. Then maybe you can
    start to work on showing it can actually do something useful.


    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.

    Your mind is just too small.




  • From olcott@21:1/5 to Richard Damon on Sat Apr 22 18:57:47 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed
    in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.


    When one level of indirect reference is applied to the Liar Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of logic,
    or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself to being a pathological liar because you just don't know any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct reasoning.

    But since you are discussing Formal Logic, you need to use the rules of Formal logic.


    I have never been talking about formal logic. I have always been talking
    about the philosophical foundations of correct reasoning.

    The other way to say it is that your "Correct Reasoning" diverges from
    the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are
    incorrect in the absolute sense of the word: "incorrect".


    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will get the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that these
    rules fit together coherently.

    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at the
    BEGINNING, and see what it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining the
    semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable.

    To just try to change things at the end is just PROOF that your "Correct Reasoning" has to not be based on any real principles of logic.

    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid inference.

    The semantic consequence must be specified syntactically so that it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are known to be
    true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.
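
    The two standard definitions being invoked in the paragraphs above,
    written out in textbook form (with no commitment to either poster's
    further claims):

        Γ ⊨ x  (semantic consequence)    iff every interpretation that makes
                                          every sentence in Γ true also makes
                                          x true
        Γ ⊢ x  (syntactic derivability)  iff there is a finite derivation of
                                          x from Γ in the proof system

    Soundness says Γ ⊢ x implies Γ ⊨ x, which is why true premises plus valid
    reasoning yield a true conclusion. Completeness (Γ ⊨ x implies Γ ⊢ x)
    holds for first-order consequence; what the incompleteness theorem adds
    is that no recursively axiomatized theory has every arithmetical truth
    among its theorems.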

    Considering your current status, I would start working hard on that
    right away, as with your current reputation, once you go, NO ONE is
    going to want to look at your ideas, because you have done such a good
    job showing that you don't understand how things work.


    My reputation on one very important group has risen to quite credible

    I haven't been able to get out of you exactly what you want to do with
    your "Correct Reasoning", and until you show a heart to actually try to
    do something constructive with it, and not just use it as an excuse for
    bad logic, I don't care what it might be able to do, because, frankly, I don't think you have the intellect to come up with something like that.


    Until we establish the foundation of correct reasoning in terms of a
    consistent and complete True(L,X) all AI systems will be anchored in the shifting sands of opinions.

    But go ahead and prove me wrong, write an actual paper on the basics of
    your "Correct Reasoning" and show how it actually works, and compare it
    to "Classical Logic" and show what is different. Then maybe you can
    start to work on showing it can actually do something useful.


    The most important aspect of the tiny little foundation of a formal
    system that I already specified immediately above is self-evident:
    True(L,X) can be defined and incompleteness is impossible.

    People that spend 99.99% of their attention on trying to show errors in
    what I say rather than paying any attention to understanding what I say
    might not notice these dead obvious things.

    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.

    Your mind is just too small.





    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sat Apr 22 20:27:12 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is
    true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in
    F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed
    in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true,
    then the Liar's paradox would be true, thus that assumption can not
    be true.


    When one level of indirect reference is applied to the Liar Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's
    Meta-theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of
    logic, or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself to being a pathological liar because you just don't know any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the rules
    of Formal logic.


    I have never been talking about formal logic. I have always been talking about the philosophical foundations of correct reasoning.

    No, you have been talking about theories DEEP in formal logic. You can't
    talk about the "errors" in those theories without being in formal logic.

    IF you think you can somehow talk about the foundations, while working
    in the penthouse, you have just confirmed that you do not understand how
    ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning" diverges from
    the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are
    incorrect in the absolute sense of the word: "incorrect".


    IF you reject the logic that a theory is based on, you need to reject
    the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because you
    don't understand how to work logic.


    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will get
    the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for most of
    the field.

    You are just PROVING that you have no idea how to actually discuss a new
    foundation for logic, likely because you are incapable of actually
    coming up with a consistent basis for working logic.


    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at the
    BEGINNING, and see what it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining the
    semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable.

    Meaningless gobbledy-gook until you actually define what you mean and
    spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process concept.
    You first need to show that your logic can actually do something useful.


    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of logic.

    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid inference.

    And, are you using the "classical" definition of "semantic" (which makes
    this sentence somewhat circular) or do you mean something based on the
    concept you sometimes use of "the meaning of the words".


    The semantic consequence must be specified syntactically so that it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are known to be true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.

    So, what is the difference in your system from classical Formal Logic?

    You keep on talking like you are making some major change, but you can't
    seem to specify what that is. And when asked to define, you just waffle.

    You are just showing the shallowness of even what you are trying to
    think up.


    Considering your current status, I would start working hard on that
    right away, as with your current reputation, once you go, NO ONE is
    going to want to look at your ideas, because you have done such a good
    job showing that you don't understand how things work.


    My reputation on one very important group has risen to quite credible

    What group?



    I haven't been able to get out of you exactly what you want to do with
    your "Correct Reasoning", and until you show a heart to actually try
    to do something constructive with it, and not just use it as an excuse
    for bad logic, I don't care what it might be able to do, because,
    frankly, I don't think you have the intellect to come up with
    something like that.


    Until we establish the foundation of correct reasoning in terms of a consistent and complete True(L,X) all AI systems will be anchored in the shifting sands of opinions.

    But go ahead and prove me wrong, write an actual paper on the basics
    of your "Correct Reasoning" and show how it actually works, and
    compare it to "Classical Logic" and show what is different. Then maybe
    you can start to work on showing it can actually do something useful.


    The most important aspect of the tiny little foundation of a formal
    system that I already specified immediately above is self-evident:
    True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near established far enough for you
    to say that.


    People that spend 99.99% of their attention on trying to show errors in
    what I say rather than paying any attention to understanding what I say
    might not notice these dead obvious things.

    What, that we can tell when you are lying because your mouth is moving?

    You "logic" has been shown to be totally broken. I don't think you
    really know what you are talking about. You are just showing your own stupidity,



    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite) set of
    steps from the truth makers in F, using the rules of F, to G. You don't
    need to actually DO this in F, if you have a system that knows about F.

    Your mind is just too small.






  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 09:40:06 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it
    erroneous.


    Since you don't understand the meaning of self-contradictory,
    that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed in
    his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.

    You are just showing that your mind can't handle the basics of logic, or

    Proving that G is true in F requires a sequence of inference steps that
    prove that they themselves don't exist.

    You might be bright enough to understand that is self-contradictory.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 10:58:22 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.


    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true,
    then the Liar's paradox would be true, thus that assumption can not
    be true.


    When one level of indirect reference is applied to the Liar Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which
    is what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's
    Meta-theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of
    logic, or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself to being a pathological liar because you just don't know any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the rules
    of Formal logic.


    I have never been talking about formal logic. I have always been talking
    about the philosophical foundations of correct reasoning.

    No, you have been talking about theories DEEP in formal logic. You can't
    talk about the "errors" in those theories without being in formal logic.

    IF you think you can somehow talk about the foundations, while working
    in the penthouse, you have just confirmed that you do not understand how
    ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning" diverges
    from the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are
    incorrect in the absolute sense of the word: "incorrect".


    IF you reject the logic that a theory is based on, you need to reject
    the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because you
    don't understand how to work logic.


    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will get
    the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for most of
    the field.

    You are just PROVING that you have no idea how to actually discuss a new
    foundation for logic, likely because you are incapable of actually
    coming up with a consistent basis for working logic.


    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at the
    BEGINNING, and see what it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining the
    semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable.

    Meaningless gobbledy-gook until you actually define what you mean and
    spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process concept.
    You first need to show that your logic can actually do something useful.


    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of logic. >>>
    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid inference.

    And, are you using the "classical" definition of "semantic" (which makes
    this sentence somewhat circular) or do you mean something based on the
    concept you sometimes use of  "the meaning of the words".


    *Principle of explosion*
    An alternate argument for the principle stems from model theory. A
    sentence P is a semantic consequence of a set of sentences Γ only if
    every model of Γ is a model of P. However, there is no model of the
    contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
    that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
    model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
    https://en.wikipedia.org/wiki/Principle_of_explosion
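
    For comparison with the model-theoretic argument quoted above, the usual
    proof-theoretic route to the same principle is short (standard
    derivation, included only so that the objection below has an explicit
    target):

        1. P ∧ ¬P    premise
        2. P         from 1, ∧-elimination
        3. ¬P        from 1, ∧-elimination
        4. P ∨ Q     from 2, ∨-introduction
        5. Q         from 3 and 4, disjunctive syllogism

    Rejecting the conclusion therefore means rejecting ∨-introduction or
    disjunctive syllogism; paraconsistent logics typically drop the latter.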

    Vacuous truth does not count as truth.
    All variables must be quantified.

    "all cell phones in the room are turned off" will be true when no cell
    phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
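
    The contrast being drawn, spelled out in the usual classical readings
    (standard background, using the same predicate names as above):

        ∀cp (in_this_room(cp) → turned_off(cp))   vacuously true when the
                                                  room contains no cell phones
        ∃cp (in_this_room(cp) ∧ turned_off(cp))   false when the room
                                                  contains no cell phones

    The proposal here amounts to requiring the second, existentially
    committed reading, over an explicitly specified domain, wherever the
    first would be true only vacuously.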



    The semantic consequence must be specified syntactically so that it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are known to be
    true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.

    So, what is the difference in your system from classical Formal Logic?


    Semantic Necessity operator: ⊨□

    FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*
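
    To make the two definitions above concrete in the simplest possible
    setting, here is a minimal propositional sketch that reads ⊨□ as ordinary
    classical semantic consequence over a finite set of atoms, and simply
    treats P and x as given arguments rather than binding them with the
    outer ∃ quantifiers. Everything in it is an assumption made for
    illustration: the tuple encoding, the helper names (atoms, holds,
    entails, provable, true_in), and above all the classical reading of ⊨□,
    under which the principle of explosion still holds; so this shows only
    the general shape such a checker would take, not the posted system
    itself.

        from itertools import product

        # Formulas are nested tuples, e.g. ("atom", "P"), ("not", f),
        # ("and", f, g), ("or", f, g), ("implies", f, g).

        def atoms(f):
            """Collect the atom names occurring in formula f."""
            if f[0] == "atom":
                return {f[1]}
            return set().union(*(atoms(g) for g in f[1:]))

        def holds(f, v):
            """Evaluate formula f under valuation v (dict: atom -> bool)."""
            tag = f[0]
            if tag == "atom":
                return v[f[1]]
            if tag == "not":
                return not holds(f[1], v)
            if tag == "and":
                return holds(f[1], v) and holds(f[2], v)
            if tag == "or":
                return holds(f[1], v) or holds(f[2], v)
            if tag == "implies":
                return (not holds(f[1], v)) or holds(f[2], v)
            raise ValueError(tag)

        def entails(premises, x):
            """Classical semantic consequence, checked by brute-force truth
            table: every valuation satisfying all premises satisfies x."""
            names = sorted(set(atoms(x)).union(*(atoms(p) for p in premises)))
            for bits in product([False, True], repeat=len(names)):
                v = dict(zip(names, bits))
                if all(holds(p, v) for p in premises) and not holds(x, v):
                    return False
            return True

        def provable(premises, x):   # x is a consequence of its premises
            return entails(list(premises), x)

        def true_in(axioms, x):      # x is a consequence of the axioms of L
            return entails(list(axioms), x)

        if __name__ == "__main__":
            P, Q = ("atom", "P"), ("atom", "Q")
            axioms = [("implies", P, Q), P]
            print(true_in(axioms, Q))                    # True
            print(true_in(axioms, ("not", Q)))           # False
            print(entails([("and", P, ("not", P))], Q))  # True: explosion, classically

    The only point of the sketch is that once ⊨□ is pinned down precisely,
    both Provable and True become checkable relations; which classical rules
    the intended ⊨□ keeps or drops is the part the definitions above still
    leave open.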


    The most important aspect of the tiny little foundation of a formal
    system that I already specified immediately above is self-evident:
    True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near established far enough for you
    to say that.

    Try and show exceptions to this rule and I will fill in any gaps that
    you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a
    sequence of inference steps in F that proves no such sequence
    of inference steps exists in F.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 10:25:00 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of self-contradictory, that
    claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed
    in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assumption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.


    When one level of indirect reference is applied to the Liar Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of logic,
    or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself into being a pathological liar because you just don't know any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be >>>> proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be
    proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct reasoning.

    But since you are discussing Formal Logic, you need to use the rules of Formal logic.

    The other way to say it is that your "Correct Reasoning" diverges from
    the accepted and proven system of Formal Logic.



    In classical logic, intuitionistic logic and similar logical systems,
    the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    *Correction abolishing the POE nonsense*
    Semantic Necessity operator: ⊨□
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE
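
    For reference, a small sketch of the classical model-theoretic reading of
    the POE, checking by brute force that (P ∧ ¬P) → Q holds under every
    valuation (the helper name implies is only illustrative):

        # A brute-force check (illustrative only) that, under classical
        # truth-functional semantics, (P ∧ ¬P) -> Q holds for every valuation,
        # which is the model-theoretic reading of the POE quoted above.
        from itertools import product

        def implies(a, b):          # classical material conditional
            return (not a) or b

        print(all(implies(p and (not p), q)
                  for p, q in product([True, False], repeat=2)))   # True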



    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will get the correct answer by the rules.


    Then you must agree that Trump is the Christ and that Trump is Satan;
    both of those were derived from correct logic.

    If you break the rules, you have no idea where you will go.

    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at
    the BEGINNING, and see what it gets.


    I would be happy to talk this through with you.

    The beginning is that:

    valid inference: an expression X of language L must be a semantic
    consequence of its premises in L

    sound inference: an expression X of language L must be a semantic
    consequence of the axioms of L.

    For formal systems such as FOL the semantics is mostly the meaning of
    the logic symbols.

    These two logic symbols are abolished ⇒ → and replaced with this:
    Semantic Necessity operator: ⊨□


    To just try to change things at the end is just PROOF that your "Correct Reasoning" has to not be based on any real principles of logic.


    No, logic must be based on correct reasoning; any logic that proves
    Donald Trump is the Christ is incorrect reasoning, thus the POE is
    abolished.

    These two logic symbols are abolished ⇒ → and replaced with this:
    Semantic Necessity operator: ⊨□

    Explosions have been abolished
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE

    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Yes, let's apply my ideas to FOL. I have already sketched out many
    details.

    Considering your current status, I would start working hard on that
    right away, as with your current reputation, once you go, NO ONE is
    going to want to look at your ideas, because you have done such a good
    job showing that you don't understand how things work.

    I haven't been able to get out of you exactly what you want to do with
    your "Correct Reasoning", and until you show a heart to actually try to
    do something constructive with it, and not just use it as an excuse for
    bad logic, I don't care what it might be able to do, because, frankly, I don't think you have the intellect to come up with something like that.


    I showed how the POE is easily abolished.
    I showed how Provable(L,x) and True(L,x) are defined.

    But go ahead and prove me wrong, write an actual paper on the basics of
    your "Correct Reasoning" and show how it actually works, and compare it
    to "Classical Logic" and show what is different. Then maybe you can
    start to work on showing it can actually do something useful.


    I need a dialogue to vet aspects of my ideas.
    The key thing that I have not yet filled in is how to specify the
    semantics of every FOL expression.

    This semantics seems fully specified:
    ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊨□ (n+1 > m))
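
    A finite spot check of that schema, assuming an arbitrary bound N, since
    the actual claim ranges over all of ℕ:

        # Finite spot check only (illustrative); the real claim ranges over all of ℕ.
        N = 50                                   # arbitrary bound for the sketch
        print(all(n + 1 > m
                  for n in range(N) for m in range(N) if n > m))   # True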

    Just shows you are ignorant.

    Too bad you are going to die in such disgrace.

    All you need to do is show that there exists a (possibly infinite)
    set of steps from the truth makers in F, using the rules of F, to
    G. You don't need to actually DO this in F, if you have a system
    that knows about F.

    Your mind is just too small.





    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Mon Apr 24 11:13:05 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/24/2023 10:58 AM, olcott wrote:
    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it >>>>>>>>>>>>> erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in >>>>>>>>>>> F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is >>>>>>>>>> true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true. >>>>>>>>>


    So, you don't understand how to prove that something is "True in >>>>>>>> F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true,
    then the Liar's paradox would be true, thus that assumption can
    not be true.


    When one level of indirect reference is applied to the Liar Paradox it >>>>> becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F. >>>>>>> G cannot be proved in F because this requires a sequence of
    inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove,
    in Meta-F, that G is true in F, and that G is unprovable in F,
    which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove
    that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's
    Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of
    logic, or truth.


    It may seem that way to someone that learns things by rote and
    mistakes
    this for actual understanding of exactly how all of the elements of a >>>>> proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionaally hamstrung yourself to avoid being "polluted" by
    "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just condemned >>>>>> yourself into being a pathological liar because you just don't any >>>>>> better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner >>>>> prior to my philosophical investigation of seeing how the key elements >>>>> fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of
    semantic
    tautologies. It is true that formal systems grounded in this
    foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see
    exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G
    cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot
    be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand claissic
    arguement forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the rules
    of Formal logic.


    I have never been talking about formal logic. I have always been talking >>> about the philosophical foundations of correct reasoning.

    No, you have been talking about theories DEEP in formal logic. You
    can't talk about the "errors" in those theories without being in formal
    logic.

    IF you think you can somehow talk about the foundations, while working
    in the penthouse, you have just confirmed that you do not understand
    how ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning" diverges
    from the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are
    incorrect in the absolute sense of the word: "incorrect".


    IF you reject the logic that a theory is based on, you need to reject
    the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because you
    don't understand how to work logic.


    Because you are a learned-by-rote person you make sure to never
    examine
    whether or not any aspect of math diverges from correct reasoning, you >>>>> simply assume that math is the gospel even when it contradicts itself. >>>>
    Nope, I know that with logic, if you follow the rules, you will get
    the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for most
    of the field.

    You are just PROVING that you have no idea how to actually discuss a
    new foundation for logic, likely because you are incapable of actually
    coming up with a consistent basis for working logic.


    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at the
    BEGINNING, and see what it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining the
    semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable.
    Meaningless gobbledygook until you actually define what you mean and
    spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process
    concept. You first need to show that you logic can actually do
    something useful


    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of
    logic.

    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid inference.

    And, are you using the "classical" definition of "semantic" (which
    makes this sentence somewhat circular) or do you mean something based
    on the concept you sometimes use of "the meaning of the words"?


    *Principle of explosion*
    An alternate argument for the principle stems from model theory. A
    sentence P is a semantic consequence of a set of sentences Γ only if
    every model of Γ is a model of P. However, there is no model of the contradictory set (P ∧ ¬P) A fortiori, there is no model of (P ∧ ¬P) that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P). https://en.wikipedia.org/wiki/Principle_of_explosion

    Vacuous truth does not count as truth.
    All variables must be quantified

    "all cell phones in the room are turned off" will be true when no cell
    phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp)) ∧ turned_off(cp))



    The semantic consequence must be specified syntactically so that it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are known to be >>> true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.

    So, what is the difference in your system from classical Formal Logic?


    Semantic Necessity operator: ⊨□

       FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*


    The most important aspect of the tiny little foundation of a formal
    system that I already specified immediately above is self-evident:
    True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near establish far enough for
    you to say that.

    Try and show exceptions to this rule and I will fill in any gaps that
    you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a
    sequence of inference steps in F that proves no such sequence
    of inference steps exists in F.


    ∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ ∄sequence_of_inference_steps ⊆ F)


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Apr 24 19:35:35 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/24/23 10:40 AM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it
    erroneous.


    Since you don't understand the meaning of self-contradictory,
    that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F that >>>>>>> prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true
    but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed
    in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.

    You are just showing that your mind can't handle the basics of logic, or

    Proving that G is true in F requires a sequence of inference steps that
    prove that they themselves don't exist.

    You might be bright enough to understand that is self-contradictory.


    Except that G is proved in Meta-F to be "True in F".

    With a finite number of steps in Meta-F, we can prove that the infinite
    number of steps in F exist and are true.

    In particular, in F, we need to check every number individually to see if
    it satisfies the relationship, and we have no shortcut to make this
    operation finite, so we can't prove it. But in Meta-F, we know something
    about the relationship, and are able to prove that no number can satisfy
    the relationship, and do so in a finite number of steps.

    Thus, we can prove in Meta-F that G must be true in F.

    The sequence of steps in F is infinite, so not a proof in F.

    In fact, in Meta-F we are also able to prove that there CAN'T be a
    finite sequence of steps that proves G true in F.

    Thus, with logic in Meta-F, we can prove that G is True in F and cannot
    be proven in F.
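
    A loose analogy of the object-level versus meta-level distinction, not
    Damon's actual construction:

        # Loose analogy only (not Damon's construction).
        # Object-level style: test numbers one at a time; if no witness exists,
        # this enumeration never terminates, so it can never settle the claim.
        def object_level_search(R):
            n = 0
            while True:
                if R(n):
                    return n
                n += 1          # runs forever when no witness exists

        # Meta-level style: a single finite argument about the relation itself.
        # For R(n) := (n + 1 <= n) we can argue once that n + 1 > n for every
        # natural n, so no witness can exist, settling infinitely many cases
        # in finitely many steps.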

    You just don't seem to understand how Meta-Logic works. And, it turns
    out, meta-logic is a very important tool for proving things, so
    this is one of your Kryptonites.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 22:28:07 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 12:13 PM, olcott wrote:
    On 4/24/2023 10:58 AM, olcott wrote:
    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making >>>>>>>>>>>>>>> it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps >>>>>>>>>>>>> in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is >>>>>>>>>>>> true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true. >>>>>>>>>>>


    So, you don't understand how to prove that something is "True >>>>>>>>>> in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true, >>>>>>>> then the Liar's paradox would be true, thus that assumption can >>>>>>>> not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>
    Your


    We can do the same thing when G asserts its own unprovability >>>>>>>>> in F.
    G cannot be proved in F because this requires a sequence of
    inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>
    Right, you can't prove, in F, that G is true, but you can prove, >>>>>>>> in Meta-F, that G is true in F, and that G is unprovable in F, >>>>>>>> which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F >>>>>>> because this requires a sequence of inference steps in F that
    prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way
    Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of >>>>>>>> logic, or truth.


    It may seem that way to someone that learns things by rote and
    mistakes
    this for actual understanding of exactly how all of the elements >>>>>>> of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionaally hamstrung yourself to avoid being "polluted" by >>>>>>>> "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just
    condemned yourself into being a pathological liar because you
    just don't any better.


    I do at this point need to understand model theory very thoroughly. >>>>>>>
    Learning the details of these things could have boxed me into a
    corner
    prior to my philosophical investigation of seeing how the key
    elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of
    semantic
    tautologies. It is true that formal systems grounded in this
    foundation
    cannot be incomplete nor have any expressions of language that are >>>>>>> undecidable. Now that I have this foundation I have a way to see >>>>>>> exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G >>>>>>>>> cannot be
    proved in F. G cannot be proved in F for the same pathological >>>>>>>>> self-reference(Olcott 2004) reason that the Liar Paradox cannot >>>>>>>>> be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand
    claissic arguement forms.


    It is not that I do not understand, it is that I can directly see >>>>>>> where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the
    rules of Formal logic.


    I have never been talking about formal logic. I have always been
    talking
    about the philosophical foundations of correct reasoning.

    No, you have been talking about theorys DEEP in formal logic. You
    can't talk about the "errors" in those theories, with being in
    formal logic.

    IF you think you can somehow talk about the foundations, while
    working in the penthouse, you have just confirmed that you do not
    understand how ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning" diverges
    from the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are
    incorrect in the absolute sense of the word: "incorrect".


    IF you reject the logic that a theory is based on, you need to
    reject the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because you
    don'tunderstnad how to work ligic.


    Because you are a learned-by-rote person you make sure to never
    examine
    whether or not any aspect of math diverges from correct
    reasoning, you
    simply assume that math is the gospel even when it contradicts
    itself.

    Nope, I know that with logic, if you follow the rules, you will
    get the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that these >>>>> rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for most
    of the field.

    You are just PROVING that you have no idea how to actually discuss a
    new foundation for logic, likely because you are incapable of
    actually comeing up with a consistent basis for working logic.


    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replaceent logic system, you need to
    start at the
    BEGINNING, and see wht it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining the >>>>> semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable
    Meaningless gobbledy-good until you actually define what you mean
    and spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process
    concept. You first need to show that you logic can actually do
    something useful


    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of
    logic.

    Since it is clear that you want to change some of the basics of
    how logic works, you are not allowed to just use ANY of classical
    logic until you actually show what part of it is still usable
    under your system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid inference. >>>>
    And, are you using the "classical" definition of "semantic" (which
    makes this sentence somewhat cirular) or do you mean something based
    on the concept you sometimes use of  "the meaning of the words".


    *Principle of explosion*
    An alternate argument for the principle stems from model theory. A
    sentence P is a semantic consequence of a set of sentences Γ only if
    every model of Γ is a model of P. However, there is no model of the
    contradictory set (P ∧ ¬P) A fortiori, there is no model of (P ∧ ¬P) >>> that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a >>> model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
    https://en.wikipedia.org/wiki/Principle_of_explosion

    Vacuous truth does not count as truth.
    All variables must be quantified

    "all cell phones in the room are turned off" will be true when no
    cell phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp)) ∧ turned_off(cp))



    The semantic consequence must be specified syntactically so that it
    can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are known
    to be
    true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.

    So, what is the difference in your system from classical Formal Logic? >>>>

    Semantic Necessity operator: ⊨□

        FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*


    The most important aspect of the tiny little foundation of a formal
    system that I already specified immediately above is self-evident:
    True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near establish far enough for
    you to say that.

    Try and show exceptions to this rule and I will fill in any gaps that
    you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a
    sequence of inference steps in F that proves no such sequence
    of inference steps exists in F.


    ∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢
    ∄sequence_of_inference_steps ⊆ F)



    So, you don't understand the difference between the INFINITE set of
    sequence steps that show that G is True, and the FINITE number of steps
    that need to be shown to make G provable.


    The experts seem to believe that unless a proof can be transformed into
    a finite sequence of steps it is no actual proof at all. Try and cite a
    source that says otherwise.

    We can imagine an Oracle machine that can complete these proofs in the
    same sort of way that we can imagine a magic fairy that waves a magic
    wand.

    You are just showing you don't understand what you are talking about and
    just spouting word (or symbol) salad.

    You are proving you are an IDIOT.

    I am seeing these things at a deeper philosophical level than you are. I
    know that is hard to believe.

    You are so sure that I must be wrong that you don't bother to understand
    what I am saying.

    It seems that the time has come for me to spend the little time that it
    takes to understand the technical details of Gödel's proof.

    I am estimating that I have a very good understanding of the preface to
    the proof, and the SEP article should provide this.

    https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 23:03:10 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it >>>>>>>>>>>> erroneous.


    Since you don't understand the meaning of self-contradictory, >>>>>>>>>>> that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in >>>>>>>>>> F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is >>>>>>>>> true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true. >>>>>>>>


    So, you don't understand how to prove that something is "True in >>>>>>> F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true,
    then the Liar's paradox would be true, thus that assumption can not
    be true.


    When one level of indirect reference is applied to the Liar Paradox it >>>> becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F. >>>>>> G cannot be proved in F because this requires a sequence of inference >>>>>> steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which
    is what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove
    that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's
    Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of
    logic, or truth.


    It may seem that way to someone that learns things by rote and mistakes >>>> this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionaally hamstrung yourself to avoid being "polluted" by
    "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just condemned
    yourself into being a pathological liar because you just don't any
    better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner >>>> prior to my philosophical investigation of seeing how the key elements >>>> fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic >>>> tautologies. It is true that formal systems grounded in this foundation >>>> cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see
    exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G
    cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox cannot be >>>>>> proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand claissic
    arguement forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the rules
    of Formal logic.

    The other way to say it is that your "Correct Reasoning" diverges
    from the accepted and proven system of Formal Logic.



    In classical logic, intuitionistic logic and similar logical systems,
    the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, then out of that
    contradiction you can prove anything.


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion works, which
    isn't surprising considering how many misconceptions you have about how
    logic works.


    ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]'
    ∴ FALSE ⊢ Donald Trump is the Christ

    Right now, I would say you are too ignorant of the basics of logic to be
    able to explain, even in basic terms, how it works; you have shown
    yourself to be that stupid.



    *Correction abolishing the POE nonsense*
    Semantic Necessity operator: ⊨□
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE


    So, FULLY define what you mean by that.


    The two logic symbols already say semantic necessity; model theory may
    have screwed up the idea of semantics by allowing vacuous truth.
    I must become a master expert of at least basic model theory.



    Because you are a learned-by-rote person you make sure to never examine >>>> whether or not any aspect of math diverges from correct reasoning, you >>>> simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will get
    the correct answer by the rules.


    Then you must agree that Trump is the Christ and Trump is Satan both of
    those were derived from correct logic.

    If you break the rules, you have no idea where you will go.

    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at
    the BEGINNING, and see what it gets.


    I would be happy to talk this through with you.

    The beginning is that

    valid inference an expression X of language L must be a semantic
    consequence of its premises in L


    And what do you mean by "semantic"


    What does meaning mean?
    The premise that
    the Moon is made from green cheese ⊨□ the Moon is made from cheese.

    All of the conventional logic symbols retain their original meaning.
    Variables are quantified and of a specific type.
    Meaning postulates can axiomatise meaning.

    The connection between elements of the proof must be at least as good as relevance logic.

    because conventional logic defines semantic consequence as: the
    conclusion must be true if the premises are true.

    You seem to mean something different, but haven't explained what you mean
    by that.


    You never heard of ordinary sound deductive inference?


    sound inference expression X of language L must be a semantic
    consequence of the axioms of L.

    For formal systems such as FOL the semantics is mostly the meaning of
    the logic symbols.

    These two logic symbols are abolished ⇒ → and replaced with this:
    Semantic Necessity operator: ⊨□

    Why do you need to abolish those symbols?

    They seem to lead to the principle of explosion.

    You do understand that the
    statement that A -> B is equivalent to asserting (~A | B) is
    ALWAYS TRUE (which might be part of your problem, as you don't seem to
    understand the categorical meaning of ALL and NO), so either you need
    to outlaw the negation operator, or the or operator, to do this.

    Again, what does "Semantic Necessity" operator mean?

    A ⊨□ B means that the meaning of B is an aspect of the meaning of A.



    Note, one issue with your use of symbols: so many of the symbols can
    have slightly different meanings based on the context and system you
    are working in.


    I don't see this can you provide examples?
    I am stipulating standard meanings.



    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of logic. >>>

    No, logic must be based on correct reasoning; any logic that proves Donald
    Trump is the Christ is incorrect reasoning, thus the POE is abolished.

    You CAN'T abolish the Principle of Explosion unless you greatly restrict
    the power of your logic.


    My two axioms abolish it neatly. All that I am getting rid of is
    incompleteness and undecidability and I am gaining a universal True(L,x) predicate.


    These two logic symbols are abolished ⇒ → and replaced with this:
    Semantic Necessity operator: ⊨□

    Explosions have been abolished

    Nope.

    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE

    Again DEFINE this operator, and the words you use to define it.


    The semantic meaning of B is necessitated by the semantic meaning of A.
    If I have a dog then I have an animal because a dog is an animal.
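
    A toy sketch of one possible reading, assuming meanings are modeled as
    sets of semantic features (the dictionary and function names are
    illustrative only):

        # Toy sketch (illustrative names, not olcott's definitions): model the
        # "meaning" of a term as a set of semantic features and read A ⊨□ B as
        # "every feature of B is contained in the features of A".
        meaning = {
            "dog":    {"animal", "mammal", "canine"},
            "animal": {"animal"},
        }

        def semantically_necessitates(a, b):
            return meaning[b] <= meaning[a]

        print(semantically_necessitates("dog", "animal"))   # True
        print(semantically_necessitates("animal", "dog"))   # False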


    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Yes lets apply my ideas to FOL. I have already sketched out many
    details.

    Go ahead, try to fully define your ideas.

    Remember, until you get to supporting the Higher Order Logics, you can't
    get to the incompleteness, as that has been only established for systems

    I have always been talking about HOL in terms of MTT

    with second order logic, which is also needed for the needed properties
    of the whole numbers. First Order Peano Arithmetic might be complete,
    but can't be proved (within itself) to be consistent. Second Order
    Peano Arithmetic (which adds the principle of Induction) IS incomplete
    as it supports enough of the natural numbers to support Godel's proof.


    Considering your current status, I would start working hard on that
    right away, as with your current reputation, once you go, NO ONE is
    going to want to look at your ideas, because you have done such a
    good job showing that you don't understand how things work.

    I haven't been able to get out of you exactly what you want to do
    with your "Correct Reasoning", and until you show a heart to actually
    try to do something constructive with it, and not just use it as an
    excuse for bad logic, I don't care what it might be able to do,
    because, frankly, I don't think you have the intellect to come up
    with something like that.


    I showed how the POE is easily abolished.

    Nope.


    If axioms stipulate that explosion cannot occur then it cannot occur.

    I showed how Provable(L,x) and True(L,x) are defined.

    Note clearly. For instance, a statement x is provable or True in a
    SYSTEM/THEORY (depending on your terminology) and NOT dependent on some
    other statement in the system, as your definition seemed to imply. You
    don't "Prove" something based on a statement, but in a System/Theory.


    F ⊢ A is used to express (in the meta-level) that A
    is derivable in F, that is, that there is a proof of
    A in F, or, in other words, that A is a theorem of F. https://plato.stanford.edu/entries/goedel-incompleteness/
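
    A rough sketch of that reading, assuming a toy system whose inference
    rules are listed explicitly as (premises, conclusion) pairs (derivable is
    an illustrative name, not the SEP's notation):

        # Rough sketch (illustrative only) of "F ⊢ A" read as: there exists a
        # finite derivation of A from the axioms of F under F's inference rules.
        # Rules here are listed explicitly as (premises, conclusion) pairs.
        def derivable(axioms, rules, target, max_rounds=10):
            known = set(axioms)
            for _ in range(max_rounds):          # bounded, finite search
                new = {c for premises, c in rules
                       if all(p in known for p in premises)}
                if not (new - known):
                    break
                known |= new
            return target in known

        axioms = {"A"}
        rules = {(("A",), "B")}
        print(derivable(axioms, rules, "B"))     # True
        print(derivable(axioms, rules, "C"))     # False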


    But go ahead and prove me wrong, write an actual paper on the basics
    of your "Correct Reasoning" and show how it actually works, and
    compare it to "Classical Logic" and show what is different. Then
    maybe you can start to work on showing it can actually do something
    useful.


    I need a dialogue to vet aspects of my ideas.
    The key thing that I have not yet filled in is how to specify the
    semantics of every FOL expression.

    This semantics seems fully specified:
    ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊨□ (n+1 > m))

    Nope, you need to actually FULLY DEFINE what you mean by your symbols;
    you can't just rely on referring to classical meaning since you clearly
    disagree with some of the classical meanings.

    In the above case I can switch to conventional symbols without losing
    anything: ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊢ (n+1 > m))

    Implication does a poor job of if-then
    p---q---(p ⇒ q)---(if p then q)
    T---T------T------------T
    T---F------F------------F
    F---T------T------------undefined
    F---F------T------------undefined
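
    For comparison, a small sketch that computes the classical table for
    p ⇒ q as (not p) or q:

        # For comparison (illustrative sketch): the classical table for p ⇒ q,
        # computed as (not p) or q.
        for p in (True, False):
            for q in (True, False):
                print(p, q, (not p) or q)
        # T T True / T F False / F T True / F F True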

    You seem to have some disjoint ideas, but seem to be unable to come up
    with a cohesive whole. You use words that you don't seem to be able to actually fully define.

    Since you are trying to reject some of the basics of classical logic,
    you need to FULLY define how your logic works. Name ALL the basic
    operations that you allow. Do you allow "Not", "Or", "And", "Equals",
    etc.? What are your rules for logical inference? How do you ACTUALLY
    prove a statement given a set of "Truthmakers"?


    All of the details that I provided are all of the details that I know
    right now.

    Remember, if you want to reject classical logic, you can't use it to
    define your system.

    I can take it as a basis and add and subtract things from it.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 23:17:11 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 10:40 AM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making it >>>>>>>>>> erroneous.


    Since you don't understand the meaning of self-contradictory, >>>>>>>>> that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps in F >>>>>>>> that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is true >>>>>>> but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true.



    So, you don't understand how to prove that something is "True in F"
    by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox expressed
    in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true, then
    the Liar's paradox would be true, thus that assumption can not be true.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.

    You are just showing that your mind can't handle the basics of logic, or

    Proving that G is true in F requires a sequence of inference steps that
    prove that they themselves don't exist.

    You might be bright enough to understand that is self-contradictory.


    Except that G is proved in Meta-F to be "True in F".

    When you assume that infinite proofs are not a thing, and
    when you understand rather than ignore that a proof of G in F requires a
    sequence of inference steps in F that prove that they themselves don't
    exist, then and only then is it possible to understand that a proof of G
    in F cannot be done only because it is self-contradictory.

    Ignoring this doesn't make it go away. Assuming that it is not needed
    requires another way of proving in F that G cannot be proved in F.
    An infinite proof is always impossible, so that way is out.


    With a finite number of steps in Meta-F, we can prove that the infinite number of steps in F exist and are true.


    That is cheating. The purpose here is to see WHY rather than merely THAT
    G is unprovable in F.

    In particular, in F, we need to check every number individually to see if
    it satisfies the relationship, and we have no shortcut to make this
    operation finite,

    Then it doesn't count for jack shit. You might as well resort to a magic
    fairy waving a magic wand.

    so we can't prove it. But in Meta-F, we know something
    about the relationship, and are able to prove that no number can satisfy
    the relationship, and do so in a finite number of steps.


    When we write G in Meta-F to begin with, then Meta-F can recognize the
    contradiction and report the non-sequitur error.

    Thus, we can prove in Meta-F that G must be true in F.

    The sequence of steps in F is infinite, so not a proof in F.

    In fact, in Meta-F we are also able to prove that there CAN'T be a
    finite sequence set of steps that prove G true in F.

    Thus, with logic in Meta-F, we can prove that, G is True in F and can
    not be proven in F.

    You just don't seem to understand how Meta-Logic works. And, it turns
    out, meta-logic is a very important tool for proving things, so
    this is one of your Kryptonites.


    Actually I understand how Meta-F works better than most. We need no F
    and Meta-F; we write G in Meta-F to begin with and it rejects G as
    semantically erroneous.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 25 07:56:22 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/24/23 11:28 PM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 12:13 PM, olcott wrote:
    On 4/24/2023 10:58 AM, olcott wrote:
    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making >>>>>>>>>>>>>>>> it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps >>>>>>>>>>>>>> in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement >>>>>>>>>>>>> is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>> This sentence is not true: "This sentence is not true" is true. >>>>>>>>>>>>


    So, you don't understand how to prove that something is "True >>>>>>>>>>> in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was
    true, then the Liar's paradox would be true, thus that
    assumption can not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>>
    Your


    We can do the same thing when G asserts its own unprovability >>>>>>>>>> in F.
    G cannot be proved in F because this requires a sequence of >>>>>>>>>> inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>>
    Right, you can't prove, in F, that G is true, but you can
    prove, in Meta-F, that G is true in F, and that G is unprovable >>>>>>>>> in F, which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F >>>>>>>> because this requires a sequence of inference steps in F that
    prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way
    Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of >>>>>>>>> logic, or truth.


    It may seem that way to someone that learns things by rote and >>>>>>>> mistakes
    this for actual understanding of exactly how all of the elements >>>>>>>> of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have >>>>>>>>> intentionaally hamstrung yourself to avoid being "polluted" by >>>>>>>>> "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just
    condemned yourself into being a pathological liar because you >>>>>>>>> just don't any better.


    I do at this point need to understand model theory very thoroughly. >>>>>>>>
    Learning the details of these things could have boxed me into a >>>>>>>> corner
    prior to my philosophical investigation of seeing how the key
    elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same pathological
    self-reference(Olcott 2004) reason that the Liar Paradox
    cannot be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand
    classic argument forms.


    It is not that I do not understand, it is that I can directly
    see where and how formal mathematical systems diverge from
    correct reasoning.

    But since you are discussing Formal Logic, you need to use the
    rules of Formal logic.


    I have never been talking about formal logic. I have always been
    talking
    about the philosophical foundations of correct reasoning.

    No, you have been talking about theories DEEP in formal logic. You
    can't talk about the "errors" in those theories without being in
    formal logic.

    IF you think you can somehow talk about the foundations, while
    working in the penthouse, you have just confirmed that you do not
    understand how ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning" diverges >>>>>>> from the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are
    incorrect in the absolute sense of the word: "incorrect".


    IF you reject the logic that a theory is based on, you need to
    reject the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because you
    don't understand how to work logic.


    Because you are a learned-by-rote person you make sure to never examine
    whether or not any aspect of math diverges from correct reasoning, you
    simply assume that math is the gospel even when it contradicts itself.

    Nope, I know that with logic, if you follow the rules, you will
    get the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that
    these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for
    most of the field.

    You are just PROVING that you have no idea how to actually discuss
    a new foundation for logic, likely because you are incapable of
    actually coming up with a consistent basis for working logic.


    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replacement logic system, you need to start at the
    BEGINNING, and see what it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining the
    semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable.

    Meaningless gobbledygook until you actually define what you mean
    and spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process
    concept. You first need to show that your logic can actually do
    something useful.


    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of >>>>>>> logic.

    Since it is clear that you want to change some of the basics of
    how logic works, you are not allowed to just use ANY of classical >>>>>>> logic until you actually show what part of it is still usable
    under your system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid inference. >>>>>
    And, are you using the "classical" definition of "semantic" (which
    makes this sentence somewhat circular) or do you mean something
    based on the concept you sometimes use of "the meaning of the words".

    *Principle of explosion*
    An alternate argument for the principle stems from model theory. A
    sentence P is a semantic consequence of a set of sentences Γ only if
    every model of Γ is a model of P. However, there is no model of the
    contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
    that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
    model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
    https://en.wikipedia.org/wiki/Principle_of_explosion
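
    A quick way to see the vacuous step in the quoted argument is to enumerate
    truth assignments: nothing satisfies (P ∧ ¬P), so "every model of (P ∧ ¬P)
    is a model of Q" holds with an empty set of witnesses. A minimal Python
    sketch of just that check (the names are illustrative, not from either post):

        from itertools import product

        # Enumerate all truth assignments of P and Q, keep the "models"
        # of the contradictory premise (P and not P); the list is empty.
        models_of_contradiction = [
            (p, q) for p, q in product([True, False], repeat=2) if p and not p
        ]
        print(models_of_contradiction)                       # []
        # "Every model of the contradiction is a model of Q" is then
        # vacuously true, exactly as the quoted passage says.
        print(all(q for (p, q) in models_of_contradiction))  # True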

    Vacuous truth does not count as truth.
    All variables must be quantified

    "all cell phones in the room are turned off" will be true when no
    cell phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))
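
    The contrast being drawn here is easy to exhibit with Python's all/any
    over an empty collection; the toy names below just mirror the formula
    above and are not part of either poster's system:

        # With no cell phones in the room, the universal reading is
        # vacuously True while the existential reading is False.
        cell_phones_in_room = []          # nobody brought a phone

        universal = all(phone_is_off for phone_is_off in cell_phones_in_room)
        existential = any(phone_is_off for phone_is_off in cell_phones_in_room)

        print(universal)    # True  -- nothing to check, so it "holds"
        print(existential)  # False -- no phone exists that is in the room and off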



    The semantic consequence must be specified syntactically so that
    it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are known to be
    true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.

    So, what is the difference in your system from classical Formal Logic? >>>>>

    Semantic Necessity operator: ⊨□

        FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*
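
    One way to read the two definitions above concretely, assuming ⊨□ is
    handed to us as an explicit finite relation (a toy Python sketch of the
    stated idea only, not the poster's actual system):

        # Toy model: the semantic-necessity relation is a finite set of
        # (premise, conclusion) pairs, and truth/provability are just
        # reachability from the axioms / from the given premises.
        CONSEQUENCE = {("A", "B"), ("B", "C")}   # A ⊨□ B, B ⊨□ C
        AXIOMS = {"A"}

        def closure(premises):
            """Close a set of sentences under the stipulated ⊨□ relation."""
            closed, changed = set(premises), True
            while changed:
                changed = False
                for p, c in CONSEQUENCE:
                    if p in closed and c not in closed:
                        closed.add(c)
                        changed = True
            return closed

        def true_in_L(x):            # x follows from the axioms of L
            return x in closure(AXIOMS)

        def provable(premises, x):   # x follows from the given premises
            return x in closure(premises)

        print(true_in_L("C"))          # True:  A ⊨□ B ⊨□ C
        print(provable({"B"}, "C"))    # True
        print(true_in_L("D"))          # False: unreachable from the axioms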


    The most important aspect of the tiny little foundation of a formal >>>>>> system that I already specified immediately above is self-evident: >>>>>> True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near established far enough for
    you to say that.

    Try and show exceptions to this rule and I will fill in any gaps that
    you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a
    sequence of inference steps in F that proves no such sequence
    of inference steps exists in F.


    ∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢
    ∄sequence_of_inference_steps ⊆ F)



    So, you don't understand the difference between the INFINITE set of
    sequence steps that show that G is True, and the FINITE number of
    steps that need to be shown to make G provable.


    The experts seem to believe that unless a proof can be transformed into
    a finite sequence of steps it is no actual proof at all. Try and cite a source that says otherwise.

    Why? Because I agree with that. A Proof needs to be done in a finite
    number of steps.

    The question is why the infinite number of steps in F that makes G true
    don't count for making it true.

    Yes, you can't write that out to KNOW it to be true, but that is the
    difference between knowledge and fact.


    We can imagine an Oracle machine that can complete these proofs in the
    same sort of way that we can imagine a magic fairy that waves a magic
    wand.

    You are just showing you don't understand what you're talking about and
    just spouting word (or symbol) salad.

    You are proving you are an IDIOT.

    I am seeing these things at a deeper philosophical level than you are. I
    know that is hard to believe.

    But not according to the rules of the system you are talking about.

    You don't get to change the rules on a system.


    You are so sure that I must be wrong that you don't bother to understand
    what I am saying.

    No, I understand what you are saying and see where you are WRONG.


    It seems that the time has come for me to spend the little time that it
    takes to understand the technical details of Gödel's proof.

    I am estimating that I have a very good understanding of the preface to the
    proof and the SEP article should provide this.

    https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Tue Apr 25 23:38:57 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/24/23 11:28 PM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 12:13 PM, olcott wrote:
    On 4/24/2023 10:58 AM, olcott wrote:
    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement >>>>>>>>>>>>>> is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>> This sentence is not true: "This sentence is not true" is >>>>>>>>>>>>> true.



    So, you don't understand how to prove that something is >>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>> expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>> true, then the Liar's paradox would be true, thus that
    assumption can not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>>>
    Your


    We can do the same thing when G asserts its own unprovability >>>>>>>>>>> in F.
    G cannot be proved in F because this requires a sequence of >>>>>>>>>>> inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>>>
    Right, you can't prove, in F, that G is true, but you can
    prove, in Meta-F, that G is true in F, and that G is
    unprovable in F, which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F >>>>>>>>> because this requires a sequence of inference steps in F that >>>>>>>>> prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way
    Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of >>>>>>>>>> logic, or truth.


    It may seem that way to someone that learns things by rote and >>>>>>>>> mistakes
    this for actual understanding of exactly how all of the
    elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have >>>>>>>>>> intentionaally hamstrung yourself to avoid being "polluted" by >>>>>>>>>> "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just
    condemned yourself into being a pathological liar because you >>>>>>>>>> just don't any better.


    I do at this point need to understand model theory very
    thoroughly.

    Learning the details of these things could have boxed me into a >>>>>>>>> corner
    prior to my philosophical investigation of seeing how the key >>>>>>>>> elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of >>>>>>>>> semantic
    tautologies. It is true that formal systems grounded in this >>>>>>>>> foundation
    cannot be incomplete nor have any expressions of language that are >>>>>>>>> undecidable. Now that I have this foundation I have a way to >>>>>>>>> see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G >>>>>>>>>>> cannot be
    proved in F. G cannot be proved in F for the same
    pathological self-reference(Olcott 2004) reason that the Liar >>>>>>>>>>> Paradox cannot be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand
    claissic arguement forms.


    It is not that I do not understand, it is that I can directly >>>>>>>>> see where and how formal mathematical systems diverge from
    correct reasoning.

    But since you are discussing Formal Logic, you need to use the >>>>>>>> rules of Formal logic.


    I have never been talking about formal logic. I have always been >>>>>>> talking
    about the philosophical foundations of correct reasoning.

    No, you have been talking about theorys DEEP in formal logic. You
    can't talk about the "errors" in those theories, with being in
    formal logic.

    IF you think you can somehow talk about the foundations, while
    working in the penthouse, you have just confirmed that you do not
    understand how ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning"
    diverges from the accepted and proven system of Formal Logic.


    It is correct reasoning in the absolute sense that I refer to.
    If anyone has the opinion that arithmetic does not exist they are >>>>>>> incorrect in the absolute sense of the word: "incorrect".


    IF you reject the logic that a theory is based on, you need to
    reject the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because you
    don'tunderstnad how to work ligic.


    Because you are a learned-by-rote person you make sure to never >>>>>>>>> examine
    whether or not any aspect of math diverges from correct
    reasoning, you
    simply assume that math is the gospel even when it contradicts >>>>>>>>> itself.

    Nope, I know that with logic, if you follow the rules, you will >>>>>>>> get the correct answer by the rules.

    If you break the rules, you have no idea where you will go.


    In other words you never ever spend any time on making sure that >>>>>>> these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for
    most of the field.

    You are just PROVING that you have no idea how to actually discuss >>>>>> a new foundation for logic, likely because you are incapable of
    actually comeing up with a consistent basis for working logic.


    As I have told you before, if you want to see what your "Correct >>>>>>>> > Reasoning" can do as a replaceent logic system, you need to
    start at the
    BEGINNING, and see wht it gets.


    The foundation of correct reasoning is that the entire body of
    analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires determining >>>>>>> the
    semantic consequence of expressions of language. This semantic
    consequence can be specified syntactically, and indeed must be
    represented syntactically to be computable
    Meaningless gobbledy-good until you actually define what you mean
    and spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process
    concept. You first need to show that you logic can actually do
    something useful


    To just try to change things at the end is just PROOF that your >>>>>>>> "Correct Reasoning" has to not be based on any real principles >>>>>>>> of logic.

    Since it is clear that you want to change some of the basics of >>>>>>>> how logic works, you are not allowed to just use ANY of
    classical logic until you actually show what part of it is still >>>>>>>> usable under your system and what changes happen.


    Whenever an expression of language is derived as the semantic
    consequence of other expressions of language we have valid
    inference.

    And, are you using the "classical" definition of "semantic" (which >>>>>> makes this sentence somewhat cirular) or do you mean something
    based on the concept you sometimes use of  "the meaning of the
    words".


    *Principle of explosion*
    An alternate argument for the principle stems from model theory. A
    sentence P is a semantic consequence of a set of sentences Γ only if
    every model of Γ is a model of P. However, there is no model of the
    contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
    that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
    model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
    https://en.wikipedia.org/wiki/Principle_of_explosion

    Vacuous truth does not count as truth.
    All variables must be quantified

    "all cell phones in the room are turned off" will be true when no
    cell phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp) ∧ turned_off(cp))



    The semantic consequence must be specified syntactically so that >>>>>>> it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are
    known to be
    true, and the reasoning valid (a semantic consequence) then the
    conclusion is necessarily true.

    So, what is the difference in your system from classical Formal
    Logic?


    Semantic Necessity operator: ⊨□

        FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*


    The most important aspect of the tiny little foundation of a formal >>>>>>> system that I already specified immediately above is self-evident: >>>>>>> True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near establish far enough
    for you to say that.

    Try and show exceptions to this rule and I will fill in any gaps that >>>>> you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a
    sequence of inference steps in F that proves no such sequence
    of inference steps exists in F.


    ∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢
    ∄sequence_of_inference_steps ⊆ F)



    So, you don't understand the difference between the INFINITE set of
    sequence steps that show that G is True, and the FINITE number of
    steps that need to be shown to make G provable.


    The experts seem to believe that unless a proof can be transformed into
    a finite sequence of steps it is no actual proof at all. Try and cite a
    source that says otherwise.

    Why? Because I agree with that. A Proof needs to be done in a finite
    number of steps.

    The question is why the infinite number of steps in F that makes G true
    don't count for making it true.

    Yes, you can't write that out to KNOW it to be true, but that is the difference between knowledge and fact.


    Infinite proofs are not allowed because they can't possibly ever occur.


    We can imagine an Oracle machine that can complete these proofs in the
    same sort of way that we can imagine a magic fairy that waves a magic
    wand.

    You are just showing you don't understand what you're talking about and
    just spouting word (or symbol) salad.

    You are proving you are an IDIOT.

    I am seeing these things at a deeper philosophical level than you are.
    I know that is hard to believe.

    But not according to the rules of the system you are talking about.

    You don't get to change the rules on a system.


    YES I DO !!!
    My whole purpose is to provide the *correct reasoning* foundation such that
    formal systems can be defined without undecidability, undefinability,
    or inconsistency.


    You are so sure that I must be wrong that you don't bother to understand
    what I am saying.

    No, I understand what you are saying and see where you are WRONG.


    Yet you will continue to dodge this because you are only bluffing.
    You cannot even begin to show that I am wrong.


    It seems that the time has come for me to spend the little time that it
    takes to understand the technical details of Gödel's proof.

    I am estimating that I have a very good understanding of the preface to
    the proof and the SEP article should provide this.

    https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 26 01:07:47 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making >>>>>>>>>>>>>> it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps >>>>>>>>>>>> in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is >>>>>>>>>>> true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true. >>>>>>>>>>


    So, you don't understand how to prove that something is "True >>>>>>>>> in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true, >>>>>>> then the Liar's paradox would be true, thus that assumption can
    not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F. >>>>>>>> G cannot be proved in F because this requires a sequence of
    inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, >>>>>>> in Meta-F, that G is true in F, and that G is unprovable in F,
    which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F >>>>>> because this requires a sequence of inference steps in F that
    prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's >>>>>> Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of
    logic, or truth.


    It may seem that way to someone that learns things by rote and
    mistakes
    this for actual understanding of exactly how all of the elements of a >>>>>> proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionaally hamstrung yourself to avoid being "polluted" by
    "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just
    condemned yourself into being a pathological liar because you
    just don't any better.


    I do at this point need to understand model theory very thoroughly. >>>>>>
    Learning the details of these things could have boxed me into a
    corner
    prior to my philosophical investigation of seeing how the key
    elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of
    semantic
    tautologies. It is true that formal systems grounded in this
    foundation
    cannot be incomplete nor have any expressions of language that are >>>>>> undecidable. Now that I have this foundation I have a way to see
    exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G
    cannot be
    proved in F. G cannot be proved in F for the same pathological >>>>>>>> self-reference(Olcott 2004) reason that the Liar Paradox cannot >>>>>>>> be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand claissic >>>>>>> arguement forms.


    It is not that I do not understand, it is that I can directly see
    where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the
    rules of Formal logic.

    The other way to say it is that your "Correct Reasoning" diverges
    from the accepted and proven system of Formal Logic.



    In classical logic, intuitionistic logic and similar logical
    systems, the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, then out of that
    contradiction you can prove anything


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion works,
    which isn't surprising considering how many misconceptions you have
    about how logic works.


    ex falso [sequitur] quodlibet,'from falsehood, anything [follows]'
    ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

    It is jackass nonsense like this that proves the
    principle of explosion is nothing but a kludge


    False -> Donald Trump is the Christ



    Semantic Necessity operator: ⊨□
    FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    From False only False follows.
    From Contradiction only False follows.

    Is the statement that this is implying.


    I reject implication and replace it with this
    Semantic Necessity operator: ⊨□

    Or this Archimedes Plutonium's:
    If--> then
    T --> T = T
    T --> F = F
    F --> T = U (unknown or uncertain)
    F --> F = U (unknown or uncertain)
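
    For comparison, the quoted table can be encoded directly and run side by
    side with the classical two-valued connective; "U" below is just the
    table's unknown/uncertain value (a Python sketch, nothing more):

        # Classical material implication versus the quoted "If--> then"
        # table, which returns "U" whenever the antecedent is false.
        def material_implication(p, q):
            return (not p) or q

        def plutonium_if_then(p, q):
            if p:
                return q      # T-->T = T, T-->F = F
            return "U"        # F-->T = U, F-->F = U

        for p in (True, False):
            for q in (True, False):
                print(p, q, material_implication(p, q), plutonium_if_then(p, q))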

    You seem to have a confusion between the implication operator and the
    proves operator.


    I meant the stronger meaning. The best thing to use might be:
    Archimedes Plutonium's: If--> then (see above).

    The whole idea is to formalize the notion of correct reasoning and use
    this model to correct the issues with formal logic.


    Right now, I would say you are too ignorant of the basics of logic to
    be able to explain, even in basic terms, how it works; you have shown
    yourself to be that stupid.



    *Correction abolishing the POE nonsense*
    Semantic Necessity operator: ⊨□
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE


    So, FULLY define what you mean by that.


    The two logic symbols already say semantic necessity, model theory may
    have screwed up the idea of semantics by allowing vacuous truth.
    I must become a master expert of at least basic model theory.

    So, you don't understand what it means to DEFINE something.

    I guess your theory is dead then.


    Vacuous truth is eliminated by requiring every variable to be quantified.

    By your examples, your logical necessity operator can only establish a falsehood. Seems about right for the arguments you have been making.


    Try and explain what you mean by that.

    The big picture of what I am doing is defining the foundation of the formalization of correct reasoning.

    I am doing this on the basis of existing systems, then adding, removing
    or changing things as needed to conform the system to correct reasoning.




    Because you are a learned-by-rote person you make sure to never
    examine
    whether or not any aspect of math diverges from correct reasoning, >>>>>> you
    simply assume that math is the gospel even when it contradicts
    itself.

    Nope, I know that with logic, if you follow the rules, you will get
    the correct answer by the rules.


    Then you must agree that Trump is the Christ and Trump is Satan; both of

    If you break the rules, you have no idea where you will go.

    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replaceent logic system, you need to start
    at the BEGINNING, and see wht it gets.


    I would be happy to talk this through with you.

    The beginning is that

    valid inference: an expression X of language L must be a semantic
    consequence of its premises in L


    And what do you mean by "semantic"


    What does meaning mean?
    The premise that
    the Moon is made from green cheese ⊨□ The Moon is made from cheese.

    All of the conventional logic symbols retain their original meaning.
    Variables are quantified and of a specific type.
    Meaning postulates can axiomatise meaning.

    So, you have no "Formal Logic" since you are allowing the addition of
    new "axioms" based on "meaning" (which you admit you can't define).


    When I am redefining current systems so that they conform to correct
    reasoning I make minimal changes to existing notions.


    The connection between elements of the proof must be at least as good as
    relevance logic.

    So, your logic system is WEAKER than standard logic. Have you gone back
    to the formal proofs that establish fields like Computability Theory and
    seen what still remains after the requirement of relevance logic?


    No it is not and you cannot show that it is.



    because, conventional logic defines semantic consequence as the
    conclusion must be true if the premise is true.

    You seem to mean something different, but haven't explained what you
    mean by that.


    You never heard of ordinary sound deductive inference?

    Yes, I have, but you aren't using it. For instance, you allow a logical conclusion to be made from a false premise.

    False does Derive False. Please try to back up all of your assertions
    with reasoning. For statements like the one above you need a
    time-stamped quote of exactly what I said.


    You seem to want to remove
    parts of the logic, but can't actually define what you mean.



    I have defined many key aspects many times: True/False/Non Sequitur
    abolishes incompleteness and undefinability while maintaining consistency.



    sound inference: expression X of language L must be a semantic
    consequence of the axioms of L.

    For formal systems such as FOL the semantics is mostly the meaning
    of the logic symbols.

    These two logic symbols are abolished ⇒ → and replaced with this:
    Semantic Necessity operator: ⊨□

    Why do you need to abolish those symbols?

    They seem to lead to the principle of explosion.

    No, they are often used in the proof, but the explosion comes from the
    mere ability to assert simple logic.

    Allowing the following sort of logic is enough:

    IT is True that A
    Therefore it is True that A | B

    and

    It is True that A | B
    It is False that A
    Therefore B must be True.

    You can build the principle of explosion from simple logic like that, so unless you eliminate the "and" and the "or" predicate, you get the
    principle of explosion.


    Show me how and I will point out how it is fixed.



    You do understand that the statement that A -> B is equivalent to the
    assertion that (~A | B) is ALWAYS TRUE (which might be part of your
    problem, as you don't seem to understand the categorical meaning of
    ALL and NO), so either you need to outlaw the negation operator, or
    the or operator to do this.
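
    The equivalence being asserted here is a four-row truth-table check; a
    small Python sketch, with implication defined by its table rather than
    by ~A | B so the comparison is not circular:

        # A -> B, defined case by case, agrees with (not A) or B on
        # every one of the four truth assignments.
        IMPLIES = {(True, True): True,  (True, False): False,
                   (False, True): True, (False, False): True}

        assert all(IMPLIES[(a, b)] == ((not a) or b)
                   for a in (True, False)
                   for b in (True, False))
        print("A -> B and (~A | B) agree on every assignment")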

    Again, what does "Semantic Necessity" operator mean?

    A ⊨□ B the meaning of B is an aspect of the meaning of A.


    So, you seem to be saying that you will not be able to prove the
    Pythagorean theorem, since the conclusion doesn't have a "meaning" that
    is an aspect of the "meaning" of the conditions.


    It has plenty of geometric meaning.



    Note, one issue with your use of symbols: so many of the symbols can
    have slightly different meanings based on the context and system you
    are working in.


    I don't see this; can you provide examples?
    I am stipulating standard meanings.

    WHICH standard meaning.


    All of the logic symbols have their standard meaning.

    That is your problem, you don't seem to know enough to understand that
    there are shades of meaning in things.




    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of
    logic.


    No, logic must be based on correct reasoning; any logic that proves
    Donald Trump is the Christ is incorrect reasoning, thus the POE is
    abolished.

    You CAN'T abolish the Principle of Explosion unless you greatly
    restrict the power of your logic.


    My two axioms abolish it neatly. All that I am getting rid of is
    incompleteness and undecidability and I am gaining a universal True(L,x)
    predicate.

    Nope, you don't understand how the Principle of Explosion works.


    These are stipulated
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE

    No AXIOMS can affect it,


    That sounds ridiculous to me. Can you show what you mean?

    as it comes out of a couple of simple logical
    rules.



    These two logic symbols are abolished ⇒ → and replaced with this:
    Semantic Necessity operator: ⊨□

    Explosions have been abolished

    Nope.

    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE

    Again DEFINE this operator, and the words you use to define it.


    The semantic meaning of B is necessitated by the semantic meaning of A.
    If I have a dog then I have an animal because a dog is an animal.
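
    Read narrowly, the dog/animal example is just containment between
    categories; a toy Python sketch of that reading (the sets are
    illustrative, nothing more):

        # "Having a dog necessitates having an animal" modelled as the
        # category of dogs being contained in the category of animals.
        ANIMALS = {"dog", "cat", "horse"}
        DOGS = {"dog"}

        def necessitates(antecedent, consequent):
            return antecedent <= consequent    # subset test

        print(necessitates(DOGS, ANIMALS))   # True: a dog is an animal
        print(necessitates(ANIMALS, DOGS))   # False: not every animal is a dog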

    So you seem to be limited to categorical logic only. As I have pointed
    out, this means you can't prove the Pythagorean theorem, since the
    conclusion isn't "semantically" related to the premises.


    I am simply using that as a concrete starting point to show one example
    of how it works.



    Since it is clear that you want to change some of the basics of how
    logic works, you are not allowed to just use ANY of classical logic
    until you actually show what part of it is still usable under your
    system and what changes happen.


    Yes, let's apply my ideas to FOL. I have already sketched out many
    details.

    Go ahead, try to fully define your ideas.

    Remember, until you get to supporting the Higher Order Logics, you
    can't get to the incompleteness, as that has been only established
    for systems

    I have always been talking about HOL in terms of MTT

    Which doesn't work.

    MTT does work. The earlier version translated even very complex logic expressions into the equivalent directed graph.

    I think that the current version only does a parse tree.



    with second order logic, which is also needed for the required
    properties of the whole numbers. First Order Peano Arithmetic might
    be complete, but can't be proved (within itself) to be consistent.
    Second Order Peano Arithmetic (which adds the principle of
    Induction) IS incomplete as it supports enough of the natural numbers
    to support Godel's proof.


    Considering your current status, I would start working hard on that
    right away, as with your current reputation, once you go, NO ONE is
    going to want to look at your ideas, because you have done such a
    good job showing that you don't understand how things work.

    I haven't been able to get out of you exactly what you want to do
    with your "Correct Reasoning", and until you show a heart to
    actually try to do something constructive with it, and not just use
    it as an excuse for bad logic, I don't care what it might be able
    to do, because, frankly, I don't think you have the intellect to
    come up with something like that.


    I showed how the POE is easily abolished.

    Nope.


    If axioms stipulate that explosion cannot occur then it cannot occur.

    Nope. Such an axiom just makes your system inconsistent and exploded.



    Remember, you never NEED to use a given axiom, so adding an axiom can't
    keep you from showing something.


    I showed how Provable(L,x) and True(L,x) are defined.

    Note clearly. For instance, a statement x is provable or True in a
    SYSTEM/THEORY (depending on your terminology) and NOT dependent on
    some other statement in the system, as your definition seemed to
    imply. You don't "Prove" something based on a statement, but in a
    System/Theory.


    F ⊢ A is used to express (in the meta-level) that A
    is derivable in F, that is, that there is a proof of
    A in F, or, in other words, that A is a theorem of F.
    https://plato.stanford.edu/entries/goedel-incompleteness/

    Right, but you have shown examples where your "L" above was a STATEMENT,
    not a FIELD/THEORY.

    You also don't understand that the difference between Provable and True
    is that Provable requires a finite series of steps, but True can be
    satisfied by an infinite series of steps.



    But go ahead and prove me wrong, write an actual paper on the
    basics of your "Correct Reasoning" and show how it actually works,
    and compare it to "Classical Logic" and show what is different.
    Then maybe you can start to work on showing it can actually do
    something useful.


    I need a dialogue to vet aspects of my ideas.
    The key thing that I have not yet filled in is how to specify the
    semantics of every FOL expression.

    This semantics seems fully specified:
    ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊨□ (n+1 > m))
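
    A finite spot-check of the quantified statement (not a proof, just a
    concrete Python reading of it over a small initial segment of ℕ):

        # For every n, m in the checked range, n > m entails n + 1 > m.
        assert all(n + 1 > m
                   for n in range(200)
                   for m in range(200)
                   if n > m)
        print("(n > m) entails (n+1 > m) for all checked n, m")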

    Nope, you need to actually FULLY DEFINE what you mean by your
    symbols, you can't just rely on referring to classical meaning since
    you clearly disagree with some of the classical meanings.

    in the above case I can switch to conventional symbols without losing
    anything ∀n ∈ ℕ ∀m ∈ ℕ ((n > m) ⊢ (n+1 > m))

    Except that is a domain error, as the ⊢ operator needs a field/theory as its left operand.


    Implication does a poor job of if-then
    p---q---(p ⇒ q)---(if p then q)
    T---T------T------------T
    T---F------F------------F
    F---T------T------------undefined
    F---F------T------------undefined

    Not undefined at all.

    From falsity, anything follows.

    The statement "If false then B" makes no assertion at all for this case. Think of your programming languages, a false condition in an if
    statement ignores the conditional statements after it.


    You seem to have some disjoint ideas, but seem to be unable to come
    up with a cohesive whole. You use words that you don't seem to be
    able to actually fully define.

    Since you are trying to reject some of the basics of classical logic,
    you need to FULLY define how your logic works. Name ALL the basic
    operation that you allow. Do you allow "Not", "Or", "And", "Equals",
    etc. What are your rules for logical inference. How do you ACTUALLY
    prove a statement given a set of "Truthmakers".


    All of the details that I provided are all of the detail that I know
    right now.

    Which is your problem. You seem incapable of understanding how the
    changes you want to make affect the whole system, because you don't know
    it well enough,



    Remember, if you want to reject classical logic, you can't use it to
    define your system.

    I can take it as a basis and add and subtract things from it


    And if you change it at all, you need to go back and see what all the
    effects are. You can't assume the tree remains the same if you change
    its roots.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 26 08:07:35 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/26/23 2:07 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, making >>>>>>>>>>>>>>> it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference steps >>>>>>>>>>>>> in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement is >>>>>>>>>>>> true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski:
    This sentence is not true: "This sentence is not true" is true. >>>>>>>>>>>


    So, you don't understand how to prove that something is "True >>>>>>>>>> in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was true, >>>>>>>> then the Liar's paradox would be true, thus that assumption can >>>>>>>> not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>
    Your


    We can do the same thing when G asserts its own unprovability >>>>>>>>> in F.
    G cannot be proved in F because this requires a sequence of
    inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>
    Right, you can't prove, in F, that G is true, but you can prove, >>>>>>>> in Meta-F, that G is true in F, and that G is unprovable in F, >>>>>>>> which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F >>>>>>> because this requires a sequence of inference steps in F that
    prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way
    Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of >>>>>>>> logic, or truth.


    It may seem that way to someone that learns things by rote and
    mistakes
    this for actual understanding of exactly how all of the elements >>>>>>> of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionaally hamstrung yourself to avoid being "polluted" by >>>>>>>> "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just
    condemned yourself into being a pathological liar because you
    just don't any better.


    I do at this point need to understand model theory very thoroughly. >>>>>>>
    Learning the details of these things could have boxed me into a
    corner
    prior to my philosophical investigation of seeing how the key
    elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of
    semantic
    tautologies. It is true that formal systems grounded in this
    foundation
    cannot be incomplete nor have any expressions of language that are >>>>>>> undecidable. Now that I have this foundation I have a way to see >>>>>>> exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G >>>>>>>>> cannot be
    proved in F. G cannot be proved in F for the same pathological >>>>>>>>> self-reference(Olcott 2004) reason that the Liar Paradox cannot >>>>>>>>> be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand
    claissic arguement forms.


    It is not that I do not understand, it is that I can directly see >>>>>>> where and how formal mathematical systems diverge from correct
    reasoning.

    But since you are discussing Formal Logic, you need to use the
    rules of Formal logic.

    The other way to say it is that your "Correct Reasoning" diverges
    from the accepted and proven system of Formal Logic.



    In classical logic, intuitionistic logic and similar logical
    systems, the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, that out of that
    contradiction you can prove anything


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion works,
    which isn't surprising considering how many misconseptions you have
    about how logic works.


    ex falso [sequitur] quodlibet,'from falsehood, anything [follows]'
    ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

    That isn't what the statement actually means, so you are just stupid.



    It is jack ass nonsense like this that proves the
    principle of explosion is nothing even kludge

    Right, false doesn't PROVE anything, but implies anything,

    The difference is "Proves" takes a Field as its input, so "False" isn't
    a field.

    Implication takes a statement as its input, so can take false.

    Since False implies either true or false, a false statement can imply anything.




    False -> Donald Trump is the Christ



    Semantic Necessity operator: ⊨□
        FALSE ⊨□ FALSE // POE abolished

    So, what do you actually mean by that?

    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    From False only False follows.
    From Contradiction only False follows.

    Is the statement that this is implying.


    I reject implication and replace it with this
    Semantic Necessity operator: ⊨□

    Or this Archimedes Plutonium's:
    If--> then
    T --> T = T
    T --> F = F
    F --> T = U (unknown or uncertain)
    F --> F = U (unknown or uncertain)

    So, you don't understand how a truth table works or how logic works with
    them.

    If that WAS the truth table for implication, then no implication would
    be valid unless its premise was a tautology, something true in all the
    models of the system, so if its own truth is 'uncertain' for some cases,
    it can't be taken as true.

    Note, A -> B, with A false, just means that we are uncertain about the
    truth of B, as by the truth table, B could be either True or False.




    You seem to have a confusion between the implication operator and the
    proves operator.


    I meant the stronger meaning The best hing to use might be :
    Archimedes Plutonium's: If--> then (see above).

    So, you ARE confused about the meaning of the implication operator.


    The whole idea is to formalize the notion of correct reasoning and use
    this model to correct the issues with formal logic.


    And if you want to do that, you need to go to the beginning and start there.

    You can't change it later down the chain.


    Right now, I would say you are to ignorant on that basics of logic
    to be able to explain, even in basic terms, how it works, you have
    shown yourself to be that stupid.



    *Correction abolishing the POE nonsense*
    Semantic Necessity operator: ⊨□
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE


    So, FULLY define what you mean by that.


    The two logic symbols already say semantic necessity, model theory may
    have screwed up the idea of semantics by allowing vacuous truth.
    I must become a master expert of at least basic model theory.

    So, you don't understand what it means to DEFINE something.

    I guess your theory is dead them.


    Vacuous truth is eliminated by requiring every variable to be quantified.

    So, since you can't actually define any of your terms, your whole system
    is eliminated.


    By your examples, your logical necessisty operator can only establish
    a falsehood.  Seems about right for the arguments you have been making.


    Try and explain what you mean by that.

    You have only defined it for cases where the input is FALSE.


    The big picture of what I am doing is defining the foundation of the formalization of correct reasoning.

    So DEFINE it, not just talk about it.


    I am doing this on the basis of existing systems, then adding, removing
    or changing things as needed to conform the system to correct reasoning.

    Which doesn't work, as everything that you take might be (and likely has
    been) established by rules you reject.

    You have no idea whether what you actually use can be derived from your
    logic system, as you have never actually applied it to the base.





    Because you are a learned-by-rote person you make sure to never
    examine
    whether or not any aspect of math diverges from correct
    reasoning, you
    simply assume that math is the gospel even when it contradicts
    itself.

    Nope, I know that with logic, if you follow the rules, you will
    get the correct answer by the rules.


    Then you must agree that Trump is the Christ and Trump is Satan
    both of
    those were derived from correct logic.

    If you break the rules, you have no idea where you will go.

    As I have told you before, if you want to see what your "Correct
    Reasoning" can do as a replaceent logic system, you need to start
    at the BEGINNING, and see wht it gets.


    I would be happy to talk this through with you.

    The beginning is that

    valid inference an expression X of language L must be a semantic
    consequence of its premises in L


    And what do you mean by "semantic"


    What does meaning mean?
    The premise that
    the Moon if made from green cheese ⊨□ The Moon is made from cheese.

    All of the conventional logic symbols retain their original meaning.
    Variable are quantified and of a specific type.
    Meaning postulates can axiomatise meaning.

    So, you have no "Formal Logic" since you are allowing the addition of
    new "axioms" based on "meaning" (which you admit you can't define).


    When I am redefining current systems so that they conform to correct reasoning I make minimal changes to existing notions.

    But you claim to make major changes to their operation, so you have no idea
    what parts of the current system are still valid.

    You are just building your argument on a lie, the lie that you are
    working in a logic system built on your rules.



    The connection between elements of the proof must be at least as good as >>> relevance logic.

    So, your logic system is WEAKER than standard logic. Have you gone back
    to the formal proofs that establish fields like Computability Theory
    and see what still remains after the requirement of relevance logic?


    No it is not and you cannot show that it is.

    You are disallowing proofs that are currently allowed.

    That makes it weaker.




    because, conventional logic defines semantic consequence as the
    conclusion must be true if the premise is true.

    You seem to mean something diffent, but haven't explained what you
    mean by that.


    You never heard of ordinary sound deductive inference?

    Yes, I have, but you aren't using it. For instance, you allow a
    logical conclusion to be made from a false premise.

    False does Derive False, Please try to back up all of your assertions
    with reasoning. For statements like the one above you need a time
    stamped quote of exactly what I said.

    But you claim something to be TRUE based on a false premise.

    Maybe you don't even understand your own argument.



    You seem to want to remove parts of the logic, but can't actually
    define what you mean.



    I have defined many key aspects many times: True/False/Non Sequitur
    abolishes incompleteness and undefinability while maintaining consistency.

    Nope, you have scattered ideas, but have not actually established how to
    actually build and use your system.

    The likely issue is you just don't understand how logic works well
    enough to understand what you need to define.




    sound inference expression X of language L must be a semantic
    consequence of the axioms of L.

    For formal systems such as FOL the semantics is mostly the meaning
    of the logic symbols.

    These two logic symbols are abolished ⇒ → and replaced with this: >>>>> Semantic Necessity operator: ⊨□

    Why do you need to abolish shows symbols?

    They seem to lead to the principle of explosion.

    No, they are often used in the proof, but the mere ability to assert
    simple logic.

    Allowing the following sort of logic is enough:

    IT is True that A
    Therefore it is True that A | B

    and

    It is True that A | B
    It is False that A
    Therefore B must be True.

    You can build the principle of explosion from simple logic like that,
    so unless you eliminate the "and" and the "or" predicate, you get the
    principle of explosion.


    Show me how and I will point out how it is fixed.

    So, you can't see it from the above. If we can prove that statement A is
    both True and False, which is the meaning of proving a contradiction, we
    can use the above logic to prove B is true, no matter what it is.

    Having proved A, we can prove that A | B is true, and if A | B is true,
    and A is false, B must be True.

    Either you can't have compound terms using "and" or "or", or you can't
    allow contradictions.
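
    A minimal way to check that derivation mechanically (an illustrative
    Python sketch only; the helper below is made up for this post and is
    not part of anyone's system):

    from itertools import product

    def entails(premises, conclusion, atoms=("A", "B")):
        # Classical semantic consequence: every truth assignment that
        # satisfies all premises must also satisfy the conclusion.
        for values in product([True, False], repeat=len(atoms)):
            env = dict(zip(atoms, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False
        return True

    A      = lambda env: env["A"]
    not_A  = lambda env: not env["A"]
    B      = lambda env: env["B"]
    A_or_B = lambda env: env["A"] or env["B"]

    print(entails([A], A_or_B))         # True: disjunction introduction
    print(entails([A_or_B, not_A], B))  # True: disjunctive syllogism
    print(entails([A, not_A], B))       # True: explosion, since no
                                        # assignment satisfies both A and ~A

    The last line is the point of the argument above: once A and ~A are
    both available, the two ordinary rules already force B.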




You do understand that the statement A -> B is equivalent to asserting
that (~A | B), and that this equivalence ALWAYS holds (which might be
part of your problem, as you don't seem to understand the categorical
meaning of ALL and NO), so either you need to outlaw the negation
operator or the or operator to do this.
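
    The claimed equivalence itself can be checked by exhausting the four
    cases (a throwaway Python check, nothing more):

    for A in (True, False):
        for B in (True, False):
            a_implies_b = not (A and not B)  # A -> B: rules out A true, B false
            not_a_or_b  = (not A) or B       # ~A | B
            print(A, B, a_implies_b == not_a_or_b)  # True in all four rows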

    Again, what does "Semantic Necessity" operator mean?

A ⊨□ B: the meaning of B is an aspect of the meaning of A.


So, you seem to be saying that you will not be able to prove the
Pythagorean theorem, since the conclusion doesn't have a "meaning" that
is an aspect of the "meaning" of the conditions.


    It has plenty of geometric meaning.

    Nothing from the definitions of the terms. Yes, from what can be derived
    from it, but not from itself.




Note, one issue with your use of symbols: so many of the symbols can
have slightly different meanings based on the context and system
you are working in.


    I don't see this can you provide examples?
    I am stipulating standard meanings.

    WHICH standard meaning.


    All of the logic symbols have their standard meaning.

    WHICH standard meaning.


    That is your problem, you don't seem to know enough to understand that
    there are shades of meaning in things.




    To just try to change things at the end is just PROOF that your
    "Correct Reasoning" has to not be based on any real principles of
    logic.


No, logic must be based on correct reasoning; any logic that proves
Donald Trump is the Christ is incorrect reasoning, thus the POE is
abolished.

    You CAN'T abolish the Principle of Explosion unless you greatly
    restrict the power of your logic.


My two axioms abolish it neatly. All that I am getting rid of is
incompleteness and undecidability and I am gaining a universal
True(L,x) predicate.

    Nope, you don't understand how the Principle of Explosion works.


    These are stipulated
    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE

    SO you don't understand how the principle of Explosion works.


    No AXIOMS can affect it,


    That sounds ridiculous to me. Can you show what you mean?

A proof only uses the axioms it uses. Adding another axiom has no
effect on that proof.

So, trying to add as an axiom that POE doesn't exist doesn't affect
the proof that POE exists.

Thus, adding the axiom just makes the system inconsistent, and thus
POE is in full effect in the system.


    as it comes out of a couple of simple logical rules.



These two logic symbols ⇒ and → are abolished and replaced with this:
Semantic Necessity operator: ⊨□

    Explosions have been abolished

    Nope.

    FALSE ⊨□ FALSE
    (P ∧ ¬P) ⊨□ FALSE

    Again DEFINE this operator, and the words you use to define it.


    The semantic meaning of B is necessitated by the semantic meaning of A.
    If I have a dog then I have an animal because a dog is an animal.
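
    One concrete way to read that dog/animal example (a toy sketch only;
    the property sets below are invented for illustration): treat the
    meaning of a term as the set of properties it necessarily carries,
    and read A ⊨□ B as "B's properties are contained in A's".

    meaning = {
        "dog":    {"animal", "mammal", "living_thing"},
        "animal": {"animal", "living_thing"},
        "rock":   {"physical_object"},
    }

    def semantically_necessitates(a, b):
        # B is an "aspect of the meaning" of A iff B's properties
        # are a subset of A's properties in this toy model.
        return meaning[b] <= meaning[a]

    print(semantically_necessitates("dog", "animal"))   # True
    print(semantically_necessitates("rock", "animal"))  # False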

    So you seem to be limited to categorical logic only. As I have pointed
    out, this means you can't prove the Pythagorean theorem, since the
    conclusion isn't "semantically" related to the premises.


    I am simply using that as a concrete starting point to show one example
    of how it works.

So develop it there, but realize that categorical logic is weaker than
even first order logic, so it can't handle concepts used in systems
with higher order logic, like where Halting and Incompleteness live.




    Since it is clear that you want to change some of the basics of
    how logic works, you are not allowed to just use ANY of classical
    logic until you actually show what part of it is still usable
    under your system and what changes happen.


Yes, let's apply my ideas to FOL. I have already sketched out many
details.

    Go ahead, try to fully define your ideas.

Remember, until you get to supporting the Higher Order Logics, you
can't get to the incompleteness, as that has been only established
for systems of sufficient expressive power.

    I have always been talking about HOL in terms of MTT

    Which doesn't work.

MTT does work. The earlier version translated even very complex logic
expressions into the equivalent directed graph.

    I think that the current version only does a parse tree.
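
    For what "translating a logic expression into a parse tree" can look
    like in general (this is not MTT itself; it is a minimal stand-in
    that reuses Python's own expression grammar for the logic syntax):

    import ast

    def parse_tree(expr: str):
        node = ast.parse(expr, mode="eval").body
        def walk(n):
            if isinstance(n, ast.BoolOp):
                op = "AND" if isinstance(n.op, ast.And) else "OR"
                return (op, [walk(v) for v in n.values])
            if isinstance(n, ast.UnaryOp) and isinstance(n.op, ast.Not):
                return ("NOT", [walk(n.operand)])
            if isinstance(n, ast.Name):
                return ("ATOM", n.id)
            raise ValueError("unsupported construct")
        return walk(node)

    print(parse_tree("not (P and not P)"))
    # ('NOT', [('AND', [('ATOM', 'P'), ('NOT', [('ATOM', 'P')])])])

    A directed graph additionally shares nodes for repeated
    subexpressions, but the tree above is the basic shape.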

    Nope, it doesn't work, because it violates the basic rules. You don't
    see it because you don't understand how logic works.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 26 21:41:55 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 2:07 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

G is unprovable because it is self-contradictory, making it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


Any proof of G in F requires a sequence of inference steps in F that
prove that they themselves do not exist in F.

It is of course impossible to prove in F that a statement is true but
not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
Otherwise you are doing the same cheap trick as Tarski:
This sentence is not true: "This sentence is not true" is true.


So, you don't understand how to prove that something is "True in F"
by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox
    expressed in his theory is true in his meta-theory.

No, he didn't, he showed that *IF* a certain assumption was
true, then the Liar's paradox would be true, thus that
assumption can not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

This sentence is not true: "This sentence is not true" <IS> TRUE.
    Your


We can do the same thing when G asserts its own unprovability in F.
G cannot be proved in F because this requires a sequence of inference
steps in F that prove that they themselves do not exist in F.
Right, you can't prove, in F, that G is true, but you can
prove, in Meta-F, that G is true in F, and that G is unprovable in F,
which is what is required.


When G asserts its own unprovability in F it cannot be proved in F
because this requires a sequence of inference steps in F that prove
that they themselves do not exist.

Meta-F merely removes the self-contradiction the same way Tarski's
Meta-theory removed the self-contradiction.


You are just showing that your mind can't handle the basics of logic,
or truth.


It may seem that way to someone that learns things by rote and
mistakes this for actual understanding of exactly how all of the
elements of a proof fit together coherently or fail to do so.

It sounds like you are too stupid to learn, and that you have
intentionally hamstrung yourself to avoid being "polluted" by
"rote-learning" so your ignorance is just self-inflicted.

If you won't even try to learn the basics, you have just condemned
yourself into being a pathological liar because you just don't know
any better.


    I do at this point need to understand model theory very thoroughly. >>>>>>>>
Learning the details of these things could have boxed me into a corner
prior to my philosophical investigation of seeing how the key elements
fail to fit together coherently.

It is true that the set of analytical truth is simply a set of
semantic tautologies. It is true that formal systems grounded in this
foundation cannot be incomplete nor have any expressions of language
that are undecidable. Now that I have this foundation I have a way to
see exactly how the concepts of math diverge from correct reasoning.


You and I can see both THAT G cannot be proved in F and WHY G cannot
be proved in F. G cannot be proved in F for the same pathological
self-reference(Olcott 2004) reason that the Liar Paradox cannot be
proved in Tarski's theory.


Which he didn't do, but you are too stupid to understand
classic argument forms.


    It is not that I do not understand, it is that I can directly
    see where and how formal mathematical systems diverge from
    correct reasoning.

    But since you are discussing Formal Logic, you need to use the
    rules of Formal logic.

The other way to say it is that your "Correct Reasoning" diverges from
the accepted and proven system of Formal Logic.



    In classical logic, intuitionistic logic and similar logical
    systems, the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

Right, if a logic system can prove a contradiction, then out of
that contradiction you can prove anything.


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion works,
which isn't surprising considering how many misconceptions you have
    about how logic works.


    ex falso [sequitur] quodlibet,'from falsehood, anything [follows]'
    ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

That isn't what the statement actually means, so you are just stupid.



    It is jack ass nonsense like this that proves the
    principle of explosion is nothing even kludge

    Right, false doesn't PROVE anything, but implies anything,

    "P, ¬P ⊢ Q For any statements P and Q, if P and not-P are both true,
    then it logically follows that Q is true."

    https://en.wikipedia.org/wiki/Principle_of_explosion#Symbolic_representation


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 26 21:47:48 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 12:38 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/24/23 11:28 PM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 12:13 PM, olcott wrote:
    On 4/24/2023 10:58 AM, olcott wrote:
    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous. >>>>>>>>>>>>>>>>>>

    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F. >>>>>>>>>>>>>>>>
    It is of course impossible to prove in F that a >>>>>>>>>>>>>>>> statement is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>>>> This sentence is not true: "This sentence is not true" is >>>>>>>>>>>>>>> true.



    So, you don't understand how to prove that something is >>>>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>>>> expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>>>> true, then the Liar's paradox would be true, thus that >>>>>>>>>>>> assumption can not be true.


    When one level of indirect reference is applied to the Liar >>>>>>>>>>> Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> >>>>>>>>>>> TRUE.

    Your


    We can do the same thing when G asserts its own
    unprovability in F.
    G cannot be proved in F because this requires a sequence of >>>>>>>>>>>>> inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>>>>>
    Right, you can't prove, in F, that G is true, but you can >>>>>>>>>>>> prove, in Meta-F, that G is true in F, and that G is
    unprovable in F, which is what is required.


    When G asserts its own unprovability in F it cannot be proved >>>>>>>>>>> in F
    because this requires a sequence of inference steps in F that >>>>>>>>>>> prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way >>>>>>>>>>> Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics >>>>>>>>>>>> of logic, or truth.


    It may seem that way to someone that learns things by rote >>>>>>>>>>> and mistakes
    this for actual understanding of exactly how all of the
    elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you >>>>>>>>>>>> have intentionaally hamstrung yourself to avoid being
    "polluted" by "rote-learning" so you are just self-inflicted >>>>>>>>>>>> ignorant.

    If you won't even try to learn the basics, you have just >>>>>>>>>>>> condemned yourself into being a pathological liar because >>>>>>>>>>>> you just don't any better.


    I do at this point need to understand model theory very
    thoroughly.

    Learning the details of these things could have boxed me into >>>>>>>>>>> a corner
    prior to my philosophical investigation of seeing how the key >>>>>>>>>>> elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set >>>>>>>>>>> of semantic
    tautologies. It is true that formal systems grounded in this >>>>>>>>>>> foundation
    cannot be incomplete nor have any expressions of language >>>>>>>>>>> that are
    undecidable. Now that I have this foundation I have a way to >>>>>>>>>>> see exactly
    how the concepts of math diverge from correct reasoning. >>>>>>>>>>>

    You and I can see both THAT G cannot be proved in F and WHY >>>>>>>>>>>>> G cannot be
    proved in F. G cannot be proved in F for the same
    pathological self-reference(Olcott 2004) reason that the >>>>>>>>>>>>> Liar Paradox cannot be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand >>>>>>>>>>>> claissic arguement forms.


    It is not that I do not understand, it is that I can directly >>>>>>>>>>> see where and how formal mathematical systems diverge from >>>>>>>>>>> correct reasoning.

    But since you are discussing Formal Logic, you need to use the >>>>>>>>>> rules of Formal logic.


    I have never been talking about formal logic. I have always
    been talking
    about the philosophical foundations of correct reasoning.

No, you have been talking about theories DEEP in formal logic.
You can't talk about the "errors" in those theories without being
in formal logic.

IF you think you can somehow talk about the foundations, while
working in the penthouse, you have just confirmed that you do
not understand how ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning"
    diverges from the accepted and proven system of Formal Logic. >>>>>>>>>>

    It is correct reasoning in the absolute sense that I refer to. >>>>>>>>> If anyone has the opinion that arithmetic does not exist they are >>>>>>>>> incorrect in the absolute sense of the word: "incorrect".


IF you reject the logic that a theory is based on, you need to
reject the logic system, NOT the theory.

You are just showing that you have wasted your LIFE because you
don't understand how to work logic.


Because you are a learned-by-rote person you make sure to never
examine whether or not any aspect of math diverges from correct
reasoning, you simply assume that math is the gospel even when it
contradicts itself.

    Nope, I know that with logic, if you follow the rules, you >>>>>>>>>> will get the correct answer by the rules.

    If you break the rules, you have no idea where you will go. >>>>>>>>>>

    In other words you never ever spend any time on making sure
    that these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine for >>>>>>>> most of the field.

    You are just PROVING that you have no idea how to actually
    discuss a new foundation for logic, likely because you are
incapable of actually coming up with a consistent basis for
    working logic.


As I have told you before, if you want to see what your
"Correct Reasoning" can do as a replacement logic system, you need to
start at the BEGINNING, and see what it gets.


The foundation of correct reasoning is that the entire body of
analytical truth is a set of semantic tautologies.

This means that all correct inference always requires determining the
semantic consequence of expressions of language. This semantic
consequence can be specified syntactically, and indeed must be
represented syntactically to be computable.
Meaningless gobbledy-gook until you actually define what you
mean and spell out the actual rules that need to be followed.

    Note, "Computability" is actually a fairly late in the process >>>>>>>> concept. You first need to show that you logic can actually do >>>>>>>> something useful


    To just try to change things at the end is just PROOF that >>>>>>>>>> your "Correct Reasoning" has to not be based on any real
    principles of logic.

    Since it is clear that you want to change some of the basics >>>>>>>>>> of how logic works, you are not allowed to just use ANY of >>>>>>>>>> classical logic until you actually show what part of it is >>>>>>>>>> still usable under your system and what changes happen.


    Whenever an expression of language is derived as the semantic >>>>>>>>> consequence of other expressions of language we have valid
    inference.

And, are you using the "classical" definition of "semantic"
(which makes this sentence somewhat circular) or do you mean
something based on the concept you sometimes use of "the
meaning of the words"?


    *Principle of explosion*
An alternate argument for the principle stems from model theory. A
sentence P is a semantic consequence of a set of sentences Γ only if
every model of Γ is a model of P. However, there is no model of the
contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P)
that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a
model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P).
    https://en.wikipedia.org/wiki/Principle_of_explosion

    Vacuous truth does not count as truth.
    All variables must be quantified

    "all cell phones in the room are turned off" will be true when no >>>>>>> cell phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp)) ∧ turned_off(cp))
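
    The difference between the two readings is easy to exhibit over an
    empty domain (a throwaway Python illustration; the room and phones
    are invented):

    cell_phones_in_room = []   # nobody brought a phone

    # Universal reading: "all cell phones in the room are turned off"
    # is vacuously True over the empty domain.
    print(all(phone["off"] for phone in cell_phones_in_room))   # True

    # Existential reading, as in the formula above: it additionally
    # requires that at least one such phone exists, so it is False here.
    print(any(phone["off"] for phone in cell_phones_in_room))   # False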



    The semantic consequence must be specified syntactically so
    that it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are >>>>>>>>> known to be
    true, and the reasoning valid (a semantic consequence) then the >>>>>>>>> conclusion is necessarily true.

    So, what is the difference in your system from classical Formal >>>>>>>> Logic?


    Semantic Necessity operator: ⊨□

        FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*
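
    Read as stated, those two definitions can be prototyped as
    reachability from a stipulated axiom set (a toy sketch under that
    reading; the relation and names below are invented, and this is not
    a full proof system):

    # "a ⊨□ b" holds exactly for the stipulated pairs; True(L, x) then
    # asks whether x is reachable from the axioms of L by chaining them.
    NECESSITATES = {
        ("axiom_1", "lemma_a"),
        ("lemma_a", "theorem_t"),
        ("axiom_2", "lemma_b"),
    }
    AXIOMS = {"axiom_1", "axiom_2"}

    def true_in_L(x, axioms=AXIOMS, edges=NECESSITATES):
        frontier, reached = set(axioms), set(axioms)
        while frontier:
            frontier = {b for (a, b) in edges if a in frontier} - reached
            reached |= frontier
        return x in reached

    print(true_in_L("theorem_t"))   # True: reachable from the axioms
    print(true_in_L("liar_like"))   # False: not reachable from the axioms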


    The most important aspect of the tiny little foundation of a >>>>>>>>> formal
    system that I already specified immediately above is self-evident: >>>>>>>>> True(L,X) can be defined and incompleteness is impossible.

    I don't think your system is anywhere near establish far enough >>>>>>>> for you to say that.

    Try and show exceptions to this rule and I will fill in any gaps >>>>>>> that
    you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a
    sequence of inference steps in F that proves no such sequence
    of inference steps exists in F.


∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ ∄sequence_of_inference_steps ⊆ F)



So, you don't understand the difference between the INFINITE set of
    sequence steps that show that G is True, and the FINITE number of
    steps that need to be shown to make G provable.


The experts seem to believe that unless a proof can be transformed
into a finite sequence of steps it is no actual proof at all. Try and
cite a source that says otherwise.

Why? Because I agree with that. A proof needs to be done in a finite
    number of steps.

The question is why the infinite number of steps in F that make G
    true don't count for making it true.

    Yes, you can't write that out to KNOW it to be true, but that is the
difference between knowledge and fact.


Infinite proofs are not allowed because they can't possibly ever occur.


We can imagine an Oracle machine that can complete these proofs in
the same sort of way that we can imagine a magic fairy that waves a
magic wand.

You are just showing you don't understand what you are talking about
    and just spouting word (or symbol) salad.

You are proving you are an IDIOT.

    I am seeing these things at a deeper philosophical level than you
    are. I know that is hard to believe.

    But not according to the rules of the system you are talking about.

    You don't get to change the rules on a system.


    YES I DO !!!
My whole purpose is to provide the *correct reasoning* foundation such
that formal systems can be defined without undecidability,
undefinability, or inconsistency.

    No, to change the rules you have to go back to the beginning.


    Non_Sequitur(G) ↔ ∃φ ((T ⊬ φ) ∧ (T ⊬ ¬φ))




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 27 07:19:21 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/26/23 10:41 PM, olcott wrote:
    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 2:07 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous.


    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement >>>>>>>>>>>>>> is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>> This sentence is not true: "This sentence is not true" is >>>>>>>>>>>>> true.



    So, you don't understand how to prove that something is >>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>> expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>> true, then the Liar's paradox would be true, thus that
    assumption can not be true.


    When one level of indirect reference is applied to the Liar
    Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>>>
    Your


    We can do the same thing when G asserts its own unprovability >>>>>>>>>>> in F.
    G cannot be proved in F because this requires a sequence of >>>>>>>>>>> inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>>>
    Right, you can't prove, in F, that G is true, but you can
    prove, in Meta-F, that G is true in F, and that G is
    unprovable in F, which is what is required.


    When G asserts its own unprovability in F it cannot be proved in F >>>>>>>>> because this requires a sequence of inference steps in F that >>>>>>>>> prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way
    Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of >>>>>>>>>> logic, or truth.


    It may seem that way to someone that learns things by rote and >>>>>>>>> mistakes
    this for actual understanding of exactly how all of the
    elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have >>>>>>>>>> intentionaally hamstrung yourself to avoid being "polluted" by >>>>>>>>>> "rote-learning" so you are just self-inflicted ignorant.

    If you won't even try to learn the basics, you have just
    condemned yourself into being a pathological liar because you >>>>>>>>>> just don't any better.


    I do at this point need to understand model theory very
    thoroughly.

    Learning the details of these things could have boxed me into a >>>>>>>>> corner
    prior to my philosophical investigation of seeing how the key >>>>>>>>> elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of >>>>>>>>> semantic
    tautologies. It is true that formal systems grounded in this >>>>>>>>> foundation
    cannot be incomplete nor have any expressions of language that are >>>>>>>>> undecidable. Now that I have this foundation I have a way to >>>>>>>>> see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G >>>>>>>>>>> cannot be
    proved in F. G cannot be proved in F for the same
    pathological self-reference(Olcott 2004) reason that the Liar >>>>>>>>>>> Paradox cannot be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand
    claissic arguement forms.


    It is not that I do not understand, it is that I can directly >>>>>>>>> see where and how formal mathematical systems diverge from
    correct reasoning.

    But since you are discussing Formal Logic, you need to use the >>>>>>>> rules of Formal logic.

    The other way to say it is that your "Correct Reasoning"
    diverges from the accepted and proven system of Formal Logic.



    In classical logic, intuitionistic logic and similar logical
    systems, the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, that out of
    that contradiction you can prove anything


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion works, >>>>>> which isn't surprising considering how many misconseptions you
    have about how logic works.


    ex falso [sequitur] quodlibet,'from falsehood, anything [follows]'
    ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

    That isn't what the statment actually means, so you are just stupid.



    It is jack ass nonsense like this that proves the
    principle of explosion is nothing even kludge

    Right, false doesn't PROVE anything, but implies anything,

       "P, ¬P ⊢ Q For any statements P and Q, if P and not-P are both true,
        then it logically follows that Q is true."

    https://en.wikipedia.org/wiki/Principle_of_explosion#Symbolic_representation


So, you don't understand what you are reading.

    FALSE itself isn't proving anything.

    The fact that you can show that a statement is both true and false does
    allow you to build a proof of any logical statement.


THIS is why you are showing yourself to be an idiot. If you can't
understand these simple proofs, and how they work, why should anyone
think you have a great new insight on how to do logic?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 27 20:09:55 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:41 PM, olcott wrote:
    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 2:07 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote:
    On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of
    self-contradictory, that claim is erroneous. >>>>>>>>>>>>>>>>>

    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F.

    It is of course impossible to prove in F that a statement >>>>>>>>>>>>>>> is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>>> This sentence is not true: "This sentence is not true" is >>>>>>>>>>>>>> true.



    So, you don't understand how to prove that something is >>>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>>> expressed in his theory is true in his meta-theory.

    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>>> true, then the Liar's paradox would be true, thus that
    assumption can not be true.


    When one level of indirect reference is applied to the Liar >>>>>>>>>> Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> TRUE. >>>>>>>>>>
    Your


    We can do the same thing when G asserts its own
    unprovability in F.
    G cannot be proved in F because this requires a sequence of >>>>>>>>>>>> inference
    steps in F that prove that they themselves do not exist in F. >>>>>>>>>>>
    Right, you can't prove, in F, that G is true, but you can >>>>>>>>>>> prove, in Meta-F, that G is true in F, and that G is
    unprovable in F, which is what is required.


    When G asserts its own unprovability in F it cannot be proved >>>>>>>>>> in F
    because this requires a sequence of inference steps in F that >>>>>>>>>> prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way
    Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics >>>>>>>>>>> of logic, or truth.


    It may seem that way to someone that learns things by rote and >>>>>>>>>> mistakes
    this for actual understanding of exactly how all of the
    elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have >>>>>>>>>>> intentionaally hamstrung yourself to avoid being "polluted" >>>>>>>>>>> by "rote-learning" so you are just self-inflicted ignorant. >>>>>>>>>>>
    If you won't even try to learn the basics, you have just >>>>>>>>>>> condemned yourself into being a pathological liar because you >>>>>>>>>>> just don't any better.


    I do at this point need to understand model theory very
    thoroughly.

    Learning the details of these things could have boxed me into >>>>>>>>>> a corner
    prior to my philosophical investigation of seeing how the key >>>>>>>>>> elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of >>>>>>>>>> semantic
    tautologies. It is true that formal systems grounded in this >>>>>>>>>> foundation
    cannot be incomplete nor have any expressions of language that >>>>>>>>>> are
    undecidable. Now that I have this foundation I have a way to >>>>>>>>>> see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY >>>>>>>>>>>> G cannot be
    proved in F. G cannot be proved in F for the same
    pathological self-reference(Olcott 2004) reason that the >>>>>>>>>>>> Liar Paradox cannot be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand >>>>>>>>>>> claissic arguement forms.


    It is not that I do not understand, it is that I can directly >>>>>>>>>> see where and how formal mathematical systems diverge from >>>>>>>>>> correct reasoning.

    But since you are discussing Formal Logic, you need to use the >>>>>>>>> rules of Formal logic.

    The other way to say it is that your "Correct Reasoning"
    diverges from the accepted and proven system of Formal Logic. >>>>>>>>


    In classical logic, intuitionistic logic and similar logical
    systems, the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, that out of
    that contradiction you can prove anything


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion
    works, which isn't surprising considering how many misconseptions >>>>>>> you have about how logic works.


    ex falso [sequitur] quodlibet,'from falsehood, anything [follows]' >>>>>> ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

    That isn't what the statment actually means, so you are just stupid.



    It is jack ass nonsense like this that proves the
    principle of explosion is nothing even kludge

    Right, false doesn't PROVE anything, but implies anything,

        "P, ¬P ⊢ Q For any statements P and Q, if P and not-P are both true,
         then it logically follows that Q is true."

    https://en.wikipedia.org/wiki/Principle_of_explosion#Symbolic_representation >>

    So, you don't understnad what you are reading.

    FALSE itself isn't proving anything.


    'from falsehood, anything [follows]'
    Is not saying that a FALSE antecedent implies any consequent.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 27 20:07:05 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:47 PM, olcott wrote:
    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 12:38 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/24/23 11:28 PM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 12:13 PM, olcott wrote:
    On 4/24/2023 10:58 AM, olcott wrote:
    On 4/22/2023 7:27 PM, Richard Damon wrote:
    On 4/22/23 7:57 PM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of >>>>>>>>>>>>>>>>>>>> self-contradictory, that claim is erroneous. >>>>>>>>>>>>>>>>>>>>

    When G asserts its own unprovability in F: >>>>>>>>>>>>>>>>>>
    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F. >>>>>>>>>>>>>>>>>>
    It is of course impossible to prove in F that a >>>>>>>>>>>>>>>>>> statement is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>>>>>> This sentence is not true: "This sentence is not true" >>>>>>>>>>>>>>>>> is true.



    So, you don't understand how to prove that something is >>>>>>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>>>>>> expressed in his theory is true in his meta-theory. >>>>>>>>>>>>>>
    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>>>>>> true, then the Liar's paradox would be true, thus that >>>>>>>>>>>>>> assumption can not be true.


    When one level of indirect reference is applied to the Liar >>>>>>>>>>>>> Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> >>>>>>>>>>>>> TRUE.

    Your


    We can do the same thing when G asserts its own
    unprovability in F.
    G cannot be proved in F because this requires a sequence >>>>>>>>>>>>>>> of inference
    steps in F that prove that they themselves do not exist >>>>>>>>>>>>>>> in F.

    Right, you can't prove, in F, that G is true, but you can >>>>>>>>>>>>>> prove, in Meta-F, that G is true in F, and that G is >>>>>>>>>>>>>> unprovable in F, which is what is required.


    When G asserts its own unprovability in F it cannot be >>>>>>>>>>>>> proved in F
    because this requires a sequence of inference steps in F >>>>>>>>>>>>> that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way >>>>>>>>>>>>> Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the >>>>>>>>>>>>>> basics of logic, or truth.


    It may seem that way to someone that learns things by rote >>>>>>>>>>>>> and mistakes
    this for actual understanding of exactly how all of the >>>>>>>>>>>>> elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you >>>>>>>>>>>>>> have intentionaally hamstrung yourself to avoid being >>>>>>>>>>>>>> "polluted" by "rote-learning" so you are just
    self-inflicted ignorant.

    If you won't even try to learn the basics, you have just >>>>>>>>>>>>>> condemned yourself into being a pathological liar because >>>>>>>>>>>>>> you just don't any better.


    I do at this point need to understand model theory very >>>>>>>>>>>>> thoroughly.

    Learning the details of these things could have boxed me >>>>>>>>>>>>> into a corner
    prior to my philosophical investigation of seeing how the >>>>>>>>>>>>> key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set >>>>>>>>>>>>> of semantic
    tautologies. It is true that formal systems grounded in >>>>>>>>>>>>> this foundation
    cannot be incomplete nor have any expressions of language >>>>>>>>>>>>> that are
    undecidable. Now that I have this foundation I have a way >>>>>>>>>>>>> to see exactly
    how the concepts of math diverge from correct reasoning. >>>>>>>>>>>>>

    You and I can see both THAT G cannot be proved in F and >>>>>>>>>>>>>>> WHY G cannot be
    proved in F. G cannot be proved in F for the same >>>>>>>>>>>>>>> pathological self-reference(Olcott 2004) reason that the >>>>>>>>>>>>>>> Liar Paradox cannot be proved in Tarski's theory. >>>>>>>>>>>>>>>

    Which he didn't do, but you are too stupid to understand >>>>>>>>>>>>>> claissic arguement forms.


    It is not that I do not understand, it is that I can >>>>>>>>>>>>> directly see where and how formal mathematical systems >>>>>>>>>>>>> diverge from correct reasoning.

    But since you are discussing Formal Logic, you need to use >>>>>>>>>>>> the rules of Formal logic.


    I have never been talking about formal logic. I have always >>>>>>>>>>> been talking
    about the philosophical foundations of correct reasoning. >>>>>>>>>>
    No, you have been talking about theorys DEEP in formal logic. >>>>>>>>>> You can't talk about the "errors" in those theories, with
    being in formal logic.

    IF you think you can somehow talk about the foundations, while >>>>>>>>>> working in the penthouse, you have just confirmed that you do >>>>>>>>>> not understand how ANY form of logic works.

    PERIOD.



    The other way to say it is that your "Correct Reasoning" >>>>>>>>>>>> diverges from the accepted and proven system of Formal Logic. >>>>>>>>>>>>

    It is correct reasoning in the absolute sense that I refer to. >>>>>>>>>>> If anyone has the opinion that arithmetic does not exist they >>>>>>>>>>> are
    incorrect in the absolute sense of the word: "incorrect". >>>>>>>>>>>

    IF you reject the logic that a theory is based on, you need to >>>>>>>>>> reject the logic system, NOT the theory.

    You are just showing that you have wasted your LIFE because >>>>>>>>>> you don'tunderstnad how to work ligic.


    Because you are a learned-by-rote person you make sure to >>>>>>>>>>>>> never examine
    whether or not any aspect of math diverges from correct >>>>>>>>>>>>> reasoning, you
    simply assume that math is the gospel even when it
    contradicts itself.

    Nope, I know that with logic, if you follow the rules, you >>>>>>>>>>>> will get the correct answer by the rules.

    If you break the rules, you have no idea where you will go. >>>>>>>>>>>>

    In other words you never ever spend any time on making sure >>>>>>>>>>> that these
    rules fit together coherently.

    The rules work together just fine.

    YOU don't like some of the results, but they work just fine >>>>>>>>>> for most of the field.

    You are just PROVING that you have no idea how to actually >>>>>>>>>> discuss a new foundation for logic, likely because you are >>>>>>>>>> incapable of actually comeing up with a consistent basis for >>>>>>>>>> working logic.


    As I have told you before, if you want to see what your >>>>>>>>>>>> "Correct > Reasoning" can do as a replaceent logic system, >>>>>>>>>>>> you need to start at the
    BEGINNING, and see wht it gets.


    The foundation of correct reasoning is that the entire body of >>>>>>>>>>> analytical truth is a set of semantic tautologies.

    This means that all correct inference always requires
    determining the
    semantic consequence of expressions of language. This semantic >>>>>>>>>>> consequence can be specified syntactically, and indeed must be >>>>>>>>>>> represented syntactically to be computable
    Meaningless gobbledy-good until you actually define what you >>>>>>>>>> mean and spell out the actual rules that need to be followed. >>>>>>>>>>
    Note, "Computability" is actually a fairly late in the process >>>>>>>>>> concept. You first need to show that you logic can actually do >>>>>>>>>> something useful


    To just try to change things at the end is just PROOF that >>>>>>>>>>>> your "Correct Reasoning" has to not be based on any real >>>>>>>>>>>> principles of logic.

    Since it is clear that you want to change some of the basics >>>>>>>>>>>> of how logic works, you are not allowed to just use ANY of >>>>>>>>>>>> classical logic until you actually show what part of it is >>>>>>>>>>>> still usable under your system and what changes happen. >>>>>>>>>>>>

    Whenever an expression of language is derived as the semantic >>>>>>>>>>> consequence of other expressions of language we have valid >>>>>>>>>>> inference.

    And, are you using the "classical" definition of "semantic" >>>>>>>>>> (which makes this sentence somewhat cirular) or do you mean >>>>>>>>>> something based on the concept you sometimes use of  "the >>>>>>>>>> meaning of the words".


    *Principle of explosion*
    An alternate argument for the principle stems from model theory. A >>>>>>>>> sentence P is a semantic consequence of a set of sentences Γ >>>>>>>>> only if
    every model of Γ is a model of P. However, there is no model of >>>>>>>>> the
    contradictory set (P ∧ ¬P) A fortiori, there is no model of (P >>>>>>>>> ∧ ¬P)
    that is not a model of Q. Thus, vacuously, every model of (P ∧ >>>>>>>>> ¬P) is a
    model of Q. Thus, Q is a semantic consequence of (P ∧ ¬P). >>>>>>>>> https://en.wikipedia.org/wiki/Principle_of_explosion

    Vacuous truth does not count as truth.
    All variables must be quantified

    "all cell phones in the room are turned off" will be true when >>>>>>>>> no cell phones are in the room.

    ∃cp ∈ cell_phones (in_this_room(cp)) ∧ turned_off(cp)) >>>>>>>>>


    The semantic consequence must be specified syntactically so >>>>>>>>>>> that it can
    be computed or examined in formal systems.

    Just like in sound deductive inference when the premises are >>>>>>>>>>> known to be
    true, and the reasoning valid (a semantic consequence) then the >>>>>>>>>>> conclusion is necessarily true.

    So, what is the difference in your system from classical
    Formal Logic?


    Semantic Necessity operator: ⊨□

        FALSE ⊨□ FALSE // POE abolished
    (P ∧ ¬P) ⊨□ FALSE // POE abolished

    ⇒ and → symbols are replaced by ⊨□

    The sets that the variables range over must be defined
    all variables must be quantified

    // x is a semantic consequence of its premises in L
    Provable(P,x) ≡ ∃x ∈ L, ∃P ⊆ L (P ⊨□ x)

    // x is a semantic consequence of the axioms of L
    True(L,x) ≡ ∃x ∈ L (Axioms(L) ⊨□ x)

    *The above is all that I know right now*


    The most important aspect of the tiny little foundation of a >>>>>>>>>>> formal
    system that I already specified immediately above is
    self-evident:
    True(L,X) can be defined and incompleteness is impossible. >>>>>>>>>>
    I don't think your system is anywhere near establish far
    enough for you to say that.

    Try and show exceptions to this rule and I will fill in any
    gaps that
    you find.

    G asserts its own unprovability in F
    The reason that G cannot be proved in F is that this requires a >>>>>>>>> sequence of inference steps in F that proves no such sequence >>>>>>>>> of inference steps exists in F.


    ∃sequence_of_inference_steps ⊆ F (sequence_of_inference_steps ⊢ >>>>>>>> ∄sequence_of_inference_steps ⊆ F)



    So, you don't understand the differnce between the INFINITE set
    of sequence steps that show that G is True, and the FINITE number >>>>>>> of steps that need to be shown to make G provable.


    The experts seem to believe that unless a proof can be transformed >>>>>> into
    a finite sequence of steps it is no actual proof at all. Try and
    cite a
    source that says otherwise.

    WHy? Because I agree with that. A Proof needs to be done in a
    finite number of steps.

    The question is why the infinite number of steps in F that makes G
    true don't count for making it true.

    Yes, you can't write that out to KNOW it to be true, but that is
    the differece between knowledge and fact.


    Infinite proof are not allowed: Because they can't possibly ever occur. >>>>

    We can imagine an Oracle machine that can complete these proofs in >>>>>> the
    same sort of way that we can imagine a magic fairy that waves a magic >>>>>> wand.

    You are just showing you don't understand what you talking about >>>>>>> and just spouting word (or symbol) salad.

    You are oriving you are an IDIOT.

    I am seeing these things at a deeper philosophical level than you
    are. I know that is hard to believe.

    But not according to the rules of the system you are talking about.

    You don't get to change the rules on a system.


    YES I DO !!!
    My whole purpose to provide the *correct reasoning* foundation such
    that
    formal systems can be defined without undecidability or undefinability, >>>> or inconsistently.

    No, to change the rules you have to go back to the beginning.


    Non_Sequitur(G) ↔ ∃φ ((T ⊬ φ) ∧ (T ⊬ ¬φ))


    No, Non_Sequitur: Most of what Peter Olcott says.

    You still can't change the rules without going back to the beginning,

    The standard definition of mathematical incompleteness:
    Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))

    Requires formal systems to do the logically impossible:
    to prove self-contradictory expressions of language.
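
    For reference, the quoted definition can be exercised on a throwaway
    finite example (this toy "theory" is invented purely to show the
    shape of the definition; it is not any real formal system):

    language = {"P", "Q", "~P", "~Q"}

    def neg(s):
        return s[1:] if s.startswith("~") else "~" + s

    def incomplete(theorems, language):
        # Incomplete(T): some sentence is neither provable nor refutable.
        return any(s not in theorems and neg(s) not in theorems
                   for s in language)

    print(incomplete({"P", "~Q"}, language))  # False: every sentence decided
    print(incomplete({"P"}, language))        # True: neither Q nor ~Q provable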

    So formal systems are "incomplete" in the same sense that we determine
    that a baker that cannot bake a proper angel food cake using ordinary
    red house bricks as the only ingredient lacks sufficient baking skill.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 27 22:41:37 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/27/23 9:07 PM, olcott wrote:

    The standard definition of mathematical incompleteness:
    Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))

Remember that if φ ∈ F then φ has a defined truth value in F, it is
either True or False, and thus can't be the "Liar's Paradox".



    Requires formal systems to do the logically impossible:
    to prove self-contradictory expressions of language.

    Nope. You don't understand what the words mean.

Remember, incompleteness only happens if a TRUE statement can't be
proven; an actual "self-contradictory" statement won't be true.

    (or is FALSE and can't be proven to be false, which is basically the
    same thing).


    So formal systems are "incomplete" in the same sense that we determine
    that a baker that cannot bake a proper angel food cake using ordinary
    red house bricks as the only ingredient lacks sufficient baking skill.


Nope, because they are only incomplete if a statement that is TRUE
can't be proven.

You just don't understand that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 27 23:00:36 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:09 PM, olcott wrote:
    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:41 PM, olcott wrote:
    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 2:07 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>> On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of >>>>>>>>>>>>>>>>>>> self-contradictory, that claim is erroneous. >>>>>>>>>>>>>>>>>>>

    When G asserts its own unprovability in F:

    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F. >>>>>>>>>>>>>>>>>
    It is of course impossible to prove in F that a >>>>>>>>>>>>>>>>> statement is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>>>>> This sentence is not true: "This sentence is not true" >>>>>>>>>>>>>>>> is true.



    So, you don't understand how to prove that something is >>>>>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>>>>> expressed in his theory is true in his meta-theory. >>>>>>>>>>>>>
    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>>>>> true, then the Liar's paradox would be true, thus that >>>>>>>>>>>>> assumption can not be true.


    When one level of indirect reference is applied to the Liar >>>>>>>>>>>> Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> >>>>>>>>>>>> TRUE.

    Your


    We can do the same thing when G asserts its own unprovability in F.
    G cannot be proved in F because this requires a sequence of inference
    steps in F that prove that they themselves do not exist in F.

    Right, you can't prove, in F, that G is true, but you can prove, in
    Meta-F, that G is true in F, and that G is unprovable in F, which is
    what is required.


    When G asserts its own unprovability in F it cannot be proved in F
    because this requires a sequence of inference steps in F that prove
    that they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way Tarski's
    Meta-theory removed the self-contradiction.


    You are just showing that your mind can't handle the basics of logic,
    or truth.


    It may seem that way to someone that learns things by rote and mistakes
    this for actual understanding of exactly how all of the elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you have
    intentionally hamstrung yourself to avoid being "polluted" by
    "rote-learning", so your ignorance is just self-inflicted.

    If you won't even try to learn the basics, you have just condemned
    yourself into being a pathological liar because you just don't know
    any better.


    I do at this point need to understand model theory very thoroughly.

    Learning the details of these things could have boxed me into a corner
    prior to my philosophical investigation of seeing how the key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set of semantic
    tautologies. It is true that formal systems grounded in this foundation
    cannot be incomplete nor have any expressions of language that are
    undecidable. Now that I have this foundation I have a way to see exactly
    how the concepts of math diverge from correct reasoning.


    You and I can see both THAT G cannot be proved in F and WHY G cannot be
    proved in F. G cannot be proved in F for the same
    pathological self-reference(Olcott 2004) reason that the Liar Paradox
    cannot be proved in Tarski's theory.


    Which he didn't do, but you are too stupid to understand classic
    argument forms.


    It is not that I do not understand, it is that I can directly see where
    and how formal mathematical systems diverge from correct reasoning.

    But since you are discussing Formal Logic, you need to use the rules of
    Formal Logic.

    The other way to say it is that your "Correct Reasoning" diverges from
    the accepted and proven system of Formal Logic.


    In classical logic, intuitionistic logic and similar logical systems,
    the principle of explosion
    (ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, then out of that
    contradiction you can prove anything.


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion
    works, which isn't surprising considering how many
    misconceptions you have about how logic works.


    ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]'
    ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

    That isn't what the statement actually means, so you are just stupid.


    It is jack ass nonsense like this that proves the
    principle of explosion is nothing even kludge

    Right, false doesn't PROVE anything, but implies anything,

        "P, ¬P ⊢ Q For any statements P and Q, if P and not-P are both >>>> true,
         then it logically follows that Q is true."

    https://en.wikipedia.org/wiki/Principle_of_explosion#Symbolic_representation


    So, you don't understand what you are reading.

    FALSE itself isn't proving anything.


    'from falsehood, anything [follows]'
    Is not saying that a FALSE antecedent implies any consequent.


    That is EXACTLY what it is saying. A false premise can be said to
    imply any consequent, since the implication only makes a claim about
    the consequent when the premise is actually true.

    You are just not understanding what the words actually mean, because you
    are ignorant by choice.

    https://proofwiki.org/wiki/Rule_of_Explosion
    Sequent Form ⊥ ⊢ ϕ

    https://en.wikipedia.org/wiki/List_of_logic_symbols
    ⊥ (falsum), ⊢ (proves), ϕ (this logic sentence)
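
    A minimal machine-checked sketch of the two rules cited above may help;
    the names P, Q and φ below are generic placeholders, and this is Lean 4,
    not anything from the cited pages:

        -- ex contradictione quodlibet: from the assumptions P and ¬P, any Q follows
        example (P Q : Prop) (hp : P) (hnp : ¬P) : Q := absurd hp hnp

        -- ex falso quodlibet (the sequent ⊥ ⊢ φ): from the assumption False, any φ follows
        example (φ : Prop) (h : False) : φ := h.elim

    Note that ⊥ ⊢ φ says φ is derivable from the assumption falsum; it does
    not say that the constant FALSE by itself establishes any particular
    sentence.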



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 27 23:23:32 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:07 PM, olcott wrote:

    The standard definition of mathematical incompleteness:
    Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))

    Remember that if φ ∈ F then φ has a defined truth value in F; it is
    either True or False, and thus can't be the "Liar's Paradox".



    How about this one?
    Incomplete(T) ↔ ∃φ ∈ WFF(F) ((T ⊬ φ) ∧ (T ⊬ ¬φ))
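
    One way to transcribe this candidate definition is the following Lean 4
    sketch; Sentence, WFF, Provable and neg are abstract placeholder names,
    not anything defined in the thread:

        -- Sketch only: Provable stands for provability in the theory T.
        variable (Sentence : Type) (WFF : Sentence → Prop)
        variable (Provable : Sentence → Prop) (neg : Sentence → Sentence)

        -- the theory is "incomplete" when some well-formed sentence is
        -- neither provable nor refutable
        def Incomplete : Prop :=
          ∃ φ, WFF φ ∧ ¬Provable φ ∧ ¬Provable (neg φ)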


    Requires formal systems to do the logically impossible:
    to prove self-contradictory expressions of language.

    Nope. You don't understand what the words mean.

    That part I have correct, and Gödel acknowledged that self-contradictory
    expressions ... can likewise be used for a similar undecidability proof...

    ...14 Every epistemological antinomy can likewise be used for a similar undecidability proof...
    (Gödel 1931:40)

    Antinomy
    ...term often used in logic and epistemology, when describing a paradox
    or unresolvable contradiction. https://www.newworldencyclopedia.org/entry/Antinomy

    Remember, incompleteness only happens if a TRUE statement can't be
    proven; an actual "self-contradictory" statement won't be true.


    You have that incorrectly too.
    When G asserts its own unprovability in F the proof of G in F requires a sequence of inference steps in F that prove that they themselves do not
    exist.

    Gödel’s Theorem, as a simple corollary of Proposition VI (p. 57) is frequently called, proves that there are arithmetical propositions which
    are undecidable (i.e. neither provable nor disprovable) within their arithmetical system, and the proof proceeds by actually specifying such
    a proposition, namely the proposition g expressed by the formula to
    which “17 Gen r” refers (p. 58). g is an arithmetical proposition; but
    the proposition that g is undecidable within the system is not an
    arithmetical proposition, since it is concerned with provability within
    an arithmetical system, and this is a meta-arithmetical and not an
    arithmetical notion. Gödel’s Theorem is thus a result which belongs not
    to mathematics but to metamathematics, the name given by Hilbert to the
    study of rigorous proof in mathematics and symbolic logic

    https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf

    (or is FALSE and can't be proven to be false, which is basically the
    same thing).


    So formal systems are "incomplete" in the same sense that we determine
    that a baker that cannot bake a proper angel food cake using ordinary
    red house bricks as the only ingredient lacks sufficient baking skill.


    Nope, because they are only incomplete if a statement that is TRUE
    can't be proven.


    The liar paradox, which is self-contradictory when applied to itself,
    is not self-contradictory when applied to another, different instance
    of itself.

    This sentence is not true: "This sentence is not true"
    inner one is neither true nor false
    outer one is true because the inner one is neither true nor false
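
    One way to model the evaluation being claimed here is a three-valued
    sketch; the type TV and the stipulation that the inner, quoted sentence
    gets the value "neither" are assumptions of this sketch, not something
    classical two-valued logic grants. In Lean 4:

        inductive TV where
          | T | F | N        -- true, false, neither (a non-truth-bearer)
        deriving Repr
        open TV

        -- "x is not true", read as an outer claim about the value x
        def isNotTrue : TV → TV
          | T => F
          | _ => T           -- comes out true when x is false or neither

        def inner : TV := N  -- the quoted liar sentence, stipulated "neither"

        #eval isNotTrue inner  -- TV.T : the outer sentence evaluates to true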

    When G asserts its own unprovability in F, the proof of G in F requires
    a sequence of inference steps in F that prove that they themselves do
    not exist. Metamathematics can see that G cannot be proved in F.


    You just don't understand that,


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 07:40:34 2023
    XPost: comp.theory, sci.logic, alt.philosophy
    XPost: sci.math

    On 4/28/23 12:00 AM, olcott wrote:
    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:09 PM, olcott wrote:
    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:41 PM, olcott wrote:
    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/26/23 2:07 AM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:03 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 11:25 AM, olcott wrote:
    On 4/22/2023 6:19 PM, Richard Damon wrote:
    On 4/22/23 6:49 PM, olcott wrote:
    On 4/22/2023 5:22 PM, Richard Damon wrote:
    On 4/22/23 6:10 PM, olcott wrote:
    On 4/22/2023 4:54 PM, Richard Damon wrote:
    On 4/22/23 5:36 PM, olcott wrote:
    On 4/22/2023 4:27 PM, Richard Damon wrote:
    On 4/22/23 5:08 PM, olcott wrote:
    On 4/16/2023 6:16 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 4/15/23 10:54 PM, olcott wrote:

    G is unprovable because it is self-contradictory, >>>>>>>>>>>>>>>>>>>>> making it erroneous.


    Since you don't understand the meaning of >>>>>>>>>>>>>>>>>>>> self-contradictory, that claim is erroneous. >>>>>>>>>>>>>>>>>>>>

    When G asserts its own unprovability in F: >>>>>>>>>>>>>>>>>>
    But Godel's G doesn't do that.


    Any proof of G in F requires a sequence of inference >>>>>>>>>>>>>>>>>>> steps in F that
    prove that they themselves do not exist in F. >>>>>>>>>>>>>>>>>>
    It is of course impossible to prove in F that a >>>>>>>>>>>>>>>>>> statement is true but not provable in F.

    You don't need to do the proof in F,

    To prove G in F you do.
    Otherwise you are doing the same cheap trick as Tarski: >>>>>>>>>>>>>>>>> This sentence is not true: "This sentence is not true" >>>>>>>>>>>>>>>>> is true.



    So, you don't understand how to prove that something is >>>>>>>>>>>>>>>> "True in F" by doing the steps in Meta-F.


    I just showed you how Tarski proved that the Liar Paradox >>>>>>>>>>>>>>> expressed in his theory is true in his meta-theory. >>>>>>>>>>>>>>
    No, he didn't, he showed that *IF* a certain assuption was >>>>>>>>>>>>>> true, then the Liar's paradox would be true, thus that >>>>>>>>>>>>>> assumption can not be true.


    When one level of indirect reference is applied to the Liar >>>>>>>>>>>>> Paradox it
    becomes actually true. There was no "if".

    This sentence is not true: "This sentence is not true" <IS> >>>>>>>>>>>>> TRUE.

    Your


    We can do the same thing when G asserts its own
    unprovability in F.
    G cannot be proved in F because this requires a sequence >>>>>>>>>>>>>>> of inference
    steps in F that prove that they themselves do not exist >>>>>>>>>>>>>>> in F.

    Right, you can't prove, in F, that G is true, but you can >>>>>>>>>>>>>> prove, in Meta-F, that G is true in F, and that G is >>>>>>>>>>>>>> unprovable in F, which is what is required.


    When G asserts its own unprovability in F it cannot be >>>>>>>>>>>>> proved in F
    because this requires a sequence of inference steps in F >>>>>>>>>>>>> that prove that
    they themselves do not exist.

    Meta-F merely removes the self-contradiction the same way >>>>>>>>>>>>> Tarski's Meta-
    theory removed the self-contradiction.


    You are just showing that your mind can't handle the >>>>>>>>>>>>>> basics of logic, or truth.


    It may seem that way to someone that learns things by rote >>>>>>>>>>>>> and mistakes
    this for actual understanding of exactly how all of the >>>>>>>>>>>>> elements of a
    proof fit together coherently or fail to do so.

    It sounds like you are too stupid to learn, and that you >>>>>>>>>>>>>> have intentionaally hamstrung yourself to avoid being >>>>>>>>>>>>>> "polluted" by "rote-learning" so you are just
    self-inflicted ignorant.

    If you won't even try to learn the basics, you have just >>>>>>>>>>>>>> condemned yourself into being a pathological liar because >>>>>>>>>>>>>> you just don't any better.


    I do at this point need to understand model theory very >>>>>>>>>>>>> thoroughly.

    Learning the details of these things could have boxed me >>>>>>>>>>>>> into a corner
    prior to my philosophical investigation of seeing how the >>>>>>>>>>>>> key elements
    fail to fit together coherently.

    It is true that the set of analytical truth is simply a set >>>>>>>>>>>>> of semantic
    tautologies. It is true that formal systems grounded in >>>>>>>>>>>>> this foundation
    cannot be incomplete nor have any expressions of language >>>>>>>>>>>>> that are
    undecidable. Now that I have this foundation I have a way >>>>>>>>>>>>> to see exactly
    how the concepts of math diverge from correct reasoning. >>>>>>>>>>>>>

    You and I can see both THAT G cannot be proved in F and >>>>>>>>>>>>>>> WHY G cannot be
    proved in F. G cannot be proved in F for the same >>>>>>>>>>>>>>> pathological self-reference(Olcott 2004) reason that the >>>>>>>>>>>>>>> Liar Paradox cannot be proved in Tarski's theory. >>>>>>>>>>>>>>>

    Which he didn't do, but you are too stupid to understand >>>>>>>>>>>>>> claissic arguement forms.


    It is not that I do not understand, it is that I can >>>>>>>>>>>>> directly see where and how formal mathematical systems >>>>>>>>>>>>> diverge from correct reasoning.

    But since you are discussing Formal Logic, you need to use >>>>>>>>>>>> the rules of Formal logic.

    The other way to say it is that your "Correct Reasoning" >>>>>>>>>>>> diverges from the accepted and proven system of Formal Logic. >>>>>>>>>>>


    In classical logic, intuitionistic logic and similar logical >>>>>>>>>>> systems, the principle of explosion

    ex falso [sequitur] quodlibet,
    'from falsehood, anything [follows]'

    ex contradictione [sequitur] quodlibet,
    'from contradiction, anything [follows]')

    Right, if a logic system can prove a contradiction, that out >>>>>>>>>> of that contradiction you can prove anything


    https://en.wikipedia.org/wiki/Principle_of_explosion

    ∴ FALSE ⊢ Donald Trump is the Christ
    ∴ FALSE ⊢ Donald Trump is Satan

    Which isn't what was being talked about.

    You clearly don't understand how the principle of explosion >>>>>>>>>> works, which isn't surprising considering how many
    misconseptions you have about how logic works.


    ex falso [sequitur] quodlibet,'from falsehood, anything [follows]' >>>>>>>>> ∴ FALSE ⊢ Donald Trump is the Christ

    But you are using the wrong symbol



    'from falsehood, anything [follows]'
    FALSE Proves that Donald Trump is the Christ

    That isn't what the statment actually means, so you are just stupid. >>>>>>


    It is jack ass nonsense like this that proves the
    principle of explosion is nothing even kludge

    Right, false doesn't PROVE anything, but implies anything,

        "P, ¬P ⊢ Q For any statements P and Q, if P and not-P are both >>>>> true,
         then it logically follows that Q is true."

    https://en.wikipedia.org/wiki/Principle_of_explosion#Symbolic_representation


    So, you don't understnad what you are reading.

    FALSE itself isn't proving anything.


    'from falsehood, anything [follows]'
    Is not saying that a FALSE antecedent implies any consequent.


    That is EXACTLY what it is saying. That a false premise can be said to
    imply any consequent, since that implication only holds if the premise
    is actually true.

    You are just not understanding what the words actually mean, because
    you are ignorant by choice.

    https://proofwiki.org/wiki/Rule_of_Explosion
    Sequent Form ⊥ ⊢ ϕ

    https://en.wikipedia.org/wiki/List_of_logic_symbols
    ⊥ falsum, ⊢ proves ϕ this logic sentence




    So, you still don't understand the statement because you are just using
    rote-learned (never actually understood) transformations.

    Just shows your ignorance.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat May 6 13:36:49 2023
    XPost: comp.theory, sci.logic

    On 4/28/2023 6:40 AM, Richard Damon wrote:
    On 4/28/23 12:23 AM, olcott wrote:
    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:07 PM, olcott wrote:

    The standard definition of mathematical incompleteness:
    Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))

    Remember that if φ ∈ F then φ has a defined truth value in F; it is
    either True or False, and thus can't be the "Liar's Paradox".



    How about this one?
    Incomplete(T) ↔ ∃φ ∈ WFF(F) ((T ⊬ φ) ∧ (T ⊬ ¬φ))

    Your use of terms that you do not define doesn't help you.

    Incomplete means, in actual words, that there exists a true statement in
    F that can not be proven in F, or similarly a False statement in F that
    can not be disproven (proven to be false).


    I don't think that there is any source that says it is a true statement
    in F.

    Incompleteness is NOT about statements that meet the "syntax" of F, but
    might not actually be Truthbearers. Of course you can't prove or refute
    a non-truthbearer (at best you might be able to show it is a non-truthbearer).


    "Kurt Gödel's incompleteness theorem demonstrates that mathematics
    contains true statements that cannot be proved. His proof achieves
    this by constructing paradoxical mathematical statements. To see how
    the proof works, begin by considering the liar's paradox: "This
    statement is false." This statement is true if and only if it is
    false, and therefore it is neither true nor false.

    Now let's consider "This statement is unprovable." If it is provable,
    then we are proving a falsehood, which is extremely unpleasant and is
    generally assumed to be impossible. The only alternative left is that
    this statement is unprovable. Therefore, it is in fact both true and
    unprovable. Our system of reasoning is incomplete, because some truths
    are unprovable."
    https://www.scientificamerican.com/article/what-is-goumldels-proof/
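
    The informal argument in this quotation can be made machine-checkable.
    A minimal Lean 4 sketch, assuming an abstract provability predicate, a
    soundness assumption, and a sentence equivalent to its own
    unprovability (all names below are placeholders, not Gödel's 1931
    construction):

        theorem true_but_unprovable
            (Provable : Prop → Prop) (G : Prop)
            (sound : Provable G → G)      -- assumed: whatever is provable is true
            (diag : G ↔ ¬Provable G) :    -- assumed: G is equivalent to "G is not provable"
            G ∧ ¬Provable G :=
          have hnp : ¬Provable G := fun hp => diag.mp (sound hp) hp
          ⟨diag.mpr hnp, hnp⟩

    Nothing here arithmetizes anything; it only shows that the two
    assumptions already force "true and unprovable", which is the shape of
    the summary quoted above.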


    Trying to use ANY other definition that isn't actually equivalent is
    just proof that you don't understand the rules of logic and have fallen
    to a strawman.



    Requires formal systems to do the logically impossible:
    to prove self-contradictory expressions of language.

    Nope. You don't understand what the words mean.

    That part I have correct, and Gödel acknowledged that
    self-contradictory expressions ... can likewise be used for a similar
    undecidability proof...

    And none of those are directly about G in F.


    ...14 Every epistemological antinomy can likewise be used for a
    similar undecidability proof...
    (Gödel 1931:40)

    Yep, you can use the FORM of any epistemological antinomy, converting it
    from a statement about its own truth to being about its own provability,
    to get a similar proof.


    G asserts its own unprovability in F
    (a formal system with its own provability predicate)


    Antinomy
    ...term often used in logic and epistemology, when describing a
    paradox or unresolvable contradiction.
    https://www.newworldencyclopedia.org/entry/Antinomy

    Remember, incompleteness only happens if a TRUE statement can't be
    proven; an actual "self-contradictory" statement won't be true.


    You have that incorrectly too.
    When G asserts its own unprovability in F the proof of G in F requires
    a sequence of inference steps in F that prove that they themselves do
    not exist.

    But G DOESN'T "assert its own unprovability in F", and G is not proven
    "in F".

    Your statement just shows that you don't understand the proof and are
    totally missing that almost all of the paper is written from the aspect
    of Meta-F.


    Gödel’s Theorem, as a simple corollary of Proposition VI (p. 57) is
    frequently called, proves that there are arithmetical propositions
    which are undecidable (i.e. neither provable nor disprovable) within
    their arithmetical system, and the proof proceeds by actually
    specifying such a proposition, namely the proposition g expressed by
    the formula to which “17 Gen r” refers (p. 58). g is an arithmetical
    proposition; but the proposition that g is undecidable within the
    system is not an arithmetical proposition, since it is concerned with
    provability within an arithmetical system, and this is a
    meta-arithmetical and not an arithmetical notion. Gödel’s Theorem is
    thus a result which belongs not to mathematics but to metamathematics,
    the name given by Hilbert to the study of rigorous proof in
    mathematics and symbolic logic

    https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf

    Yes, Hilbert had similar errors in logic, which he, I believe,
    eventually realized. Yes, much of Godel's proof could be described as
    "meta-mathematics", but that meta- shows that IN MATHEMATICS ITSELF,
    there exist propositions that are true but can not be proven within
    mathematics. Thus, mathematics meets the requirements to be called
    "incomplete".


    (or is FALSE and can't be proven to be false, which is basically the
    same thing).


    So formal systems are "incomplete" in the same sense that we determine
    that a baker that cannot bake a proper angel food cake using ordinary
    red house bricks as the only ingredient lacks sufficient baking skill.

    Nope, because they are only incomplete if a statement that is TRUE
    can't be proven.


    The liar paradox, which is self-contradictory when applied to itself,
    is not self-contradictory when applied to another, different instance
    of itself.

    So, you don't understand what the "Liar Paradox" actually is. It is, and
    only is, a statement that asserts ITS OWN falsehood. It can't refer to a
    "different instance": either that other isn't "itself", so referring to
    it makes this statement not the liar's paradox, or it actually IS
    itself, and thus must have the same truth value. You don't seem to
    understand the fundamental rule that if a copy of the statement is
    considered to be "the same statement", then all those copies must, by
    definition, have the same truth value.


    This sentence is not true: "This sentence is not true"
    THIS SECOND SENTENCE IS THE LIAR PARADOX YOU FOOL

    inner one is neither true nor false
    outer one is true because the inner one is neither true nor false

    But that isn't the liar's paradox.

    Again, you show you don't understand the actual meaning of the words.


    Rather than paying attention you merely glance at what I say and say
    that I don't understand.


    When G asserts its own unprovability in F, the proof of G in F requires
    a sequence of inference steps in F that prove that they themselves do
    not exist. Metamathematics can see that G cannot be proved in F.

    Except that Godel's G doesn't "assert its own unprovability in F" nor is
    "G proved in F" (in fact, Godel shows such a proof is impossible).


    I AM NOT TALKING ABOUT EXACTLY WHAT HE SAYS ABOUT HIS PROOF
    I AM TALKING ABOUT WHAT HE SAYS IS AN EQUIVALENT PROOF

    ...14 Every epistemological antinomy can likewise be used for a
    similar undecidability proof... (Gödel 1931:39-41)

    THEREFORE WHEN G ASSERTS ITS OWN UNPROVABILITY IN F
    (and F is powerful enough to have its own provability predicate)
    THEN THIS EPISTEMOLOGICAL ANTINOMY MEETS HIS EQUIVALENCE SPEC

    WE CAN SEE THAT IT IS AN {epistemological antinomy} BECAUSE
    THE PROOF OF G IN F REQUIRES A SEQUENCE OF INFERENCE STEPS
    IN F THAT PROVE THAT THEY THEMSELVES DO NOT EXIST.

    metamathematics can PROVE that Godel's statement G is TRUE IN
    MATHEMATICS, but not provable there.

    You don't seem to understand how meta-systems work and how they can
    actually prove things about the system they are a meta- for.


    IT IS NOT MY LACK OF KNOWLEDGE OF LOGIC IT IS YOUR LACK OF KNOWLEDGE
    OF EPISTEMOLOGY

    If you "Correct Reasoning" can't handle meta-logic, then you aren't
    going to be able to prove much in it, as most major proofs actually use meta-logic.



    You just don't understand that,




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat May 6 16:45:54 2023
    XPost: comp.theory, sci.logic

    On 5/6/23 2:36 PM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:
    On 4/28/23 12:23 AM, olcott wrote:
    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:07 PM, olcott wrote:

    The standard definition of mathematical incompleteness:
    Incomplete(T) ↔ ∃φ ∈ F((T ⊬ φ) ∧ (T ⊬ ¬φ))

    Remember that if φ ∈ F then φ has a defined truth value in F; it is
    either True or False, and thus can't be the "Liar's Paradox".



    How about this one?
    Incomplete(T) ↔ ∃φ ∈ WFF(F) ((T ⊬ φ) ∧ (T ⊬ ¬φ))

    Your use of terms that you do not define doesn't help you.

    Incomplete means, in actual words, that there exists a true statement
    in F that can not be proven in F, or similarly a False statement in F
    that can not be disproven (proven to be false).


    I don't think that there is any source that says it is a true statement
    in F.


    Except that Godel has PROVEN (in Meta-F) that G actually IS true in F.

    The fact that you can't understand that proof, doesn't make it not true.

    It can be shown just in F with infinite work (which is ALLOWED for Truth,
    but not for a proof or knowledge) by just testing every number against
    the specific recursive relationship.
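
    A small sketch of what "testing every number" means here; the relation
    R below is an assumed stand-in for the primitive recursive proof
    relation, not the real thing. A finite search can only ever refute a
    claim of the form "no number satisfies R"; confirming it would need all
    of the infinitely many checks. In Lean 4:

        -- returns the first n < bound with R n = true, if any; a bounded
        -- search can falsify "∀ n, R n = false" but can never finish
        -- verifying it
        def counterexampleBelow (R : Nat → Bool) (bound : Nat) : Option Nat :=
          (List.range bound).find? R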

    Incompleteness is NOT about statements that meet the "syntax" of F,
    but might not actually be Truthbearers. Of course you can't prove or
    refute a non-truthbearer (at best you might be able to show it is a
    non-truthbearer).


     "Kurt Gödel's incompleteness theorem demonstrates that mathematics
      contains true statements that cannot be proved. His proof achieves
      this by constructing paradoxical mathematical statements. To see how
      the proof works, begin by considering the liar's paradox: "This
      statement is false." This statement is true if and only if it is
      false, and therefore it is neither true nor false.

      Now let's consider "This statement is unprovable." If it is provable,
      then we are proving a falsehood, which is extremely unpleasant and is
      generally assumed to be impossible. The only alternative left is that
      this statement is unprovable. Therefore, it is in fact both true and
      unprovable. Our system of reasoning is incomplete, because some truths
      are unprovable."
      https://www.scientificamerican.com/article/what-is-goumldels-proof/


    So, you are admitting you don't actually understand what the actual
    proof is, so only read the dumbed down summaries of it.

    Until you show an actual flaw in the actual proof, you are just showing yourself to be the ignorant liar that you are.


    Trying to use ANY other definition that isn't actually equivalent is
    just proof that you don't understand the rules of logic and have
    fallen to a strawman.



    Requires formal systems to do the logically impossible:
    to prove self-contradictory expressions of language.

    Nope. You don't understand what the words mean.

    That part I have correct, and Gödel acknowledged that
    self-contradictory expressions ... can likewise be used for a similar
    undecidability proof...

    And none of those are directly about G in F.


    ...14 Every epistemological antinomy can likewise be used for a
    similar undecidability proof...
    (Gödel 1931:40)

    Yep, you can use the FORM of any epistemological antinomy, converting
    it from a statement about its own truth to being about its own
    provability, to get a similar proof.


    G asserts its own unprovability in F
    (a formal system with its own provability predicate)

    Nope. G, in F, just asserts that there don't exist any numbers that
    satisfy a particular relationship.

    It is only in Meta-F that we can understand that this means that if G
    is true, it can not be proven, but if it is false, then there exists a
    proof that it is true.
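
    A hedged sketch of the shape being described; Prf and g are assumed
    placeholder names for the arithmetized proof relation and the number
    coding G itself, in the usual textbook rendering rather than the 1931
    notation. In Lean 4:

        variable (Prf : Nat → Nat → Prop)  -- "Prf x s : x codes an F-proof of the sentence coded by s"
        variable (g : Nat)                 -- the number coding G itself

        -- inside F the sentence is purely arithmetical:
        -- "no number x codes an F-proof of the sentence coded by g"
        def GodelShape : Prop := ∀ x : Nat, ¬ Prf x g

    Read inside F it is only a claim about numbers; the gloss "G says that
    G is unprovable" lives in Meta-F, where the coding is visible.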



    Antinomy
    ...term often used in logic and epistemology, when describing a
    paradox or unresolvable contradiction.
    https://www.newworldencyclopedia.org/entry/Antinomy

    Remember, incompleteness only happens if a TRUE statement can't be
    proven; an actual "self-contradictory" statement won't be true.


    You have that incorrectly too.
    When G asserts its own unprovability in F the proof of G in F
    requires a sequence of inference steps in F that prove that they
    themselves do not exist.

    But G DOESN'T "assert its own unprovability in F", and G is not
    proven "in F".

    Your statement just shows that you don't understand the proof and are
    totally missing that almost all of the paper is written from the
    aspect of Meta-F.


    Gödel’s Theorem, as a simple corollary of Proposition VI (p. 57) is
    frequently called, proves that there are arithmetical propositions
    which are undecidable (i.e. neither provable nor disprovable) within
    their arithmetical system, and the proof proceeds by actually
    specifying such a proposition, namely the proposition g expressed by
    the formula to which “17 Gen r” refers (p. 58). g is an arithmetical
    proposition; but the proposition that g is undecidable within the
    system is not an arithmetical proposition, since it is concerned with
    provability within an arithmetical system, and this is a
    meta-arithmetical and not an arithmetical notion. Gödel’s Theorem is
    thus a result which belongs not to mathematics but to
    metamathematics, the name given by Hilbert to the study of rigorous
    proof in mathematics and symbolic logic

    https://mavdisk.mnsu.edu/pj2943kt/Fall%202015/Promotion%20Application/Previous%20Years%20Article%2022%20Materials/godel-1931.pdf

    Yes, Hilbert had similar errors in logic, which he, I believe,
    eventually realized. Yes, much of Godel's proof could be described as
    "meta-mathematics", but that meta- shows that IN MATHEMATICS ITSELF,
    there exist propositions that are true but can not be proven within
    mathematics. Thus, mathematics meets the requirements to be called
    "incomplete"


    (or is FALSE and can't be proven to be false, which is basically the
    same thing).


    So formal systems are "incomplete" in the same sense that we determine
    that a baker that cannot bake a proper angel food cake using ordinary
    red house bricks as the only ingredient lacks sufficient baking skill.

    Nope, because they are only incomplete if a statement that is TRUE
    can't be proven.


    The liar paradox, which is self-contradictory when applied to itself,
    is not self-contradictory when applied to another, different instance
    of itself.

    So, you don't understand what the "Liar Paradox" actually is. It is,
    and only is, a statement that asserts ITS OWN falsehood. It can't
    refer to a "different instance": either that other isn't "itself",
    so referring to it makes this statement not the liar's paradox, or it
    actually IS itself, and thus must have the same truth value. You don't
    seem to understand the fundamental rule that if a copy of the
    statement is considered to be "the same statement", then all those
    copies must, by definition, have the same truth value.


    This sentence is not true: "This sentence is not true"
    THIS SECOND SENTENCE IS THE LIAR PARADOX YOU FOOL

    Nope. The liar's paradox is of the form of "This statement is a LIE".

    The Liar's paradox (at least in its normal forms) never has one
    statement referring to a separate statement.


    The fact that you edited out my answer about that makes you GUILTY of
    the lie, and shows that you know you have no rebuttal to my correction
    of your error.

    That shows it isn't an "Honest Mistake" but a deliberate act of ignoring
    the truth to try to hold onto the errors that you think are true.


    inner one is neither true nor false
    outer one is true because the inner one is neither true nor false

    But that isn't the liar's paradox.

    Again, you show you don't understand the actual meaning of the words.


    Rather than paying attention you merely glance at what I say and say
    that I don't understand.
    Because you speak in Falsehoods and lies. Maybe you don't understand
    that by talking about an actual Theorem, you have ACCEPTED the rules of
    the logic system that theorem uses.

    Using a wrong logic system is just another form of lie.



    When G asserts its own unprovability in F the proof of G in F
    requires a sequence of inference steps in F that prove that they
    themselves do not exist. Metamathematics can see that G cannot be
    proved in F.

    Except that Godel's G doesn't "assert its own unprovability in F" nor is
    "G proved in F" (in fact, Godel shows such a proof is impossible).


    I AM NOT TALKING ABOUT EXACTLY WHAT HE SAYS ABOUT HIS PROOF
    I AM TALKING ABOUT WHAT HE SAYS IS AN EQUIVALENT PROOF

    So, you are admitting that you are just talking about a strawman.

    You think things are equivalent that actually have critical differences.

    That just shows you are too stupid to understand how logic actually works.


       ...14 Every epistemological antinomy can likewise be used for a
       similar undecidability proof... (Gödel 1931:39-41)

    Right,


    THEREFORE WHEN G ASSERTS ITS OWN UNPROVABILITY IN F
    (and F is powerful enough to have its own provability predicate)
    THEN THIS EPISTEMOLOGICAL ANTINOMY MEETS HIS EQUIVALENCE SPEC

    WE CAN SEE THAT IT IS AN {epistemological antinomy} BECAUSE
    THE PROOF OF G IN F REQUIRES A SEQUENCE OF INFERENCE STEPS
    IN F THAT PROVE THAT THEY THEMSELVES DO NOT EXIST.

    So, you are admitting to using the strawman of looking at summaries and
    not the actual thing you are claiming to refute.

    You are thus admitting you are too stupid to actually understand how
    logic actually works, and EVERYTHING you have done in your life is
    tainted by the stink of falsehood.


    metamathematics can PROVE that Godel's statement G is TRUE IN
    MATHEMATICS, but not provable there.

    You don't seem to understand how meta-systems work and how they can
    actually prove things about the system they are a meta- for.


    IT IS NOT MY LACK OF KNOWLEDGE OF LOGIC IT IS YOUR LACK OF KNOWLEDGE
    OF EPISTEMOLOGY

    Nope. First rule, you have to use the rules of the formal logic system
    that you are talking about.

    Your failure to understand that puts you firmly into the realm of the
    hand-wavy philosophers who like to talk in grand words, but don't
    understand how to actually do actual formal logic.

    Note, "Epistemology" is actually out of scope when talking about formal systems, but only for arguing if a given "formal system" meets your goals.

    The Formal Systems that contain "Mathematics" have defined their ideas
    of epistemology; you need to either accept it, or start all over and
    work with a new basis.

    All you are doing is showing that you are too stupid to understand the epistemological basis of formal logic, and how that interacts with other epistemologies. That means you really don't understand what actually IS epistemology.


    If you "Correct Reasoning" can't handle meta-logic, then you aren't
    going to be able to prove much in it, as most major proofs actually
    use meta-logic.



    You just don't understand that,





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)