• Nobody uses encrypted email, because it is too awkward. Possible solution

    From Hans-Georg Michna@21:1/5 to All on Sat Sep 7 10:41:29 2019
    I just stumbled across Delta Chat https://delta.chat/ and find the
    idea intriguing. It uses the IMAP protocol, particularly its IDLE
    push extension if available, to produce a speedy, chat-like system
    with full encryption and without any centralization (no central
    server). Simple, but effective.
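    Roughly, the push part can be sketched in a few lines of Python with
    the third-party IMAPClient package (the host and account below are
    placeholders, and this only illustrates the IDLE mechanism, not Delta
    Chat's actual code):

        from imapclient import IMAPClient

        server = IMAPClient("imap.example.org", ssl=True)   # placeholder host
        server.login("alice@example.org", "app-password")   # placeholder account
        server.select_folder("INBOX")

        server.idle()                      # enter IDLE: the server pushes changes
        while True:
            # Returns e.g. [(1, b'EXISTS')] when new mail arrives, [] on timeout.
            responses = server.idle_check(timeout=60)
            if responses:
                server.idle_done()         # leave IDLE to fetch the new message
                uids = server.search(["UNSEEN"])
                for uid, data in server.fetch(uids, ["ENVELOPE"]).items():
                    print(uid, data[b"ENVELOPE"].subject)
                server.idle()              # and go back to waiting

    That push loop is what makes the chat-like speed possible on top of
    an ordinary IMAP mailbox.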

    It is also open-source and available on various platforms. The
    Android version is available on F-Droid (preferred) and on
    Google's Play Store.

    Delta Chat messages to people who do not have it installed
    arrive as ordinary emails. These can obviously not be encrypted,
    but they make participation easier.

    It is still in beta, and I have not tested it yet, so I cannot
    tell how well it will be accepted around the world. But if it
    works as promised, it could be an excellent solution. Has anybody
    here tried it yet?

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Hans-Georg Michna on Sat Sep 7 11:14:59 2019
    On 9/7/19 2:41 AM, Hans-Georg Michna wrote:
    Nobody uses encrypted email, because it is too awkward.

    Your subject is false. I and a double digit number of people I know use encrypted email (S/MIME or PGP) daily.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans-Georg Michna@21:1/5 to Grant Taylor on Sun Sep 8 15:54:07 2019
    On Sat, 7 Sep 2019 11:14:59 -0600, Grant Taylor wrote:

    On 9/7/19 2:41 AM, Hans-Georg Michna wrote:

    Nobody uses encrypted email, because it is too awkward.

    Your subject is false. I and a double digit number of people I know
    use encrypted email (S/MIME or PGP) daily.

    Of course the subject is false. It oversimplifies or, in this
    case, exaggerates, but that's what very short statements do.

    But how about the idea behind it?

    You will probably have to admit that you exchange emails with
    lots of people who don't use encrypted email, because it is too
    awkward.

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Hans-Georg Michna on Sun Sep 8 12:16:16 2019
    Hans-Georg Michna <hans-georgNoEmailPlease@michna.com> writes:

    [...]

    You will probably have to admit that you exchange emails with
    lots of people who don't use encrypted email, because it is too
    awkward.

    As a lay user, yes, I agree it's difficult to deal with cryptography.
    Just having to *keep* a private key is a problem. We're always losing
    our passwords. (In fact, passwords are a pain and so are keys.)

    As a user, though, let me share an idea. Consider two people A and B
    and e-mail clients, cA and cB. A wants to exchange some e-mails with B.
    So A writes up his message and mails it to B. cA will not send the
    message itself to B just yet. That first e-mail message will only
    initiate a protocol to exchange public keys. If B accepts, more mail
    messages go back and forth until the protocol reaches its desired state,
    ready for exchanging encrypted data. (This can take days depending on
    how often A and B check their e-mail. It could take seconds if both are
    online at the same time.) Until this happens, cA and cB handle
    everything. A and B do nothing. The most they can do is take a look at
    the message status in their clients to see what phase the protocol is
    in. cA keeps the message locally until cB sends B's public key, which
    could be generated just for that e-mail thread, say.

    Now, cA and cB know each other, and communication takes place without
    the users having to do anything about keys and whatnot. cA and cB could
    be set up to use a new key for every message (slow), one per thread
    (faster), or even always the same key with everyone they talk to.
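    To make the idea concrete, here is a toy sketch (the names are made
    up and the ``encryption'' is a stand-in, not real cryptography): cA
    queues A's message, asks for B's key, and only encrypts and sends
    once the key has arrived. A real client would carry the key material
    in ordinary e-mail messages or headers.

        def toy_encrypt(key: int, plaintext: bytes) -> bytes:
            # Stand-in for real public-key encryption, to keep the sketch short.
            return bytes(b ^ key for b in plaintext)

        class Client:
            def __init__(self, address, public_key):
                self.address = address
                self.public_key = public_key      # pretend key material
                self.peer_keys = {}               # peer address -> peer key
                self.held = {}                    # peer address -> queued bodies

            def send(self, peer, body, wire):
                if peer.address in self.peer_keys:      # key known: encrypt and send
                    wire.append((peer.address,
                                 toy_encrypt(self.peer_keys[peer.address], body)))
                else:                                   # key unknown: hold and ask
                    self.held.setdefault(peer.address, []).append(body)
                    peer.offer_key(self, wire)          # key-exchange leg

            def offer_key(self, requester, wire):
                requester.learn_key(self.address, self.public_key, wire)

            def learn_key(self, address, key, wire):
                self.peer_keys[address] = key
                for body in self.held.pop(address, []):  # flush held messages
                    wire.append((address, toy_encrypt(key, body)))

        wire = []
        a = Client("a@example.org", 23)
        b = Client("b@example.org", 42)
        a.send(b, b"hello", wire)    # handshake first, then the encrypted message
        print(wire)                  # one obscured message addressed to b@example.org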

    All of this seems obvious. I'm not aware of any client doing it this
    way, though. Suppose one of these very popular e-mail clients began to
    do this by default. ``Everyone'' would start using it overnight without
    even knowing what's going on.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Hans-Georg Michna on Sun Sep 8 10:48:35 2019
    On 9/8/19 7:54 AM, Hans-Georg Michna wrote:
    Of course the subject is false. It oversimplifies or, in this case, exaggerates, but that's what very short statements do.

    And such very short statements are often wrong. Small changes could
    make them accurate. E.g. "Too few people use encrypted email, because
    it's too awkward."

    But how about the idea behind it?

    I agree that too few people use encrypted email.

    I think the idea that it's too awkward is not completely accurate. I
    think encrypted email can be, probably is by default, awkward to use.
    But it does not have to be.

    I believe that a decent MUA, possibly with an add-on, can make encrypted
    email relatively trivial to /use/ day-to-day.

    Granted, the setup is a bit onerous. But many people think that email
    setup is too onerous as is. I do think that the additional care and
    feeding of encryption (certificate installation and renewal) is worth
    the result.

    You will probably have to admit that you exchange emails with lots
    of people who don't use encrypted email,…

    Yes, every single day.

    …because it is too awkward.

    Nope. Every person I've asked to configure encryption has done so.
    Most of the people that I exchange unencrypted email with do so out of ignorance on their part. A very small number of people consciously
    choose not to use encryption.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Mon Sep 9 21:22:17 2019
    On 9/8/19 9:16 AM, Louis Valence wrote:
    As a lay user, yes, I agree it's difficult to deal with cryptography.
    Just having to *keep* a private key is a problem. We're always losing
    our passwords. (In fact, passwords are a pain and so are keys.)

    I'm of the opinion that users are, and have to be, responsible for
    something. Nothing states that what they are responsible for must be
    secure, though ideally it is, just that they can reproduce it when needed.

    As a user, though, let me share an idea. Consider two people A and
    B and e-mail clients, cA and cB. A wants to exchange some e-mails
    with B. So A writes up his message and mails it to B. cA will not
    send the message itself to B just yet. That first e-mail message
    will only initiate a protocol to exchange public keys. If B accepts,
    more mail messages go back and forth until the protocol reaches its
    desired state, ready for exchanging encrypted data. (This can take
    days depending on how often A and B check their e-mail. It could
    take seconds if both are online at the same time.) Until this
    happens, cA and cB handle everything. A and B do nothing. The most
    they can do is take a look at the message status in their clients to
    see what phase the protocol is in. cA keeps the message locally
    until cB sends B's public key, which could be generated just for
    that e-mail thread, say.

    My experience with S/MIME is somewhat like that. A / cA sends a
    /signed/ email to B / cB. cB then extracts what is necessary to encrypt messages to A. When B / cB sends a /signed/ (and possibly encrypted)
    message (back) to A, A / cA then extracts what is necessary to encrypt
    messages to B. One /signed/ email each way and both parties have what
    is necessary to send signed and / or encrypted emails between each other.

    I can't speak for PGP / GPG as I've done little with them.
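    For illustration only (the file names are hypothetical, and this uses
    the Python ``cryptography'' package rather than an MUA), the first
    /signed/ leg looks roughly like this:

        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.serialization import pkcs7

        with open("alice_cert.pem", "rb") as f:       # A's certificate
            cert = x509.load_pem_x509_certificate(f.read())
        with open("alice_key.pem", "rb") as f:        # A's private key, stays local
            key = serialization.load_pem_private_key(f.read(), password=None)

        signed = (
            pkcs7.PKCS7SignatureBuilder()
            .set_data(b"Hi B, this message is signed but not encrypted.\r\n")
            .add_signer(cert, key, hashes.SHA256())
            # SMIME encoding yields a multipart/signed body that carries A's
            # certificate alongside the signature, which is what cB harvests.
            .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
        )
        print(signed.decode()[:200])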

    Now, cA and cB know each other, and communication takes place without
    the users having to do anything about keys and whatnot. cA and cB
    could be set up to use a new key for every message (slow), one per
    thread (faster), or even always the same key with everyone they talk to.

    All A & B need to do above is to tell their clients to learn the other's
    public key (extracted from a /signature/). Then a well behaved S/MIME
    email client will automatically handle encryption henceforth.
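    The harvesting step on the receiving side is equally mechanical; a
    sketch, again with a hypothetical file name for the (base64-decoded)
    smime.p7s attachment:

        from cryptography.hazmat.primitives.serialization import pkcs7

        with open("smime.p7s", "rb") as f:
            blob = f.read()

        # The PKCS#7 structure attached to a signed message carries the
        # signer's certificate, and with it the public key to encrypt to.
        for cert in pkcs7.load_der_pkcs7_certificates(blob):
            print(cert.subject, cert.not_valid_after)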

    All of this seems obvious. I'm not aware of any client doing it this
    way, though. Suppose one of these very popular e-mail clients began
    to do this by default. ``Everyone'' would start using it overnight
    without even knowing what's going on.

    No, it couldn't happen overnight. Keys / certificates / et al. must be acquired from somewhere. This takes some upfront configuration of the
    client.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans-Georg Michna@21:1/5 to Louis Valence on Tue Sep 10 11:59:43 2019
    On Sun, 08 Sep 2019 12:16:16 -0300, Louis Valence wrote:

    Hans-Georg Michna <hans-georgNoEmailPlease@michna.com> writes:

    [...]

    As a user, though, let me share an idea. Consider two people A and B
    and e-mail clients, cA and cB. A wants to exchange some e-mails with B.
    So A writes up his message and mails it to B. cA will not send the
    message itself to B just yet. That first e-mail message will only
    initiate a protocol to exchange public keys. If B accepts, more mail
    messages go back and forth until the protocol reaches its desired state,
    ready for exchanging encrypted data. (This can take days depending on
    how often A and B check their e-mail. It could take seconds if both are
    online at the same time.) Until this happens, cA and cB handle
    everything. A and B do nothing. The most they can do is take a look at
    the message status in their clients to see what phase the protocol is
    in. cA keeps the message locally until cB sends B's public key, which
    could be generated just for that e-mail thread, say.

    Now, cA and cB know each other, and communication takes place without
    the users having to do anything about keys and whatnot. cA and cB could
    be set up to use a new key for every message (slow), one per thread
    (faster), or even always the same key with everyone they talk to.

    All of this seems obvious. I'm not aware of any client doing it this
    way, though. Suppose one of these very popular e-mail clients began to
    do this by default. ``Everyone'' would start using it overnight without
    even knowing what's going on.

    You are apparently describing what Delta Chat does behind the
    scenes. It is available today and works without any problems,
    as far as I can see.

    Try it; you may like it. It seems to be the only decentralized
    and end-to-end encrypted chat/email system. Unlike Hangouts,
    WhatsApp, Telegram and all the others, there is no big company
    here that can run a central server, store your metadata, or even
    peruse the content of your messages. That's something. And the
    idea is simple.

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans-Georg Michna@21:1/5 to All on Tue Sep 10 12:02:03 2019
    But making it as easy as Delta Chat might well convince more
    people to go encrypted, because it is much simpler and easier to
    handle than S/MIME.

    Hans

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Wed Sep 11 09:31:12 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    On 9/8/19 9:16 AM, Louis Valence wrote:
    As a lay user, yes, I agree it's difficult to deal with
    cryptography. Just having to *keep* a private key is a problem.
    We're always losing our passwords. (In fact, passwords are a pain
    and so are keys.)

    I'm of the opinion that users are, and have to be, responsible for
    something.

    [...]

    They cannot get away from being responsible. For instance, today most
    don't encrypt their communication and they all /respond/ to the
    consequences.

    As a user, though, let me share an idea. Consider two people A and
    B and e-mail clients, cA and cB. A wants to exchange some e-mails
    with B. So A writes up his message and mails it to B. cA will not
    send the message itself to B just yet. That first e-mail message
    will only initiate a protocol to exchange public keys. If B
    accepts, more mail messages go back and forth until the protocol
    reaches its desired state, ready for exchanging encrypted data.
    (This can take days depending on how often A and B check their
    e-mail. It could take seconds if both are online at the same time.)
    Until this happens, cA and cB handle everything. A and B do
    nothing. The most they can do is take a look at the message status
    in their clients to see what phase the protocol is in. cA keeps the
    message locally until cB sends B's public key, which could be
    generated just for that e-mail thread, say.

    My experience with S/MIME is somewhat like that. A / cA sends a
    /signed/ email to B / cB. cB then extracts what is necessary to
    encrypt messages to A. When B / cB sends a /signed/ (and possibly
    encrypted) message (back) to A, A / cA then extracts what is necessary
    to encrypt messages to B. One /signed/ email each way and both
    parties have what is necessary to send signed and / or encrypted
    emails between each other.

    Interesting! I didn't know about S/MIME. I'll read RFC 3369.

    [...]

    All of this seems obvious. I'm not aware of any client doing it
    this way, though. Suppose one of these very popular e-mail clients
    began to do this by default. ``Everyone'' would start using it
    overnight without even knowing what's going on.

    No, it couldn't happen overnight. Keys / certificates / et al. must
    be acquired from somewhere. This takes some upfront configuration of
    the client.

    Please educate me on this. Why do we need certificates for the context
    of A, B communicating directly? It's not clear to me but I sense
    certificates might be involved to protect A and B from a middle man C.
    If C can tamper with A and B's communication, then on the first signed
    e-mail from A to B (which includes A's public key), C could replace the
    public key; then B would encrypt B's reply with C's public key, then C
    would decrypt the message and re-encrypt it back to A using A's public
    key --- and A and B would never notice anything wrong. Even if this is
    right, it's not clear to me how certificates solve this problem, so do
    share your knowledge, if you would. Thanks!
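    As a toy illustration of that substitution (made-up message, bare RSA
    with the Python ``cryptography'' package rather than real S/MIME):

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        a_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        c_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        # A announces a_key.public_key(), but C swaps in its own public key,
        # so this is the key B actually sees.
        key_b_sees = c_key.public_key()

        # B "replies to A", in reality encrypting to C.
        ciphertext = key_b_sees.encrypt(b"meet at noon", oaep)

        # C reads the plaintext, re-encrypts it to A, and A notices nothing.
        plaintext = c_key.decrypt(ciphertext, oaep)
        forwarded = a_key.public_key().encrypt(plaintext, oaep)
        print(a_key.decrypt(forwarded, oaep))    # b'meet at noon'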

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Wed Sep 11 21:07:26 2019
    On 9/11/19 6:31 AM, Louis Valence wrote:
    Interesting! I didn't know about S/MIME. I'll read RFC 3369.

    Read RFC 3369, et al., if you want to. But I'd suggest avoiding the
    murky details of S/MIME, and PGP, for your health. I think a good
    tutorial would be a much better introduction. It's a case of (likely)
    not being able to see the forest (the larger S/MIME ecosystem) for all
    the trees (the murky details) getting in the way.

    Please educate me on this.

    I'll try.

    Why do we need certificates for the context of A, B communicating
    directly?

    Because S/MIME, like SSL / TLS, uses public & private keys via
    certificates as the foundation of how it works.

    It's not clear to me but I sense certificates might be involved
    to protect A and B from a middle man C.

    It's more the encryption that protects A & B from the middle man C. But
    the encryption uses the keys from the certificates.

    If C can tamper with A and B's communication, then on the first signed
    e-mail from A to B (which includes A's public key), C could replace
    the public key; then B would encrypt B's reply with C's public key,
    then C would decrypt the message and re-encrypt it back to A using
    A's public key --- and A and B would never notice anything wrong.

    That theoretically could happen.

    Even if this is right, it's not clear to me how certificates solve
    this problem, so do share your knowledge, if you would.

    Certificates are (normally) issued by Certificate Authorities, which
    are ideally trusted. So, a CA will issue a certificate for A containing
    A's pertinent details, notably A's email address. Likewise a CA will
    issue a certificate for B containing B's pertinent details.

    So, when B receives a signed message from A, B can check details of the
    signature, including which CA issued the certificate. A can do
    likewise. A and B can (and ideally should) communicate through
    different channels to share details of their certificates and validate
    some details of the other's certificate.
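    A sketch of that check with the Python ``cryptography'' package
    (hypothetical file names; it assumes an RSA-signed certificate):

        from cryptography import x509
        from cryptography.hazmat.primitives.asymmetric import padding

        with open("ca_cert.pem", "rb") as f:
            ca_cert = x509.load_pem_x509_certificate(f.read())
        with open("sender_cert.pem", "rb") as f:
            sender_cert = x509.load_pem_x509_certificate(f.read())

        # The pertinent details: subject should name the sender's mailbox,
        # issuer should match the CA we expect.
        print(sender_cert.subject)
        print(sender_cert.issuer)

        # Verify the CA's signature over the certificate body; this raises
        # an exception if the certificate was not signed by this CA.
        ca_cert.public_key().verify(
            sender_cert.signature,
            sender_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            sender_cert.signature_hash_algorithm,
        )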

    Thus, B can trust the signature on the email from A, and A can trust the signature on an email from B.

    Once the trust is established, it's largely smooth sailing until it's
    time to renew certificates.

    At least that's my understanding at a high level.

    I hope that helps.

    Thanks!

    You're welcome.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Thu Sep 12 14:24:04 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    On 9/11/19 6:31 AM, Louis Valence wrote:
    Interesting! I didn't know about S/MIME. I'll read RFC 3369.

    Read RFC 3369, et al., if you want to. But I'd suggest avoiding the
    murky details of S/MIME, and PGP, for your health. I think a good
    tutorial would be a much better introduction. It's a case of
    (likely) not being able to see the forest (the larger S/MIME
    ecosystem) for all the trees (the murky details) getting in the way.

    Yes, RFC 3369 is more interested in defining the syntax of the data to
    be exchanged. So I dropped it.

    [...]

    It's not clear to me but I sense certificates might be involved to
    protect A and B from a middle man C.

    It's more the encryption that protects A & B from the middle man C.
    But the encryption uses the keys from the certificates.

    From what you say I get the impression that public keys are somehow
    embedded in certificates. Is that really the case?

    If C can tamper with A and B's communication, then on the first signed
    e-mail from A to B (which includes A's public key), C could replace
    the public key; then B would encrypt B's reply with C's public key,
    then C would decrypt the message and re-encrypt it back to A using
    A's public key --- and A and B would never notice anything wrong.

    That theoretically could happen.

    [...]

    Even if this is right, it's not clear to me how certificates solve
    this problem, so do share your knowledge, if you would.

    Certificates are (normally) issued by Certificate Authorities, which
    are ideally trusted. So, a CA will issue a certificate for A
    containing A's pertinent details, notably A's email address. Likewise
    a CA will issue a certificate for B containing B's pertinent details.

    So, when B receives a signed message from A, B can check details of
    the signature, including which CA issued the certificate. A can do
    likewise. A and B can (and ideally should) communicate through
    different channels to share details of their certificates and validate
    some details of the other's certificate.

    Thus, B can trust the signature on the email from A, and A can trust
    the signature on an email from B.

    Once the trust is established, it's largely smooth sailing until it's
    time to renew certificates.

    At least that's my understanding at a high level.

    I confirm your understanding. It is now mine as well. Let's discuss
    the general assumption that one can safely get the public key from a
    Certificate Authority. How can I safely get such a thing? I don't
    think I've done anything on my computer in a safe way at all.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Fri Sep 13 07:16:20 2019
    On 9/12/19 11:24 AM, Louis Valence wrote:
    Yes, RFC 3369 is more interested in defining the syntax of the data
    to be exchanged. So I dropped it.

    ;-)

    From what you say I get the impression that public keys are somehow
    embedded in certificates. Is that really the case?

    Yes. I think that is a fair and accurate statement. Though there is a
    lot more to X.509 certificates.

    the general assumption that one can safely get the public key from
    a Certificate Authority. How can I safely get such a thing?

    The CAs that I've used never had the private key. Rather, my web
    browser generated the key pair along with a Certificate Signing Request.
    The CSR was sent to the CA for them to sign. The CA never had
    anything to lose or distrust.

    I don't think I've done anything on my computer in a safe way at all.

    To each his / her / their own.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Fri Sep 13 14:21:24 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    [...]

    the general assumption that one can safely get the public key from a
    Certificate Authority. How can I safely get such a thing?

    The CAs that I've used never had the private key. Rather, my web
    browser generated the key pair along with a Certificate Signing
    Request. The CSR was sent to the CA for them to sign. The CA never
    had anything to lose or distrust.

    I'm lost here. Did you say ``private key''? I can't ``parse'' the
    first sentence. I can't understand the rest of the paragraph either.
    What is a CSR and did you send a CA one? With what purpose? (How did
    you know the CA was really the CA you wanted to talk to?)

    [...]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Sun Sep 15 00:05:38 2019
    On 9/13/19 11:21 AM, Louis Valence wrote:
    The CAs that I've used never had the private key. Rather, my web
    browser generated the key pair along with a Certificate Signing
    Request. The CSR was sent to the CA for them to sign. The CA never
    had anything to lose or distrust.

    I'm lost here. Did you say ``private key''?

    Yes.

    X.509 certificates are a key pair, public and private, with some
    additional metadata.

    "The CAs that I've used never had the private key."

    My web browser generated the public / private key pair, and then
    subsequently sent the /public/ key with some metadata to the CA. That
    is largely what a Certificate Signing Request is.

    The CA signed my CSR with their private key, thus turning it into a
    signed certificate, which they sent back to me.

    Notice how the CA never had the private key.
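    A sketch of what the browser did, using the Python ``cryptography''
    package (the names are made up):

        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        # The key pair is generated locally; the private half never leaves
        # this machine.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        csr = (
            x509.CertificateSigningRequestBuilder()
            .subject_name(x509.Name([
                x509.NameAttribute(NameOID.COMMON_NAME, "A. Example"),
                x509.NameAttribute(NameOID.EMAIL_ADDRESS, "a@example.org"),
            ]))
            .sign(private_key, hashes.SHA256())   # signed with A's own private key
        )

        # Only this (public key + metadata + A's signature) goes to the CA.
        print(csr.public_bytes(serialization.Encoding.PEM).decode())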

    I can't ``parse'' the first sentence. I can't understand the rest
    of the paragraph either.

    Did that help?

    What is a CSR and did you send a CA one?

    A CSR is what a certificate is created from.

    With what purpose?

    To transport my public key and metadata to the CA for them to sign.

    (How did you know the CA was really the CA you wanted to talk to?)

    Without getting extremely deep into the weeds and bordering on
    conspiracy theory type discussions, let's just say for the sake of
    discussion that I can't articulate well enough to /prove/ that I am
    talking to the CA. (There is also the fact that I was connecting to the
    CA via HTTPS.)

    Once I received the certificate from the CA, I could extract other
    metadata and use the CA's well-known /public/ key to validate the hash
    that they added to the certificate. This allows me to mathematically
    validate that it was the CA who signed it.

    I think it largely doesn't matter if there was someone else in between
    the CA and myself. I say this because there was nothing that the
    intermediary could have done that would cause problems. They could not
    modify my CSR because it was signed with my private key. So any
    certificate I received based on a modified CSR would fail to validate
    with my private key. Similarly, things would fail to validate if the
    intermediary tried to modify the certificate on its way back to me.
    I'm also not worried if the intermediary keeps a copy of the
    certificate, because it's effectively useless and a waste of bytes on
    disk without my private key. (Remember, my private key never left my
    system.)

    So, an intermediary could at most prevent me from getting a functional certificate, thereby performing a Denial of Service. But that would be
    easy to detect nearly instantaneously.
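    The tamper check itself is mechanical; a sketch with the Python
    ``cryptography'' package and a hypothetical file name:

        from cryptography import x509

        with open("request.csr", "rb") as f:
            csr = x509.load_pem_x509_csr(f.read())

        # False means the CSR body no longer matches the requester's own
        # signature, i.e. someone modified it after it was generated.
        print(csr.is_signature_valid)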



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Sun Sep 15 12:08:18 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    On 9/13/19 11:21 AM, Louis Valence wrote:
    The CAs that I've used never had the private key. Rather, my web
    browser generated the key pair along with a Certificate Signing
    Request. The CSR was sent to the CA for them to sign. The CA never
    had anything to lose or distrust.

    I'm lost here. Did you say ``private key''?

    Yes.

    X.509 certificates are a key pair, public and private, with some
    additional metadata.

    "The CAs that I've used never had the private key."

    My web browser generated the public / private key pair, and then
    subsequently sent the /public/ key with some metadata to the CA. That
    is largely what a Certificate Signing Request is.

    The CA signed my CSR with their private key, thus turning it into a
    signed certificate, which they sent back to me.

    Notice how the CA never had the private key.

    I can't ``parse'' the first sentence. I can't understand the rest
    of the paragraph either.

    Did that help?

    A little. Let's see a bit further.

    What is a CSR and did you send a CA one?

    A CSR is what a certificate is created from.

    With what purpose?

    To transport my public key and metadata to the CA for them to sign.

    Why should you want your own public key signed by a CA?

    [...]

    Once I received the certificate from the CA, I could extract other
    metadata and use the CA's well-known /public/ key to validate the hash
    that they added to the certificate. This allows me to mathematically
    validate that it was the CA who signed it.

    The ``hash'' is the signature the CA added. To validate the signature
    you need the CA's public key, but it is the CA itself that's giving you
    its public key. I think that in your description, your communication
    with the CA is assumed to be safe and the identity of the CA is also
    assumed to be correct. (See below for a clearer presentation of this
    point of view.)

    I think it largely doesn't matter if there was someone else in between
    the CA and myself. I say this because there was nothing that the
    intermediary could have done that would cause problems. They could
    not modify my CSR because it was signed with my private key. So any
    certificate I received based on a modified CSR would fail to validate
    with my private key. Similarly, things would fail to validate if the
    intermediary tried to modify the certificate on its way back to me.
    I'm also not worried if the intermediary keeps a copy of the
    certificate, because it's effectively useless and a waste of bytes on
    disk without my private key. (Remember, my private key never left my
    system.)

    Here's what I think. Suppose C is a middle-man. (For instance, your
    ISP.) Before you establish a TLS session with your CA, to send your CSR
    or whatever, C directs you to a different CA, with whom you establish a
    TLS session and proceed. C doesn't need to modify your CSR; it only
    needs to put a different CA in front of you. You can only detect such
    an identity violation if you already have the CA's public key, with
    which you can verify their signature. So you need to safely get the
    CA's public key. How would you get such a thing safely?

    So, an intermediary could at most prevent me from getting a functional certificate, thereby performing a Denial of Service. But that would
    be easy to detect nearly instantaneously.

    That's true.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Sat Sep 21 20:42:33 2019
    On 9/15/19 9:08 AM, Louis Valence wrote:
    A little. Let's see a bit further.

    :-)

    Why should you want your own public key signed by a CA?

    Because I want a recipient that doesn't trust me, but (hypothetically)
    does trust a public CA. So if said public CA vouches for me, then
    (hopefully) the recipient will trust the CA's trust in me and thus trust
    me themselves. (Think transitive property in math.)

    The ``hash'' is the signature the CA added. To validate the signature
    you need the CA's public key, but it is the CA itself that's giving you
    its public key. I think that in your description, your communication
    with the CA is assumed to be safe and the identity of the CA is also
    assumed to be correct. (See below for a clearer presentation of this
    point of view.)

    Yes.

    Here's what I think. Suppose C is a middle-man. (For instance, your
    ISP.) Before you establish a TLS session with your CA, to send your CSR
    or whatever, C directs you to a different CA, with whom you establish a
    TLS session and proceed. C doesn't need to modify your CSR; it only
    needs to put a different CA in front of you. You can only detect such
    an identity violation if you already have the CA's public key, with
    which you can verify their signature. So you need to safely get the
    CA's public key. How would you get such a thing safely?

    First, the TLS connection with the CA is established before sending the
    CSR to them.

    Second, most OSs come with the public keys of many (hypothetically)
    trusted / well-known public CAs. These public keys in the OS's (or
    application's) public key store are what are used to validate
    signatures (hashes) from the CAs.
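    For example, Python's standard ssl module builds its default context
    from exactly that store (www.example.com is just a stand-in):

        import socket, ssl

        # create_default_context() calls load_default_certs(), pulling in the
        # CA public keys shipped with the OS (or with the Python build).
        ctx = ssl.create_default_context()

        with socket.create_connection(("www.example.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
                # The handshake succeeds only if the server's certificate chain
                # validates against one of those preinstalled CA keys.
                print(tls.version(), tls.getpeercert()["issuer"])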



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Sun Sep 22 23:48:47 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    [...]

    Why should you want your own public key signed by a CA?

    Because I want a recipient that doesn't trust me, but (hypothetically)
    does trust a public CA. So if said public CA vouches for me, then
    (hopefully) the recipient will trust the CA's trust in me and thus
    trust me themselves. (Think transitive property in math.)

    Got ya.

    [...]

    Here's what I think. Suppose C is a middle-man. (For instance, your
    ISP.) Before you establish a TLS session with your CA, to send your CSR
    or whatever, C directs you to a different CA, with whom you establish a
    TLS session and proceed. C doesn't need to modify your CSR; it only
    needs to put a different CA in front of you. You can only detect such
    an identity violation if you already have the CA's public key, with
    which you can verify their signature. So you need to safely get the
    CA's public key. How would you get such a thing safely?

    First, the TLS connection with the CA is established before sending
    the CSR to them.

    I understand. We have to assume the TLS connection is safe. But I
    can't see a method to be safe unless you assume that your ISP, say, is
    not after you. They'd easily make you talk to an impostor CA, and
    you'd only detect it if the public key of your CA was given to you
    safely. (If you downloaded your browser or your OS by way of your ISP,
    then your entire security rests on you trusting your ISP.)

    Second, most OSs come with the public keys of many (hypothetically)
    trusted / well-known public CAs. These public keys in the OS's (or
    application's) public key store are what are used to validate
    signatures (hashes) from the CAs.

    My browser was downloaded from the Internet, so my ISP could've easily
    given me an impostor browser with impostor public keys and CA
    certificates. My OS has been updated so many times (against my will)
    that I have no say in what goes on in there.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Sun Sep 22 22:26:05 2019
    On 9/22/19 8:48 PM, Louis Valence wrote:
    Got ya.

    :-)

    I understand. We have to assume the TLS connection is safe.

    No, we don't have to assume that. There are ways to mathematically
    prove it. I can't articulate it. But there are people who can.

    But I can't see a method to be safe unless you assume that your ISP,
    say, is not after you. They'd easily make you talk to an impostor CA,
    and you'd only detect it if the public key of your CA was given to you
    safely.

    Let's say that you got the public key without it passing through your
    ISP in an untrusted manner. This includes it being shipped on an OS
    installation CD from a software vendor, never passing through your ISP
    at all, and likewise it passing through your ISP over an encrypted
    connection.

    You have a priming issue: how do you start the trust? Once you have
    primed things and started the trust, it's relatively easy to roll
    things forward.

    (If you downloaded your browser or your OS by way of your ISP, then
    your entire security rests on you trusting your ISP.)

    Talk to the Debian Linux folks that insist on performing downloads
    through HTTP without any encryption and relying on locally verified
    signatures.

    Point being, you don't have to trust your ISP. There are ways to
    validate what passes through them without trusting them.
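    The principle can be sketched with a plain checksum comparison (apt
    actually verifies OpenPGP signatures on its Release files; the file
    name and the expected hash below are placeholders):

        import hashlib

        def sha256_of(path: str) -> str:
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        expected = "<hash published through an already-trusted channel>"
        if sha256_of("debian-netinst.iso") != expected:
            raise SystemExit("download does not match the published checksum")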

    My browser was downloaded from the Internet, so my ISP could've easily
    given me an impostor browser with impostor public keys and CA
    certificates.

    Given many forms of security & protection that have been in common use
    for 10–20 years, I think it's actually more difficult to get an
    illegitimate download than you seem to be describing.

    My OS has been updated so many times (against my will) that I have no
    say in what goes on in there.

    You do have some say. The opportunity cost is high enough that most
    people won't object.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Mon Sep 23 08:41:05 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    On 9/22/19 8:48 PM, Louis Valence wrote:
    Got ya.

    :-)

    I understand. We have to assume the TLS connection is safe.

    No, we don't have to assume that. There are ways to mathematically
    prove it. I can't articulate it. But there are people who can.

    (You can't be thinking of protocols like Diffie-Hellman here because ---
    like you notice below --- we have to establish *some* trust before any
    security can take place.)

    But I can't see a method to be safe unless you assume that your ISP,
    say, is not after you. They'd easily make you talk to an impostor CA,
    and you'd only detect it if the public key of your CA was given to you
    safely.

    Let's say that you got the public key without it passing through your
    ISP in an untrusted manner. This includes it being shipped on an OS
    installation CD from a software vendor, never passing through your
    ISP at all, and likewise it passing through your ISP over an
    encrypted connection.

    Yes, that's the solution, AFAIK. But that's also the point I made at
    the beginning of this conversation: I haven't done anything safe on my
    computer, ever. I never really had the opportunity.

    You have a priming issue: how do you start the trust? Once you
    have primed things and started the trust, it's relatively easy to roll
    things forward.

    Agreed.

    (If you downloaded your browser or your OS by way of your ISP, then
    your entire security rests on you trusting your ISP.)

    Talk to the Debian Linux folks that insist on performing downloads
    through HTTP without any encryption and relying on locally verified signatures.

    Point being, you don't have to trust your ISP. There are ways to
    validate what passes through them without trusting them.

    True, but you'll have to bypass them at least on a first move.

    My browser was downloaded from the Internet, so my ISP could've easily
    given me an impostor browser with impostor public keys and CA
    certificates.

    Given many forms of security & protection that have been in common use
    for 10–20 years, I think it's actually more difficult to get an illegitimate download than you seem to be describing.

    If I don't trust my ISP, I can't see a way out of this.

    My OS has been updated so many times (against my will) that I have no
    say in what goes on in there.

    You do have some say. The opportunity cost is high enough that most
    people won't object.

    I got lost here. The opportunity cost is high? What opportunity?
    Opportunity for what?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Louis Valence on Mon Sep 23 08:59:47 2019
    On 9/23/19 5:41 AM, Louis Valence wrote:
    True, but you'll have to bypass them at least on a first move.

    Hence "priming".

    You also don't /have/ to bypass them. You can use them and validate
    what comes via them using other methods.

    But that moves the priming issue to a different communications channel.

    If I don't trust my ISP, I can't see a way out of this.

    You need some seed of trust. That way you can authenticate what passes
    through untrusted channels like your ISP.

    I got lost here. The opportunity cost is high? What opportunity? Opportunity for what?

    You have the option of not accepting the updates. But doing so has the opportunity cost of not benefiting from the foregone updates and the
    hassle of doing so. Thus the opportunity cost, or what you pay / give
    up by choosing to forego the updates.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Valence@21:1/5 to Grant Taylor on Mon Sep 23 23:12:00 2019
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    [...]

    If I don't trust my ISP, I can't see a way out of this.

    You need some seed of trust. That way you can authenticate what
    passes through untrusted channels like your ISP.

    Yes, that's my understanding too.

    I got lost here. The opportunity cost is high? What opportunity?
    Opportunity for what?

    You have the option of not accepting the updates. But doing so has
    the opportunity cost of not benefiting from the foregone updates and
    the hassle of doing so. Thus the opportunity cost, or what you pay /
    give up by choosing to forego the updates.

    I'm running Windows 10. I don't know how to tell it not to update. I
    would actually like this. I don't have any need to update anything.
    This is essentially just my typewriter and I would appreciate finding it exactly as I left it the last time. But as a matter of fact I
    sometimes find it completely different. The last update even changed
    icons in the system. ``Please don't touch my stuff. I need it for
    work. This is not a toy.''

    Also, I have to report that at one time in the past (a couple of years
    ago), I did try some hacks to stop the updates. I think I made Windows
    think my local network was actually a slow network, so it wouldn't do
    updates on it. I don't know if this ever worked. Little by little
    things were changing too --- I believe some of the so-called Windows
    apps stop working if they're not updated, because I think they talk to
    a server, and the server likely stops talking to them once they're
    ``too old''. FWIW, though, that's not really a problem for me --- as
    long as my own programs run, which I believe they always do, in
    particular if the system doesn't change behind their backs. Anyhow, I
    suppose whatever I did was undone, because the system updates regularly
    now.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)