• Integrating Kerberos with a Java application using JAAS

    From Aparajita Singh@21:1/5 to All on Fri Jun 12 19:35:50 2020
    Hi,

    We are trying to migrate an unauthenticated ZooKeeper cluster to a Kerberos-authenticated one. We followed this
    <https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+and+SASL> guide for configuring Kerberos on ZooKeeper and this
    <https://web.ornl.gov/~romeja/HowToKerb.html#Install> guide for setting up a KDC host.
    The issue right now is that when the ZooKeeper shell client requests some data, the server cannot decrypt the service ticket issued by the KDC, so authentication fails.

    Has anyone faced this issue before? Any help would be appreciated.

    *Setup:*
    Principal name is zookeeper/stage-kdc-zk-2face@stage.fdp.kafka for both
    server and client.
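
    For reference, our JAAS configuration follows the pattern from the linked guide. The sketch below is illustrative rather than our exact file; the keytab path is an assumption, and the JVM is pointed at the file with -Djava.security.auth.login.config.

    *jaas.conf (sketch):*
    Server {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      keyTab="/etc/krb5.keytab"
      storeKey=true
      principal="zookeeper/stage-kdc-zk-2face@stage.fdp.kafka";
    };
    Client {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      keyTab="/etc/krb5.keytab"
      storeKey=true
      principal="zookeeper/stage-kdc-zk-2face@stage.fdp.kafka";
    };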

    *Data request command:*
    zookeeper-client -server stage-kdc-zk-2face:2181 get /test2

    *Stack trace from client:*
    Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException:
    KeeperErrorCode = ConnectionLoss for /test2
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
    at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:717)
    at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:591)
    at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:354)
    at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)

    *Error in zookeeper server:*
    2020-06-12 18:53:57,510 - WARN [NIOServerCxn.Factory: 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@969] - Client failed to SASL
    authenticate: javax.security.sasl.SaslException: GSS initiate failed
    [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism
    level: Invalid argument (400) - Cannot find key of appropriate type to
    decrypt AP REP - AES256 CTS mode with HMAC SHA1-96)]

    *krb5kdc.log:*
    Jun 12 18:53:57 stage-kdc-zk-2face krb5kdc[1391](info): AS_REQ (2 etypes {aes256-cts-hmac-sha1-96(18), aes128-cts-hmac-sha1-96(17)}) 10.34.169.158: ISSUE: authtime 1591968237, etypes {rep=aes256-cts-hmac-sha1-96(18), tkt=aes256-cts-hmac-sha1-96(18), ses=aes256-cts-hmac-sha1-96(18)}, zookeeper/stage-kdc-zk-2face@stage.fdp.kafka for krbtgt/stage.fdp.kafka@stage.fdp.kafka
    Jun 12 18:53:57 stage-kdc-zk-2face krb5kdc[1391](info): TGS_REQ (4 etypes {aes256-cts-hmac-sha1-96(18), aes128-cts-hmac-sha1-96(17), DEPRECATED:des3-cbc-sha1(16), DEPRECATED:arcfour-hmac(23)}) 10.34.169.158: ISSUE: authtime 1591968237, etypes {rep=aes256-cts-hmac-sha1-96(18), tkt=aes256-cts-hmac-sha1-96(18), ses=aes256-cts-hmac-sha1-96(18)}, zookeeper/stage-kdc-zk-2face@stage.fdp.kafka for zookeeper/stage-kdc-zk-2face@stage.fdp.kafka

    --
    Thanks,
    Aparajita

  • From Greg Hudson@21:1/5 to Aparajita Singh on Fri Jun 12 11:48:16 2020
    To: kerberos@mit.edu

    On 6/12/20 10:05 AM, Aparajita Singh wrote:
    [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96)]

    Most likely the long-term key of the service as seen by the KDC does not
    match the entry in the keytab of the service.

    Each time you run the kadmin "ktadd" command, new keys are generated for
    the service, with a new key version number (kvno), and are added to the
    keytab on whatever machine you run it on. Any existing keytab file
    elsewhere is invalidated by the generation of new keys.
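
    For instance (the admin principal here is just a placeholder), you can ask the
    KDC which key version it currently holds for the service:

    kadmin -p admin/admin -q "getprinc zookeeper/stage-kdc-zk-2face@stage.fdp.kafka"
        # look for the "Key: vno N, ..." lines; N is the kvno the KDC will issue tickets against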

    Since you are using the same client and service principal name (why?),
    you may have provisioned keytab files for the same principal name on the
    client and server hosts. If you really need to use the same client and
    server principal name, you will need to provision one keytab file and
    copy it around (with scp or similar) rather than provision it separately
    on each machine.
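
    Schematically, something like this (the keytab path and admin principal are placeholders):

    # on one host only: generate the keytab once
    kadmin -p admin/admin -q "ktadd -k /etc/zookeeper.keytab zookeeper/stage-kdc-zk-2face@stage.fdp.kafka"
    # then copy that same file to every other host that needs it
    scp /etc/zookeeper.keytab other-host:/etc/zookeeper.keytab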

    You can use "kvno zookeeper/stage-kdc-zk-2face@stage.fdp.kafka" on the
    client to see what kvno of tickets the KDC issued to the client. You
    can use "klist -k" or "klist -k -t /path/to/keytab" to see the kvno
    present in a keytab file.
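
    Concretely, the kvno reported by those two commands must agree (the keytab path
    below stands in for whatever file the server actually uses):

    kvno zookeeper/stage-kdc-zk-2face@stage.fdp.kafka   # kvno of the service ticket the KDC issued
    klist -k -t /etc/krb5.keytab                        # kvno(s) stored in the keytab
    # if the keytab shows an older kvno than the ticket, the keytab is stale and must be re-provisioned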

    As an aside, the instructions you reference are from 17 years ago.
    Please refer to https://web.mit.edu/kerberos/krb5-latest/doc/

  • From Aparajita Singh@21:1/5 to Greg Hudson on Fri Jun 12 23:03:49 2020
    Copy: kerberos@mit.edu

    Thanks Greg for the quick response.

    I don't need to use the same principal name for the client and server. I had
    initially configured the server to use the principal name
    "host/stage-kdc-zk-2face@stage.fdp.kafka", but the krb5kdc log showed that when
    the client tried to request data from the server, it asked for a service ticket
    for "zookeeper/stage-kdc-zk-2face@stage.fdp.kafka" when it should have been
    asking for "host/stage-kdc-zk-2face@stage.fdp.kafka". The log from the KDC is
    below. My current focus is to integrate the KDC with the ZooKeeper cluster, so
    I am using the same principal for both server and client for the time being.

    *KDC log:*
    Jun 12 18:09:11 stage-kdc-zk-2face krb5kdc[1391](info): TGS_REQ (4 etypes {aes256-cts-hmac-sha1-96(18), aes128-cts-hmac-sha1-96(17), DEPRECATED:des3-cbc-sha1(16), DEPRECATED:arcfour-hmac(23)}) 10.34.169.158: ISSUE: authtime 1591965551, etypes {rep=aes256-cts-hmac-sha1-96(18), tkt=aes256-cts-hmac-sha1-96(18), ses=aes256-cts-hmac-sha1-96(18)}, zookeeper/stage-kdc-zk-2face@stage.fdp.kafka for zookeeper/stage-kdc-zk-2face@stage.fdp.kafka
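
    My understanding, to be verified against our ZooKeeper version, is that the client
    builds the service principal as <name>/<server-host>, where <name> comes from the
    zookeeper.sasl.client.username system property and defaults to "zookeeper". Assuming
    the zookeeper-client wrapper passes CLIENT_JVMFLAGS through to the JVM, something
    like this should make the client request host/... instead:

    export CLIENT_JVMFLAGS="-Dzookeeper.sasl.client.username=host"
    zookeeper-client -server stage-kdc-zk-2face:2181 get /test2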

    As per the output of the kvno and klist commands, the key version is 1 in both,
    but there were duplicate entries in the keytab for each encryption type. I must
    have created the duplicates while testing, so I destroyed the database (which
    wasn't necessary, but I got confused here), created new principals, and
    generated a new keytab. I then used kinit to obtain a ticket for the
    "zookeeper/stage-kdc-zk-2face@stage.fdp.kafka" principal and used the kvno and
    klist commands you shared to verify the key version number.
    This was the output:
    *user@stage-kdc-zk-2face:~$* sudo /krb5/bin/kvno -k /etc/krb5.keytab zookeeper/stage-kdc-zk-2face@stage.fdp.kafka
    zookeeper/stage-kdc-zk-2face@stage.fdp.kafka: kvno = 1, keytab entry valid
    *user@stage-kdc-zk-2face:~$* sudo /krb5/bin/klist -e -k /etc/krb5.keytab | grep zookeeper/stage-kdc-zk-2face@stage.fdp.kafka
    1 zookeeper/stage-kdc-zk-2face@stage.fdp.kafka (aes256-cts-hmac-sha1-96)
    1 zookeeper/stage-kdc-zk-2face@stage.fdp.kafka (aes128-cts-hmac-sha1-96)

    So there seems to be no mismatch in the keys. I set the "useTicketCache" option
    to true and verified from the ZooKeeper client's debug logs that it was using
    the ticket cache. The issue still persists and the logs from my original email
    haven't changed.
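
    For completeness, the client JAAS entry now looks roughly like this (a sketch;
    only the useTicketCache option and the principal are the relevant parts):

    Client {
      com.sun.security.auth.module.Krb5LoginModule required
      useTicketCache=true
      principal="zookeeper/stage-kdc-zk-2face@stage.fdp.kafka";
    };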

    --
    Thanks,
    Aparajita
