Tag: keystore

  • Wrong again, there is no return code 0 on self signed certs

    Morning,

    It looks like I was wrong again with the SSL generation script, so here is the second article on the subject.

    Return code 0 is not good after all; it signals that the Kafka broker is closing the connection really fast.

    So:

  • There is no return code 0 on self-signed certs
  • Please make sure that you have a certificate chain when you test
  • I will give you just the server side; for the client it's still not clear whether it works. Once I have confirmation I will post it.

    #!/bin/bash
    HOST=<%= @fqdn %>
    PASSWORD=<%= @pass %>
    KEYSTOREPASS=<%= @keystorepass %>
    VALIDITY=365
    
    # generate the CA key and a self-signed CA certificate
    openssl genrsa -out CA.key 2048
    openssl req -new -x509 -keyout CA.key -out ca-cert -days $VALIDITY -subj "/CN=${HOST}/OU=MyTeam/O=MyCompany/L=Bucharest/S=Romania/C=RO" -passout pass:$PASSWORD
    # generate the broker key pair inside the keystore and create a signing request
    keytool -keystore kafka.server.keystore.jks -alias $HOST -validity $VALIDITY -genkey -dname "CN=${HOST}, OU=MyTeam, O=MyCompany, L=Bucharest, S=Romania, C=RO" -storepass $KEYSTOREPASS -keypass $KEYSTOREPASS
    keytool -keystore kafka.server.keystore.jks -alias $HOST -certreq -file cert-file-${HOST}.host -storepass $KEYSTOREPASS
    # sign the request with the CA, then import the CA and the signed certificate into the keystore
    openssl x509 -req -CA ca-cert -CAkey CA.key -in cert-file-${HOST}.host -out cert-signed-${HOST}.host -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    keytool -keystore kafka.server.keystore.jks -alias CARoot -import -trustcacerts -file ca-cert -storepass $KEYSTOREPASS -noprompt
    keytool -keystore kafka.server.keystore.jks -alias $HOST -import -file cert-signed-${HOST}.host -storepass $KEYSTOREPASS -noprompt
    
    <% @servers.each do |server| -%>
    # <%= server %>
    keytool -keystore kafka.server.keystore.jks -alias <%= server %> -validity $VALIDITY -genkey -dname "CN=<%= server %>, OU=MyTeam, O=MyCompany, L=Bucharest, S=Romania, C=RO" -storepass $KEYSTOREPASS -keypass $KEYSTOREPASS
    keytool -keystore kafka.server.keystore.jks -alias <%= server %> -certreq -file cert-file-<%= server %>.host -storepass $KEYSTOREPASS
    openssl x509 -req -CA ca-cert -CAkey CA.key -in cert-file-<%= server %>.host -out cert-signed-<%= server %>.host -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    keytool -keystore kafka.server.keystore.jks -alias <%= server %> -import -file cert-signed-<%= server %>.host -storepass $KEYSTOREPASS -noprompt
    <% end -%>
    
    # build the truststore containing only the CA certificate
    keytool -keystore kafka.server.truststore.jks -alias CARoot -import -trustcacerts -file ca-cert -storepass $KEYSTOREPASS -noprompt
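
    Once the broker is restarted with the new keystore, you can also check that a certificate chain is actually presented, as mentioned in the bullets above. This is only a minimal sketch, assuming the SSL listener ends up on port 9093:

    # ask the broker for the certificate chain it presents
    openssl s_client -connect ${HOST}:9093 -showcerts </dev/null
    # in the "Certificate chain" section you should see the broker certificate
    # issued by the CA generated above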
    

    Hope I don't discover anything else that's wrong. If I do, I'll keep you informed.

    PS: It seems that I was wrong again 😀 It's strange that it works with Kafka versions before 2.0 but it will not validate on that version.
    The final right way to do it is to have in the keystore only CARoot and the alias corresponding to that server.
    Will post as soon as I have an implementation.

    And here it is.
    Cheers

  • Correct SSL script for Kafka deployment

    Hi,

    I wrote a post some time ago about certificate generation in order to secure a Kafka cluster.

    Long story short, it was wrong!

    Here is the correct version that returns 0 (the keystore is correctly generated and used):

    
    #!/bin/bash
    HOST=<%= @fqdn %>
    PASSWORD=<%= @pass %>
    KEYSTOREPASS=<%= @keystorepass %>
    VALIDITY=365
    
    # generate the broker key pair in a temporary keystore
    keytool -keystore kafka.server.temp.keystore.jks -alias $HOST -validity $VALIDITY -genkey -dname "CN=${HOST}, OU=MyTeam, O=MyCompany, L=Bucharest, S=Romania, C=RO" -storepass $KEYSTOREPASS -keypass $KEYSTOREPASS
    # generate the CA key and a self-signed CA certificate
    openssl req -new -x509 -keyout ca-key -out ca-cert -days $VALIDITY -subj "/CN=${HOST}/OU=MyTeam/O=MyCompany/L=Bucharest/S=Romania/C=RO" -passout pass:$PASSWORD
    # create a signing request for the broker certificate and sign it with the CA
    keytool -keystore kafka.server.temp.keystore.jks -alias $HOST -certreq -file cert-file-${HOST}.host -storepass $KEYSTOREPASS
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-${HOST}.host -out cert-signed-${HOST}.host -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    # import the signed certificate and the CA into the final keystore and truststore
    keytool -keystore kafka.server.keystore.jks -alias $HOST -import -file cert-signed-${HOST}.host -storepass $KEYSTOREPASS -noprompt
    keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass $KEYSTOREPASS -noprompt
    keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass $KEYSTOREPASS -noprompt
    
    
    <% @servers.each do |server| -%>
    # <%= server %>
    keytool -keystore kafka.server.temp.keystore.jks -alias <%= server %> -validity $VALIDITY -genkey -dname "CN=<%= server %>, OU=MyTeam, O=MyCompany, L=Bucharest, S=Romania, C=RO" -storepass $KEYSTOREPASS -keypass $KEYSTOREPASS
    keytool -keystore kafka.server.temp.keystore.jks -alias <%= server %> -certreq -file cert-file-<%= server %>.host -storepass $KEYSTOREPASS
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-<%= server %>.host -out cert-signed-<%= server %>.host -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    keytool -keystore kafka.server.keystore.jks -alias <%= server %> -import -file cert-signed-<%= server %>.host -storepass $KEYSTOREPASS -noprompt
    <% end -%>
    
    # client side: generate the client key pair, sign it with the same CA and build the client stores
    keytool -keystore kafka.client.temp.keystore.jks -alias 'client' -validity $VALIDITY -genkey -dname "CN=${HOST}, OU=MyTeam, O=MyCompany, L=Bucharest, S=Romania, C=RO" -storepass $KEYSTOREPASS -keypass $KEYSTOREPASS
    keytool -keystore kafka.client.temp.keystore.jks -alias 'client' -certreq -file cert-file-client.host -storepass $KEYSTOREPASS
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-client.host -out cert-signed-client.host -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    keytool -keystore kafka.client.keystore.jks -alias $HOST -import -file cert-signed-client.host -storepass $KEYSTOREPASS -noprompt
    keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass $KEYSTOREPASS -noprompt
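
    To see what actually ended up in each store after the script runs, a plain keytool listing is enough; this is just a quick check, nothing Kafka specific:

    # list the aliases present in the server keystore and truststore
    keytool -list -keystore kafka.server.keystore.jks -storepass $KEYSTOREPASS
    keytool -list -keystore kafka.server.truststore.jks -storepass $KEYSTOREPASS
    # each alias is shown with its entry type (PrivateKeyEntry or trustedCertEntry)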
    

    Here is also a link to the old article for comparison: wrong way to do it.

    PS: It seems that this is also wrong. Please check the follow-up article.

  • Kafka problem that wasn’t a problem after all

    Hi,

    Do not make my mistake of the last couple of weeks of trying to connect to a “secured” Kafka cluster using TLS. I wrote the following article http://log-it.tech/2017/07/27/configure-kafka-truststore-keystore-using-puppet/ some time ago, and I know it's far from bulletproof, but it does the job.
    Now let's get to the subject: if you want to connect to the node once this is activated, you cannot use localhost anymore. The way I figured it out was by trying to test the port using the openssl command.
    The config in server.properties (shown here as the Puppet hash entries) is:

    'listeners'                     => "PLAINTEXT://${::fqdn}:9092,SSL://${::fqdn}:9093", #both listeners are enabled
    'advertised.listeners'          => "PLAINTEXT://${::fqdn}:9092,SSL://${::fqdn}:9093",

    So please keep in mind that it's configured to listen on the FQDN, so normally the external interface is the target, not the loopback adapter.
    Now, if you try to test it using localhost, you will surely get this output:

    /opt/kafka/bin# openssl s_client -debug -connect localhost:9093 -tls1
    connect: Connection refused
    connect:errno=111

    Do not bother checking whether the firewall or the port is open; you can easily verify that using iptables -L or netstat -tulpen | grep 9093. The problem is that instead of localhost you should be using the FQDN, like openssl s_client -debug -connect ${fqdn}:9093 -tls1, and then you will see a lot of keys/certificates.
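
    For comparison with the failed attempt above, the working test looks roughly like this (same port and setup as before):

    # connect using the FQDN the broker listens on, not localhost
    openssl s_client -debug -connect ${fqdn}:9093 -tls1
    # instead of "Connection refused" you get the full handshake output,
    # including the certificate chain sent by the broker
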
    Now, if you want, for example, to use the standard .sh scripts that are delivered with the Kafka installation, you should create a file called config.properties (for example) and pass it as a parameter. For commands that connect to Zookeeper (with the --zookeeper parameter) this is not needed, but if you want to start a console consumer or producer, or check the consumer groups, it will be. Let me just give you an example:

    /opt/kafka/bin# ./kafka-consumer-groups.sh --command-config /root/config.properties --bootstrap-server ${fqdn}:9093 --list
    Note: This will only show information about consumers that use the Java consumer API (non-ZooKeeper-based consumers).
    
    console-consumer-30514
    KMOffsetCache-kafka2
    KMOffsetCache-kafka0
    KMOffsetCache-kafka1
    

    Otherwise, it will not work. And my config file looks like this:

    security.protocol=SSL
    ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
    ssl.truststore.password=password
    ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
    ssl.keystore.password=password
    ssl.key.password=password
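
    The same properties file can be reused with the other command line tools; for example, a console consumer against the SSL listener would look roughly like this (the topic name is just a placeholder):

    # console consumer over SSL, reusing the client properties file from above
    ./kafka-console-consumer.sh --bootstrap-server ${fqdn}:9093 --consumer.config /root/config.properties --topic test-topic --from-beginning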
    

    I cannot give you all the details for all the commands, but at least I am confident I have put you on the right track.

    Cheers

  • Fixing the keystore/truststore distribution code

    Hi,

    There is an extra thing to be added to my article http://log-it.tech/2017/07/27/configure-kafka-truststore-keystore-using-puppet/

    As it is, the code copies the files on every Puppet run to the other nodes, which do not contain the keystore generation code. To fix this I used yet another Puppet module that shares data between nodes; you can find it here https://github.com/WhatsARanjit/puppet-share_data

    As far as I've seen it gets the job done, and in order to use it you will need to add the following pieces of code to your repo. First of all, a custom fact:

    
    require 'facter'

    filename = '/home/kafka/kafka.server.keystore.jks'
    # report whether the Kafka server keystore is already present on this node
    Facter.add(:kafkakeystore) do
      setcode do
        if File.file?(filename)
          'enabled'
        else
          'disabled'
        end
      end
    end
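
    A quick way to check that the fact resolves as expected on a node (assuming the fact file is shipped in a module's lib/facter directory so Puppet syncs it to the agents) is to query it directly:

    # prints "enabled" on the node that already holds the keystore, "disabled" elsewhere
    facter -p kafkakeystore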
    

    If the file is present, it means that the setup is probably activated. For the Kafka manifests, if it's not the node on which the keystore is generated, we need to share the fact we just added, in this form:

        share_data { "${fqdn}":
          data  => [ $::fqdn, $::kafkakeystore ],
          label => 'keystore',
        }
    

    If it's the node that actually generates and copies the keystore, then we need to include the following piece in the class that actually does this (kafka_security_gen):

    $data = share_data::retrieve('keystore')
    $data.each |$item| {
      if (member($servers, $item[0]) and $item[1] == 'disabled') {
        exec { "copy files to ${item[0]}":
          cwd     => '/home/kafka',
          path    => '/usr/bin:/usr/sbin:/bin',
          command => "scp /home/kafka/kafka* kafka@${item[0]}:/home/kafka",
          user    => 'kafka',
        }
      }
    }
    

    And this should assure you that Puppet will not try to copy the keystore to nodes that already have it. Now that I come to think of it, if you need to refresh the store it could be a problem, but I will also think about a fix for that and come back.

    Cheers!