Puppet gems install workaround after TLS 1.0 switchoff


It seems that since the Ruby gem infrastructure disabled the TLS 1.0 protocol, there is an issue with installing custom gems on the puppet server.

If you run puppetserver gem environment you will probably see the following output:

/opt/puppetlabs/bin/puppetserver gem environment
RubyGems Environment:
  - RUBY VERSION: 1.9.3 (2015-06-10 patchlevel 551) [java]
  - INSTALLATION DIRECTORY: /opt/puppetlabs/server/data/puppetserver/jruby-gems
  - RUBY EXECUTABLE: java -jar /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar
  - EXECUTABLE DIRECTORY: /opt/puppetlabs/server/data/puppetserver/jruby-gems/bin
  - SPEC CACHE DIRECTORY: /root/.gem/specs
  - SYSTEM CONFIGURATION DIRECTORY: file:/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/etc
  - RUBYGEMS PLATFORMS:
    - ruby
    - universal-java-1.7
  - GEM PATHS:
     - /opt/puppetlabs/server/data/puppetserver/jruby-gems
     - /root/.gem/jruby/1.9
     - file:/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/gems/shared
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :backtrace => false
     - :bulk_threshold => 1000
     - "install" => "--no-rdoc --no-ri --env-shebang"
     - "update" => "--no-rdoc --no-ri --env-shebang"
  - SHELL PATH:
     - /usr/local/sbin
     - /usr/local/bin
     - /usr/sbin
     - /usr/bin
     - /sbin
     - /bin
     - /usr/games
     - /usr/local/games
     - /opt/puppetlabs/bin

And if you try to install a gem, you will receive:

/opt/puppetlabs/bin/puppetserver gem install toml-rb
ERROR:  Could not find a valid gem 'toml-rb' (>= 0), here is why:
          Unable to download data from - Received fatal alert: protocol_version (

A short but unsafe fix for this is:

/opt/puppetlabs/bin/puppetserver gem install --source "" toml-rb
Fetching: toml-rb-1.1.1.gem (100%)
Successfully installed toml-rb-1.1.1
WARNING:  Unable to pull data from '': Received fatal alert: protocol_version (
1 gem installed

It’s not that elegant, but it does the trick. You can also include this in a puppet exec block.
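For illustration, here is a minimal sketch of such an exec block. The resource title and the unless guard are my own additions, and the gem source URL is left elided, just as it is in the command above:

```puppet
# Hypothetical wrapper around the workaround above; fill in the plain-HTTP
# gem source you used in the manual command, and adjust the gem name.
exec { 'puppetserver_install_toml_rb':
  command => '/opt/puppetlabs/bin/puppetserver gem install --source "" toml-rb',
  # skip the install if the gem is already present
  unless  => '/opt/puppetlabs/bin/puppetserver gem list | grep -q toml-rb',
  path    => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
}
```

The unless guard keeps the run idempotent, so puppet will not reinstall the gem on every agent run.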


kafka puppet

Observer functionality for puppet zookeeper module


I know it’s been some time since I last posted, but I didn’t have the time to play that much. Today I want to share the use case in which we needed to modify the module used for deploying ZooKeeper, in order to also include the observer role.

The link that describes how this can be activated, starting with version 3.3.0, is here:

Given this situation, this is the module we are using for deployment:

It’s not a nice module, trust me, I know, but since we did not want to restart the development process from scratch and impact the infrastructure that is already deployed, we had to cope with the situation by changing what we had.

The main idea in our case is that the number of ZooKeeper members in the election process needs to be 2n+1 in order for the quorum mechanism to work, so deploying an even number of machines is pretty tricky. To fix this, any ZooKeeper instances beyond that requirement should be set up as observers.

A ZooKeeper observer is a node that is not included in the election process and just receives the updates from the cluster.

My view is that the best approach for delivery is to activate it in Hiera with a zookeeper::observer parameter per host.

We can start by including it in the defaults.pp file as follows:

 $observer	      = hiera('zookeeper::observer', false)

The zoo.cfg file deployed for the configuration is written from init.pp, so we also need to add the parameter there:

$observer	   = $::zookeeper::defaults::observer

Ok, now how do we share the status of each node in the required domain? We will need to use another module and include in our code something like:

 share_data { $::fqdn:
   data  => [ $::fqdn, $observer ],
   label => 'role',
 }
 # and, where the template is rendered, retrieve what all nodes shared:
 $obsrole = share_data::retrieve('role')

This guarantees that all servers have the observer flag available for use in the erb template.

Jumping to the last component of this config, we need to modify the template to include the observer role.

How do we do that? Basically by rewriting the server information in this format:

<% if @hosts -%>
<% @hosts.sort_by { |name, id| id }.each do |host_id| -%>
server.<%= host_id[1] %>=<%= host_id[0] %>:2182:2183<% @obsrole.each do |item| -%><% if (item[0] == host_id[0]) && item[1] -%>:observer<% end -%><% end %>
<% end -%>
<% end -%>

This is straightforward: it compares the values from the two lists and, if the flag is true, adds the observer configuration.
One last part needs to be added, and that is the peerType setting, which tells the node itself that it runs as an observer:

<% if @observer == true -%>
<% end -%>

And you are done. If you add zookeeper::observer: true to the node’s yaml file, puppet should rewrite the configuration file and restart the ZooKeeper service.



Kafka limits implementation using puppet


I keep my promise and provide you with the two simple blocks needed to implement the limits that we discussed in a previous article:

For the limits module you can use:

As for the actual puppet implementation, I took the decision not to restart the service immediately. That being said, it’s dead simple to do:

	file_line { 'add_pamd_record':
	  path => '/etc/pam.d/common-session',
	  line => 'session required',
	}
	# soft/hard limits for open files and processes
	limits::fragment {
	  '*/soft/nofile':
	    value => '100000';
	  '*/hard/nofile':
	    value => '100000';
	  '*/soft/nproc':
	    value => '100000';
	  '*/hard/nproc':
	    value => '100000';
	}

This is all you need.



Kafka implementation using puppet at IMWorld Bucharest 2017


I recently gave a presentation on how to deploy Kafka using Puppet and what you need, as a minimum, in order to have success in production.
Here is the presentation:

Hope it is useful.



There is also an official version from IMWorld which you can find here:

And also an article that describes it in more technical detail:


Eyaml hiera configuration for puppet, as promised


We also managed to configure the Hiera backend in order to have the eyaml module active. It is related to the following past article. So, in hiera.yaml, you basically need to add the following configuration before the hierarchy:

:backends:
  - eyaml
  - yaml
  - puppetdb


:eyaml:
    :datadir: /etc/puppetlabs/hieradata
    :pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem
    :pkcs7_public_key:  /etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem
    :extension: 'yaml'

The second block goes at the bottom. After this is done, the most essential part is creating the required symlinks so that the backend is enabled.
This should be done easily with a bash script like:

ln -s /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/hiera-eyaml-2.1.0/lib/hiera/backend/eyaml /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/hiera/backend/eyaml
ln -s /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/hiera-eyaml-2.1.0/lib/hiera/backend/eyaml_backend.rb /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/hiera/backend/eyaml_backend.rb
ln -s /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/hiera-eyaml-2.1.0/lib/hiera/backend/eyaml.rb /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/hiera/backend/eyaml.rb
ln -s /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/highline-1.6.21/lib/highline /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/highline/
ln -s /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/highline-1.6.21/lib/highline.rb /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/highline.rb

After this is done, a restart of puppetdb and puppetserver is advised, and you can test it by putting a string in Hiera and checking whether a notice prints the required output. Something like:

profiles::test::teststring: '[string generated with eyaml encrypt -s 'test']'

and then creating a small class like:

class profiles::test {
  $teststring = hiera('profiles::test::teststring')
  notify { "${teststring}": }
}

That should be most of what you need in order to do this. Hope it works! 🙂



Implementing logrotate for kafka


Yes, we will also need to implement logrotate if we want to keep Kafka under control. My solution was with Puppet, as you probably expected. After taking a look at the documentation for the log4j properties file, I figured out a configuration that should look like the following erb template:

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# See the License for the specific language governing permissions and
# limitations under the License.

log4j.rootLogger=INFO, stdout

log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender.MaxFileSize=<%= @filesize %>
log4j.appender.kafkaAppender.MaxBackupIndex=<%= @backupindex %>
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender.MaxFileSize=<%= @filesize %>
log4j.appender.stateChangeAppender.MaxBackupIndex=<%= @backupindex %>
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender.MaxFileSize=<%= @filesize%>
log4j.appender.requestAppender.MaxBackupIndex=<%= @backupindex %>
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender.MaxFileSize=<%= @filesize %>
log4j.appender.cleanerAppender.MaxBackupIndex=<%= @backupindex %>
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender.MaxFileSize=<%= @filesize %>
log4j.appender.controllerAppender.MaxBackupIndex=<%= @backupindex %>
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender.MaxFileSize=<%= @filesize %>
log4j.appender.authorizerAppender.MaxBackupIndex=<%= @backupindex %>
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
log4j.logger.kafka=INFO, kafkaAppender

log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.logger.kafka.network.RequestChannel$=false
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
log4j.logger.kafka.request.logger=WARN, requestAppender

log4j.logger.kafka.controller=TRACE, controllerAppender

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender

log4j.logger.state.change.logger=TRACE, stateChangeAppender

#Change this to debug to get the actual audit log for authorizer.
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender

It just adds the two most important values required for a working logrotate setup, MaxFileSize and MaxBackupIndex. The first sets the maximum size of a log file, and the second the number of files of each type to keep.
In order to use it you need to put it in a class that in my view is constructed as follows:

class profiles::kafkalogrotate {
    $filesize = hiera('kafkalogrotate::size','50MB')
    $backupindex = hiera('kafkalogrotate::backupindex',10)
    file { 'adding_log4j':
      path    => "/opt/kafka/config/",
      content => template("${module_name}/"),
      replace => true,
      owner   => 'kafka',
      group   => 'kafka',
      mode    => '0644',
    }
}
I set that path because we generally install Kafka in /opt/kafka, but it can be changed. In our case a restart of the Kafka brokers is not needed immediately, so I just called the class from my Kafka manifests.

That is all. If I think of something additional to what I wrote, I will post it.



Fixing the kafka-manager puppet code

Hi, we have a new code version for the kafka-manager deploy. I will not go into more detail, just that it now also has a fact for the kafka-manager password, plus some minor changes. The fact looks like this:

require 'facter'

file = '/etc/kafka-manager/application.conf'
Facter.add(:kafka_manager_pass) do
  setcode do
    if File.exist?(file)
      kafka_manager_pass = Facter::Core::Execution.exec("cat #{file} | grep basicAuthentication.password | cut -d'=' -f2 | tr -d '\"'")
    else
      kafka_manager_pass = "undef"
    end
  end
end

And also the new puppet code for the class:

class profiles::kafkamanager {

  $zookeeper_connect = hiera('kafkamanager::zookeeperconnect')
  $password = hiera('kafkamanager::password', 'password')

  package { 'kafka-manager':
    ensure => installed,
  }
  group { 'kafka-manager':
    ensure => 'present',
  }
  user { 'kafka-manager':
    ensure => 'present',
    groups => 'kafka-manager',
  }
  Group['kafka-manager'] -> User['kafka-manager']

  file { '/usr/share/kafka-manager':
    ensure  => directory,
    owner   => 'kafka-manager',
    group   => 'kafka-manager',
    require => [ User['kafka-manager'], Group['kafka-manager'] ],
    recurse => true,
  }
  file { '/etc/kafka-manager/application.conf':
    ensure => present,
  }
  file_line { 'config_zookeeper':
    path    => '/etc/kafka-manager/application.conf',
    line    => "kafka-manager.zkhosts=\"${zookeeper_connect}\"",
    match   => 'kafka-manager.zkhosts=\"\"',
    replace => true,
  }

  if ($::kafka_manager_pass == 'undef') {
    file_line { 'enable_pass_default':
      path    => '/etc/kafka-manager/application.conf',
      match   => 'basicAuthentication.password="password"',
      line    => "basicAuthentication.password=\"${password}\"",
      replace => true,
    }
  } elsif ($password != $::kafka_manager_pass) {
    file_line { 'enable_pass':
      path    => '/etc/kafka-manager/application.conf',
      match   => "basicAuthentication.password=\"${::kafka_manager_pass}\"",
      line    => "basicAuthentication.password=\"${password}\"",
      replace => true,
    }
    exec { 'restart_kafkamanager':
      command     => '/usr/sbin/service kafka-manager restart',
      path        => ['/bin', '/sbin', '/usr/bin', '/usr/sbin'],
      refreshonly => true,
      subscribe   => File_line['enable_pass'],
    }
  }

  file_line { 'enable_auth':
    path    => '/etc/kafka-manager/application.conf',
    match   => 'basicAuthentication.enabled=false',
    line    => 'basicAuthentication.enabled=true',
    replace => true,
  }

  service { 'kafka-manager':
    ensure    => 'running',
    enable    => true,
    require   => [ File['/usr/share/kafka-manager'], File['/etc/kafka-manager/application.conf'] ],
    subscribe => File['/etc/kafka-manager/application.conf'],
  }
}

Give it a try!



Fixing the keystore/truststore distribution code


There is an extra thing to be added to my article

As is, the code copies the files on each puppet run to the other nodes, which do not contain the keystore generation code. To fix this I used yet another puppet module that shares data between nodes; you can find it here:

As far as I saw, it gets the job done, and in order to use it you will need to include the following pieces of code in your repo. First of all, a custom fact:

require 'facter'

filename = '/home/kafka/kafka.server.keystore.jks'
Facter.add(:kafkakeystore) do
  setcode do
    if File.file?(filename)
      kafkakeystore = "enabled"
    else
      kafkakeystore = "disabled"
    end
  end
end

If the file is present, this means that the setup is probably activated. In the kafka manifests, on nodes other than the one where the keystore is generated, we need to share the fact we just added, in this form:

    share_data { "${fqdn}":
      data  => [ $::fqdn, $::kafkakeystore ],
      label => 'keystore',
    }

If it’s the node that actually generates and copies the keystore, then we need to include the following piece in the class that does this (kafka_security_gen):

 $data = share_data::retrieve('keystore')
 $data.each |$item| {
   if (member($servers, $item[0]) and $item[1] == 'disabled') {
     exec { "copy files to ${item[0]}":
       cwd     => '/home/kafka',
       path    => '/usr/bin:/usr/sbin:/bin',
       command => "scp /home/kafka/kafka* kafka@${item[0]}:/home/kafka",
       user    => 'kafka',
     }
   }
 }

And this should assure you that puppet will not try to copy the keystore to nodes that already have it. Now that I think of it, if you ever need to refresh the store this could be a problem, but I will think of a fix for that and come back.



Puppet implementation of traefik load balancer for kafka-manager


It’s time to give the puppet implementation for the small traefik case. It is related to the following article:

Starting from that, I tried to find a puppet module that can install the package more or less accurately, and I found this:

Now, for the service install it works, but defining traefik.toml and rules.toml was a real pain. First of all, one of the function calls in the module does not work, and even after fixing it, it doesn’t really align the toml file as required, so I decided to do this in a simpler way. I put traefik.toml in a static file, since it doesn’t contain anything dynamic related to our environment. It looks like this:

accessLogsFile = "/var/log/traefik/access.log"
traefikLogsFile = "/var/log/traefik/traefik.log"
logLevel = "DEBUG"
defaultEntryPoints = ["https"]

    address = ":80"
      entryPoint = "https"
    address = ":443"
        CertFile = "/etc/traefik/traefik.crt"
        KeyFile = "/etc/traefik/traefik.key"

address = ":8080"

filename = "/etc/traefik/rules.toml"
watch = true

Now, the config files are stored in /etc/traefik, and I made the convention to also store the self-generated certificate for HTTPS in this location. Sure, you can set it dynamically, but for a small load balancer in front of a cluster of a few nodes this should not be a problem.
Ok, as you can see, we have a separate rules.toml file, which in our case is created from an erb template; the source is:

      method = "drr"
    <% @kafka_hosts_hash.each do |value, index| %>
    [backends.kafka-manager.servers.server<%= index %>]
    url = "http://<%= value %>:9000"
    weight = 1
    <% end %>

    entrypoints = ["http","https"]
    backend = "kafka-manager"
    passHostHeader = true
    priority = 10

This is pretty straightforward, and it is linked to the last piece of the puzzle, the puppet class, which looks like this:

class profiles::traefikinstall {
  $version = hiera('profiles::traefik::version', '1.3.5')

  class { 'traefik':
    version => $version,
  }

  exec { 'generate_cert':
    command => "openssl req -newkey rsa:4096 -nodes -sha512 -x509 -days 3650 -subj \"/CN=${fqdn}/OU=traefik/\" -out /etc/traefik/traefik.crt -keyout /etc/traefik/traefik.key",
    path    => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],
    onlyif  => 'test ! -f /etc/traefik/traefik.crt',
  } ->
  file { '/etc/traefik/traefik.toml':
    source  => 'puppet:///modules/profiles/traefik.toml',
    mode    => '0644',
    replace => false,
    notify  => Service['traefik'],
  }

  # any role or fact that indicates the host should have kafka-manager installed
  $kafka_hosts = query_nodes("role='kafka'").sort
  $kafka_hosts_hash = $ |$index, $value| { [$value, $index + 1] }.hash

  file { '/etc/traefik/rules.toml':
    content => template("${module_name}/rules.toml.erb"),
    mode    => '0644',
    replace => false,
  }
}
And this is all the code you need to deploy a traefik instance that is “secured” via HTTPS and load balances between all kafka-manager instances. It’s true that you can harden it further by adding iptables rules that restrict traffic on port 9000 (the default kafka-manager port) to just the hosts in the cluster, but I will come back with that part in the future if it gets done.



Install puppet gems on puppet master using hiera


I needed to install the toml-rb gem in order for my puppet traefik module to work, and I just want to briefly post my workaround for doing that automatically. There was some code in our repo for this, but it only used a hiera array, so I had to write a very short class that can take a hash for the install process. It looks like this:

class profiles::puppetinstall {
    $packages = hiera_hash('profiles::puppetinstall::packages', undef)
    if $packages {
        ensure_packages($packages)
    }
}

And in my role file, called puppetmaster.yaml in this case, I had to put:

classes:
  - 'profiles::puppetinstall'

profiles::puppetinstall::packages:
  'toml-rb':
    provider: 'puppet_gem'

Now, I know that maybe it’s not that elegant, but it fixed my problem. Hopefully I will post all the details of the traefik implementation. And yes, if you are wondering where you can get the ensure_packages resource, it is included in the stdlib package:

P.S.: That was for the puppet agent and standard gems. For the gems that need to be installed on the puppet server, I needed to write the following piece of code:

$packages_puppetserver = hiera_array('profiles::puppetinstall::puppetserver_packages', undef)
if $packages_puppetserver {
    $packages_puppetserver.each |String $package_name| {
        exec { "install ${package_name}":
            command => "/opt/puppetlabs/bin/puppetserver gem install ${package_name}",
            path    => [ '/usr/bin', '/usr/sbin', '/bin', '/sbin' ],
            unless  => "/opt/puppetlabs/bin/puppetserver gem list | grep ${package_name}",
        }
    }
}

The way to put the packages in hiera is similar:

profiles::puppetinstall::puppetserver_packages:
  - 'toml-rb'