• Install eyaml module on puppet master

    Hi,

    Today I will show how I installed the module used for data encryption, so that secrets can be safely included in Hiera YAML files.
    It is really simple, as described at https://github.com/voxpupuli/hiera-eyaml. The one step that I couldn't find explicitly written in the docs, and had to figure out myself, is that you need to create the config.yaml needed by the module.

    1. gem install hiera-eyaml
    2. puppetserver gem install hiera-eyaml
    3. eyaml createkeys
    4. mv ./keys /etc/puppetlabs/puppet/eyaml
    5. $ chown -R puppet:puppet /etc/puppetlabs/puppet/eyaml
      $ chmod -R 0500 /etc/puppetlabs/puppet/eyaml
      $ chmod 0400 /etc/puppetlabs/puppet/eyaml/*.pem
      $ ls -lha /etc/puppetlabs/puppet/eyaml
      -r-------- 1 puppet puppet 1.7K Sep 24 16:24 private_key.pkcs7.pem
      -r-------- 1 puppet puppet 1.1K Sep 24 16:24 public_key.pkcs7.pem
    6. vim /etc/eyaml/config.yaml and add the following content:
      ---
      pkcs7_private_key: '/etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem'
      pkcs7_public_key: '/etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem'

    If the last step is not executed, you will get the error: [hiera-eyaml-core] No such file or directory - ./keys/public_key.pkcs7.pem

    After these configurations you should be able to encrypt files or strings. Short example:

    eyaml encrypt -s 'test'
    [hiera-eyaml-core] Loaded config from /etc/eyaml/config.yaml
    string: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAvWHMltzNiYnp0iG6vl6tsgayYimoFQpCFeA8wdE3k6h2OGZAXHLOI+ueEcv+SXVtOsqbP2LxPHe19zJS9cLV4tHu1rUEAW2gstkImI4FoV1/SoPrXNsBBXuoG3j7R4NGPpkhvOQEYIRTT9ssh9hCrzkEMrZ5pZDhS4lNn01Ax1tX99NdmtXaGvTTML/kV061YyN3FaeztSUc01WwpeuHQ+nLouuoVxUUOy/d/5lD5wLKq9t8BYeFG6ekq/D9iGO6D/SNPB0UpVqdCFraAN7rIRNfVDaRbffCSdE59AZr/+atSdUk9cI0oYpG25tHT9x3eWYNNeCLrVAoVMiZ01uR7zA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBBHO9P8JfkovKLMdtvaIxAzgBAjiu0/l+Hm+Xaezhp2AWjj]
    
    OR
    
    block: >
        ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEw
        DQYJKoZIhvcNAQEBBQAEggEAvWHMltzNiYnp0iG6vl6tsgayYimoFQpCFeA8
        wdE3k6h2OGZAXHLOI+ueEcv+SXVtOsqbP2LxPHe19zJS9cLV4tHu1rUEAW2g
        stkImI4FoV1/SoPrXNsBBXuoG3j7R4NGPpkhvOQEYIRTT9ssh9hCrzkEMrZ5
        pZDhS4lNn01Ax1tX99NdmtXaGvTTML/kV061YyN3FaeztSUc01WwpeuHQ+nL
        ouuoVxUUOy/d/5lD5wLKq9t8BYeFG6ekq/D9iGO6D/SNPB0UpVqdCFraAN7r
        IRNfVDaRbffCSdE59AZr/+atSdUk9cI0oYpG25tHT9x3eWYNNeCLrVAoVMiZ
        01uR7zA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBBHO9P8JfkovKLMdtva
        IxAzgBAjiu0/l+Hm+Xaezhp2AWjj]
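
    To verify the round trip, the same tool can also decrypt (a quick sketch; the file name below is just an example):

    eyaml decrypt -s 'ENC[PKCS7,...]'   # decrypt a single encrypted string
    eyaml decrypt -f secrets.eyaml      # decrypt all ENC[...] blocks in a file
    eyaml edit secrets.eyaml            # edit a file with the values transparently decrypted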
    

    I will write something similar for the Hiera configuration needed to use this backend.

    Cheers!

  • Small Mirror Maker test between different Kafka clusters

    Hi,

    Today I will show you what I have been playing with for the last day. There was a business case in which some colleagues from Analytics wanted to replicate all the data from other systems into their cluster.

    We will start with two independently configured clusters of 3 servers each (on each server we have one ZooKeeper and one Kafka node). On both the source and the target I created a topic with five partitions, replicated three times. You can find its description below:

    /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test-topic
    Topic:test-topic	PartitionCount:5	ReplicationFactor:3	Configs:
    	Topic: test-topic	Partition: 0	Leader: 1002	Replicas: 1002,1003,1001	Isr: 1002,1003,1001
    	Topic: test-topic	Partition: 1	Leader: 1003	Replicas: 1003,1001,1002	Isr: 1003,1001,1002
    	Topic: test-topic	Partition: 2	Leader: 1001	Replicas: 1001,1002,1003	Isr: 1001,1002,1003
    	Topic: test-topic	Partition: 3	Leader: 1002	Replicas: 1002,1001,1003	Isr: 1002,1001,1003
    	Topic: test-topic	Partition: 4	Leader: 1003	Replicas: 1003,1002,1001	Isr: 1003,1002,1001
    

    The command for creating the topic is actually pretty simple:

    /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 3 --partitions 5 --topic test-topic

    Once the topics are created on both Kafka clusters, we will need to start Mirror Maker (Hortonworks recommends running the process on the destination cluster). In order to do that, we will need to create two config files on the destination. You can call them producer.config and consumer.config.

    For the consumer.config we have the following structure:

    bootstrap.servers=source_node0:9092,source_node1:9092,source_node2:9092
    exclude.internal.topics=true
    group.id=test-consumer-group
    client.id=mirror_maker_consumer
    

    For the producer.config we have the following structure:

    bootstrap.servers=destination_node0:9092,destination_node1:9092,destination_node2:9092
    acks=1
    batch.size=100
    client.id=mirror_maker_producer
    

    These are the principal requirements; you will also need to make sure that your consumer.properties contains the line group.id=test-consumer-group.

    Ok, so far so good. Now let's start Mirror Maker; once it is started, you can see it alongside Kafka and ZooKeeper using ps -ef | grep java:

    /opt/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config ../config/consumer.config --producer.config ../config/producer.config --whitelist test-topic &

    To check the offsets, on newer versions of Kafka you can always use:

    /opt/kafka/bin# ./kafka-run-class.sh kafka.admin.ConsumerGroupCommand --group test-consumer-group --bootstrap-server localhost:9092 --describe
    GROUP                          TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             OWNER
    test-consumer-group            test-topic                     0          2003            2003            0               test-consumer-group-0_/[dest_ip]
    test-consumer-group            test-topic                     1          2002            2002            0               test-consumer-group-0_/[dest_ip]
    test-consumer-group            test-topic                     2          2003            2003            0               test-consumer-group-0_/[dest_ip]
    test-consumer-group            test-topic                     3          2004            2004            0               test-consumer-group-0_/[dest_ip]
    test-consumer-group            test-topic                     4          2002            2002            0               test-consumer-group-0_/[dest_ip]
    

    I tested the concept by running a short bash loop to create 10000 records and write them to a file:

    for i in $( seq 1 10000); do echo $i; done >> test.txt

    This file can be very easily pushed to our producer by running:

    /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic < test.txt

    After this is finished, please feel free to take a look at the topic using /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning and you should see a lot of lines 🙂
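
    To double-check that all 10000 records also arrived on the destination, you can compare the log-end offsets of the topic on both clusters (a sketch using the stock GetOffsetShell helper that ships with Kafka):

    /opt/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test-topic --time -1
    # prints one line per partition, e.g. test-topic:0:2003; run it on a node of each cluster and compare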

    Thank you for your time, and if there are any parts that I missed, please reply.

    Cheers!

  • Check Kafka JMX node stats using JConsole

    Hi,

    As you probably know, Kafka already publishes a lot of performance data over JMX, ready to be collected.
    In order to do this, you will need to install JConsole (on Windows it is already embedded in the JDK installation; for Linux you can use this article to find the package it belongs to: https://www.garron.me/en/linux/find-which-package-library-belongs.html). After you have done that, you just have to export the JMX_PORT variable to your environment (for example export JMX_PORT=9999) before you start the Kafka node. When you open JConsole, you will see the local Java processes listed, the Kafka node among them.
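
    To make it concrete, the whole sequence on a single test node looks like this (a sketch; the paths assume the /opt/kafka layout from my other posts and the port is just an example):

    export JMX_PORT=9999    # must be exported before the broker starts
    /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
    # then, from your workstation, point JConsole at the node:
    jconsole [kafka node ip]:9999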

    After you select the Kafka node, it will tell you that the connection is not secure; from my point of view that does not matter here, and after accepting you get an overview of the process. The statistics are available in the MBeans tab, and extra info regarding their meaning can be found in the official documentation and also in the DataDog article.

    This is a simple single-node configuration. If required, I will post some more complex configurations, but those are needed only in special cases; for a bigger infrastructure, standard monitoring using DataDog/Prometheus or another solution needs to be implemented.

    Cheers

  • Small Vagrant config file for Rancher deploy

    Hi,

    Just wanted to post this as well: if the configuration using a jump server is not that nice, we can surely convert it to code (Puppet/Ansible), or you can use Vagrant. The main issue I faced when I tried to create my setup is that, for a reason I am not really sure of, Vagrant on Windows runs very slow. However, here is a piece of a Vagrantfile for a minimal setup with which you can grab the Rancher server framework and also the client containers.

    Here it is:

    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    Vagrant.configure("2") do |config|
      config.vm.define "master" do |master|
        master.vm.box = "centos/7"
        master.vm.hostname = 'master'
        master.vm.network "public_network", bridge: "enp0s25"
      end
      config.vm.define "slave" do |slave|
        slave.vm.box = "centos/7"
        slave.vm.hostname = 'slave'
        slave.vm.network "public_network", bridge: "enp0s25"
      end
      config.vm.define "swarmmaster" do |swarmmaster|
        swarmmaster.vm.box = "centos/7"
        swarmmaster.vm.hostname = 'swarmmaster'
        swarmmaster.vm.network "public_network", bridge: "enp0s25"
      end
      config.vm.define "swarmslave" do |swarmclient|
        swarmclient.vm.box = "centos/7"
        swarmclient.vm.hostname = 'swarmclient'
        swarmclient.vm.network "public_network", bridge: "enp0s25"
      end
    end
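
    Once the file is saved, the usual Vagrant workflow applies; for example, to play only with the two Rancher machines:

    vagrant up master slave    # boot just these two VMs
    vagrant ssh master         # log into one of them
    vagrant destroy -f         # tear everything down when finished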
    


    Do not worry about the naming of the machines, you can change it to whatever you like; the main catch is to bridge the public network on all of them so that they can communicate with each other and also have access to the Docker Hub. Besides that, everything else that I posted regarding registering to the Rancher framework is still valid.

    Thank you for your time,

    Cheers!

  • Monitoring Kafka with DataDog

    Hi,

    A very interesting series of articles that should be checked out, regarding one option for monitoring Kafka with Datadog:

     https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/.

    I will have a task regarding this in the near future and will post the outcome when it's done.

    P.S.: It was done, and you can find the implementation here:

    Integrate Kafka with Datadog monitoring using puppet

    As for the metrics point of view, we will see if this is really an option; I have tried the same with Prometheus and Grafana, and it seems to work better. I will keep you posted.

    Cheers

  • Monitoring Kafka node using Docker

    Hi,

    Today I am just going to point you to a very interesting article related to monitoring Kafka nodes using InfluxDB, Grafana and Docker. Hope it is useful; I will surely try it one of these days.

    https://softwaremill.com/monitoring-apache-kafka-with-influxdb-grafana/

    Now, this is not quite a standard approach, but nevertheless it is an option.

    Cheers!

  • Memory check by process in Linux

    Hi,

    I wanted to post this since it might be useful in some situations. On a Linux machine, it seems that one way to check the memory usage of the top processes is ps aux --sort -rss (this sorts them by Resident Set Size). Once executed, it will return an output similar to this:

    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    sorin 3673 0.6 27.3 3626020 563964 pts/1 Sl+ 02:24 1:09 java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+Disa
    sorin 1708 2.0 9.2 1835288 189692 ? Sl 02:11 3:56 /usr/bin/gnome-shell
    sorin 1967 0.6 8.0 1642280 166160 ? Sl 02:12 1:11 firefox-esr
    sorin 3413 0.1 3.7 2000252 77016 pts/0 Sl+ 02:21 0:19 java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+
    root 576 0.5 2.6 263688 54172 tty7 Ssl+ 02:11 1:07 /usr/bin/Xorg :0 -novtswitch -background none -noreset -verbose 3 -auth /var/run/gdm3/auth-for-Debian-gdm-Bu1jB
    sorin 1813 0.0 2.2 1175504 47196 ? Sl 02:11 0:00 /usr/lib/evolution/evolution-calendar-factory
    root 486 0.1 1.2 377568 26584 ? Ssl 02:11 0:21 /usr/bin/dockerd -H fd://

    If you want more details on a PID's status, you can look at /proc/[pid]/status, where you can find a lot of other information. For example, the top process on my Linux machine has the following header:

    sorin@debian:/proc/3673$ cat status
    Name: java
    State: S (sleeping)
    Tgid: 3673
    Ngid: 0
    Pid: 3673
    PPid: 3660
    TracerPid: 0
    Uid: 1000 1000 1000 1000
    Gid: 1000 1000 1000 1000
    FDSize: 256
    Groups: 24 25 29 30 44 46 108 111 116 1000
    VmPeak: 3626024 kB
    VmSize: 3626020 kB

    As you can see, the VSZ reported by ps matches the VmSize field (the RSS corresponds to VmRSS, further down in the same file).
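
    If you need just one or two of these fields, a small one-liner does the trick (using the same example PID as above):

    awk '/^VmRSS|^VmSize/ {printf "%s %.1f MB\n", $1, $2/1024}' /proc/3673/status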

    Cheers!

  • Register RancherOS with the Rancher framework

    Hi,

    After we were able to log in via ssh on our machines, it's time to put them to use by subscribing them to an orchestration framework. One free and pretty powerful framework of this kind is provided by the same company. In order to use it you will need to have Docker installed; more info on this topic at the following link:

     https://docs.rancher.com/rancher/v1.5/en/installing-rancher/installing-server/

    Until now i haven’t tried the option for a HA configuration, i will do that in the near future and post the findings but for now it should be enough if we deploy a standard container for the management.

    Using the command from the documentation, I managed to grab the image and start the following container:

    417930c9f375 rancher/server "/usr/bin/entry /u..." 2 weeks ago Up 6 minutes 3306/tcp, 0.0.0.0:8080->8080/tcp eloquent_goodall
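
    For reference, the run command from the documentation looks roughly like this (a sketch; please check the official installation page for the authoritative form):

    docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
    # to make the UI reachable only on a specific address, bind the port explicitly, e.g. -p [management ip]:8080:8080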

    We also have the possibility to check the image using the docker images command, which gives the following result:

    rancher/server latest 2751db6ea7ec 4 weeks ago 935 MB

    Once the container is started, you can access the UI by going to http://127.0.0.1:8080 (please keep in mind that the port is bound so that it is forwarded and accessible from any IP range, which is what 0.0.0.0:8080->8080/tcp means; if you want it to be accessible only from a specific range or IP, please change this in the docker run command).

    Ok, once the administration console has loaded, you can go to Infrastructure -> Hosts -> Add Host. Please do not use the default site address, as it is relevant only for the local container; instead, replace it with http://[jumpserver ip address]:8080. This will be used to obtain the registration string for the agents. When pressing OK, you will be redirected to a window with the necessary registration steps; please keep it open.

    After connecting via ssh to the Rancher machine, please make sure that you have access to the Docker Hub repo. You can easily check that by running docker search rancher. If there is a timeout error, please take a look at configuring a proxy for Docker; in our case, on private machines, it can be done by adding the following lines to cloud-config.yml, located under /var/lib/rancher/conf:

    rancher:
      network:
        http_proxy: http://[user]:[password]@[proxyip]:[proxyport]
        https_proxy: http://[user]:[password]@[proxyip]:[proxyport]
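
    Alternatively, the same values can be set from the CLI with the ros tool (a sketch from memory, so please verify the syntax against the RancherOS docs):

    sudo ros config set rancher.network.http_proxy http://[user]:[password]@[proxyip]:[proxyport]
    sudo ros config set rancher.network.https_proxy http://[user]:[password]@[proxyip]:[proxyport]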

    With these lines added, you will need to restart the Docker daemon using the command system-docker restart docker, and it should work.

    Now go back to the UI page, copy the string from the last step, and run it in our Rancher server window; it will start downloading the containers necessary to link the host with the framework.

    Once this is done, some images will be downloaded and the corresponding containers started on the machine:

    [rancher@rancher conf]$ docker images | grep rancher
    rancher/scheduler v0.7.5 e7ff16ba4444 2 weeks ago 241.9 MB
    rancher/network-manager v0.5.3 0f224908d730 2 weeks ago 241.6 MB
    rancher/metadata v0.8.11 19b37bb3e242 5 weeks ago 251.5 MB
    rancher/agent v1.2.1 9cecf992679f 5 weeks ago 233.7 MB
    rancher/scheduler v0.7.4 7a32d7571cad 5 weeks ago 241.9 MB
    rancher/net v0.9.4 5ac4ae5d7fa4 5 weeks ago 264.3 MB
    rancher/network-manager v0.4.8 45bdcd2b1944 6 weeks ago 241.6 MB
    rancher/dns v0.14.1 4e37fc4150c2 6 weeks ago 239.8 MB
    rancher/healthcheck v0.2.3 491349141109 10 weeks ago 383.3 MB
    rancher/net holder bb516596ce5a 3 months ago 261.7 MB
    [rancher@rancher conf]$ docker ps -a | grep rancher
    a3fde18ebdbd rancher/scheduler:v0.7.5 "/.r/r /rancher-entry" 3 days ago Exited (0) 3 days ago r-scheduler-scheduler-1-37fd65ec
    35c7bbc1cb42 rancher/network-manager:v0.5.3 "/rancher-entrypoint." 3 days ago Up 30 minutes r-network-services-network-manager-1-57e1bbbd
    3a048010be3d rancher/scheduler:v0.7.4 "/.r/r /rancher-entry" 2 weeks ago Exited (0) 3 days ago r-scheduler-scheduler-1-de6ec66f
    fad7d11141aa rancher/net:v0.9.4 "/rancher-entrypoint." 2 weeks ago Up 29 minutes r-ipsec-ipsec-router-1-af053a8c
    b7ce7b4f8520 rancher/dns:v0.14.1 "/rancher-entrypoint." 2 weeks ago Up 30 minutes r-network-services-metadata-dns-1-438fbeaa
    30e5cab4b4c6 rancher/metadata:v0.8.11 "/rancher-entrypoint." 2 weeks ago Up 30 minutes r-network-services-metadata-1-827c71e3
    382ebf55c3c1 rancher/net:holder "/.r/r /rancher-entry" 2 weeks ago Up 30 minutes r-ipsec-ipsec-1-55aeea30
    0223f1ffe986 rancher/healthcheck:v0.2.3 "/.r/r /rancher-entry" 2 weeks ago Up 30 minutes r-healthcheck-healthcheck-1-f00a6858
    03652d781c9a rancher/net:v0.9.4 "/rancher-entrypoint." 2 weeks ago Up 30 minutes r-ipsec-ipsec-cni-driver-1-797e0060
    1b6d1664c801 rancher/agent:v1.2.1 "/run.sh run" 2 weeks ago Up 31 minutes rancher-agent
    c8b8e4ddf91c rancher/agent:v1.2.1 "/run.sh http://10.0." 2 weeks ago Exited (0) 2 weeks ago furious_bohr

    The server will also appear in the UI. In the next posts we will try to deploy some services from the catalog.

    Cheers

  • Install RancherOS on VirtualBox and configure it for ssh access

    Hi,

    If you are not familiar with RancherOS, you can learn more from this link: Rancher docs. It is basically a very small Linux distro that runs all its processes as Docker containers (including the system processes).

    So, starting from here, we will need a RancherOS image, which you can download from the following location: Rancher git. After doing that, you will need a VirtualBox machine with a minimum of 1GB of RAM (the reason is that RancherOS initially runs entirely from memory). The size of the root partition can be as big as you like, and no extra video configuration is required since it will run in CLI mode.

    You also need to know that an extra jump server (or any server accessible over the ssh protocol) is required in order to successfully configure your single or multiple running instances of RancherOS, and that is for a simple reason. As far as I managed to test, mounting an external USB storage device does not work (please be aware that we are talking about an isolated environment), and copy/paste does not work by default without the VirtualBox Guest Tools installed (unfortunately installing them is also not possible, because we will not have a GUI, and these kinds of releases are not supported; I think this is also the case for CoreOS). Please make sure that the servers are reachable and have sshd installed and configured.

    Since RancherOS allows only ssh key login, for security reasons, you will need to add your key to cloud-config.yml before the install.

    On the jump server you need to generate an RSA key pair with the ssh-keygen command; it will create the following pair of files in the .ssh directory (this is a listing from my test machine):

    -rw-r--r-- 1 sorin sorin 394 Mar 21 08:09 id_rsa.pub
    -rw------- 1 sorin sorin 1675 Mar 21 08:09 id_rsa

    The next step is to build the minimal cloud-config file that grants access to the machine, and for that purpose we can run the command:

    echo -e "#cloud-config\nssh_authorized_keys:\n - $(cat id_rsa.pub)" > $HOME/cloud-config.yml

    This will create the only file you need in order to install your “server”.
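
    The resulting cloud-config.yml should look similar to this (the key and comment below are truncated example values):

    #cloud-config
    ssh_authorized_keys:
     - ssh-rsa AAAA... sorin@jumpserver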

    Ok, it's time to start our RancherOS machine; please make sure that you have the RancherOS image mounted so it can boot. After this process is done, you will need to connect to the jump server in order to grab the file created above.
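
    Something like this should do it (the user name and IP are placeholders):

    scp [user]@[jumpserver ip]:~/cloud-config.yml .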

    After this is done, we can install RancherOS on the local drive.
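
    The install itself is done with the built-in ros tool (a sketch; double-check that /dev/sda is really the disk you want to overwrite before running it):

    sudo ros install -c cloud-config.yml -d /dev/sda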

    Ok, this being done, you will be prompted to restart the machine, but before that please make sure that you have unmounted the RancherOS image from the virtual drive; otherwise it will boot from the image and not from the actual install.

    You are almost done: after the restart you can access the server via ssh rancher@[rancher server ip] if you used the default id_rsa key from the .ssh directory, or, if not, via ssh -i [private key file location] rancher@[rancher server ip].

    More articles to come on this topic,

    Cheers!

  • We start here

    This should be a bold start of an IT technical blog.