Microsoft Teams blocked by pfBlockerNG


One short tip to remember. I’ve been struggling for a while with pfBlockerNG blocking my Teams connection for no apparent reason.

Until today, I couldn’t figure out the correct way to fix it. I should have known that there isn’t a range of IPs that can be whitelisted to make it work; it’s related to the domain being blocked.

This became evident today when I took a look at the Reports tab, Alerts subtab, and filtered by interface.

In order to fix it, you will need to go to the DNSBL tab and expand the TLD Exclusion List so that you can add the general domain that should be excluded.

You could also whitelist each subdomain, but since we are talking about Microsoft, I think this is easier.

The way this works, at least as far as I understood, is that it allows all hostnames under the general domain and only blocks the ones that are specifically blacklisted.
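For example, assuming the alerts showed Teams hosts under microsoft.com being blocked, the TLD Exclusion entry would be just the apex domain:

microsoft.com

After a Force Reload of DNSBL, hostnames like teams.microsoft.com should resolve again, while anything explicitly blacklisted stays blocked.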

That would be all for today,



Traffic statistics – new project


For some time I have wanted to understand how the traffic on my network is actually shaped.

To that purpose, at first I purchased a Synology router, but it seems it doesn’t have that many traffic-logging capabilities, so I kept it and put the following box in front of it.

It’s a cool toy, but ultimately I wanted to have pfSense installed on it with logging activated so that I can gather as much data as possible.

It’s now installed, and hopefully this is the start of a series of articles on data manipulation and maybe also some administration insights.



cloud newtools

Exclusive SASL on Zookeeper connections

Something related to the following article. It seems that at least up to version 3.6.1, even if SASL is configured, Zookeeper will still allow anonymous connections and actions by default.

There is now a configuration option available that restricts such events; you can find it documented in the official Apache Zookeeper administration guide (zookeeper.sessionRequireClientSASLAuth).

The main catch is that it’s not supposed to be configured in the zoo.cfg file, but added as a parameter in java.env as part of the SERVER_JVMFLAGS variable.

The old variable, which in my case is a Hiera key managed by Puppet, was

zookeeperstd::jvm_flags: ""

will become

zookeeperstd::jvm_flags: " -Dzookeeper.allowSaslFailedClients=false -Dzookeeper.sessionRequireClientSASLAuth=true"
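On the ZooKeeper host, the rendered java.env then ends up containing something like this (a sketch; the exact file layout depends on your deployment):

SERVER_JVMFLAGS="-Dzookeeper.allowSaslFailedClients=false -Dzookeeper.sessionRequireClientSASLAuth=true"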

After this is implemented, when you try to connect with the CLI client (zkCli.sh) it will let you in, but trying to list the root of the resource tree won’t work:


Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled


WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
KeeperErrorCode = Session closed because client failed to authenticate for /
[zk: localhost:2181(CONNECTED) 1] 

The same thing happens if you use -server [hostname]:2181.

In order to connect, the client needs SASL credentials as well. Point it to a JAAS file by adding a line to java.env (the property is standard; the path and credentials below are examples):

CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/opt/zookeeper/conf/client_jaas.conf"

The client JAAS file includes the structure:

Client {
       org.apache.zookeeper.server.auth.DigestLoginModule required
       username="zkclient"
       password="changeme";
};


cloud newtools puppet

Datadog and GCP are “friends” up to a point


Since lately I have preferred to publish more on Medium, let me give you the link to the latest article.

There is an interesting case in which the combination of automation, Google Cloud Platform and Datadog didn’t go as we expected.

Hope you enjoy it! I will come back with more interesting topics on this blog as well.


cloud newtools python

Multiple field query in ELK from Python


There are a lot of pages on how to query the ELK stack from the Python client library; however, it’s still hard to grab a useful pattern.

What I wanted was to translate a simple Kibana query, something like … AND beat.hostname:*test AND tags:test, into useful Query DSL JSON.

It’s worth mentioning that the Python library uses this DSL. Once you have this info, things get much simpler.

Well, if you search hard enough, you will find a solution, and it should look like this:

another_query_body = {
    "query": {
        "query_string" : {
            "query": "(master) AND (*test) AND (test)",
            "fields": ["", "beat.hostname", "tags"]
        }
    }
}

As you probably guessed, each field maps to a query entry.
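To actually run it, here is a minimal sketch with the elasticsearch Python client; the endpoint, the index pattern and the host field are illustrative assumptions (the first field name was lost from the snippet above, so host stands in for it):

from elasticsearch import Elasticsearch

# hypothetical cluster endpoint, adjust to your ELK setup
es = Elasticsearch(["http://localhost:9200"])

another_query_body = {
    "query": {
        "query_string": {
            "query": "(master) AND (*test) AND (test)",
            # "host" is a stand-in for the field name lost from the original snippet
            "fields": ["host", "beat.hostname", "tags"]
        }
    }
}

# query a hypothetical filebeat index and print the matching hostnames
result = es.search(index="filebeat-*", body=another_query_body)
for hit in result["hits"]["hits"]:
    print(hit["_source"].get("beat", {}).get("hostname"))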


kafka newtools

Kafka cluster nodes and controller using golang


Using the golang library for zookeeper from here, you can very easily get the nodes that are registered in the cluster and also the controller node.
In order to install this module, besides setting up the GOPATH, you will also have to install the following packages from the Linux distro repo:
bzr, gcc, libzookeeper-mt-dev
Once all of this is installed, just go get it 🙂
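Assuming the launchpad gozk bindings, which match the bzr and libzookeeper-mt-dev prerequisites above, that would be:

go get launchpad.net/gozk/zookeeper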

And here is the small example:

package main

import (
	"fmt"
	"strings"

	"launchpad.net/gozk/zookeeper"
)

func main() {
	GetResource()
}

func GetResource() {
	// 5e9 nanoseconds = 5 second session timeout
	zk, session, err := zookeeper.Dial("localhost:2181", 5e9)
	if err != nil {
		fmt.Println("Couldn't connect")
		return
	}
	defer zk.Close()

	// wait for the session event and make sure we are actually connected
	event := <-session
	if event.State != zookeeper.STATE_CONNECTED {
		fmt.Println("Couldn't connect")
		return
	}

	GetBrokers(zk)
	GetController(zk)
}

func GetController(connection *zookeeper.Conn) {
	rs, _, err := connection.Get("/controller")
	if err != nil {
		fmt.Println("Couldn't get the resource")
		return
	}
	// the znode holds JSON, so splitting on "," and ":" is a quick
	// and dirty way to grab the broker id
	controller := strings.Split(rs, ",")
	id := strings.Split(controller[1], ":")
	fmt.Printf("\nCluster controller is: %s\n", id[1])
}

func GetBrokers(connection *zookeeper.Conn) {
	trs, _, err := connection.Children("/brokers/ids")
	if err != nil {
		fmt.Println("Couldn't get the resource")
		return
	}
	fmt.Printf("List of brokers: ")
	for _, value := range trs {
		fmt.Printf(" %s", value)
	}
}

Yeah, I know it's not the most elegant, but it works:

go run zootest.go
List of brokers:  1011 1009 1001
Cluster controller is: 1011
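For reference, the /controller znode holds JSON along the lines of the following (the timestamp value is illustrative), which is why the quick split on "," and ":" is enough to grab the id:

{"version":1,"brokerid":1011,"timestamp":"1534161287914"}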

That would be all.

Tnx and cheers!

newtools puppet

Error 127, not related to Puppet or Golang


Something from my experience playing with Golang and Puppet code this morning.
I wrote a very, very simple script to restart a service, which you can find here.
Today I wanted to put it on the machine and run it with Puppet, so I wrote a very small class that looked like this:

class profiles_test::updatekafka {

  package { 'golang':
    ensure => installed,
    name   => 'golang',
  }
  file { '/root/servicerestart.go':
    source  => 'puppet:///modules/profiles_test/servicerestart.go',
    mode    => '0644',
    replace => true,
  }
  exec { 'go build servicerestart.go':
    cwd     => '/root',
    creates => '/root/servicerestart',
    path    => ['/usr/bin', '/usr/sbin'],
  } ->
  exec { '/root/servicerestart':
    cwd    => '/root',
    path   => ['/usr/bin', '/usr/sbin', '/root'],
    onlyif => 'ps -u kafka',
  }
}

On execution, surprise: it kept throwing this feedback:

08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/File[/root/check_cluster_state.go]/ensure: defined content as '{md5}0817dbf82b74072e125f8b71ee914150'
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: panic: exit status 127
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: goroutine 1 [running]:
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: runtime.panic(0x4c2100, 0xc2100000b8)
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/usr/lib/go/src/pkg/runtime/panic.c:266 +0xb6
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: main.check(0x7f45b08ed150, 0xc2100000b8)
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/root/servicerestart.go:94 +0x4f
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: main.RunCommand(0x4ea7f0, 0x2c, 0x7f4500000000, 0x409928, 0x5d9f38)
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/root/servicerestart.go:87 +0x112
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: main.GetBrokerList(0x7f45b08ecf00, 0xc21003fa00, 0xe)
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/root/servicerestart.go:66 +0x3c
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: main.GetStatus(0xc21000a170)
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/root/servicerestart.go:33 +0x1e
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: main.StopBroker()
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/root/servicerestart.go:39 +0x21
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: main.main()
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: 	/root/servicerestart.go:15 +0x1e
08:19:44 Notice: /Stage[main]/Profiles_test::Updatekafka/Exec[go run servicerestart.go]/returns: exit status 2
08:19:44 Error: go run servicerestart.go returned 1 instead of one of [0]

The secret is in panic: exit status 127, and it seems that it’s related neither to Golang nor to Puppet, but to the shell: 127 is the exit status the shell returns when a command is not found.
In my script, the way to get the broker list is

output1 := RunCommand("echo dump | nc localhost 2181 | grep brokers")

and the error comes from not having the required binaries in your path. For example, if you run

whereis echo
echo: /bin/echo /usr/share/man/man1/echo.1.gz
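You can reproduce the 127 directly by running the same pipeline with a PATH that misses /bin (a quick illustration, assuming a Debian-style layout where nc and grep also live in /bin):

/bin/sh -c 'PATH=/usr/bin:/usr/sbin; echo dump | nc localhost 2181 | grep brokers'; echo $?

The shell reports "command not found" and the exit status printed at the end is 127.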

So the right way to write the exec block is actually:

exec { '/root/servicerestart':
  cwd    => '/root',
  path   => ['/usr/bin', '/usr/sbin', '/root', '/bin', '/sbin'],
  onlyif => 'ps -u kafka',
}

And then it will work.


kafka newtools

Golang example for kafka service restart script


Not much to say, just a pretty decent script for Kafka service restart (I tried to write it for our rolling upgrade procedure) that is still a work in progress. If any changes need to be made to it, I will post them.

Here is the script:

package main

import (
	"fmt"
	"io/ioutil"
	"os/exec"
	"strings"
	"time"
)

const (
	libpath = "/opt/kafka/libs"
)

func main() {
	fmt.Printf("You are running version %s", GetVersion(libpath))
}

// GetVersion extracts the Kafka version from the jar names in the libs folder
func GetVersion(libpath string) string {
	var KafkaVersion []string
	files, err := ioutil.ReadDir(libpath)
	check(err)
	for _, f := range files {
		if strings.HasPrefix(f.Name(), "kafka_") {
			KafkaVersion = strings.Split(f.Name(), "-")
			return KafkaVersion[1]
		}
	}
	return ""
}

// GetStatus reports whether the local broker id is registered in Zookeeper
func GetStatus() bool {
	brokers := GetBrokerList()
	id := GetBrokerId()
	status := Contain(id, brokers)
	return status
}

func StopBroker() {
	status := GetStatus()
	brokers := GetBrokerList()
	// only stop when the broker is registered and enough brokers would remain
	if (status == true) && (len(brokers) > 2) {
		RunCommand("service kafka stop")
		time.Sleep(60000 * time.Millisecond)
		status = GetStatus()
	}
	if status == false {
		fmt.Println("Broker has been successfully stopped")
	} else {
		fmt.Println("Broker is still registered in the cluster")
	}
}

func StartBroker() {
	status := GetStatus()
	if status == false {
		RunCommand("service kafka start")
		time.Sleep(60000 * time.Millisecond)
	}
}

// GetBrokerId reads the local broker id; the file path and grep pattern were
// truncated in the original post, broker.id in meta.properties is an assumption
func GetBrokerId() string {
	output := RunCommand("cat /srv/kafka/meta.properties | grep broker.id | cut -d'=' -f2")
	return strings.TrimRight(string(output), "\n")
}

// GetBrokerList asks the local Zookeeper for the registered broker ids
func GetBrokerList() []string {
	output1 := RunCommand("echo dump | nc localhost 2181 | grep brokers")
	var brokers []string
	lines := strings.Split(string(output1), "\n")
	for _, line := range lines {
		trimString := strings.TrimSpace(line)
		finalString := strings.TrimPrefix(trimString, "/brokers/ids/")
		brokers = append(brokers, finalString)
	}
	return brokers
}

func Contain(val1 string, val2 []string) bool {
	for _, value := range val2 {
		if value == val1 {
			return true
		}
	}
	return false
}

func RunCommand(command string) []byte {
	cmd := exec.Command("/bin/sh", "-c", command)
	result, err := cmd.Output()
	check(err)
	return result
}

func check(e error) {
	if e != nil {
		panic(e)
	}
}


docker newtools

Command to start sysdig container – redundant but useful


This is more of an easy way to find the command again without searching the net:

docker run -it --rm --name=sysdig --privileged=true \
   --volume=/var/run/docker.sock:/host/var/run/docker.sock \
   --volume=/dev:/host/dev \
   --volume=/proc:/host/proc:ro \
   --volume=/boot:/host/boot:ro \
   --volume=/lib/modules:/host/lib/modules:ro \
--volume=/usr:/host/usr:ro \
   sysdig/sysdig

That is the actual command for starting a sysdig container. I will get more in depth with some Kafka cluster aggregated info from this amazing tool, and also with what it takes to send that data to an Elasticsearch cluster.
It will be challenging, but that's how it goes in IT these days.
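Once the container is up, a quick way to check that everything works is to run one of the bundled chisels inside it, for example the per-process CPU view (chisel name as in the sysdig documentation):

docker exec -it sysdig sysdig -c topprocs_cpu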


kafka newtools

Consumer group coordinator in Kafka using some scala script


Just a small post about returning the consumer group coordinator for a specific consumer group.
We had an issue where consumer groups were rebalancing, and we didn’t know if it was related to application logic or to the Consumer Group Coordinator changing, with the Kafka cluster reassigning a different one each time. So, a small piece of code was needed. I was using the libraries shipped with Kafka 1.0.0 for this test, so be aware of the classpath if you want to modify this.

In order to do the test, I started a standalone Confluent Kafka image, which normally listens on port 29092. For more details please consult their documentation here.

I also created a test topic with one partition and a replication factor of one, and produced some messages to it.
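Topic creation and the producer would look something like this (the scripts live in the same bin folder; the Zookeeper port is an assumption based on the standalone Confluent setup):

./kafka-topics.sh --zookeeper localhost:32181 --create --topic test --partitions 1 --replication-factor 1
./kafka-console-producer.sh --broker-list localhost:29092 --topic test

After that I started the console consumer: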

sorin@debian-test:~/kafka_2.11-1.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:29092 --topic test --from-beginning
test message

Once this is started, you can also see the group using the consumer-groups command, like this:

sorin@debian-test:~/kafka_2.11-1.0.0/bin$ ./kafka-consumer-groups.sh --bootstrap-server localhost:29092 --list
Note: This will not show information about old Zookeeper-based consumers.
console-consumer-77631

Now my console consumer is identified by console-consumer-77631, and in order to see the group coordinator you will have to run the script below, saved for example as getcoordinator.sh:

./getcoordinator.sh localhost 29092 console-consumer-77631
warning: there were three deprecation warnings; re-run with -deprecation for details
one warning found
Creating connection to: localhost 29092 
log4j:WARN No appenders could be found for logger (
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Channel connected
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

It’s clear that since we have only one broker, that is also the coordinator.
Regarding the details of the code, I used this link, and since I don’t have a scala project, just a script, the following command was of great use for hunting down all of the dependencies (replace ClassName with the class you are looking for):

for i in *.jar; do jar -tvf "$i" | grep -Hsi ClassName && echo "$i"; done

Here is also the code:

#!/bin/sh
exec scala -classpath "/home/sorin/kafka_2.11-1.0.0/libs/kafka-clients-1.0.0.jar:/home/sorin/kafka_2.11-1.0.0/libs/kafka_2.11-1.0.0.jar:/home/sorin/kafka_2.11-1.0.0/libs/slf4j-api-1.7.25.jar:/home/sorin/kafka_2.11-1.0.0/libs/jackson-core-2.9.1.jar:/home/sorin/kafka_2.11-1.0.0/libs/jackson-databind-2.9.1.jar:/home/sorin/kafka_2.11-1.0.0/libs/jackson-annotations-2.9.1.jar:/home/sorin/kafka_2.11-1.0.0/libs/log4j-1.2.17.jar" "$0" "$1" "$2" "$3"
!#

import com.fasterxml.jackson.core._
import com.fasterxml.jackson.databind.ObjectMapper
import kafka.api.GroupCoordinatorRequest
import kafka.api.GroupCoordinatorResponse
import kafka.network.BlockingChannel
import org.slf4j.LoggerFactory

val hostname = args(0)
val port = args(1).toInt
val group = args(2)
println("Creating connection to: " + hostname + " " + port + " ")

var channel = new BlockingChannel(hostname, port, 1048576, 1048576, readTimeoutMs = 50000)
channel.connect()
if (channel.isConnected) {
  println("Channel connected")
  // ask the broker which node coordinates this consumer group
  channel.send(GroupCoordinatorRequest(group))
  val metadataResponse = GroupCoordinatorResponse.readFrom(channel.receive.payload())
  println(metadataResponse) }

Regarding the code: the first part runs scala from a shell script; you need to update the classpath with all the libraries and also specify how many parameters will be used, in our case three. Also, if you don’t add all of the jackson, log4j and slf4j dependencies, it won’t work.

P.S.: It will also work with the whole libs folder on the classpath, by running exec scala -classpath "/home/sorin/kafka_2.11-1.0.0/libs/*" …