Hi,
I wish it were mine, but it isn't. Here is quite a good article from this week's newsletter on gathering stats from Docker containers:
Analyzing Docker container performance with native tools
I wish you an enjoyable read.
Cheers!
Morning,
As promised, here are the two simple blocks needed to implement the limits we discussed in the article http://log-it.tech/2017/10/16/ubuntu-change-ulimit-kafka-not-ignore/
For the limits module you can use:
https://forge.puppet.com/puppetlabs/limits
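If you manage modules by hand rather than through a Puppetfile, the module can be pulled from the Forge with a single command (the puppetlabs-limits slug below is just the author/module pair from the URL above):

puppet module install puppetlabs-limits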
As for the actual Puppet implementation, I decided not to restart the service immediately. That said, it's dead simple to do:
file_line { 'add_pamd_record':
  path => '/etc/pam.d/common-session',
  line => 'session required pam_limits.so',
}

limits::fragment {
  '*/soft/nofile':
    value => '100000';
  '*/hard/nofile':
    value => '100000';
  'kafka/soft/nofile':
    value => '100000';
  'kafka/hard/nofile':
    value => '100000';
}
This is all you need.
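If you would rather have Puppet bounce the broker as soon as the limits change, a minimal sketch is to hang a notify on the resources above (this assumes the broker is declared elsewhere as Service['kafka']; the name may differ in your manifests):

file_line { 'add_pamd_record':
  path   => '/etc/pam.d/common-session',
  line   => 'session required pam_limits.so',
  notify => Service['kafka'], # hypothetical service name; restarts the broker on change
}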
Cheers
Hi,
I want to share something that took me half a day to clarify. I just read the following article https://docs.confluent.io/current/kafka/deployment.html#file-descriptors-and-mmap and learned that in order to optimize Kafka you also need to raise the maximum number of open files. That's fine, but our clusters are deployed on Ubuntu and the images are pretty basic. I'm not sure whether this holds for all distributions, but for this one it's absolutely needed.
Before trying to set anything in
/etc/security/limits.conf
make sure that
/etc/pam.d/common-session
contains the line
session required pam_limits.so
This is needed so that ssh and su sessions pick up the new limits for that user (in our case kafka).
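If you want to make that change idempotent from the shell rather than from Puppet, a quick sketch (run as root) could be:

# Append the pam_limits line only if it is not already there
grep -q 'pam_limits.so' /etc/pam.d/common-session || \
  echo 'session required pam_limits.so' >> /etc/pam.d/common-session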
With that in place, the values you define in the limits file will actually be applied. You are now free to set up the nofile limit, for example like this:
*     soft nofile 10000
*     hard nofile 100000
kafka soft nofile 10000
kafka hard nofile 100000
After it is done, you can restart the cluster and check the value by finding the process with ps -ef | grep kafka and viewing its limits file with cat /proc/[kafka-process]/limits.
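If you want to script that check, a small sketch (assuming the broker's command line contains the string kafka) could be:

# Grab the PID of the first process whose command line matches "kafka"
KAFKA_PID=$(pgrep -f kafka | head -n 1)
# Show the open-files limit the kernel actually applied to that process
grep 'Max open files' /proc/"${KAFKA_PID}"/limits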
I will come back later with a Puppet implementation for this as well.
Cheers!
Hi,
I recently gave a presentation on how to deploy Kafka using Puppet and on the minimum you need in order to be successful in production.
Here is the presentation:
Hope it is useful.
Cheers!
Update:
There is also an official version from IMWorld, which you can find here:
And here is the article on medium.com that describes it in more technical detail:
https://medium.com/@sorin.tudor/messaging-kafka-implementation-using-puppet-5438a0ed275d