Tag: consumer

  • Kafka_consumer.yaml (python style) and more

    Hi,

    As a follow-up to the article I posted earlier (https://log-it.tech/2019/03/15/get-the-info-you-need-from-consumer-group-python-style/), you can use that info to put into kafka_consumer.yaml for the Datadog integration.

    It’s not elegant by any means, but it works. As a piece of advice, please don’t overcomplicate things more than they need to be.

    In the last example I figured I wanted to create a list of GroupInfo objects for each line that was returned from the consumer group script. Bad idea, as you shall see below.

    So, in addition to what I wrote in the last article, now it’s not just printing the dictionary but also ordering it by partition.
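
    The code below relies on getgroups(), getgroupinfo() and the GroupInfo record from the previous article. As a reminder, a minimal stand-in would look something like this (the exact shape is my assumption; only the topic and partition attributes are used here, the real record carries more fields):

    from collections import namedtuple

    # hypothetical stand-in for the record built in the previous article;
    # only .topic and .partition are used by the code below
    GroupInfo = namedtuple('GroupInfo', ['topic', 'partition'])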

    import os

    # getgroups() and getgroupinfo() come from the previous article
    def constructgroupdict():
        groupagregate = {}
        group_list = getgroups()
        for group in group_list:
            groupagregate[group] = getgroupinfo(group)

        # order the records of every group by partition number
        for v in groupagregate.values():
            v.sort(key=lambda re: int(re.partition))

        return groupagregate

    def printgroupdict():
        groupdict = constructgroupdict()
        # append the consumer group info to the template
        infile = open('/etc/datadog-agent/conf.d/kafka_consumer.d/kafka_consumer.yaml.template', 'a')
        for k, v in groupdict.items():
            infile.write('      ' + k + ':\n')
            # collect the unique topic names for this group
            topics = []
            testdict = {}
            for re in v:
                if re.topic not in topics:
                    topics.append(re.topic)
            # for every topic, collect the partitions that belong to it
            for x in topics:
                partitions = []
                for re in v:
                    if re.topic == x:
                        partitions.append(re.partition)
                testdict[x] = partitions
            for gr, partlst in testdict.items():
                infile.write('        ' + gr + ': [' + ', '.join(partlst) + ']\n')
        infile.close()
        # promote the filled-in template to the real config file
        os.rename('/etc/datadog-agent/conf.d/kafka_consumer.d/kafka_consumer.yaml.template',
                  '/etc/datadog-agent/conf.d/kafka_consumer.d/kafka_consumer.yaml')

    printgroupdict()
    

    And with that approach, it’s quite hard to get only the unique values for the topic names.

    The logic I chose, grabbing all the data per consumer group, is related to the fact that querying the cluster takes a very long time, so if I wanted to grab another set of data filtered by topic it would have been very costly time-wise.

    The way it is written now there are a lot of for loops, which could become a challenge if there are too many records to process. Fortunately, this should not normally be the case for consumer groups. A more compact variant is sketched below.
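
    As a minimal sketch of that idea (my own rewrite, not code from the article), the unique-topic detection and the nested loops can be collapsed into a single pass with a defaultdict:

    from collections import defaultdict

    def partitions_by_topic(records):
        # one pass over the GroupInfo records:
        # topic -> list of its partitions, keeping the sorted order
        bytopic = defaultdict(list)
        for rec in records:
            bytopic[rec.topic].append(rec.partition)
        return bytopic

    The write loop would then iterate over partitions_by_topic(v).items() instead of building topics and testdict by hand.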

    The easiest way to integrate the info into kafka_consumer.yaml, in our case, is to create a template called kafka_consumer.yaml.template:

    init_config:
      # Customize the ZooKeeper connection timeout here
      # zk_timeout: 5
      # Customize the Kafka connection timeout here
      # kafka_timeout: 5
      # Customize max number of retries per failed query to Kafka
      # kafka_retries: 3
      # Customize the number of seconds that must elapse between running this check.
      # When checking Kafka offsets stored in Zookeeper, a single run of this check
      # must stat zookeeper more than the number of consumers * topic_partitions
      # that you're monitoring. If that number is greater than 100, it's recommended
      # to increase this value to avoid hitting zookeeper too hard.
      # https://docs.datadoghq.com/agent/faq/how-do-i-change-the-frequency-of-an-agent-check/
      # min_collection_interval: 600
      #
      # Please note that to avoid blindly collecting offsets and lag for an
      # unbounded number of partitions (as could be the case after introducing
      # the self discovery of consumer groups, topics and partitions) the check
      # will collect metrics for at most 200 partitions.
    
    
    instances:
      # In a production environment, it's often useful to specify multiple
      # Kafka / Zookeeper nodes for a single check instance. This way you
      # only generate a single check process, but if one host goes down,
      # KafkaClient / KazooClient will try contacting the next host.
      # Details: https://github.com/DataDog/dd-agent/issues/2943
      #
      # If you wish to only collect consumer offsets from kafka, because
      # you're using the new style consumers, you can comment out all
      # zk_* configuration elements below.
      # Please note that unlisted consumer groups are not supported at
      # the moment when zookeeper consumer offset collection is disabled.
      - kafka_connect_str:
          - localhost:9092
        zk_connect_str:
          - localhost:2181
        # zk_iteration_ival: 1  # how many seconds between ZK consumer offset
                                # collections. If kafka consumer offsets disabled
                                # this has no effect.
        # zk_prefix: /0.8
    
        # SSL Configuration
    
        # ssl_cafile: /path/to/pem/file
        # security_protocol: PLAINTEXT
        # ssl_check_hostname: True
        # ssl_certfile: /path/to/pem/file
        # ssl_keyfile: /path/to/key/file
        # ssl_password: password1
    
        # kafka_consumer_offsets: false
        consumer_groups:
    

    It’s true that I keep only one string for connectivity to Kafka and Zookeeper, and that things are a little bit more complicated once SSL is configured (but that is not our case, yet).

      - kafka_connect_str:
          - localhost:9092
        zk_connect_str:
          - localhost:2181

    The script appends the consumer group info at the bottom of the template, after which the file is renamed. Who puts the template back afterwards? Easy, that would be Puppet.
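
    To give an idea of the end result, the part appended by the script under consumer_groups: ends up looking something like this (the group, topic and partition values are made up for illustration):

        consumer_groups:
          my-consumer-group:
            my-topic: [0, 1, 2]
            another-topic: [0, 1]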

    It works, and it has been tested. One last thing that I wanted to warn you about.

    There is a limit on the number of metrics that can be uploaded per machine, and that is 350. Please be aware of it and think very seriously about whether you want to activate this.
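
    As a rough way to estimate how many series you are about to add (a back-of-the-envelope sketch; the assumption that the check emits about three series per group/topic/partition - consumer offset, broker offset and lag - is mine, not something taken from the Datadog docs):

    groupdict = constructgroupdict()
    # every GroupInfo record is one group/topic/partition combination
    partition_entries = sum(len(records) for records in groupdict.values())
    print('partition entries:', partition_entries)
    print('estimated extra series:', partition_entries * 3)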

    That would be all for today.

    Cheers

  • Kafka consumer group info retrieval using Python

    Hi,

    I’ve been playing with the kafka-python module to grab the info I need in order to reconfigure the Datadog integration.

    Unfortunately, there is a catch with this method as well, and I will show you below.

    Here is a little bit of not so elegant code.

    from kafka import BrokerConnection
    from kafka.protocol.admin import ListGroupsRequest_v1, DescribeGroupsRequest_v1
    import socket

    # connect to the local broker
    fqdn = socket.getfqdn()
    bc = BrokerConnection(fqdn, 9092, socket.AF_INET)
    try:
        bc.connect_blocking()
    except Exception as e:
        print(e)
    if bc.connected():
        print("Connection to", fqdn, "established")

    def getgroup():
        # ask the broker for the consumer groups it coordinates
        list_groups_request = ListGroupsRequest_v1()
        future0 = bc.send(list_groups_request)
        while not future0.is_done:
            for resp, f in bc.recv():
                f.success(resp)
        group_ids = ()
        for group in future0.value.groups:
            group_ids += (group[0],)

        print(group_ids)

        # describe all the groups in a single request
        description = DescribeGroupsRequest_v1(group_ids)
        future1 = bc.send(description)
        while not future1.is_done:
            for resp, f in bc.recv():
                f.success(resp)

        # every group entry contains its members and their (binary) metadata
        for groupid in future1.value.groups:
            print('For group ', groupid[1], ':\n')
            for meta in groupid[5]:
                print(meta[0], meta[2], sep="\n")
                print(meta[3])
        if future1.is_done:
            print("Group query is done")

    getgroup()
    

    As you will see, print(meta[3]) will return some very ugly binary data with the topic names in it, and it is not converted even if you try meta[3].decode('utf-8').

    I hope I can find a way to decode it; a possible direction is sketched below.
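
    As a minimal sketch of that direction (an assumption on my part, not something verified on this setup): kafka-python ships the structs used by the consumer group protocol, and since meta[3] is the member metadata blob and meta[4] the assignment blob, they might decode like this:

    from kafka.coordinator.protocol import (
        ConsumerProtocolMemberMetadata,
        ConsumerProtocolMemberAssignment,
    )

    def decode_member(meta):
        # meta is one entry of groupid[5] above:
        # (member_id, client_id, client_host, member_metadata, member_assignment)
        metadata = ConsumerProtocolMemberMetadata.decode(meta[3])
        print('subscribed topics:', metadata.subscription)
        assignment = ConsumerProtocolMemberAssignment.decode(meta[4])
        for topic, partitions in assignment.assignment:
            print(topic, partitions)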

    Cheers

  • Adding custom kafka check consumer.lag to datadog from JMX

    Hi,

    We needed to add the consumer.lag check to Datadog. Since we did not have access to the kafka.consumer domain, which from what I believe lives on the client side, I decided to connect to the Kafka node using JMX (so JConsole was the tool). In the MBeans tab you will see that what you need for kafka.consumer.max_lag is not there by default. The actual info you can grab is located under kafka.server, and more precisely under FetcherLagMetrics, as shown.

    If you go all the way down the hierarchy, you will get the partition 0 details for one of the topics. I will use this example as the base for constructing the regex:

    kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=ReplicaFetcherThread-0-1001,topic=__consumer_offsets,partition=0

    Using the same pattern you can directly construct the block that you need to add to kafka.yaml, and it should look like this:

    - include:
        domain: 'kafka.server'
        bean_regex: 'kafka\.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=.*'
        attribute:
          Value:
            metric_type: rate
            alias: kafka.consumer.lag
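
    For context, this is roughly where the block lives in kafka.yaml (the host and port are made up for the example, and the exact layout can differ between agent versions, so treat this as a sketch rather than the stock file):

    init_config:
      is_jmx: true
      conf:
        - include:
            domain: 'kafka.server'
            bean_regex: 'kafka\.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=.*'
            attribute:
              Value:
                metric_type: rate
                alias: kafka.consumer.lag

    instances:
      - host: localhost   # broker with JMX enabled
        port: 9999        # JMX port, made up for this example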

    After an agent restart you will see that the number of collected metrics increases and the new kafka.consumer.lag metric shows up in the Datadog web interface.
    After that, you can manually add to a dashboard a new graph that uses this metric, and you can also filter it on specific criteria, like host, topic or partition.

    Hope this is helpful,

    Cheers