Categories
linux

Reset Cinnamon desktop interface

Hi,

I recently had an issue with the Cinnamon interface; more exactly, my menu panel disappeared.

After some quick searches on the net, I found this command:

gsettings reset-recursively org.cinnamon

It seems to do the trick.
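
One caveat: the reset wipes any customizations along with the glitch. If you want a safety net, you can dump the current settings first; a minimal sketch, assuming the dconf CLI is available (it usually ships with Cinnamon):

```shell
# dump the whole Cinnamon settings tree to a file before resetting
if command -v dconf >/dev/null 2>&1; then
    dconf dump /org/cinnamon/ > cinnamon-backup.dconf
else
    echo "dconf not available on this machine"
fi
```

Restoring later is just dconf load /org/cinnamon/ < cinnamon-backup.dconf.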

Cheers

Categories
linux

Logs check without ELK :)

Hi,

We didn’t have the time to implement the ELK stack for Kafka logs, so if an issue appears it has to be handled the old-fashioned way.

To that purpose, here are two commands that should help you surf the logs in an easy manner.

First of all, there is the grep command, which shows you the whole matching line and its number.

A simple example looks like

grep -nw "2019-06-03" server.log

This should show you all the lines with date 03.06 from the Kafka broker log. The idea is to run it directly on the file in this exact form, not through the usual cat server.log | grep -nw "[string]" construct, so that the line numbers grep reports are the ones you can jump to in the file.

Once you have found the line number (and it could look just like 95138:java.lang.OutOfMemoryError: Java heap space), there is the less command that we can use.

less +95138 server.log

And that should give you the line.
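
The two commands also chain nicely: you can extract the line number with cut and feed it straight to less. A quick sketch, run here against a throwaway sample.log instead of the real broker log:

```shell
# build a tiny stand-in for server.log
printf 'starting broker\n2019-06-03 java.lang.OutOfMemoryError: Java heap space\nshutting down\n' > sample.log

# grep -n prints "number:line"; field 1 of the first match is the line number
n=$(grep -nw "OutOfMemoryError" sample.log | head -1 | cut -d: -f1)
echo "$n"

# less +"$n" sample.log would now open the file at exactly that line
```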

That’s all folks!

Categories
linux puppet

Order Linux processes by memory usage

This one is more for me, actually. We have some issues with one Puppet instance on which the processes fail, and I wanted to see if there is any way to order them by memory usage.

So I searched the net and found this link: https://unix.stackexchange.com/questions/92493/sorting-down-processes-by-memory-usage

The command looks like this:

ps aux --sort -rss | head -10

And it provides you with the following output, at least in my case:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
puppet    6327 70.1 25.5 3585952 1034532 ?     Sl   06:53   7:33 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -javaagent:/usr/share/java/jolokia-jvm-agent.jar=port=8778 -Xms1024m -Xmx1024m -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d -b /etc/puppetlabs/puppetserver/bootstrap.cfg
jenkins   6776  9.6 16.6 4648236 671980 ?      Sl   06:55   0:51 /usr/bin/java -Djava.awt.headless=true -javaagent:/usr/share/java/jolokia-jvm-agent.jar=port=8780 -Xms1024m -Xmx1024m -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080 --httpListenAddress=127.0.0.1
puppetdb  5987 16.8 11.7 3845896 474164 ?      Sl   06:52   2:01 /usr/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -Xmx192m -javaagent:/usr/share/java/jolokia-jvm-agent.jar=port=8779 -cp /opt/puppetlabs/server/apps/puppetdb/puppetdb.jar clojure.main -m puppetlabs.puppetdb.main --config /etc/puppetlabs/puppetdb/conf.d -b /etc/puppetlabs/puppetdb/bootstrap.cfg
postgres  1458  0.0  2.1 249512 88656 ?        Ss   Nov21   3:10 postgres: checkpointer process                                                                                              
postgres  6206  0.0  1.4 253448 57984 ?        Ss   06:53   0:00 postgres: puppetdb puppetdb 127.0.0.1(36882) idle                                                                           
postgres  6209  0.0  0.7 252580 29820 ?        Ss   06:53   0:00 postgres: puppetdb puppetdb 127.0.0.1(36886) idle                                                                           
postgres  6210  0.0  0.5 254892 22440 ?        Ss   06:53   0:00 postgres: puppetdb puppetdb 127.0.0.1(36888) idle                                                                           
postgres  6213  0.0  0.5 254320 21416 ?        Ss   06:53   0:00 postgres: puppetdb puppetdb 127.0.0.1(36894) idle                                                                           
postgres  6205  0.0  0.5 253524 20324 ?        Ss   06:53   0:00 postgres: puppetdb puppetdb 127.0.0.1(36878) idle                       

As you can probably see, the components are slowly but surely taking more and more memory, and since the machine has only 4GB allocated it will probably crash again.

If this happens, I will manually increase the memory by another 2GB and see where we go from there.
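
As a side note, if the full Java command lines make the output too noisy, the same ordering works with a custom column list; a small variant of the command above (the column choice is just my preference):

```shell
# RSS in kilobytes, highest first; comm keeps just the executable name
ps -eo pid,user,rss,comm --sort=-rss | head -5
```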

Cheers!

Categories
golang linux

Golang logging using USER profile on Mint 19

Hi,

I committed to learning Golang, and as part of this task I came to play with logging examples. It seems that if you use syslog.LOG_USER the info is stored in /var/log/syslog.

Here is the code and also the output

package main
import (
	"io"
	"log"
	"log/syslog"
	"os"
	"path/filepath"
)
func main() {
	progname := filepath.Base(os.Args[0])
	sysLog, err := syslog.New(syslog.LOG_INFO|syslog.LOG_USER, progname)
	if err != nil {
		log.Fatal(err)
	}
	log.SetOutput(sysLog)
	log.Println("LOG_INFO + LOG_USER: Logging in Go!")
	io.WriteString(os.Stdout,"Will you see this?")
}

The second line (Will you see this?) is output only to the console.

Oct 29 14:30:25 mintworkstation logging[4835]: 2018/10/29 14:30:25 LOG_INFO + LOG_USER: Logging in Go!

P.S.: I managed to find a config file located under /etc/rsyslog.d, called 50-default.conf.
In this file there is a commented line:

#user.*				-/var/log/user.log

If you uncomment it and restart the service with systemctl restart rsyslog, the output will be moved to /var/log/user.log:

Oct 29 14:48:32 mintworkstation NetworkManager[836]:   [1540817312.1683] connectivity: (enp0s31f6) timed out
Oct 29 14:49:37 mintworkstation gnome-terminal-[2196]: g_menu_insert_item: assertion 'G_IS_MENU_ITEM (item)' failed
Oct 29 14:49:59 mintworkstation gnome-terminal-[2196]: g_menu_insert_item: assertion 'G_IS_MENU_ITEM (item)' failed
Oct 29 14:50:28 mintworkstation gnome-terminal-[2196]: g_menu_insert_item: assertion 'G_IS_MENU_ITEM (item)' failed
Oct 29 14:50:59 mintworkstation logging[5144]: 2018/10/29 14:50:59 LOG_INFO + LOG_USER: Logging in Go!
Oct 29 14:51:14 mintworkstation gnome-terminal-[2196]: g_menu_insert_item: assertion 'G_IS_MENU_ITEM (item)' failed

Cheers

Categories
linux

Cgroups management on Linux – first steps

Hi,

I didn’t know that much about control groups, but I see that they are a big thing in performance and process optimization.
For the moment I would like to share two important pieces of info that I found.
First, there are three options that you need to activate if you want to play with control group management:

DefaultCPUAccounting=yes
DefaultBlockIOAccounting=yes
DefaultMemoryAccounting=yes

which you can find under /etc/systemd/system.conf.

And there is also a command that shows CPU utilization along with other info related to the user/system slices: systemd-cgtop.
If accounting is not enabled, no details are shown. Once you enable it, you will have info like this:

Path                                          Tasks   %CPU   Memory  Input/s Output/s

/                                                66    9.2        -        -        -
/user.slice                                       -    5.0        -        -        -
/user.slice/user-1000.slice                       -    5.0        -        -        -
/user.slice/user-1000.slice/session-1.scope      47    5.0        -        -        -
/system.slice                                     -    3.8        -        -        -
/system.slice/lightdm.service                     2    3.5        -        -        -
/system.slice/docker.service                      2    0.3        -        -        -
/system.slice/vboxadd-service.service             1    0.0        -        -        -
/system.slice/ModemManager.service                1      -        -        -        -
/system.slice/NetworkManager.service              2      -        -        -        -
/system.slice/accounts-daemon.service             1      -        -        -        -
/system.slice/acpid.service                       1      -        -        -        -
/system.slice/atd.service                         1      -        -        -        -
/system.slice/avahi-daemon.service                2      -        -        -        -
/system.slice/colord.service                      1      -        -        -        -
/system.slice/cron.service                        1      -        -        -        -
/system.slice/cups-browsed.service                1      -        -        -        -
/system.slice/cups.service                        1      -        -        -        -
/system.slice/dbus.service
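
The three options above switch accounting on globally. If you only want it for a specific unit, a systemd drop-in does the same per service; a sketch, with docker.service and the drop-in path as my own example choices:

```
# /etc/systemd/system/docker.service.d/accounting.conf
[Service]
CPUAccounting=yes
MemoryAccounting=yes
BlockIOAccounting=yes
```

After systemctl daemon-reload and a restart of the unit, it shows up with its own numbers in systemd-cgtop.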

That is all so far. I will let you know once I discover new info.

Cheers

Categories
kafka linux

Kernel not compatible with zookeeper version

Morning,

It’s important to share this situation with you. This morning I came to the office to see that a cluster that had been upgraded/restarted had an issue with its Zookeeper instances.

Symptoms were clear: the instances would not start completely. But why?

After a little bit of investigation, I went to /var/log/syslog (/var/log/zookeeper did not contain any information at all) and saw that there was a bad page table in the JVM.

Java version is:

java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

So, the log showed following lines:

Aug 16 07:16:04 kafka0 kernel: [  742.349010] init: zookeeper main process ended, respawning
Aug 16 07:16:04 kafka0 kernel: [  742.925427] java: Corrupted page table at address 7f6a81e5d100
Aug 16 07:16:05 kafka0 kernel: [  742.926589] PGD 80000000373f4067 PUD b7852067 PMD b1c08067 PTE 80003ffffe17c225
Aug 16 07:16:05 kafka0 kernel: [  742.928011] Bad pagetable: 000d [#1643] SMP 
Aug 16 07:16:05 kafka0 kernel: [  742.928011] Modules linked in: dm_crypt serio_raw isofs crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd psmouse floppy

Why would the JVM throw a memory error? The main reason is an incompatibility with the kernel version.

Let’s take a look in the GRUB config file.

It looks like we are booting with:

menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-baf292e5-0bb6-4e58-8a71-5b912e0f09b6' {
	recordfail
	load_video
	gfxmode $linux_gfx_mode
	insmod gzio
	insmod part_msdos
	insmod ext2
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root  baf292e5-0bb6-4e58-8a71-5b912e0f09b6
	else
	  search --no-floppy --fs-uuid --set=root baf292e5-0bb6-4e58-8a71-5b912e0f09b6
	fi
	linux	/boot/vmlinuz-3.13.0-155-generic root=UUID=baf292e5-0bb6-4e58-8a71-5b912e0f09b6 ro  console=tty1 console=ttyS0
	initrd	/boot/initrd.img-3.13.0-155-generic
}

There was also an older kernel image available, 3.13.0-153.

The short fix for this is to update the grub.cfg file to boot the old version and reboot the server.
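
A slightly cleaner variant of the short fix than editing grub.cfg by hand is to pin the default entry in /etc/default/grub and regenerate the config. A sketch; the exact menu title depends on your GRUB version, so check it against grep menuentry /boot/grub/grub.cfg first:

```
# /etc/default/grub
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 3.13.0-153-generic"
```

Then run update-grub and reboot, and the setting survives future regenerations.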

The good fix is still in progress. I will post it as soon as I have it.

P.S: I forgot to mention the Zookeeper version:

Zookeeper version: 3.4.5--1, built on 06/10/2013 17:26 GMT

P.S. 2: It seems that the issue is related to Java processes in general, not only Zookeeper.

Cheers

Categories
linux

Convert mdx image (daemon tools) to iso

Hi,

Some time ago I created some images that were saved in mdx format using Daemon Tools. Those were Windows times, but I have migrated to Debian since then.

I created a Windows VirtualBox machine in order to use them, but unfortunately it does not allow mounting them in this format.

In order to convert them you will have to install a small package called iat using the command sudo apt-get install iat. It is found in the default repo.

Once it’s installed, just run iat old_image.mdx old_image.iso
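
If you have a whole directory of images, a small loop saves some typing; a sketch, assuming iat is already installed:

```shell
# convert every .mdx file in the current directory to .iso
for f in *.mdx; do
    [ -e "$f" ] || continue        # no .mdx files: the glob stays literal, skip it
    iat "$f" "${f%.mdx}.iso"       # strip the .mdx suffix, append .iso
done
```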

Cheers

Categories
docker linux newtools

List differences between two sftp hosts using golang

Hi,

Just as an intermediate post, as I wanted to play a little bit with golang, let me show you what I managed to put together in a few days. I created a virtual machine on which I installed Docker and grabbed an sftp image. You can try the first two from Docker Hub; they should work.
So I pulled this image and initiated two containers as shown below:

eaf3b93798b5        asavartzeth/sftp    "/entrypoint.sh /u..."   21 hours ago        Up About a minute         0.0.0.0:2225->22/tcp   server4
ec7d7e1d029f        asavartzeth/sftp    "/entrypoint.sh /u..."   21 hours ago        Up About a minute         0.0.0.0:2224->22/tcp   server3

The commands to do this look like:

docker run --name server3 -v /home/sorin/sftp1:/chroot/sorin:rw -e SFTP_USER=sorin -e SFTP_PASS=pass -p 2224:22 -d asavartzeth/sftp
docker run --name server4 -v /home/sorin/sftp2:/chroot/sorin:rw -e SFTP_USER=sorin -e SFTP_PASS=pass -p 2225:22 -d asavartzeth/sftp

The main things to know about these containers are that they should be accessible by the user sorin and that the external directories are mapped under /chroot/sorin.

You can manually test the connection by using a simple command like:

sftp -P 2224 sorin@localhost

If you connect using the container IP address, I observed that you use the default port 22. Not really sure why, but that is not the point here.

Once the servers are up and running you can test the differences between the structure using following code:


package main

import (
	"fmt"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

type ServerFiles struct {
	Name  string
	files []string
}

func main() {

	server1client := ConnectSftp("localhost:2224", "sorin", "pass")
	server1files := ReadPath(server1client)
	server1struct := BuildStruct("172.17.0.2", server1files)
	server2client := ConnectSftp("localhost:2225", "sorin", "pass")
	server2files := ReadPath(server2client)
	server2struct := BuildStruct("172.17.0.3", server2files)
	diffilesstruct := CompareStruct(server1struct, server2struct)
	for _, values := range diffilesstruct.files {
		fmt.Printf("%s %s\n", diffilesstruct.Name, values)
	}
	CloseConnection(server1client)
	CloseConnection(server2client)
}
func CheckError(err error) {
	if err != nil {
		panic(err)
	}
}
func ConnectSftp(address string, user string, password string) *sftp.Client {
	config := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{
			ssh.Password(password),
		},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	conn, err := ssh.Dial("tcp", address, config)
	CheckError(err)

	client, err := sftp.NewClient(conn)
	CheckError(err)

	return client
}
func ReadPath(client *sftp.Client) []string {
	var paths []string
	w := client.Walk("/")
	for w.Step() {
		if w.Err() != nil {
			continue
		}
		
		paths = append(paths, w.Path())
	}
	return paths
}
func BuildStruct(address string, files []string) *ServerFiles {
	server := new(ServerFiles)
	server.Name = address
	server.files = files

	return server
}
func CompareStruct(struct1 *ServerFiles, struct2 *ServerFiles) *ServerFiles {

	diff := difference(struct1.files, struct2.files)
	diffstruct := new(ServerFiles)
	for _, value := range diff {
		for _, valueP := range struct1.files {
			if valueP == value {
				
				diffstruct.Name = struct1.Name
				diffstruct.files = append(diffstruct.files, valueP)
			}
		}
		for _, valueQ := range struct2.files {
			if valueQ == value {
				
				diffstruct.Name = struct2.Name
				diffstruct.files = append(diffstruct.files, valueQ)
			}
		}
	}
	return diffstruct
}
func difference(slice1 []string, slice2 []string) []string {
	var diff []string

	// Loop two times, first to find slice1 strings not in slice2,
	// second loop to find slice2 strings not in slice1
	for i := 0; i < 2; i++ {
		for _, s1 := range slice1 {
			found := false
			for _, s2 := range slice2 {
				if s1 == s2 {
					found = true
					break
				}
			}
			// String not found. We add it to return slice
			if !found {
				diff = append(diff, s1)
			}
		}
		// Swap the slices, only if it was the first loop
		if i == 0 {
			slice1, slice2 = slice2, slice1
		}
	}

	return diff
}
func CloseConnection(client *sftp.Client) {
	client.Close()
}

This actually connects to each server, reads the whole file tree, and puts it in a structure. After this is done for both servers, a function compares only the slice part of the two structs and returns the differences. From those differences another structure is built containing only the differing files.
It is true that I took the difference func from Stack Overflow, and it's far from good code, but I am working on it; this is the first draft, and I will post improved versions as it gets better.

The output if there are differences will look like this:

172.17.0.2 /sorin/subdirectory
172.17.0.2 /sorin/subdirectory/subtest.file
172.17.0.2 /sorin/test.file
172.17.0.3 /sorin/test2

If there are no differences then it will just exit.
Working on improving my golang experience. I will keep you posted.

Cheers!

Categories
linux

How to change root password on Debian – after vacation

Morning,

Since I had a vacation and completely forgot all my passwords for my Debian VM, I fixed it using this article. Very useful!

https://pve.proxmox.com/wiki/Root_Password_Reset

Cheers!

Categories
kafka linux

Ubuntu – change ulimit for kafka, do not ignore

Hi,

I want to share with you something that took me half a day to clarify. I just read the following article https://docs.confluent.io/current/kafka/deployment.html#file-descriptors-and-mmap
and learned that in order to optimize Kafka you also need to raise the maximum number of open files. That is nice, but our clusters are deployed on Ubuntu and the images are pretty basic. Not really sure if this is valid for all distributions, but at least for this one it’s absolutely needed.
Before trying to setup anything in

/etc/security/limits.conf

make sure that you have exported in

/etc/pam.d/common-session

line

session required pam_limits.so

It is needed in order for ssh and su processes to pick up the new limits for that user (in our case kafka).
Doing this will let you define new values in the “limits” file. You are now free to set up the nofile limit, for example like this:

*       soft    nofile    10000
*       hard    nofile    100000
kafka   soft    nofile    10000
kafka   hard    nofile    100000

After it is done, you can restart the cluster and check the value by finding the process with ps -ef | grep kafka and viewing the limits file with cat /proc/[kafka-process]/limits.
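
The manual check can be scripted as well; a small sketch, assuming a broker process matching "kafka" is running (it just reports otherwise):

```shell
# pick the first kafka pid and print its effective open-files limit
pid=$(pgrep -f kafka | head -1)
if [ -n "$pid" ]; then
    grep "Max open files" "/proc/$pid/limits"
else
    echo "no kafka process found"
fi
```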

I will come back later with a Puppet implementation for this as well.

Cheers!