This one is more for me, actually. We have some issues with one Puppet instance on which the processes keep failing, and I wanted to see if there is any way to order them by memory usage.
So I searched the net and found this link: https://unix.stackexchange.com/questions/92493/sorting-down-processes-by-memory-usage
The command looks like this:
ps aux --sort -rss | head -10
It provides the following output, at least in my case:
USER      PID  %CPU %MEM VSZ     RSS     TTY STAT START TIME COMMAND
puppet    6327 70.1 25.5 3585952 1034532 ?   Sl   06:53 7:33 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -javaagent:/usr/share/java/jolokia-jvm-agent.jar=port=8778 -Xms1024m -Xmx1024m -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d -b /etc/puppetlabs/puppetserver/bootstrap.cfg
jenkins   6776 9.6  16.6 4648236 671980  ?   Sl   06:55 0:51 /usr/bin/java -Djava.awt.headless=true -javaagent:/usr/share/java/jolokia-jvm-agent.jar=port=8780 -Xms1024m -Xmx1024m -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080 --httpListenAddress=127.0.0.1
puppetdb  5987 16.8 11.7 3845896 474164  ?   Sl   06:52 2:01 /usr/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -Xmx192m -javaagent:/usr/share/java/jolokia-jvm-agent.jar=port=8779 -cp /opt/puppetlabs/server/apps/puppetdb/puppetdb.jar clojure.main -m puppetlabs.puppetdb.main --config /etc/puppetlabs/puppetdb/conf.d -b /etc/puppetlabs/puppetdb/bootstrap.cfg
postgres  1458 0.0  2.1  249512  88656   ?   Ss   Nov21 3:10 postgres: checkpointer process
postgres  6206 0.0  1.4  253448  57984   ?   Ss   06:53 0:00 postgres: puppetdb puppetdb 127.0.0.1(36882) idle
postgres  6209 0.0  0.7  252580  29820   ?   Ss   06:53 0:00 postgres: puppetdb puppetdb 127.0.0.1(36886) idle
postgres  6210 0.0  0.5  254892  22440   ?   Ss   06:53 0:00 postgres: puppetdb puppetdb 127.0.0.1(36888) idle
postgres  6213 0.0  0.5  254320  21416   ?   Ss   06:53 0:00 postgres: puppetdb puppetdb 127.0.0.1(36894) idle
postgres  6205 0.0  0.5  253524  20324   ?   Ss   06:53 0:00 postgres: puppetdb puppetdb 127.0.0.1(36878) idle
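If you only care about a few columns, the same idea can be expressed with a custom output format so the listing is easier to read; a minimal variation using standard ps format specifiers (nothing here is specific to my setup):

ps -eo user,pid,%mem,rss,cmd --sort=-rss | head -10

You can also sort by %MEM instead of RSS with ps aux --sort=-%mem, which gives pretty much the same ordering.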
As you can probably see, the components are slowly but surely taking more and more memory, and since the machine has only 4GB allocated, it will probably crash again.
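To confirm that the RSS is really growing rather than just sitting at a high but stable value, a quick sketch like the one below logs it for the puppetserver process once a minute (the pgrep pattern is just an assumption, adjust it to whatever matches on your box):

while true; do
  date +%T
  # RSS in KB plus the command line for the first matching PID
  ps -o rss=,cmd= -p "$(pgrep -f puppetserver | head -1)"
  sleep 60
done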
If this happens, I will manually increase the memory by another 2GB and see where we go from there.
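Before and after the resize, a quick check of the overall memory situation makes it easy to see how much headroom is actually left (on this box it should report roughly 4GB total now, and around 6GB after the bump):

free -h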
Cheers!