Managing Multiple Machines Simultaneously With Ansible

If I have to do it more than once, it’s probably going to get scripted. That has been my general attitude toward mundane system administration tasks for many years, and it’s an attitude shared by plenty of others. How about taking that idea a little further and applying it to multiple machines at once? Well, there’s a tool for that too, and it’s named ansible.

We need ansible installed on the system we will be using as the client/bastion. This machine needs to be able to SSH into all of the remote systems we want to manage, so stop and make sure that works unhindered before continuing. On the remote machines the requirements are fairly low and mostly come down to python2. On Gentoo, python2 is already installed because several things, including emerge itself, require it. On Ubuntu 16.04 LTS, python2 is not installed by default and you will need to install the ‘python-minimal’ package to get it back.
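Before going any further, key-based SSH from the bastion should work without any prompting. A quick sanity check (the hostname here is just one of my example staging hosts) confirms both the SSH access and the python2 requirement in one shot:

```shell
# should print a python version with no password prompt in between
ssh ubuntu-staging-www 'python2 --version'
```

If that hangs on a password prompt, sort out your SSH keys first; ansible will hit the same wall.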

Once we have python installed on the remote machines and ansible installed on the local machine, we can move on to editing the ansible configuration with a list of our hosts. This file is fairly simple and there are lots of examples available, but here is a snippet of my /etc/ansible/hosts file:

[ubuntu-staging]
ubuntu-staging-www
ubuntu-staging-dev
ubuntu-staging-db

Here you can see I have three hosts listed under a group named ubuntu-staging.

Once we have hosts defined we can do a simple command line test:

ansible ubuntu-staging -m command -a "w"

The ‘-m’ tells ansible we wish to use the module named ‘command’, and ‘-a’ passes the arguments for that module, in this case just ‘w’. The output from this command should be similar to this:

$ ansible ubuntu-staging -m command -a "w"
ubuntu-staging-www | SUCCESS | rc=0 >>
10:25:57 up 8 days, 12:29, 1 user, load average: 0.22, 0.31, 0.35
canuteth pts/2 10:25 1.00s 0.25s 0.01s w

ubuntu-staging-dev | SUCCESS | rc=0 >>
10:25:59 up 8 days, 12:17, 1 user, load average: 0.16, 0.03, 0.01
canuteth pts/0 10:25 0.00s 0.37s 0.00s w

ubuntu-staging-db | SUCCESS | rc=0 >>
10:26:02 up 8 days, 12:25, 1 user, load average: 0.17, 0.09, 0.09
canuteth pts/0 10:26 0.00s 0.28s 0.00s w

Okay, that shows promise right? Let’s try something a little more complicated:

$ ansible ubuntu-staging -s -K -m command -a "apt-get update"
SUDO password:
[WARNING]: Consider using apt module rather than running apt-get

ubuntu-staging-db | SUCCESS | rc=0 >>
Hit:1 xenial InRelease
Get:2 xenial-updates InRelease [102 kB]
Get:3 xenial-security InRelease [102 kB]
Get:4 xenial-backports InRelease [102 kB]
Fetched 306 kB in 5s (59.3 kB/s)
Reading package lists…

ubuntu-staging-www | SUCCESS | rc=0 >>
Hit:1 xenial InRelease
Get:2 xenial-updates InRelease [102 kB]
Get:3 xenial-security InRelease [102 kB]
Hit:4 ubuntu-xenial InRelease
Get:5 xenial-backports InRelease [102 kB]
Get:6 xenial-updates/main amd64 Packages [544 kB]
Get:7 xenial-updates/main i386 Packages [528 kB]
Get:8 xenial-updates/main Translation-en [220 kB]
Get:9 xenial-updates/universe amd64 Packages [471 kB]
Get:10 xenial-updates/universe i386 Packages [456 kB]
Get:11 xenial-updates/universe Translation-en [185 kB]
Get:12 xenial-security/main amd64 Packages [276 kB]
Get:13 xenial-security/main i386 Packages [263 kB]
Get:14 xenial-security/main Translation-en [118 kB]
Get:15 xenial-security/universe amd64 Packages [124 kB]
Get:16 xenial-security/universe i386 Packages [111 kB]
Get:17 xenial-security/universe Translation-en [64.2 kB]
Fetched 3,666 kB in 6s (598 kB/s)
Reading package lists…

ubuntu-staging-dev | SUCCESS | rc=0 >>
Hit:1 zesty InRelease
Get:2 zesty-updates InRelease [89.2 kB]
Get:3 zesty-security InRelease [89.2 kB]
Get:4 zesty-backports InRelease [89.2 kB]
Get:5 zesty-updates/main i386 Packages [94.4 kB]
Get:6 zesty-updates/main amd64 Packages [96.2 kB]
Get:7 zesty-updates/main Translation-en [43.0 kB]
Get:8 zesty-updates/main amd64 DEP-11 Metadata [41.8 kB]
Get:9 zesty-updates/main DEP-11 64×64 Icons [14.0 kB]
Get:10 zesty-updates/universe i386 Packages [53.4 kB]
Get:11 zesty-updates/universe amd64 Packages [53.5 kB]
Get:12 zesty-updates/universe Translation-en [31.1 kB]
Get:13 zesty-updates/universe amd64 DEP-11 Metadata [54.1 kB]
Get:14 zesty-updates/universe DEP-11 64×64 Icons [43.5 kB]
Get:15 zesty-updates/multiverse amd64 DEP-11 Metadata [2,464 B]
Get:16 zesty-backports/universe amd64 DEP-11 Metadata [3,980 B]
Get:17 zesty-security/main amd64 Packages [67.0 kB]
Get:18 zesty-security/main i386 Packages [65.5 kB]
Get:19 zesty-security/main Translation-en [29.6 kB]
Get:20 zesty-security/main amd64 DEP-11 Metadata [5,812 B]
Get:21 zesty-security/universe amd64 Packages [28.8 kB]
Get:22 zesty-security/universe i386 Packages [28.7 kB]
Get:23 zesty-security/universe Translation-en [19.9 kB]
Get:24 zesty-security/universe amd64 DEP-11 Metadata [5,040 B]
Fetched 1,049 kB in 6s (168 kB/s)
Reading package lists…

This time we passed ansible the parameter ‘-s’, which tells ansible we want to run the command through sudo, and we also passed ‘-K’, which tells ansible to prompt us for the sudo password. You’ll also notice that it warns us to use the ‘apt’ module, which is a better choice for interacting with apt than shelling out to apt-get.
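Taking that warning to heart, the same cache refresh can be done with the apt module instead. This is just a sketch against my example group, with the same -s/-K flags as before:

```shell
# refresh the package cache via the apt module instead of raw apt-get
ansible ubuntu-staging -s -K -m apt -a "update_cache=yes"
```

The module version also reports changed/unchanged state properly, which matters once you start writing playbooks instead of one-off commands.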

The command module will work with pretty much any command that is non-interactive and doesn’t use pipes or redirection. I often use it for quickly checking things on multiple machines. For example, if I need to install updates and I want to know if anyone is using a particular machine, I can use w, who, users, etc. to see who is logged in before proceeding.
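The logged-in-users check is a one-liner, and when a pipe really is unavoidable, the shell module (rather than command) will accept it. Both lines below are sketches against my example group:

```shell
# who is logged in, via the command module
ansible ubuntu-staging -m command -a "who"
# pipes and redirection need the shell module instead
ansible ubuntu-staging -m shell -a "dpkg -l | grep ssmtp"
```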

If we need to interact with one or a few hosts and not an entire group, we can name the hosts, separated by a comma, in the same fashion: ‘ansible ubuntu-staging-www,ubuntu-staging-db …’

Now let’s look at trying something a bit more complicated: say we need to copy a configuration file, /etc/ssmtp/ssmtp.conf, to all of our hosts. For this we will write an ansible playbook, which I named ssmtp.yml:

---
# copy ssmtp.conf to all ubuntu-staging hosts
- hosts: ubuntu-staging
  user: canutethegreat
  sudo: yes
  tasks:
    - copy: src=/home/canutethegreat/staging/conf/etc/ssmtp/ssmtp.conf dest=/etc/ssmtp/ssmtp.conf

We can invoke the command with ‘ansible-playbook ssmtp.yml’ and it will do as directed. The syntax is fairly straightforward and there are quite a number of examples.

There are lots of examples for a wide range of tasks in the Ansible github repo, and be sure to take a look at the intro to playbooks page. Just remember that you are doing things to multiple servers at once, so if you do something dumb it’ll be carried out on all of the selected servers! Testing on staging servers and doing a dry run with check mode are always good ideas anyway.
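For the playbook above, a dry run against staging first is cheap insurance. The --check flag reports what would change without changing anything (the -K prompt matches the sudo password prompt used earlier):

```shell
# simulate the play; no files are actually copied
ansible-playbook --check -K ssmtp.yml
```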

What LTS Really Means…

In the business world we love to see software with a clearly defined lifecycle. Accordingly, we typically go for Linux distributions that have long term support (LTS), such as Ubuntu. The reason we like these LTS releases is fairly simple: we want to know that our servers are going to have updates, or more specifically security updates, for a few years. What we don’t want is an operating system with few or no updates between releases, leaving us vulnerable. Nor do we want an operating system that has new releases frequently. So LTS releases sound great, right? Not really…

What LTS releases really do is delay things. They put off updates and upgrades by keeping stale software patched against security vulnerabilities. Maybe we don’t care about the newest features in software x, y, or z – that’s pretty normal in production. However, backporting fixes is not always the best choice either. The problem we run into at the end of an LTS lifecycle is that the step to the next release is bigger – much, much bigger! There have been LTS to LTS upgrades that have broken so much that a fresh install is either the only option left or is faster than trying to muddle through the upgrade. If you skip an LTS upgrade because the currently installed release is still supported, you are going to be in a world of hurt when you finally decide to pull the trigger on that dist-upgrade. The opposite end of the spectrum isn’t always ideal for production either: rolling releases will have the latest features, bug fixes, and security patches, but they also have less time in the oven and sometimes come out half-baked.

There is no easy solution here, no quick fixes. The best use of LTS I’ve seen is when the servers it is installed on have a short lifecycle themselves. If the servers are going to be replaced inside of 5 years then LTS might just be a good fit, because you’ll be replacing the whole kit and caboodle before you reach end of life. For the rest of us, I feel like LTS really stands for long term stress – stress that builds up over the lifecycle and then gets dumped on you all at once.

A Central Logging Server with syslog-ng

I have a lot of Linux-based devices in my office and even throughout my home. One day I was having issues with a machine (bad hardware) but couldn’t catch a break and see an error message, and at the time of death nothing of use was being written to the logs on disk. I decided to try setting up remote logging to another machine in hopes that an error would be transmitted before sudden death. It turned out I got lucky and was able to get an error logged on the remote machine that helped me figure out what the issue was. Since then I’ve had all of my devices that use a compatible logger log to a dedicated machine (a Raspberry Pi) running syslog-ng, which is my logger of preference.

Setting up a dedicated logger is easy. Once syslog-ng is installed, we only need to add a few lines to its configuration file to turn it into a logging server:

source net { tcp(); };
destination remote { file("/var/log/remote/${FULLHOST}.log"); };
log { source(net); destination(remote); };

Here I use TCP as the transport (tcp() listens on port 514 by default), but you could also use UDP. The remote logs will be saved to /var/log/remote/<the name of the host>.log.

Be sure to create the directory for the logging:

# mkdir /var/log/remote

Then restart syslog-ng:

# service syslog-ng restart

Next we need to configure a client to log to our new dedicated logging host:

# send everything to the log host
destination remote_log_server {
tcp("" port(514));
};
log { source(src); destination(remote_log_server); };

In the above example the log server’s address inside tcp() has been left blank, so you will want to change that to the IP or hostname of your log server.

Finally, be sure to restart the logging on the client like we did for syslog-ng on the logging server.
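Once both sides are restarted, the standard logger utility makes an easy end-to-end test. Run the first line on a client (the tag ‘log-test’ is just an example), then look for the message on the log server:

```shell
# on the client: send a test message through syslog
logger -t log-test "hello from $(hostname)"
# on the log server: the message should land in that client's file
tail /var/log/remote/*.log
```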

That’s all there is to it: very simple, and quick to set up.