Nested (Nested (Nested SSH) SSH) SSH

There are occasions where I need to reach a server via SSH that is only reachable through multiple bastions. Sometimes this is because of security reasons and other times it is because the machines are on different networks with no direct route. One can of course SSH to the first bastion, then from there to the next, and so forth, but that is annoying to have to type each time. We can do this from the command line as well as in the SSH config.

An example from the command line (for scripting, not typing) using strung together commands:

ssh -t user@host1 ssh -t user@host2 ssh -t user@host3 … ssh user@destination

The '-t' flag tells SSH to allocate a pseudo-terminal on the remote machine. This is required if you intend to run a command, such as SSH itself, that expects to be executed in a terminal instead of as a detached/background process. The final SSH command doesn't need the '-t' flag if you are aiming for a remote shell such as bash.

An example from the command line (again, for scripting) using jumphost flag:

ssh -J user@host1,user@host2,user@host3,… user@destination
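
Note that the -J flag requires OpenSSH 7.3 or newer on the client. On older clients a similar effect can be had with ProxyCommand and -W; a single-hop sketch (extend the inner command for more hops):

ssh -o ProxyCommand="ssh -W %h:%p user@host1" user@destination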

Okay so that’s pretty cool, but what if we want to make it a permanent setting in our SSH config? Well, we can do that too by adding these lines to our ~/.ssh/config:

# host1
Host host1
HostName host1.fqdn
User user

# host2
Host host2
HostName host2.fqdn
User user
ProxyJump host1

# host3
Host host3
HostName host3.fqdn
User user
ProxyJump host2

# destination
Host destination
HostName destination.fqdn
User user
ProxyJump host3

Now we can use ‘ssh destination’ and SSH will handle the rest for us.
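
A nice side effect of the config approach is that anything that reads ~/.ssh/config gets the whole chain for free. For example, copying a (hypothetical) file should work without naming any of the bastions:

scp ./notes.txt destination:/tmp/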

That covers the basics and should give you a glimpse of how chill SSH is with being nested, strung together, and so on.

Using AWS S3 as Primary Storage on Nextcloud

I have been testing/using Nextcloud for the last couple of months in hopes of getting rid of Dropbox, Google Drive, etc. I recently experimented with having external storage connected to it. That's all fine and dandy, but then I wondered: could an external storage be used as the primary storage? A little searching revealed I wasn't the first person to think of that. In fact it is supported by Nextcloud and is documented. To get started, create a bucket with the desired settings and an IAM user that has access to it.

The official Nextcloud documentation gives this example:

'objectstore' => array(
    'class' => 'OC\\Files\\ObjectStore\\S3',
    'arguments' => array(
        'bucket' => 'nextcloud',
        'autocreate' => true,
        'key' => 'EJ39ITYZEUH5BGWDRUFY',
        'secret' => 'M5MrXTRjkyMaxXPe2FRXMTfTfbKEnZCu+7uRTVSj',
        'hostname' => 'example.com',
        'port' => 1234,
        'use_ssl' => true,
        'region' => 'optional',
        // required for some non amazon s3 implementations
        'use_path_style' => true
    ),
),

Based on my experience using AWS S3 as an external storage device, I ended up with this as my config:

'objectstore' => array(
    'class' => 'OC\\Files\\ObjectStore\\S3',
    'arguments' => array(
        'bucket' => '<my_bucket>',
        'key' => '<key>',
        'secret' => '<so_secret>',
        'use_ssl' => true,
        'region' => '<region>',
        // required for some non amazon s3 implementations
        'use_path_style' => true
    ),
),

Specifically, I found it necessary to specify the region (e.g. us-west-2) and enable SSL; otherwise I got errors.
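
If you want to sanity-check the bucket and IAM credentials before pointing Nextcloud at them, the AWS CLI can do that, assuming it is configured with the same key and secret:

aws s3 ls s3://<my_bucket> --region us-west-2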

I have been running this for a few days now and have not seen any issues.

Nextcloud, Docker, and upgrades

I have been running Nextcloud via a Docker image for a few months and recently a new version of Nextcloud was released. This seemed like the perfect opportunity to test out upgrading to a newer Nextcloud Docker image while keeping my data. Since I mount a volume to keep the configuration data in, the upgrade will be fairly easy.

First step is to make sure we have backups and verify their integrity. The Nextcloud backups page details these instructions pretty well, but just to cover the basics you need to backup your data and database at a minimum. I also went ahead and grabbed a copy of the config.php by itself and stored it outside of the container. Tip: I didn’t initially know which volume store was the right one, so I entered the container by loading bash and created a temporary file named ‘findmehere’ that I could search for from the host.

Next we will stop the existing container by issuing 'docker stop <id>', where <id> is the container ID listed in the output of 'docker ps'. Then we will start a new container using the same command we did the first time. For me this looked like 'docker run -d -v nextcloud:/var/www/html -p 8181:80 nextcloud' but YMMV.
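
One caveat: as far as I know, 'docker run' will happily reuse the image already on disk, so pull the new image first. The whole sequence looks something like this (the container ID will differ on your setup):

docker ps
docker stop <id>
docker pull nextcloud
docker run -d -v nextcloud:/var/www/html -p 8181:80 nextcloud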

The occ script should detect the new version of Nextcloud and start the upgrade. Check the status by visiting your Nextcloud web page. Since we used a volume to keep our data in we should be all set!

Nextcloud and Docker with Apache Proxy

I decided I wanted to try migrating away from Dropbox, Amazon Drive, and Google Drive to my own server using open source tools. After a bit of research I determined Nextcloud would be the best fit for what I wanted to do right now and some optional features later on. Nextcloud can be installed via packages in the major distributions, but I wanted to use this opportunity to test drive Docker at the same time. One of the reasons I wanted to use a container was so that the installation is mostly isolated from the host install which is good for security purposes but also makes it easier if I want to migrate the whole thing to a different host later on. Now the host I want to use already has a web server, Apache, listening on ports 80 and 443, so we’ll configure it to act as a proxy between the web server in the Nextcloud Docker image and the client. This will also fit well with the SSL certificate the host has.

The first step is getting Docker installed and running. This is pretty easy for most distributions and is covered in detail on the official Docker Community Edition site for Ubuntu and CentOS, and the Gentoo Wiki has instructions as well.

Once you've got Docker up and running, let's test it out first:

docker run hello-world

This should download the hello-world image and run it.

Now, let's get to the Nextcloud container. First let me mention that I already have a database (PostgreSQL) that I used, so I'll skip that step here, but if you don't already have a database available now would be the point to pause and go get that resolved. Since data inside a container does not survive the container being removed, we will need to instruct Docker to create a mount point for the Nextcloud image that will keep our data safe. This can be accomplished with the '-v' flag. We also need to tell Docker what port we want to open up, but in my case I don't want it using port 80 or 443 so we'll have to further instruct Docker to forward the port in our '-p' flag.

docker run -d -v nextcloud:/var/www/html -p 8181:80 nextcloud

In the above command I have selected port 8181 on the host to be forwarded to port 80 in the Nextcloud container. Once the container loads completely you should be able to access it via http://your_ip:8181 and see the setup page, but before we do that let's set up our proxy.
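
Before moving on, a quick check from the host that the container is answering can save some debugging later (assuming curl is installed; you should get an HTTP status line back):

curl -I http://localhost:8181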

On the host side we will create an Apache vhost with a few lines like this:

<VirtualHost *:80>
ServerName nextcloud.my_domain.com
Redirect permanent / https://nextcloud.my_domain.com/
</VirtualHost>

<VirtualHost *:443>
ServerName nextcloud.my_domain.com

<proxy *>
Require host localhost
Require all granted
</proxy>
ProxyPass / http://localhost:8181/
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
<IfModule mod_headers.c>
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
</IfModule>

SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile /etc/my_certificate.pem
SSLCertificateKeyFile /etc/my_certificate-private.pem
SSLCertificateChainFile /etc/my_certificate-full.pem
</VirtualHost>

What this configuration does is force all non-SSL connections to use SSL. Under the SSL configuration it proxies all connections to port 8181 on localhost, where Nextcloud is running. Finally, we use our SSL certificate from the host. Don't forget to set up DNS for your Nextcloud domain!
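
Note that this vhost assumes mod_proxy, mod_proxy_http, mod_headers, and mod_ssl are available. On Debian/Ubuntu hosts that would be something like the lines below; on Gentoo the equivalent is adding the corresponding -D defines to APACHE2_OPTS in /etc/conf.d/apache2:

a2enmod proxy proxy_http headers ssl
systemctl restart apache2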

At this point we should be ready to continue with the setup via web. Load up https://nextcloud.my_domain.com in your web browser and follow the on-screen instructions. One of the pages should detect the proxy setup and ask for additional domain(s) to be configured, so be sure to add https://nextcloud.my_domain.com to the list in addition to http://host.my_domain.com:8181 (if desired).

One final step that I suggest is setting up a cron job on the host (not inside the Nextcloud Docker image). I have mine set to run every 15 minutes. In order for this to work we need to install sudo in the container by entering the container:

docker exec -it 16765e565e25 bash

Update apt sources:

apt-get update

Install sudo:

apt-get install sudo

Now finally on the host (NOT container) create a crontab with this line:

*/15 * * * * docker exec -d <docker id> /usr/bin/sudo -u www-data /usr/local/bin/php /var/www/html/cron.php

Be sure to replace <docker id> with the real one, which you can find by running 'docker ps' on the host.

At this point you should have a fully functional Nextcloud server!

Client PPtP Connection From A VM

I encountered an issue recently with trying to make a PPtP connection from a Linux VM as the client to a remote commercial device or server where the GRE packets were being dropped. The same PPtP credentials worked on another server that is bare metal. This led me to speculate that the issue might be something between the routing devices and the client. After a bit of investigative work with Wireshark I discovered the GRE packets were in fact getting to the virtualization host but not to the guest VM. I suspect this issue may be present with other types of virtualization software, but to be clear this particular VM host is running KVM/QEMU.

It has been a while (read: years) since I've done much with PPtP beyond just using it. Adding a configuration that was working on another server to this particular system, I discovered, much to my dismay, that the connection would not complete. Looking at what ppp logged to the system log revealed it never got a proper GRE reply. Well, there were a lot of things in the log but the one that stood out looked like this:

warn[decaps_hdlc:pptp_gre.c:204]: short read (-1): Input/output error

After a bit of Googling and reading the documentation for pptp-client I decided to re-try the setup on the previously mentioned working system and watch the log closely for further clues. Where the second system was failing, the original system sailed right past and worked fine. My next attempt was to look at what connections the first system had open, which led me back to what the documentation/Googling had revealed: PPtP uses protocol 47 (GRE) for the data and TCP port 1723 for the control channel. Watching another attempt on the second system showed the outgoing request for GRE but nothing coming back. Repeating the last test while watching for incoming GRE on the host showed that it was being received but not passed on to the guest VM. Looking at my options I discovered that there is a whole set of modules and a kernel configuration option to allow forwarding of PPtP.

The missing pieces to the puzzle include adding a line to your sysctl.conf:

net.netfilter.nf_conntrack_helper=1

Then loading these kernel modules:

nf_conntrack_proto_gre
nf_nat_proto_gre
nf_conntrack_pptp
nf_nat_pptp
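
To apply these immediately, you can load the modules by hand and re-read sysctl.conf; to make them survive a reboot, most distributions will also pick up a list under /etc/modules-load.d (a hedged sketch, paths may vary by distribution):

modprobe -a nf_conntrack_proto_gre nf_nat_proto_gre nf_conntrack_pptp nf_nat_pptp
sysctl -p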

As soon as these were in place PPtP started working as expected in the guest VM. What started out as a mystery turned out to be a fairly simple solution. While there are probably not a lot of people still using PPtP these days, it is a better alternative to using a proprietary VPN client.

Placing A Buffer Between Your Cell and The World

This might be a familiar problem for some people: I've had the same personal cell phone number for 15+ years. During this time I have used my number for personal, business, personal business, and the list goes on. Over the years the number of telemarketers has increased to the point where it is sometimes multiple calls per day. This has been annoying but I can usually deal with it by tapping decline on numbers I don't know. However, about a year ago I started getting text/SMS spam and that is far more irritating to me. When this SMS spam reached multiple messages per day I decided it might be time to get a new phone number, but I didn't want the same problem to reappear. My solution is to make my own answering service and give out that number and never my cell. This covers the phone calls, but what about texts? I wouldn't want to miss a legitimate message. Well, both aspects can be accomplished by using Twilio.

For those that do not know, Twilio is a programmable phone service with voice, SMS, fax, and other features. Twilio has an API for several languages including Python, PHP, Node.js, C#, Java, and Ruby. I already have a web server, so for me it seemed easiest (quickest to set up) to use that to house some PHP and use Twilio to handle my automated voice and SMS messaging service.

So what does the end result look like? People (or automated telemarketers) can call my Twilio phone number and are greeted with a message of my choosing. Since I don’t want the automated calls leaving me messages I have created a phone (tree) menu that requires the caller to enter a specific number (extension) to leave me a message. Then for SMS, I have a PHP script set up that takes the message and sends a copy to my email then autoresponds and tells the sender that I’ll get back to them as soon as possible.

Let's start with the voice part as that is the more involved piece in this setup. In the Twilio web console, under the section titled "Voice & Fax", I have it set to "Webhook" with a URL pointing at my web server, something like https://mydomain.com/twilio/main.php. The contents of main.php are fairly simple:

<?php

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
$from = $_REQUEST['From'];
// email me every number that calls
mail('myemailaddress@gmail.com', 'Call System: call from '.$from, $from."\n", 'From: myemailaddress@gmail.com');
?>
<Response>
<Say voice="woman" language="en">Hello. You may select from the following options.</Say>
<Gather numDigits="1" action="main-handler.php" method="POST">
<Say voice="woman" language="en" loop="1">
To leave a message for Ron select one.
</Say>
<Pause length="15"/>
</Gather>
</Response>

If the caller selects one they will be sent to main-handler.php; if they select anything else, the message replays. In main-handler.php I have:

<?php

// if the caller pressed anything but these digits, send them back
if($_REQUEST['Digits'] != '1') {
header("Location: main.php");
die;
}

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>

<Response>
<?php if ($_REQUEST['Digits'] == '1') { ?>
<Say voice="woman" language="en">Please leave a message for Ron. You may hang up when finished.</Say>
<Record maxLength="90" transcribe="true" action="ron-recording.php" transcribeCallback="ron-recording-transcribe.php" />
<?php } ?>
</Response>

Once the caller finishes recording a message, the flow gets sent to ron-recording.php:

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>
<Response>
<Say voice="woman" language="en">Thank you for leaving a message for Ron.</Say>
</Response>

If the caller leaves a message, transcription is handled by ron-recording-transcribe.php:

<?php
$from = $_REQUEST['From'];

// email me
mail('myemailaddress@gmail.com', 'Call System: message for Ron from '.$from, $from."\n".$_REQUEST['TranscriptionText']."\n", 'From: myemailaddress@gmail.com');

?>

That covers the voice aspect of my Twilio setup; the last piece is handling SMS. In the Twilio web console under "Messaging" I have it set to Webhook with a URL that looks something like https://mydomain.com/twilio/incomingsms.php. This handles all SMS text messages that are sent to my Twilio number:

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
// email me
$from = $_REQUEST['From'];
mail('myemailaddress@gmail.com', 'Call System: SMS for Ron from '.$from, $from."\n".$_REQUEST['Body']."\n", 'From: myemailaddress@gmail.com');
?>
<Response>
<Message>I am busy right now but will try to reply to your message as soon as possible.</Message>
</Response>

When a text is sent to my Twilio number the contents of the text get sent to my email immediately and a message reading "I am busy right now but will try to reply to your message as soon as possible." is sent to the sender.
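
If I ever want to reply from the Twilio number itself rather than my cell, one option (separate from the webhook flow above) is Twilio's REST API; a hedged curl sketch where the SID, auth token, and both numbers are placeholders:

curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Messages.json" \
--data-urlencode "From=+15005550006" \
--data-urlencode "To=+15005550010" \
--data-urlencode "Body=Following up on your message." \
-u "$TWILIO_SID:$TWILIO_AUTH_TOKEN"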

Well, that covers my simple Twilio setup for handling voice messages and SMS texts. Hopefully it proves useful in the years to come in reducing the number of telemarketer calls and spam texts reaching my cell phone.

Creating Your Own Encrypted File “Safe”

I often think about, no scratch that – I often worry about what would happen if my laptop was stolen or fell into “evil” hands. I mean there isn’t a lot on any of my machines that could be misused as most things are locked down. My Internet-based accounts such as my Google account require two factor authentication, important files are backed up, etc. However, there are special files, and here I’m specifically thinking about SSH private keys, that should never be out of my control. My solution is fairly simple: create an encrypted file that can be mounted as a loopback device.

The first step is deciding how much space we are going to need, as we cannot directly resize our encrypted file once it is created. If we later need more storage (or less) our only option is to create a new one and copy the contents of the old (mounted) safe to the new one. I use mine to store my entire ~/.ssh, ~/.gpg, and a few other files, so my needs are fairly small. All of my files together account for less than 100MB, but knowing that I might want to expand later I decided on 1GB.

If we are using ext2/3/4, xfs, or probably a few other filesystems, we can use fallocate to reserve our disk space. I say probably because I know of at least one it doesn't work on: ZFS.

fallocate -l 1G safe.img
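
If your image lives on a filesystem where fallocate doesn't work (ZFS being the one I know of), dd can do the same job, just more slowly:

dd if=/dev/zero of=safe.img bs=1M count=1024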

The next step is to create an encrypted device on our new blank image:

cryptsetup luksFormat safe.img

During this step you will be prompted for a password, and this is really the only weak spot (bugs notwithstanding) in the entire setup. Make sure your password is long enough to make brute forcing it unreasonably slow and make sure it cannot be aided by any of the known dictionaries floating around. I made mine 31 characters long because that is long enough to make brute force unprofitable.

Once the encrypted data is written, we can proceed to opening the device. The final argument is the name the opened device will appear under in /dev/mapper:

cryptsetup open safe.img safe

You will be prompted to enter your password each time you open it so make sure you are using a trusted keyboard (i.e. not wireless).

The next step is to create a filesystem on our new safe:

mkfs.ext4 /dev/mapper/safe

Now, finally, we can mount it and start using it!

mount /dev/mapper/safe /mnt/safe

At this point you should be able to add files into your safe as if it were any other mounted device.

Once you are done using your safe, don’t forget to unmount it and close it so that no-one can access it:

umount /mnt/safe

cryptsetup close safe

So now we know how to create, open, and close the device, but what sorts of things are good for storing in there? Well as previously mentioned I store my entire ~/.ssh/ directory in my safe. I moved the directory into /mnt/safe/ and then created a symlink from there to ~/.ssh which allows me to use everything I normally would (ssh, mosh, scp, etc.) without having to reconfigure anything.
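
Concretely, the move-and-symlink step looks something like this (with the safe mounted):

mv ~/.ssh /mnt/safe/ssh
ln -s /mnt/safe/ssh ~/.ssh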

What to do next is up to you, but I do not trust the quality of USB thumb drives out there these days, so I opted to stick my safe on my local hard drive and include it in my backup scheme.

Turning /etc Into A Git Repo With etckeeper

Whether it be for production or development purposes, it is often desirable to turn /etc into a version-controlled repository on our servers. There is a great tool named etckeeper that automates pushing changes to a repo for us. That is, once we have it set up and do an initial push. etckeeper supports several version control systems, but we only care about git.

Install using your package manager of choice; for Gentoo users, make sure you have the 'cron' USE flag enabled.

If we are going to be pushing to a remote repo (recommended) we need to edit /etc/etckeeper/etckeeper.conf and modify the PUSH_REMOTE line to look like:

PUSH_REMOTE="origin"

Now we need to instruct etckeeper to create an initial (empty) repository using /etc:

# etckeeper init -d /etc
Initialized empty Git repository in /etc/.git/

Next we will want to tell git/etckeeper where our remote repo is, but first we need to make sure we are in /etc:

# cd /etc

# git remote add origin https://USERNAME:PASSWORD@GITREPOHOST/DIR/repo.git

If that is successful there will be no output.
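
If you would rather see positive confirmation than no output, you can list the configured remotes:

# git remote -v
origin  https://USERNAME:PASSWORD@GITREPOHOST/DIR/repo.git (fetch)
origin  https://USERNAME:PASSWORD@GITREPOHOST/DIR/repo.git (push)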

Now we want to do an initial commit:

# etckeeper commit "Initial commit."
[master (root-commit) d918775] Initial commit.

<snipped>

Finally we need to push our changes:

# git push -u origin master
Branch master set up to track remote branch master from origin.
Everything up-to-date

We can check the status at any time in the normal way:

# git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working tree clean

Depending on your distribution there should be an automatic cron.daily job installed. On Gentoo, we can take it a step further and force changes to be committed during an emerge by editing (or creating) /etc/portage/bashrc:

case “${EBUILD_PHASE}” in
setup|prerm) etckeeper pre-install ;;
postinst|postrm) etckeeper post-install ;;
esac

That's all there is to getting a basic setup going; you should start seeing commits to the repo whenever there are changes in /etc.

Working Around A Touchy Touchpad

One of my computers (a laptop) has a touchpad that is a bit too eager to click, and I sometimes find myself initiating accidental clicks with my palms. To make matters worse this machine does not have a hardware button to turn the touchpad on or off, nor does it have a function key to enable or disable it. On top of that, I often sit at a desk, table, or desk-like surface to work and make use of a Bluetooth mouse. When I am using an external mouse I have no need or want for the touchpad to be working. This touchpad is a Synaptics branded one, which is well supported. My solution? Write a simple bash script to enable or disable the touchpad and make a keyboard shortcut to execute it.

First off we need to make a bash script to do the magic by creating a file in /usr/local/bin/touchy:

#!/bin/bash

RUNFILE=/tmp/touchy.run

if [ -e $RUNFILE ]; then
    # currently disabled, enabling
    rm -f $RUNFILE
    synclient TouchpadOff=0
else
    # currently enabled, disabling
    touch $RUNFILE
    synclient TouchpadOff=1
fi

Next make the script executable by running “chmod +x /usr/local/bin/touchy”.

Now you can configure your window manager of choice to make a hotkey, also of your key combo choice, to run the script.
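
If your window manager of choice doesn't offer a shortcut editor, xbindkeys is one generic way to bind the script; a sketch of an ~/.xbindkeysrc entry, assuming your keyboard actually emits the XF86TouchpadToggle keysym (any other key combo works just as well):

"/usr/local/bin/touchy"
    XF86TouchpadToggle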

Managing Multiple Machines Simultaneously With Ansible

If I have to do it more than once, it’s probably going to get scripted. That has been my general attitude towards mundane system administration tasks for many years, and is also shared by many others. How about taking that idea a little further and applying it to multiple machines? Well there’s a tool for that too, and it’s named ansible.

We need ansible installed on the system we will be using as the client/bastion. This machine needs to be able to SSH into all of the remote systems we want to manage without issue, so stop and make sure that works unhindered before continuing. On the remote machines, the requirements are fairly low and typically revolve around python2. In Gentoo python2 is already installed as it is required by several things including emerge itself. On Ubuntu 16.04 LTS, python2 is not installed by default and you will need to install the package 'python-minimal' to regain it.

Once we have python installed on the remote machines and ansible installed on the local machine, we can move on to editing the ansible configuration with a list of our hosts. This file is fairly simple and there are lots of examples available, but here is a snippet of my /etc/ansible/hosts file:

[ubuntu-staging]
ubuntu-staging-dev
ubuntu-staging-www
ubuntu-staging-db

Here you can see I have three hosts listed under a group named ubuntu-staging.

Once we have hosts defined we can do a simple command line test:

ansible ubuntu-staging -m command -a "w"

The '-m' flag tells ansible we wish to use a module named 'command', and '-a' passes the module its arguments, which here is simply 'w'. The output from this command should be similar to this:

$ ansible ubuntu-staging -m command -a "w"
ubuntu-staging-www | SUCCESS | rc=0 >>
10:25:57 up 8 days, 12:29, 1 user, load average: 0.22, 0.31, 0.35
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/2 192.168.13.221 10:25 1.00s 0.25s 0.01s w

ubuntu-staging-dev | SUCCESS | rc=0 >>
10:25:59 up 8 days, 12:17, 1 user, load average: 0.16, 0.03, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:25 0.00s 0.37s 0.00s w

ubuntu-staging-db | SUCCESS | rc=0 >>
10:26:02 up 8 days, 12:25, 1 user, load average: 0.17, 0.09, 0.09
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:26 0.00s 0.28s 0.00s w

Okay, that shows promise, right? Let's try something a little more complicated:

$ ansible ubuntu-staging -s -K -m command -a "apt-get update"
SUDO password:
[WARNING]: Consider using apt module rather than running apt-get

ubuntu-staging-db | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Fetched 306 kB in 5s (59.3 kB/s)
Reading package lists…

ubuntu-staging-www | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:4 https://apt.dockerproject.org/repo ubuntu-xenial InRelease
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [544 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [528 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [220 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [471 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe i386 Packages [456 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [185 kB]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [276 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [263 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [118 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [124 kB]
Get:16 http://security.ubuntu.com/ubuntu xenial-security/universe i386 Packages [111 kB]
Get:17 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [64.2 kB]
Fetched 3,666 kB in 6s (598 kB/s)
Reading package lists…

ubuntu-staging-dev | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu zesty InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu zesty-updates InRelease [89.2 kB]
Get:3 http://security.ubuntu.com/ubuntu zesty-security InRelease [89.2 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu zesty-backports InRelease [89.2 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu zesty-updates/main i386 Packages [94.4 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 Packages [96.2 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu zesty-updates/main Translation-en [43.0 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 DEP-11 Metadata [41.8 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu zesty-updates/main DEP-11 64x64 Icons [14.0 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe i386 Packages [53.4 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 Packages [53.5 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe Translation-en [31.1 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 DEP-11 Metadata [54.1 kB]
Get:14 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe DEP-11 64x64 Icons [43.5 kB]
Get:15 http://us.archive.ubuntu.com/ubuntu zesty-updates/multiverse amd64 DEP-11 Metadata [2,464 B]
Get:16 http://us.archive.ubuntu.com/ubuntu zesty-backports/universe amd64 DEP-11 Metadata [3,980 B]
Get:17 http://security.ubuntu.com/ubuntu zesty-security/main amd64 Packages [67.0 kB]
Get:18 http://security.ubuntu.com/ubuntu zesty-security/main i386 Packages [65.5 kB]
Get:19 http://security.ubuntu.com/ubuntu zesty-security/main Translation-en [29.6 kB]
Get:20 http://security.ubuntu.com/ubuntu zesty-security/main amd64 DEP-11 Metadata [5,812 B]
Get:21 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 Packages [28.8 kB]
Get:22 http://security.ubuntu.com/ubuntu zesty-security/universe i386 Packages [28.7 kB]
Get:23 http://security.ubuntu.com/ubuntu zesty-security/universe Translation-en [19.9 kB]
Get:24 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 DEP-11 Metadata [5,040 B]
Fetched 1,049 kB in 6s (168 kB/s)
Reading package lists…

This time we passed ansible the parameter '-s', which tells ansible we want to use sudo, and we also passed '-K', which tells ansible to prompt us for the sudo password. You'll also notice that it warns us to use the 'apt' module, which is a better choice for interacting with apt-get.

The command module will work with pretty much any command that is non-interactive and doesn’t use pipes or redirection. I often use it for checking things on multiple machines quickly. For example, if I need to install updates and I want to know if anyone is using a particular machine, I can use w, who, users, etc. to see who is logged in before proceeding.

If we need to interact with only a few hosts and not an entire group, we can name the hosts, separated by commas, in the same fashion: 'ansible ubuntu-staging-www,ubuntu-staging-db …'

Now let's look at something a bit more complicated: say we need to copy a configuration file, /etc/ssmtp/ssmtp.conf, to all of our hosts. For this we will write an ansible playbook that I named ssmtp.yml:


---
# copy ssmtp.conf to all ubuntu-staging hosts
- hosts: ubuntu-staging
  user: canutethegreat
  sudo: yes

  tasks:
    - copy: src=/home/canutethegreat/staging/conf/etc/ssmtp/ssmtp.conf
            dest=/etc/ssmtp/ssmtp.conf
            owner=root
            group=ssmtp
            mode=0640

We can run the playbook with 'ansible-playbook ssmtp.yml' and it will do as directed. The syntax is fairly straightforward and there are quite a number of examples.

There are lots of examples for a wide range of tasks in the Ansible GitHub repo, and be sure to take a look at the intro to playbooks page. Just remember that you are doing things to multiple servers at once, so if you do something dumb it'll be carried out on all of the selected servers! Testing on staging servers and using pretend/simulate (see below) are always good ideas anyway.
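
For the pretend/simulate part, ansible-playbook supports a check mode; adding --diff shows what would change on each host without actually changing it:

ansible-playbook ssmtp.yml --check --diff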