Managing Multiple Machines Simultaneously With Ansible

If I have to do it more than once, it's probably going to get scripted. That has been my general attitude toward mundane system administration tasks for many years, and it's an attitude shared by many others. How about taking that idea a little further and applying it to multiple machines? Well, there's a tool for that too, and it's named ansible.

We need ansible installed on the system we will be using as the client/bastion. This machine needs to be able to SSH into all of the remote systems we want to manage without issue, so stop and make sure that works unhindered before continuing. On the remote machines the requirements are fairly low and typically revolve around python2. On Gentoo, python2 is already installed, as it is required by several things including emerge itself. On Ubuntu 16.04 LTS, python2 is not installed by default and you will need to install the package 'python-minimal' to get it back.

Once we have python installed on the remote machines and ansible installed on the local machine, we can move on to editing the ansible configuration with a list of our hosts. This file is fairly simple and there are lots of examples available, but here is a snippet of my /etc/ansible/hosts file:

[ubuntu-staging]
ubuntu-staging-dev
ubuntu-staging-www
ubuntu-staging-db

Here you can see I have three hosts listed under a group named ubuntu-staging.

Once we have hosts defined we can do a simple command line test:

ansible ubuntu-staging -m command -a "w"

The '-m' tells ansible we wish to use a module, here named 'command', and '-a' passes that module its arguments, in this case the single command 'w'. The output from this command should be similar to this:

$ ansible ubuntu-staging -m command -a "w"
ubuntu-staging-www | SUCCESS | rc=0 >>
10:25:57 up 8 days, 12:29, 1 user, load average: 0.22, 0.31, 0.35
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/2 192.168.13.221 10:25 1.00s 0.25s 0.01s w

ubuntu-staging-dev | SUCCESS | rc=0 >>
10:25:59 up 8 days, 12:17, 1 user, load average: 0.16, 0.03, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:25 0.00s 0.37s 0.00s w

ubuntu-staging-db | SUCCESS | rc=0 >>
10:26:02 up 8 days, 12:25, 1 user, load average: 0.17, 0.09, 0.09
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:26 0.00s 0.28s 0.00s w

Okay, that shows promise, right? Let's try something a little more complicated:

$ ansible ubuntu-staging -s -K -m command -a "apt-get update"
SUDO password:
[WARNING]: Consider using apt module rather than running apt-get

ubuntu-staging-db | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Fetched 306 kB in 5s (59.3 kB/s)
Reading package lists…

ubuntu-staging-www | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:4 https://apt.dockerproject.org/repo ubuntu-xenial InRelease
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [544 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [528 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [220 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [471 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe i386 Packages [456 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [185 kB]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [276 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [263 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [118 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [124 kB]
Get:16 http://security.ubuntu.com/ubuntu xenial-security/universe i386 Packages [111 kB]
Get:17 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [64.2 kB]
Fetched 3,666 kB in 6s (598 kB/s)
Reading package lists…

ubuntu-staging-dev | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu zesty InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu zesty-updates InRelease [89.2 kB]
Get:3 http://security.ubuntu.com/ubuntu zesty-security InRelease [89.2 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu zesty-backports InRelease [89.2 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu zesty-updates/main i386 Packages [94.4 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 Packages [96.2 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu zesty-updates/main Translation-en [43.0 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 DEP-11 Metadata [41.8 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu zesty-updates/main DEP-11 64x64 Icons [14.0 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe i386 Packages [53.4 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 Packages [53.5 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe Translation-en [31.1 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 DEP-11 Metadata [54.1 kB]
Get:14 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe DEP-11 64x64 Icons [43.5 kB]
Get:15 http://us.archive.ubuntu.com/ubuntu zesty-updates/multiverse amd64 DEP-11 Metadata [2,464 B]
Get:16 http://us.archive.ubuntu.com/ubuntu zesty-backports/universe amd64 DEP-11 Metadata [3,980 B]
Get:17 http://security.ubuntu.com/ubuntu zesty-security/main amd64 Packages [67.0 kB]
Get:18 http://security.ubuntu.com/ubuntu zesty-security/main i386 Packages [65.5 kB]
Get:19 http://security.ubuntu.com/ubuntu zesty-security/main Translation-en [29.6 kB]
Get:20 http://security.ubuntu.com/ubuntu zesty-security/main amd64 DEP-11 Metadata [5,812 B]
Get:21 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 Packages [28.8 kB]
Get:22 http://security.ubuntu.com/ubuntu zesty-security/universe i386 Packages [28.7 kB]
Get:23 http://security.ubuntu.com/ubuntu zesty-security/universe Translation-en [19.9 kB]
Get:24 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 DEP-11 Metadata [5,040 B]
Fetched 1,049 kB in 6s (168 kB/s)
Reading package lists…

This time we passed ansible the parameter '-s', which tells ansible we want to use sudo, and we also passed '-K', which tells ansible to prompt us for the sudo password. You'll also notice that it warns us to use the 'apt' module, which is a better choice for interacting with apt than running apt-get through the command module.
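
Heeding that warning, the same cache refresh could be done with the apt module itself; here is a minimal equivalent, using the same sudo flags as above:

ansible ubuntu-staging -s -K -m apt -a "update_cache=yes"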

The command module will work with pretty much any command that is non-interactive and doesn’t use pipes or redirection. I often use it for checking things on multiple machines quickly. For example, if I need to install updates and I want to know if anyone is using a particular machine, I can use w, who, users, etc. to see who is logged in before proceeding.

If we need to interact with one or a few hosts rather than an entire group, we can name the hosts, separated by commas, in the same fashion: 'ansible ubuntu-staging-www,ubuntu-staging-db …'
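
For example, to check uptime on just those two hosts ('uptime' here is just a stand-in for whatever you need to run):

ansible ubuntu-staging-www,ubuntu-staging-db -m command -a "uptime"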

Now let's look at something a bit more complicated. Say we need to copy a configuration file, /etc/ssmtp/ssmtp.conf, to all of our hosts. For this we will write an ansible playbook, which I named ssmtp.yml:

---
# copy ssmtp.conf to all ubuntu-staging hosts
- hosts: ubuntu-staging
  user: canutethegreat
  sudo: yes

  tasks:
    - copy: src=/home/canutethegreat/staging/conf/etc/ssmtp/ssmtp.conf
            dest=/etc/ssmtp/ssmtp.conf
            owner=root
            group=ssmtp
            mode=0640

We can run it with 'ansible-playbook ssmtp.yml' and it will do as directed. The syntax is fairly straightforward and there are plenty of examples available.

There are examples for a wide range of tasks in the Ansible GitHub repo, and be sure to take a look at the intro-to-playbooks page. Just remember that you are doing things to multiple servers at once, so if you do something dumb it will be carried out on all of the selected servers! Testing on staging servers and doing a dry run with check mode first are always good ideas anyway.
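
A dry run of the playbook from earlier would look like this:

ansible-playbook ssmtp.yml --check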

What LTS Really Means…

In the business world we love software that has a clearly defined lifecycle. For this reason, we typically go for Linux distributions that have long term support (LTS), such as Ubuntu and others. The reason we like these LTS releases is fairly simple: we want to know that our servers are going to receive updates, or more specifically security updates, for a few years. What we don't want is an operating system that has few or no updates between releases, leaving us vulnerable. We also don't want an operating system with frequent new releases. So LTS releases sound great, right? Not really…

What LTS releases really do is delay things. They put off updates and upgrades by keeping stale software patched against security vulnerabilities. Maybe we don't care about the newest features in software x, y, or z; that's pretty normal in production. However, backporting fixes is not always the best choice either. The problem we run into at the end of an LTS lifecycle is that the step to the next release is bigger. Much, much bigger! There have been LTS-to-LTS upgrades that broke so much that a fresh install was either the only option left or simply faster than muddling through the upgrade. If you skip an LTS upgrade because the currently installed release is still supported, you are going to be in a world of hurt when you finally pull the trigger on that dist-upgrade. The opposite end of the spectrum isn't always ideal for production either: rolling releases have the latest features, bug fixes, and security patches, but they also have less time in the oven and sometimes come out half-baked.

There is no easy solution here, no quick fixes. The best use of LTS I've seen is when the servers it is installed on have a short lifecycle themselves. If the servers are going to be replaced inside of 5 years, then LTS might just be a good fit, because you'll be replacing the whole kit and caboodle before you reach end of life. For the rest of us, I feel like LTS really stands for long term stress: stress that builds up over the lifecycle and then gets dumped on you all at once.

A Central Logging Server with syslog-ng

I have a lot of Linux-based devices in my office and even throughout my home. One day I had a machine with failing hardware, but I couldn't catch a break: at the time of death nothing useful was being written to the logs on disk. I decided to set up remote logging to another machine in hopes that an error would be transmitted before sudden death. It turned out I got lucky and captured an error on the remote machine that helped me figure out what the issue was. Since then I've had all of my devices that use a compatible logger log to a dedicated machine (a Raspberry Pi) running syslog-ng, which is my logger of preference.

Setting up a dedicated logger is easy. Once syslog-ng is installed, we only need to add a few lines to its configuration file to turn it into a logging server:

source net { tcp(); };
destination remote { file("/var/log/remote/${FULLHOST}.log"); };
log { source(net); destination(remote); };

Here I use TCP as the transport, but you could also use UDP. The remote logs will be saved to /var/log/remote/<the name of the host>.log.

Be sure to create the directory for the logging:

# mkdir /var/log/remote

Then restart syslog-ng:

service syslog-ng restart

Next we need to configure a client to log to our new dedicated logging host:

# send everything to log host
destination remote_log_server {
tcp("10.7.3.1" port(514));
};
log { source(src); destination(remote_log_server); };

In the above example the remote logging server has an IP of 10.7.3.1, so you will want to change that to the IP or hostname of your log server.
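
Note that the log statement above references a source named 'src'. Most stock syslog-ng.conf files already define one; if yours does not, a minimal definition along these lines should work:

source src { system(); internal(); };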

Finally, be sure to restart the logging on the client like we did for syslog-ng on the logging server.

That's all there is to it: very simple and quick to set up.

Private Gentoo Mirror For A Large LAN

I run Gentoo on some 30 or so devices including PCs, Raspberry Pis, virtual machines, rack-mounted servers, and so forth. These devices are mostly housed in my home office, with a few random ones scattered throughout the house. To me it seems like a waste of bandwidth to have each of them download packages from the Internet directly. This is especially apparent when doing updates and watching the exact same things get downloaded to multiple devices. There is also the issue that most mirrors, and this applies to all distributions not just Gentoo, limit how many connections per day they will accept from the same IP. My solution is to run my own Gentoo mirror on one of the machines, with my own local (LAN) copy of the portage tree and the distfiles.

Originally I ran my Gentoo mirror on one of my servers, but recently I moved it to its own dedicated VM to make management a little easier. That lets me move it between machines if needed, as well as take advantage of the ZFS RAID array on one of my servers. The disk space currently required is 367GB for all of the files; I allocated 500GB for my setup to allow room for growth. Anyway, I'll assume you have a base Gentoo system up and running, ready to be turned into a mirror.

The first step is to install the 'gentoo-rsync-mirror' package. This installs a script at /opt/gentoo-rsync/rsync-gentoo-portage.sh, which we will copy to /usr/local/bin/rsync-gentoo.sh (the name the cron job below expects) and modify to look like this:

#!/bin/bash

LOG=/var/log/rsync-gentoo.log
LOCKFILE=/tmp/gentoo-mirror-sync

source /etc/rsync/gentoo-mirror.conf

if [ -e $LOCKFILE ]; then
    echo "sync still running, or stale lock file!"
    logger -t rsync "sync still running, or stale lock file!"
    exit 1
else
    touch $LOCKFILE
fi

echo "Started Gentoo Portage sync at" `date` >> $LOG 2>&1
logger -t rsync "re-rsyncing the gentoo-portage tree"
${RSYNC} ${OPTS} ${PORT_SRC} ${PORT_DST} >> $LOG 2>&1
logger -t rsync "deleting spurious ChangeLog files"
find ${PORT_DST} -iname ".ChangeLog*" | xargs rm -rf
echo "End of Gentoo Portage sync: "`date` >> $LOG 2>&1

echo "Started Gentoo main sync at" `date` >> $LOG 2>&1
logger -t rsync "re-rsyncing the gentoo main tree"
${RSYNC} ${OPTS} ${GEN_SRC} ${GEN_DST} >> $LOG 2>&1
logger -t rsync "deleting spurious ChangeLog files"
find ${GEN_DST} -iname ".ChangeLog*" | xargs rm -rf
echo "End of Gentoo main sync: "`date` >> $LOG 2>&1

rm -f $LOCKFILE

Now edit /etc/rsync/rsyncd.conf to look like:

# Copyright 1999-2004 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Id$

uid = nobody
gid = nobody
use chroot = yes
max connections = 20
pid file = /var/run/rsyncd.pid
log file = /var/log/rsync.log
motd file = /etc/rsync/rsyncd.motd
transfer logging = yes
log format = %t %a %m %f %b
syslog facility = local3
timeout = 300

[gentoo-portage]
#modern versions of portage use this entry
path = /mirror/gentoo-portage
comment = Gentoo Linux Portage tree mirror
exclude = distfiles

[gentoo]
path = /mirror/gentoo
comment = Gentoo Linux mirror

You can change the path as needed. In my setup I have a mount point /mirror that is used to house the files. You can also edit /etc/rsync/rsyncd.motd if you want to display a custom message when a system syncs.

Now edit /etc/rsync/gentoo-mirror.conf to look like:

# Gentoo rsync mirror config

RSYNC="/usr/bin/rsync"
OPTS="--quiet --recursive --links --perms --times --devices --delete --timeout=300"
#Uncomment the following line only if you have been granted access to rsync1.us.gentoo.org
#SRC="rsync://rsync1.us.gentoo.org/gentoo-portage"
#If you are waiting for access to our master mirror, select one of our mirrors to mirror from:
#SRC="rsync://rsync.de.gentoo.org/gentoo-portage"
PORT_SRC="rsync://mirrors.kernel.org/gentoo-portage"
GEN_SRC="rsync://mirrors.kernel.org/gentoo"
PORT_DST="/mirror/gentoo-portage/"
GEN_DST="/mirror/gentoo/"

Again, change the path if needed and you can also change the mirror to a closer one if you wish.

Now we need to make a cron job to do the work:

crontab -e

0 */12 * * * /usr/local/bin/rsync-gentoo.sh

Here I am syncing every 12 hours, which technically is more frequent than the once-per-day guideline, but I figure I'm saving the mirrors a bunch of traffic/work as a trade-off.

Now we need to set rsyncd to autostart and start it:

rc-update add rsyncd default

service rsyncd start

Now we should have rsync working. Next we need to provide either FTP or HTTP retrieval of distfiles. I prefer HTTP, so emerge apache and set it to autostart just like we did for rsyncd. The last step is to edit /etc/apache2/vhosts.d/default_vhost.include to point the document root at the mirror location.
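
As a rough sketch, the relevant part of default_vhost.include would end up looking something like this, assuming the /mirror mount point used above (adjust to your own layout):

DocumentRoot "/mirror"
<Directory "/mirror">
    Options Indexes FollowSymLinks
    Require all granted
</Directory>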

The initial sync takes a while, but once it is completed keeping things up-to-date does not involve much bandwidth.  Once the first sync is finished, the last step is to configure each individual machine to use the local mirror.

Edit /etc/portage/repos.conf/gentoo.conf to use this line:

sync-uri = rsync://<YOUR MIRRORS IP>/gentoo-portage
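
If your gentoo.conf does not already contain the rest of the stanza, a minimal complete entry might look like this (the location value assumes the old default tree path):

[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://<YOUR MIRRORS IP>/gentoo-portage
auto-sync = yes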

Then edit /etc/portage/make.conf to include:

GENTOO_MIRRORS="http://<YOUR MIRRORS IP>/gentoo"

Now you should be able to 'emerge --sync' against your local mirror, and the distfiles should be pulled from your mirror as well.

Raspberry Pi Security Camera

If you are like me, you have a set of things that stay in the office all the time: for example, a tape dispenser, scissors, whiteboard markers, etc. These items have a do-not-leave-the-office rule so that you can find them when needed. However, you find yourself in need of knowing who keeps taking the office-only items out of the office. In my case, I'd be on a phone call (often a conference call) with a customer, need to jot something down on the whiteboard, and all 10 dry-erase pens would be missing. Nothing like trying to be a professional while having to excuse yourself to go ask your kids where something is! As a workaround I do hide one marker, but I still want to know who keeps breaking the keep-it-in-the-office rule! Well, thanks to Linux, a Raspberry Pi, and a webcam, we can create our own custom security camera for the home office.

We have two main choices here: we can start from our own Linux install and add the components we need, or we can use a purpose-built distribution. I have tried both, and each has its own pros and cons. Currently I am using a purpose-built distribution called motionEyeOS (https://github.com/ccrisan/motioneyeos), also known as meyeos. meyeos is a Debian-based (Raspbian) Linux distribution that uses Motion for the back-end and motionEye for the front-end, providing an easy-to-use web interface with lots of configuration options.

The first step is to download the meyeos image onto your Linux workstation. Once downloaded, you can use the included writeimage.sh script to write it onto your microSD card. Example usage:

# ./writeimage.sh -d /dev/mmcblk0 -i "<path to uncompressed img file>"

The first boot takes a few minutes (about 4 on a Model B, I think) depending on how fast your device is. After it is up and running, you can access it in your web browser of choice by going to meye-XXXX, where the Xs represent the digits of your Raspberry Pi's serial number, which is printed on the board. If you don't know the serial number because you already put your Pi in a case, you can look for the IP in your router's DHCP logs.

Configuration is very simple, and there are little "i" icons with further information next to anything that may not be obvious. I have mine set up to upload to a Dropbox location so that I can access the images from anywhere. There are lots of things that can be configured and tweaked, including overclocking. I tend to prefer motion-activated still images over videos, so I have mine configured to favor that.

Once you are up and running, you should be able to view a live feed from the camera, and depending on your configuration you may see images being added to your Dropbox directory.

Now we put our security camera in a location that gives the best view of the office. For me that location is in the corner of the room facing my desk, on the same wall as the door. Then we tell everyone about the new camera and wait a few days until something goes missing. As soon as it does, we can check the upload folder for our photo evidence and identify the culprit!

Gentoo In Production Because One-Size-Rarely-Fits-All

Who in their right mind would run Gentoo on a production server? I would!

Maybe I'm not in my right mind, but Gentoo is one of the best Linux distributions out there for a range of use cases, from large server farms all the way down to embedded devices. One thing that is an issue with almost every Linux distribution (scratch that: with nearly every binary system) is that someone else decides what is best for everyone. In a Linux distribution that means someone else, a package maintainer, decides what software and features are included in a particular package. Sometimes that's just fine; other times it can be an issue. With OpenOffice or LibreOffice, for example, there is very little that can be customized about the install, so a binary package is acceptable. With other software, take Apache for example, I might not want every feature enabled; in fact I rarely do! This is where Gentoo shines brightest: the ability to choose what features are enabled system-wide and on a per-package basis.

Another area where Gentoo excels is the security tool glsa-check. It is better than anything comparable I've seen or used on any other distro. I can check for vulnerabilities, read about each one, test what the recommended fix is, and even have it apply the fix, all within one utility.
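
A typical session looks something like this (the GLSA ID is just an example):

glsa-check -l affected    # list advisories affecting this system
glsa-check -d 201702-07   # read the details of one advisory
glsa-check -p affected    # pretend mode: show what the fix would do
glsa-check -f affected    # apply the fixes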

"I need enterprise support from the vendor!" No, you don't! If you think you do, step aside and let someone else run things. If your IT team says they need enterprise support, go ahead and fire them right now, because they don't know how to do their job. Microsoft, Oracle, etc. provide paid support services for their products, but they don't provide access to anything that couldn't have been found with a Google search or by reading the related discussion forums. "I don't have time for that!" Well, then hire an IT person or team that does, because that's part of their job!

Gentoo supports multilib and slots, which makes it very flexible. Most modern distributions support multilib, so I won't go into that, but you may be wondering what slots are. Slots allow many packages, particularly libraries, to have more than one version installed at the same time. On my dev box this is a lifesaver, as I can have multiple versions of a library installed without resorting to any trickery. For example, on my laptop I have webkit-gtk 2.4.11 and 2.14.5 installed at the same time.

In Gentoo, software is installed, configured, and uninstalled with a tool named 'emerge', which is part of the Portage (software package) system. Emerge handles dependencies, compilation, binary packages, and plenty more. You can use wildcards with emerge, which is very handy when you need to manipulate multiple packages with the same or similar names. For example, say I had gtk+ installed for two different X11 applications that each required a different version of gtk+, but I no longer need them. With emerge, just like with most package managers, I can name each package I wish to uninstall by listing them out, or I can use a wildcard and have it remove all of them: 'emerge -Ca x11-libs/gtk+*'. This command will ask for confirmation and then remove every installed package matching that pattern.

Another area where Gentoo is a level above the rest is freedom of choice. My boss, whom I generally think is quite intelligent, thinks systemd is the cat's meow and anyone who doesn't get on board is an idiot. I, on the other hand, think it is a pile of s**t that is turning Linux into a binary-blob operating system. If I wanted to run a binary-blob operating system, I'd run the original, aka Windows. Well, with Gentoo we get the freedom to choose. You want to put all your eggs in one basket? Install systemd! You want a system that doesn't try to assimilate everything like the Borg on Star Trek, or destroy what it cannot? Install OpenRC. The choice is yours, and the documentation is written to handle either path you take.

While on the topic of documentation, Gentoo has some of the best out there. I often refer to it even when working on other distributions because of the quality and detail found in the guides, wiki, and forums. Regardless of the operating system used, if you are not willing to learn some basics then you have no business being an admin. The same argument could be made against automated tools that hide what is going on and make assumptions about the best approach.

In Gentoo we use what are called USE flags to select the features to be built into the system or a particular package. On any Gentoo system, the global flags are described in /usr/portage/profiles/use.desc and the per-package flags in /usr/portage/profiles/use.local.desc. System-wide USE flags are set in /etc/portage/make.conf, and per-package USE flags can be set in /etc/portage/package.use/foo (where foo is any filename you wish). For example, I may want postgresql set at the system-wide level, but for a specific package, say zabbix, I do not want postgresql support. To do this I would add 'postgresql' to my USE flags in /etc/portage/make.conf, and then in /etc/portage/package.use/foo I would add a line that reads 'net-analyzer/zabbix -postgresql'. The minus sign in front of '-postgresql' tells emerge that we do not want that feature enabled, overriding the system-wide setting in /etc/portage/make.conf.
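
Concretely, the two halves of that example are just these lines:

# /etc/portage/make.conf (system-wide)
USE="postgresql"

# /etc/portage/package.use/foo (per-package override)
net-analyzer/zabbix -postgresql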

"Compiling everything from source takes too long!" Well, sometimes, but not always. Most source packages are quite small and do not take long to compile and install. There are a few exceptions, such as glibc and gcc, which can take more than an hour each. If you have more than one machine available, the compiles can be distributed with distcc, and with crossdev even different architectures can be used. I do this to speed things up on my Raspberry Pi systems by making use of my faster multi-CPU/multi-core machines. I mentioned previously that some packages, such as libreoffice, offer very little customization and can be time-consuming to compile. For these infrequent cases there are binary packages available in the Portage tree. I use the libreoffice binary package on my laptop because my laptop is slow and there is not much I wish to change about libreoffice to begin with. On my desktop I use the standard source package because that machine has 8 cores and compiles pretty quickly.
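
For the curious, a minimal distcc setup on the machine doing the emerging looks roughly like this sketch (the helper IPs are placeholders for your own machines):

emerge -av sys-devel/distcc

# in /etc/portage/make.conf
FEATURES="distcc"
MAKEOPTS="-j8"

# tell distcc where the helper machines are
distcc-config --set-hosts "192.168.13.10 192.168.13.11 localhost"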

Gentoo is a rolling release, which means the software is always being updated. Many distributions have started to move to this model, and I personally prefer it over the huge version jumps that happen in the other major distros. There have been so many upgrades in Debian GNU/Linux that broke things, due to the massive time between releases, that it has become standard practice to test on a staging server before attempting the real upgrade. I'm all for testing things, but the fact that you are almost forced to does not sit well with me. In recent years Ubuntu has gotten nearly as bad, not to mention they seem to decide willy-nilly what packages to include and drop between releases *cough*mediawiki*cough*. With a rolling release we avoid these big steps and spend less time testing and fixing things in staging. One could argue the disadvantage is that you have to upgrade more often, but you don't have to: you are in control here, not someone else. Given that the major distributions push near-weekly updates purely for security purposes anyway, I feel like this is a moot issue.

Gentoo also has superior configuration management. There are two different tools, depending on what you wish to do. If you want to quickly see whether there is a difference between the installed configuration and a new one, you can use 'etc-update', which lists the files that differ and lets you choose your existing one, the new one, a simple merge attempt, or exit without doing anything. For bigger changes or finer control there is 'dispatch-conf', which allows line-by-line comparison between configuration files. In all cases, configuration files are NEVER overwritten automatically during a package upgrade if you have changed the original!

So what it comes down to is this: the initial install is time-consuming and a bit labor-intensive, as you have to make a lot of decisions about your system and then implement them. Suck it up and get it done. Once the installation is finished and the system is up and running, there is never a software reason to reinstall. I have one system that has been happily updated since 2008 with only kernel upgrades causing reboots.

Gentoo isn't for everyone, but the older I get, the more I know what I want and expect out of things. With Gentoo the system is set up how I want it, things are configured how I like them, and only the features I wish to use are present. The one-size-fits-most attitude of the other major distributions does not work for me. I want my cake and I'm gonna eat it too!

SSH Over Flaky Connection With MOSH

If you are like me and have to occasionally work while traveling, you may find yourself using Internet connections that are a bit on the flaky side. Sometimes I'm tethering to my phone while riding in a car or RV and passing through places where cell service is poor or non-existent. These situations are of little concern to web browsers and most email clients, as they'll just continue when the connection resumes, but for SSH any interruption usually results in a disconnect. Well, thanks to MOSH (mobile shell), we can say goodbye to disconnects.

First we need to install mosh on both the remote system and our local system. On Gentoo systems this can be accomplished with 'emerge -av net-misc/mosh'; here are my USE flags:

net-misc/mosh client examples mosh-hardening server -ufw utempter

On Debian/Ubuntu systems you can do an 'apt-get install mosh' to get going.

Next you'll need to open some ports on the firewall. By default, mosh uses UDP on a port between 60000 and 61000 on the remote system. I personally open the full range, as I sometimes have many simultaneous logins.
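
With iptables, opening the full default range on the remote system would look like this:

iptables -A INPUT -p udp --dport 60000:61000 -j ACCEPT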

Finally, you can connect to the remote machine in very much the same way you would with SSH:

mosh <remote system name or IP>

Now you should be able to log in once and have the connection stay up despite loss of connectivity, and even when moving from one ISP to another. I even leave sessions running while suspending (sleeping) my laptop overnight.

Double SSH Tunnel, Port Forwarding To Access A Remote, Firewalled Webserver

Here's the scenario: you need to access a webserver running on a UNIX machine that sits behind a firewall, but you have SSH access to a different machine on the same network. Not to fear, because SSH is here to the rescue!

First we need to be sure that we can reach the remote SSH machine, so check that now. Next we need to make sure that we can get from the remote SSH machine to the destination machine, so check that too.

So how does all this work? We forward a local port to the remote SSH machine, where a second connection forwards it on to our destination machine.

The command on the local machine:

ssh -L 127.0.0.1:1234:127.0.0.1:1234 <remote SSH machine>

The command on the remote SSH machine:

ssh -L 127.0.0.1:1234:127.0.0.1:80 <destination webserver>

Once both pieces are up and running, all we have to do is point our web browser of choice at localhost:1234 and we'll be accessing the destination webserver on port 80 as if we were on the same network, or thereabouts.

There really isn't a limit, at least not that I've encountered, to how many times/machines you can tunnel through. This makes it ideal when there are multiple firewalls between you and the target. That's all there is to it; it's fairly simple and straightforward.
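
As an aside, if both machines run a reasonably recent OpenSSH (7.3 or newer) and you can SSH to the destination machine itself, the -J (ProxyJump) option can collapse the two commands into one:

ssh -J <remote SSH machine> -L 1234:127.0.0.1:80 <destination webserver>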

Google Chrome and Touchscreen Linux

Google Chrome and Linux both support touchscreens, but sometimes Google Chrome on Linux does not behave properly. This is easy to fix by changing the Google Chrome startup command to specify your touch device:

/usr/bin/google-chrome-stable --touch-devices=10

In my case the device is "10", but yours may be different. You can determine yours with 'xinput list'. However, sometimes the device number changes, especially if a new device is connected at boot, so I created a script at /usr/bin/google-chrome with the following contents:

#!/bin/bash
/usr/bin/google-chrome-stable --touch-devices=`xinput list | grep ELAN | sed 's/.*id=//' | sed 's/\[.*//' | awk '{print $1}'`

On my system the name of the touchscreen is ELAN, so you will need to adjust the grep if you have a different brand.

With that script in place, you should have proper touch scrolling and button tapping.

A Quick SSH Tunnel For Bypassing A Webfilter/Firewall

I was recently traveling in the central part of the U.S. and, while using the public WiFi at a local destination, came across a social website that I frequent that was blocked by a webfilter or firewall rule. On my home machine I have OpenVPN running on two different ports: one gives me a VPN connection into my home network, and the other does the same plus routes all my traffic across my home network. Unfortunately both of those ports were blocked at this location. A little research showed that outbound SSH was not blocked, and many higher ports above 1000 did not seem to be blocked either. So I did a few tests and found a combination that worked:

ssh -D 1234 -f -C -q -N me@homemachineip

This creates a SOCKS proxy on local port 1234 (-D), forks the process to the background, freeing our terminal for other use (-f), enables compression (-C), tells SSH to be quiet (-q), and tells SSH that no remote command will be sent (-N).

The next step is to tell our web browser of choice to use a SOCKS proxy on localhost port 1234 for all connections.
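
For example, Chrome and Chromium can be pointed at the proxy straight from the command line:

google-chrome --proxy-server="socks5://localhost:1234"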

To test, do a Google search for "what's my IP" and you should see that it now comes back with your home IP.

If the firewall blocks outbound SSH entirely, there is not much you can do. As a preemptive step, I run SSH on a second, alternate port for places that block port 22.
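
Connecting to the alternate port just means adding -p to the same command; for example, if the second sshd listens on port 8022 (a number chosen purely for illustration):

ssh -D 1234 -f -C -q -N -p 8022 me@homemachineip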

Now you should be free to browse the web as if at home without the local webfilter restrictions!