Client PPtP Connection From A VM

I encountered an issue recently where a PPtP connection from a Linux VM, acting as the client to a remote commercial device or server, failed because the GRE packets were being dropped. The same PPtP credentials worked on another server that is bare metal. This led me to speculate that the issue might be something between the routing devices and the client. After a bit of investigative work with Wireshark I discovered the GRE packets were in fact getting to the virtualization host but not to the guest VM. I suspect this issue may be present with other types of virtualization software, but to be clear this particular VM host is running KVM/QEMU.

It has been a while (read: years) since I've done much with PPtP beyond just using it. After adding a configuration that was working on another server to this particular system, I discovered, much to my dismay, that the connection would not complete. Looking at what ppp logged to the system log revealed it never got a proper GRE reply. Well, there were a lot of things in the log, but the one that stood out looked like this:

warn[decaps_hdlc:pptp_gre.c:204]: short read (-1): Input/output error

After a bit of Googling and reading the documentation for pptp-client, I decided to retry the setup on the previously mentioned working system and watch the log closely for further clues. Where the second system was failing, the original system sailed right past and worked fine. My next step was to look at what connections the first system had open, which led me back to what the documentation and Googling had revealed: PPtP uses TCP port 1723 for the control channel and IP protocol 47 (GRE) for the tunneled data. Watching another attempt on the second system showed the outgoing GRE traffic but nothing coming back. Repeating the last test while watching for incoming GRE on the host showed that it was being received but not being passed on to the guest VM. Looking at my options I discovered that there is a whole set of kernel modules and a sysctl option to allow forwarding of PPtP.
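For this sort of host-side snooping, tcpdump works just as well as Wireshark. A quick one-liner for watching PPtP traffic, assuming eth0 is the interface facing the VM's uplink:

tcpdump -ni eth0 'ip proto 47 or tcp port 1723'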

The missing pieces to the puzzle include adding a line to your sysctl.conf:

net.netfilter.nf_conntrack_helper=1

Then loading these kernel modules:

nf_conntrack_proto_gre
nf_nat_proto_gre
nf_conntrack_pptp
nf_nat_pptp
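To apply all of this without a reboot, something like the following should work (a sketch; the modules-load.d path for persistence assumes a systemd-style distribution):

# load the GRE/PPtP connection-tracking and NAT helpers now
modprobe -a nf_conntrack_proto_gre nf_nat_proto_gre nf_conntrack_pptp nf_nat_pptp
# apply the sysctl immediately
sysctl -w net.netfilter.nf_conntrack_helper=1
# have the modules load again at boot
printf '%s\n' nf_conntrack_proto_gre nf_nat_proto_gre nf_conntrack_pptp nf_nat_pptp > /etc/modules-load.d/pptp.conf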

As soon as these were in place, PPtP started working as expected in the guest VM. What started out as a mystery turned out to have a fairly simple solution. There are probably not a lot of people still using PPtP these days, but it remains a better alternative to a proprietary VPN client.

Placing A Buffer Between Your Cell and The World

This might be a familiar problem for some people: I've had the same personal cell phone number for 15+ years. During this time I have used my number for personal, business, personal business, and the list goes on. Over the years the number of telemarketing calls has increased to the point where I sometimes get multiple calls per day. This has been annoying, but I can usually deal with it by tapping decline on numbers I don't know. However, about a year ago I started getting text/SMS spam, and that is far more irritating to me. When this SMS spam reached multiple messages per day I decided it might be time to get a new phone number, but I didn't want the same problem to reappear. My solution is to make my own answering service, give out that number, and never give out my cell. This covers the phone calls, but what about texts? I wouldn't want to miss a legitimate message. Well, both aspects can be accomplished by using Twilio.

For those that do not know, Twilio is a programmable phone service with voice, SMS, fax, and other features. Twilio has an API for several languages including Python, PHP, Node.js, C#, Java, and Ruby. I already have a web server, so for me it seemed easiest (quickest to set up) to use that to house some PHP and have Twilio handle my automated voice and SMS messaging service.

So what does the end result look like? People (or automated telemarketers) can call my Twilio phone number and are greeted with a message of my choosing. Since I don’t want the automated calls leaving me messages I have created a phone (tree) menu that requires the caller to enter a specific number (extension) to leave me a message. Then for SMS, I have a PHP script set up that takes the message and sends a copy to my email then autoresponds and tells the sender that I’ll get back to them as soon as possible.

Let's start with the voice part, as that is the more involved piece in this setup. In the Twilio web console, under the section titled "Voice & Fax", I have it set to "Webhook" with a URL pointing to a specific location on my web server. The URL looks something like https://mydomain.com/twilio/main.php. The contents of main.php are fairly simple:

<?php

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
$from = $_REQUEST['From'];
// email me the number of every caller
mail('myemailaddress@gmail.com', 'Call System: call from '.$from, $from."\n", 'From: myemailaddress@gmail.com');
?>
<Response>
<Say voice="woman" language="en">Hello. You may select from the following options.</Say>
<Gather numDigits="1" action="main-handler.php" method="POST">
<Say voice="woman" language="en" loop="1">
To leave a message for Ron select one.
</Say>
<Pause length="15"/>
</Gather>
</Response>

If the caller presses one they will be sent to main-handler.php; if they press anything else, the message replays. In main-handler.php I have:

<?php

// if the caller pressed anything but this digit, send them back
if($_REQUEST['Digits'] != '1') {
header("Location: main.php");
die;
}

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>

<Response>
<?php if ($_REQUEST['Digits'] == '1') { ?>
<Say voice="woman" language="en">Please leave a message for Ron. You may hang up when finished.</Say>
<Record maxLength="90" transcribe="true" action="ron-recording.php" transcribeCallback="ron-recording-transcribe.php" />
<?php } ?>
</Response>

Once the caller finishes recording, the flow gets sent to ron-recording.php (the Record verb's action):

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>
<Response>
<Say voice=”woman” language=”en”>Thank you for leaving a message for Ron.</Say>
</Response>

If the caller leaves a message, transcription is handled by ron-recording-transcribe.php:

<?php
$from = $_REQUEST['From'];

// email me
mail('myemailaddress@gmail.com', 'Call System: message for Ron from '.$from, $from."\n".$_REQUEST['TranscriptionText']."\n", 'From: myemailaddress@gmail.com');

?>

That covers the voice aspect of my Twilio setup; the last piece is handling SMS. In the Twilio web console under "Messaging" I have it set to Webhook, and the URL looks something like https://mydomain.com/twilio/incomingsms.php. This handles all SMS text messages that are sent to my Twilio number:

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
// email me
$from = $_REQUEST['From'];
mail('myemailaddress@gmail.com', 'Call System: SMS for Ron from '.$from, $from."\n".$_REQUEST['Body']."\n", 'From: myemailaddress@gmail.com');
?>
<Response>
<Message>I am busy right now but will try to reply to your message as soon as possible.</Message>
</Response>

When a text is sent to my Twilio number, the contents of the text get emailed to me immediately and a message reading "I am busy right now but will try to reply to your message as soon as possible." is sent to the sender.

Well, that covers my simple Twilio setup for handling voice messages and SMS texts. Hopefully it proves useful in the years to come in reducing the number of telemarketing calls and spam texts that reach my cell phone.

Creating Your Own Encrypted File “Safe”

I often think about, no scratch that, I often worry about what would happen if my laptop were stolen or fell into "evil" hands. I mean, there isn't a lot on any of my machines that could be misused, as most things are locked down. My Internet-based accounts such as my Google account require two-factor authentication, important files are backed up, etc. However, there are special files, and here I'm specifically thinking about SSH private keys, that should never be out of my control. My solution is fairly simple: create an encrypted file that can be mounted as a loopback device.

The first step is deciding how much space we are going to need, as resizing the encrypted file after it is created is not straightforward. If we later need more storage (or less), the simplest option is to create a new safe and copy the contents of the old (mounted) safe to the new one. I use mine to store my entire ~/.ssh, ~/.gpg, and a few other files, so my needs are fairly small. All of my files together account for less than 100MB, but knowing that I might want to expand later I decided on 1GB.

If the file lives on ext2/3/4, XFS, and probably a few other filesystems, we can use fallocate to reserve our disk space. I say probably a few others because I know of at least one it does not work on: ZFS.

fallocate -l 1G safe.img

The next step is to create an encrypted device on our new blank image:

cryptsetup luksFormat safe.img

During this step you will be prompted for a password, and this is really the only weak spot (bugs notwithstanding) in the entire setup. Make sure your password is long enough that brute force would take an unreasonably long time, and make sure it cannot be aided by any of the known dictionaries floating around. I made mine 31 characters long because that is long enough to make brute force unprofitable.
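If you would rather use a random passphrase than a memorized one, one option (assuming openssl is available; any reputable generator works) is:

# 24 random bytes, base64-encoded into a 32-character passphrase
openssl rand -base64 24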

Once the encrypted data is written, we can proceed to opening the device:

cryptsetup open safe.img safe

You will be prompted to enter your password each time you open it, so make sure you are using a trusted keyboard (i.e. not wireless).

The next step is to create a filesystem on our new safe:

mkfs.ext4 /dev/mapper/safe

Now, finally, we can mount it and start using it (create the /mnt/safe mount point first if it does not exist):

mount /dev/mapper/safe /mnt/safe

At this point you should be able to add files to your safe as if it were any other mounted device.

Once you are done using your safe, don’t forget to unmount it and close it so that no-one can access it:

umount /mnt/safe

cryptsetup close safe
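Since opening and closing each take a couple of commands, a small wrapper script can help. Here is a minimal sketch, assuming the image lives at ~/safe.img and mounts at /mnt/safe:

#!/bin/bash
# usage: safe open|close
case "$1" in
    open)
        cryptsetup open ~/safe.img safe && mount /dev/mapper/safe /mnt/safe
        ;;
    close)
        umount /mnt/safe && cryptsetup close safe
        ;;
    *)
        echo "usage: $0 open|close" >&2
        exit 1
        ;;
esac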

So now we know how to create, open, and close the device, but what sorts of things are good candidates for storing in there? As previously mentioned, I store my entire ~/.ssh/ directory in my safe. I moved the directory into /mnt/safe/ and then created a symlink from there to ~/.ssh, which allows me to use everything I normally would (ssh, mosh, scp, etc.) without having to reconfigure anything.
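The move-and-symlink step looks like this (with the safe open and mounted):

mv ~/.ssh /mnt/safe/ssh
ln -s /mnt/safe/ssh ~/.ssh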

What to do next is up to you, but I do not trust the quality of the USB thumb drives out there these days, so I opted to keep my safe on my local hard drive and include it in my backup scheme.

Turning /etc Into A Git Repo With etckeeper

Whether it be for production or development purposes, it is often desirable to turn /etc into a version-controlled repository on our servers. There is a great tool named etckeeper that automates committing and pushing changes to a repo for us; that is, once we have it set up and do an initial push. etckeeper supports several version control systems, but we only care about git.

Install it using your package manager of choice; Gentoo users should make sure the 'cron' USE flag is enabled.

If we are going to be pushing to a remote repo (recommended), we need to edit /etc/etckeeper/etckeeper.conf and modify the PUSH_REMOTE line to look like:

PUSH_REMOTE="origin"

Now we need to instruct etckeeper to create an initial (empty) repository using /etc:

# etckeeper init -d /etc
Initialized empty Git repository in /etc/.git/

Next we will want to tell git/etckeeper where our remote repo is, but first we need to make sure we are in /etc:

# cd /etc

# git remote add origin https://USERNAME:PASSWORD@GITREPOHOST/DIR/repo.git

If that is successful there will be no output.

Now we want to do an initial commit:

# etckeeper commit "Initial commit."
[master (root-commit) d918775] Initial commit.

<snipped>

Finally we need to push our changes:

# git push -u origin master
Branch master set up to track remote branch master from origin.
Everything up-to-date

We can check the status at any time in the normal way:

# git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working tree clean

Depending on your distribution there should be an automatic cron.daily job installed. On Gentoo, we can take it a step further and force changes to be committed during an emerge by editing (or creating) /etc/portage/bashrc:

case "${EBUILD_PHASE}" in
    setup|prerm) etckeeper pre-install ;;
    postinst|postrm) etckeeper post-install ;;
esac

That's all there is to getting a basic setup going; you should start seeing commits land in the repo whenever there are changes in /etc.

Working Around A Touchy Touchpad

One of my computers (a laptop) has a touchpad that is a bit too eager to click, and I sometimes find myself initiating accidental clicks with my palms. To make matters worse, this machine does not have a hardware button to turn the touchpad on or off, nor does it have a function key to enable or disable it. On top of that, I often work at a desk, table, or desk-like surface and make use of a Bluetooth mouse. When I am using an external mouse I have no need or want for the touchpad to be working. This touchpad is a Synaptics-branded one, which is well supported. My solution? Write a simple bash script to enable or disable the touchpad and make a keyboard shortcut to execute it.

First off, we need a bash script to do the magic. Create a file at /usr/local/bin/touchy:

#!/bin/bash

# track state with a run file: present means the touchpad is disabled
RUNFILE=/tmp/touchy.run

if [ -e $RUNFILE ]; then
    # currently disabled, enabling
    rm -f $RUNFILE
    synclient TouchpadOff=0
else
    # currently enabled, disabling
    touch $RUNFILE
    synclient TouchpadOff=1
fi

Next, make the script executable by running "chmod +x /usr/local/bin/touchy".

Now you can configure your window manager of choice to bind a hotkey, with the key combo of your choosing, to run the script.
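For example, if your window manager does not offer shortcut settings, the standalone xbindkeys utility can do it. A hypothetical ~/.xbindkeysrc entry (the key combo here is only an example):

"/usr/local/bin/touchy"
  Mod4 + F9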

Managing Multiple Machines Simultaneously With Ansible

If I have to do it more than once, it’s probably going to get scripted. That has been my general attitude towards mundane system administration tasks for many years, and is also shared by many others. How about taking that idea a little further and applying it to multiple machines? Well there’s a tool for that too, and it’s named ansible.

We need ansible installed on the system we will be using as the client/bastion. This machine needs to be able to SSH into all of the remote systems we want to manage without issue, so stop and make sure that works unhindered before continuing. On the remote machines the requirements are fairly low and typically boil down to python2. On Gentoo, python2 is already installed as it is required by several things, including emerge itself. On Ubuntu 16.04 LTS, python2 is not installed by default and you will need to install the package 'python-minimal' to regain it.

Once we have python installed on the remote machines and ansible installed on the local machine, we can move on to editing the ansible configuration with a list of our hosts. This file is fairly simple and there are lots of examples available, but here is a snippet of my /etc/ansible/hosts file:

[ubuntu-staging]
ubuntu-staging-dev
ubuntu-staging-www
ubuntu-staging-db

 

Here you can see I have three hosts listed under a group named ubuntu-staging.

Once we have hosts defined we can do a simple command line test:

ansible ubuntu-staging -m command -a "w"

The '-m' tells ansible we wish to use the module named 'command', and '-a' passes the module its arguments, in this case 'w'. The output from this command should be similar to this:

$ ansible ubuntu-staging -m command -a "w"
ubuntu-staging-www | SUCCESS | rc=0 >>
10:25:57 up 8 days, 12:29, 1 user, load average: 0.22, 0.31, 0.35
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/2 192.168.13.221 10:25 1.00s 0.25s 0.01s w

ubuntu-staging-dev | SUCCESS | rc=0 >>
10:25:59 up 8 days, 12:17, 1 user, load average: 0.16, 0.03, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:25 0.00s 0.37s 0.00s w

ubuntu-staging-db | SUCCESS | rc=0 >>
10:26:02 up 8 days, 12:25, 1 user, load average: 0.17, 0.09, 0.09
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:26 0.00s 0.28s 0.00s w

Okay, that shows promise right? Let’s try something a little more complicated:

$ ansible ubuntu-staging -s -K -m command -a "apt-get update"
SUDO password:
[WARNING]: Consider using apt module rather than running apt-get

ubuntu-staging-db | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Fetched 306 kB in 5s (59.3 kB/s)
Reading package lists…

ubuntu-staging-www | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:4 https://apt.dockerproject.org/repo ubuntu-xenial InRelease
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [544 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [528 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [220 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [471 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe i386 Packages [456 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [185 kB]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [276 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [263 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [118 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [124 kB]
Get:16 http://security.ubuntu.com/ubuntu xenial-security/universe i386 Packages [111 kB]
Get:17 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [64.2 kB]
Fetched 3,666 kB in 6s (598 kB/s)
Reading package lists…

ubuntu-staging-dev | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu zesty InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu zesty-updates InRelease [89.2 kB]
Get:3 http://security.ubuntu.com/ubuntu zesty-security InRelease [89.2 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu zesty-backports InRelease [89.2 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu zesty-updates/main i386 Packages [94.4 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 Packages [96.2 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu zesty-updates/main Translation-en [43.0 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 DEP-11 Metadata [41.8 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu zesty-updates/main DEP-11 64x64 Icons [14.0 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe i386 Packages [53.4 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 Packages [53.5 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe Translation-en [31.1 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 DEP-11 Metadata [54.1 kB]
Get:14 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe DEP-11 64x64 Icons [43.5 kB]
Get:15 http://us.archive.ubuntu.com/ubuntu zesty-updates/multiverse amd64 DEP-11 Metadata [2,464 B]
Get:16 http://us.archive.ubuntu.com/ubuntu zesty-backports/universe amd64 DEP-11 Metadata [3,980 B]
Get:17 http://security.ubuntu.com/ubuntu zesty-security/main amd64 Packages [67.0 kB]
Get:18 http://security.ubuntu.com/ubuntu zesty-security/main i386 Packages [65.5 kB]
Get:19 http://security.ubuntu.com/ubuntu zesty-security/main Translation-en [29.6 kB]
Get:20 http://security.ubuntu.com/ubuntu zesty-security/main amd64 DEP-11 Metadata [5,812 B]
Get:21 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 Packages [28.8 kB]
Get:22 http://security.ubuntu.com/ubuntu zesty-security/universe i386 Packages [28.7 kB]
Get:23 http://security.ubuntu.com/ubuntu zesty-security/universe Translation-en [19.9 kB]
Get:24 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 DEP-11 Metadata [5,040 B]
Fetched 1,049 kB in 6s (168 kB/s)
Reading package lists…

This time we passed ansible the parameter '-s', which tells ansible we want to use sudo, and we also passed '-K', which tells ansible to prompt us for the sudo password. You'll also notice that it warns us to consider the 'apt' module, which is a better choice for interacting with apt-get.

The command module will work with pretty much any command that is non-interactive and doesn’t use pipes or redirection. I often use it for checking things on multiple machines quickly. For example, if I need to install updates and I want to know if anyone is using a particular machine, I can use w, who, users, etc. to see who is logged in before proceeding.

If we need to interact with one or a few hosts rather than an entire group, we can name the hosts, separated by commas, in the same fashion: 'ansible ubuntu-staging-www,ubuntu-staging-db …'

Now let's look at something a bit more complicated: say we need to copy a configuration file, /etc/ssmtp/ssmtp.conf, to all of our hosts. For this we will write an ansible playbook that I named ssmtp.yml:


---
# copy ssmtp.conf to all ubuntu-staging hosts
- hosts: ubuntu-staging
  user: canutethegreat
  sudo: yes

  tasks:
    - copy: src=/home/canutethegreat/staging/conf/etc/ssmtp/ssmtp.conf
            dest=/etc/ssmtp/ssmtp.conf
            owner=root
            group=ssmtp
            mode=0640

We can invoke it with 'ansible-playbook ssmtp.yml' (add -K if sudo needs a password) and it will do as directed. The syntax is fairly straightforward and there are quite a number of examples available.

There are lots of examples for a wide range of tasks in the Ansible GitHub repo, and be sure to take a look at the intro to playbooks page. Just remember that you are doing things to multiple servers at once, so if you do something dumb it will be carried out on all of the selected servers! Testing on staging servers first and using check ("dry run") mode are always good ideas anyway.
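For instance, check mode reports what would change without actually touching the servers:

ansible-playbook --check ssmtp.yml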

What LTS Really Means…

In the business world we love to see software with a clearly defined lifecycle. In relation to this, we typically go for Linux distributions that have long term support (LTS) releases, such as Ubuntu. The reason we like these LTS releases is fairly simple: we want to know that our servers are going to have updates, or more specifically security updates, for a few years. What we don't want is an operating system that has few or no updates between releases and leaves us vulnerable. Furthermore, we don't want an operating system that has new releases frequently. So LTS releases sound great, right? Not really…

What LTS releases really do is delay things. They put off updates and upgrades by keeping stale software patched against security vulnerabilities. Maybe we don't care about the newest features in software x, y, or z; that's pretty normal in production. However, backporting fixes is not always the best choice either. The problem we run into at the end of an LTS lifecycle is that the step to the next release is bigger. Much, much bigger! There have been LTS-to-LTS upgrades that broke so much that a fresh install was either the only option left or faster than trying to muddle through the upgrade. If you skip an LTS upgrade because the currently installed release is still supported, you are going to be in a world of hurt when you finally decide to pull the trigger on that dist-upgrade. The opposite end of the spectrum isn't always ideal for production either: rolling releases have the latest features, bug fixes, and security patches, but they also have less time in the oven and sometimes come out half-baked.

There is no easy solution here, no quick fixes. The best use of LTS I've seen is when the servers it is installed on have a short lifecycle themselves. If the servers are going to be replaced inside of 5 years, then LTS might just be a good fit because you'll be replacing the whole kit and caboodle before you reach end of life. For the rest of us, I feel like LTS really stands for long term stress: stress that builds up over the lifecycle and then gets dumped on you all at once.

A Central Logging Server with syslog-ng

I have a lot of Linux-based devices in my office and throughout my home. One day I had a machine with issues (bad hardware), but I couldn't catch a break and see an error message, and at the time of death nothing of use was being written to the logs on disk. I decided to try setting up remote logging to another machine in hopes that an error would be transmitted before sudden death. It turned out I got lucky and was able to get an error logged on the remote machine that helped me figure out what the issue was. Since then I've had all of my devices that use a compatible logger log to a dedicated machine (a Raspberry Pi) that runs syslog-ng, which is my logger of preference.

Setting up a dedicated logger is easy. Once syslog-ng is installed, we only need to add a few lines to its configuration file to turn it into a logging server:

source net { tcp(); };
destination remote { file("/var/log/remote/${FULLHOST}.log"); };
log { source(net); destination(remote); };

Here I use TCP as the transport, but you could also use UDP. The remote logs will be saved to /var/log/remote/<the name of the host>.log

Be sure to create the directory for the logging:

# mkdir /var/log/remote

Then restart syslog-ng:

service syslog-ng restart

Next we need to configure a client to log to our new dedicated logging host:

# send everything to log host
destination remote_log_server {
    tcp("10.7.3.1" port(514));
};
log { source(src); destination(remote_log_server); };

In the above example the remote logging server has an IP of 10.7.3.1 so you will want to change that to the IP or hostname of your log server.

Finally, restart syslog-ng on the client just like we did on the logging server. (Note that source(src) refers to the default local source name in the client's existing configuration; adjust it if yours is named differently.)
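To verify the whole pipeline, send a test message from the client and look for it in the host-specific file on the server:

logger -t test "hello from the client"
tail /var/log/remote/<the name of the host>.log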

That's all there is to it: very simple and quick to set up.

Private Gentoo Mirror For A Large LAN

I run Gentoo on some 30 or so devices including PCs, Raspberry Pis, virtual machines, rackmounted servers, and so forth. These devices are mostly housed in my home office, with a few random ones scattered throughout the house. To me it seems like a waste of bandwidth to have each of them download packages from the Internet directly. This is especially apparent when doing updates and watching the exact same things get downloaded to multiple devices. There is also the issue that most mirrors, and this applies to all mirrors not just Gentoo's, limit how many connections per day they allow from the same IP. My solution to the problem is to run my own Gentoo mirror on one of the machines, with my own local (LAN) copy of the portage tree and also the distfiles.

Originally I ran my Gentoo mirror on one of my servers, but recently I moved it to its own dedicated VM to make management a little easier. That allows me to move it between machines if needed as well as take advantage of the ZFS RAID array on one of my servers. The disk space required is currently 367GB for all of the files. I allocated 500GB for my setup to allow room for growth. Anyway, I’ll assume you have a base Gentoo system up and running ready to be turned into a mirror.

The first step is to install the 'gentoo-rsync-mirror' package. This installs a script at /opt/gentoo-rsync/rsync-gentoo-portage.sh, which we will copy to /usr/local/bin/rsync-gentoo.sh and modify to look like this:

#!/bin/bash

LOG=/var/log/rsync-gentoo.log
LOCKFILE=/tmp/gentoo-mirror-sync

source /etc/rsync/gentoo-mirror.conf

# bail out if a sync is already running (or a stale lock was left behind)
if [ -e $LOCKFILE ]; then
    echo "sync still running, or stale lock file!"
    logger -t rsync "sync still running, or stale lock file!"
    exit 1
else
    touch $LOCKFILE
fi

echo "Started Gentoo Portage sync at" `date` >> $LOG 2>&1
logger -t rsync "re-rsyncing the gentoo-portage tree"
${RSYNC} ${OPTS} ${PORT_SRC} ${PORT_DST} >> $LOG 2>&1
logger -t rsync "deleting spurious Changelog files"
find ${PORT_DST} -iname ".ChangeLog*" | xargs rm -rf
echo "End of Gentoo Portage sync: "`date` >> $LOG 2>&1
#
echo "Started Gentoo main sync at" `date` >> $LOG 2>&1
logger -t rsync "re-rsyncing the gentoo main tree"
${RSYNC} ${OPTS} ${GEN_SRC} ${GEN_DST} >> $LOG 2>&1
logger -t rsync "deleting spurious Changelog files"
find ${GEN_DST} -iname ".ChangeLog*" | xargs rm -rf
echo "End of Gentoo main sync: "`date` >> $LOG 2>&1

rm -f $LOCKFILE

Now edit /etc/rsync/rsyncd.conf to look like:

# Copyright 1999-2004 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Id$

uid = nobody
gid = nobody
use chroot = yes
max connections = 20
pid file = /var/run/rsyncd.pid
log file = /var/log/rsync.log
motd file = /etc/rsync/rsyncd.motd
transfer logging = yes
log format = %t %a %m %f %b
syslog facility = local3
timeout = 300

[gentoo-portage]
#modern versions of portage use this entry
path = /mirror/gentoo-portage
comment = Gentoo Linux Portage tree mirror
exclude = distfiles

[gentoo]
path = /mirror/gentoo
comment = Gentoo Linux mirror

You can change the path as needed. In my setup I have a mount point /mirror that is used to house the files. You can also edit /etc/rsync/rsyncd.motd if you want to display a custom message when a system syncs.

Now edit /etc/rsync/gentoo-mirror.conf to look like:

# Gentoo rsync mirror config

RSYNC="/usr/bin/rsync"
OPTS="--quiet --recursive --links --perms --times --devices --delete --timeout=300"
#Uncomment the following line only if you have been granted access to rsync1.us.gentoo.org
#SRC="rsync://rsync1.us.gentoo.org/gentoo-portage"
#If you are waiting for access to our master mirror, select one of our mirrors to mirror from:
#SRC="rsync://rsync.de.gentoo.org/gentoo-portage"
PORT_SRC="rsync://mirrors.kernel.org/gentoo-portage"
GEN_SRC="rsync://mirrors.kernel.org/gentoo"
PORT_DST="/mirror/gentoo-portage/"
GEN_DST="/mirror/gentoo/"

Again, change the path if needed and you can also change the mirror to a closer one if you wish.

Now we need to make a cron job to do the work:

crontab -e

0 */12 * * * /usr/local/bin/rsync-gentoo.sh

Here I am syncing every 12 hours, which technically is more than the once-per-day limit, but I figure I’m saving the mirrors a bunch of traffic/work as a trade off.

Now we need to set rsyncd to autostart and start it:

rc-update add rsyncd default

service rsyncd start

Now we should have rsync working. Next we need to provide either FTP or HTTP retrieval of distfiles. I prefer HTTP, so emerge apache and set it to autostart just like we did for rsyncd. The last step on the server is to edit /etc/apache2/vhosts.d/default_vhost.include to point the document root at the mirror location.

The initial sync takes a while, but once it has completed, keeping things up-to-date does not involve much bandwidth. Once the first sync is finished, the last step is to configure each individual machine to use the local mirror.

Edit /etc/portage/repos.conf/gentoo.conf to use this line:

sync-uri = rsync://<YOUR MIRRORS IP>/gentoo-portage
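For context, a minimal /etc/portage/repos.conf/gentoo.conf might look something like this (a sketch; keep whatever other settings your file already has and change only the sync-uri):

[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://<YOUR MIRRORS IP>/gentoo-portage
auto-sync = yes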

Then edit /etc/portage/make.conf to include:

GENTOO_MIRRORS="http://<YOUR MIRRORS IP>/gentoo"

Now you should be able to 'emerge --sync' using your local mirror, and distfiles should be pulled from your mirror as well.

Raspberry Pi Security Camera

If you are like me, you have a set of things that stay in the office all the time, for example a tape dispenser, scissors, whiteboard markers, etc. These items have a do-not-leave-the-office rule so that you can find them during work hours when needed. However, you find yourself in need of knowing who keeps taking the office-only items out of the office. In my case I'd be on a phone call (often a conference) with a customer, need to jot something down on the whiteboard, and all 10 dry-erase pens would be missing. Nothing like trying to be professional while having to excuse yourself to go ask your kids where something is! As a workaround I do hide one marker, but still, I want to know who keeps breaking the keep-it-in-the-office rule! Well, thanks to Linux, a Raspberry Pi, and a webcam, we can create our own custom security camera for the home office.

We have two main choices here: we can take our own Linux install and add the components we need, or we can use a purpose-built distribution. I have tried both and they each have their pros and cons. Currently I am using a purpose-built distribution, motionEyeOS (https://github.com/ccrisan/motioneyeos), meyeos for short. meyeos is a Debian-based (Raspbian) Linux distribution that uses Motion for the back-end and motionEye for the front-end, providing an easy-to-use web interface with lots of configuration options.

The first step is to download meyeos onto your Linux workstation. Once downloaded, you can use the included writeimage.sh to write the image to your microSD card. Example usage:

# ./writeimage.sh -d /dev/mmcblk0 -i "<path to uncompressed img file>"

The first boot takes a few minutes (about four on a Model B) depending on how fast your device is. After it is up and running you can access it via your web browser of choice by going to meye-XXXX, where the Xs represent the digits of your Raspberry Pi's serial number, which is printed on the board. If you don't know the serial number because you already put your Pi in a case, you can look for the IP in your router's DHCP logs.

Configuration is very simple, and there are little "i" icons with further information next to anything that may not be obvious. I have mine set up to upload to a Dropbox location so that I can access the images from anywhere. There are lots of things that can be configured and tweaked, including overclocking. I tend to prefer motion-activated still images over videos, so I have mine configured to favor that.

Once you are up and running you should be able to view a live feed of the camera, and depending on your configuration you may see images being added to your Dropbox directory.

Now we put our security camera in a location that gives the best view of the office. For me that location is in the corner of the room facing my desk, on the same wall as the door. Then we tell everyone about the new camera and wait a few days until something goes missing. As soon as it does, we can check the upload folder for photo evidence to identify the culprit!