Gentoo In Production: Because One Size Rarely Fits All

Who in their right mind would run Gentoo on a production server? I would!

Maybe I’m not in my right mind, but Gentoo is one of the best Linux distributions out there for several use cases, from large server farms all the way down to embedded devices. One thing that is an issue with almost every Linux distribution (scratch that: nearly every binary system) is that someone else decides what is best for everyone. In a Linux distribution that means someone else, a package maintainer, decides what software and features are included in a particular package. Sometimes that’s just fine; other times it can be an issue. For example, with OpenOffice and LibreOffice there is very little that can be customized about the install, so a binary install package is acceptable. However, with other software, take Apache for example, I might not want every feature enabled; in fact I rarely do! This is where Gentoo shines brightest: the ability to choose what features are enabled system-wide and on a per-package basis.

Another area where Gentoo excels is security, via the tool glsa-check. It is better than anything else I’ve ever seen or used on any distro. I can check for vulnerabilities, read about a given vulnerability, test what the recommended fix would do, and even have it apply the fix, all within one utility.
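For the curious, a typical glsa-check session looks something like this (glsa-check ships with app-portage/gentoolkit; the advisory ID here is only an illustration, and flag behavior may differ slightly between versions):

glsa-check -l affected # list advisories that affect installed packages
glsa-check -d 201701-01 # read the details of an advisory
glsa-check -p 201701-01 # show what the fix would do, without doing it
glsa-check -f 201701-01 # apply the recommended fix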

“I need enterprise support from the vendor!” No, you don’t! If you think you do, step aside and let someone else run things. If your IT team says they need enterprise support, go ahead and fire them right now, as they don’t know how to do their job. Microsoft, Oracle, etc. provide paid support services for their products. The thing is, they don’t provide access to anything that couldn’t have been found with a Google search or by reading the related discussion forums. “I don’t have time for that!” Well, then hire an IT person/team that does, because that’s part of their job!

Gentoo supports multilib and slots, which makes it very flexible. Most modern distributions support multilib so I won’t go into that, but you may be wondering what slots are. Slots allow many packages, particularly libraries, to have more than one version installed at the same time. On my dev box this is a lifesaver, as I can have multiple versions of a library installed without having to resort to any trickery. For example, on my laptop I have webkit-gtk 2.4.11 and 2.14.5 installed at the same time.
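Slot atoms work anywhere a package atom is accepted. As a quick illustration (using Python, whose slot names are well known; slot names vary from package to package):

emerge -av dev-lang/python:2.7 dev-lang/python:3.4

Both versions end up installed side by side, and eselect can pick which one answers to plain ‘python’.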

In Gentoo, software is installed, configured, and uninstalled with the tool named 'emerge', which is part of the Portage (software package) system. Emerge handles dependencies, compilation, binary packages, and tons of other abilities and features. You can use wildcards with emerge (in fact some of its options, like search, even accept regular expressions), which is very handy when you need to manipulate multiple packages with the same or similar names. For example, say I had gtk+ installed for two different X11 applications that each required a different version of gtk+, but I no longer need them. With emerge, just like with most package managers, I can name each package I wish to uninstall by listing them out, or I can use a wildcard and have it remove all of them: 'emerge -Ca "x11-libs/gtk+*"'. This command will find every package matching that pattern, ask before doing anything, and remove them.

Another area where Gentoo is a level above the rest is freedom of choice. My boss, whom I generally think is quite intelligent, thinks systemd is the cat’s meow and anyone who doesn’t get on board is an idiot. I, on the other hand, think it is a pile of s**t that is turning Linux into a binary blob operating system. If I wanted to run a binary blob operating system, I’d run the original, aka Windows. Well, with Gentoo we get the freedom to select what we want. You want to put all your eggs in one basket? Install systemd! You want a system that doesn’t try to assimilate everything like the Borg on Star Trek, destroying what it cannot assimilate? Install OpenRC. The choice is yours, and all of the documentation is written to handle either path you take.

While on the topic of documentation, Gentoo has some of the best out there. I even refer to it often when working on other distributions because of the quality and detail found in the guides, wiki, and forums. Regardless of the operating system used, if you are not willing to learn some basics then you have no business being an admin. The same argument could be made for using automated tools that hide what is going on and make assumptions about the best approach.

In Gentoo we use what are called USE flags to select the features to be built into the system or a particular package. On any Gentoo system the flags that apply globally to all packages are described in /usr/portage/profiles/use.desc, and the flags that are specific to only one package in /usr/portage/profiles/use.local.desc. System-wide USE flags are set in /etc/portage/make.conf, and per-package USE flags can be set in /etc/portage/package.use/foo (where foo is any filename you wish). For example, I may want postgresql set at the system-wide level, but for a specific package, say zabbix, I do not want postgresql support. To do this I would add 'postgresql' to my USE variable in /etc/portage/make.conf, and then in /etc/portage/package.use/foo add a line that reads 'net-analyzer/zabbix -postgresql'. The minus sign in front of '-postgresql' tells emerge that we do not want that feature enabled, which overrides the system-wide setting in /etc/portage/make.conf.
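Putting that together, the two files would contain something like this (the other flags on the USE line are placeholders for whatever you already have set):

# /etc/portage/make.conf
USE="ssl threads postgresql"

# /etc/portage/package.use/foo
net-analyzer/zabbix -postgresql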

Compiling everything from source takes too long! Well, sometimes, but not always. We can also mitigate this to some degree with a combination of distcc and crossdev if we have other machines at our disposal. Most source packages are quite small and do not take long to compile and install. There are a few exceptions, such as glibc and gcc, which can take more than an hour to compile and install. If you have more than one machine available, the compiles can be distributed to them with distcc, and with crossdev even different architectures can be used. I do this to speed things up on my Raspberry Pi systems by making use of my faster multi-CPU/multi-core machines. I mentioned previously that some packages, such as libreoffice, offer very little customization and can be time consuming to compile. For these infrequent cases there are binary packages available in the portage tree. I use the libreoffice binary package on my laptop because my laptop is slow and there is not much I wish to change about libreoffice to begin with. On my desktop I use the standard source package because that machine has 8 cores and compiles pretty quickly.
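The binary counterpart installs like any other package; on my laptop that is simply:

emerge -av app-office/libreoffice-bin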

Gentoo is a rolling release, which means the software is always being updated. Many distributions have started to move to this model, and I personally prefer it over the huge version jumps that happen in the other major distros. Debian GNU/Linux upgrades have broken things so many times, thanks to the massive time between releases, that it has become standard practice to test on a staging server before attempting the real upgrade. I’m all for testing things, but the fact that you are almost forced to does not sit well with me. In recent years Ubuntu has gotten nearly as bad, not to mention they seem to willy-nilly decide what packages to include and drop between releases *cough*mediawiki*cough*. With a rolling release we avoid these big steps and spend less time testing and fixing things in staging. One could argue the disadvantage is that you have to upgrade more often, but you don’t have to. You are in control here, not someone else. Given that the major distributions ship updates almost weekly purely for security purposes, I feel this is a moot issue anyway.

We also find superior configuration management in Gentoo. There are two different tools that can be used depending on what you wish to do. If you want to quickly see whether there is a difference between the installed configuration and a new one, you can use 'etc-update', which lists the files that differ and lets you choose your existing one, choose the new one, attempt a simple merge, or exit without doing anything. For bigger changes or finer control there is a tool called 'dispatch-conf' that allows line-by-line comparison between configuration files. In all cases, configuration files are NEVER overwritten automatically during a package upgrade if you have made changes to the original!
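In practice an update session looks something like this (a sketch; your emerge options may differ):

emerge -avuDN @world # update the system
dispatch-conf # review and merge any pending configuration changes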

So what it comes down to is this: the initial install is time consuming and a bit labor intensive, as you have to make a lot of decisions about your system and then implement them. Suck it up and get it done. Once the installation is finished and the system is up and running, there is never a software reason to reinstall. I have one system that has been happily updated since 2008, with only kernel upgrades causing reboots.

Gentoo isn’t for everyone, but the older I get the more I know what I want and expect out of things. With Gentoo the system is set up how I want it, things are configured how I like them, and only those features I wish to use are present. The one-size-fits-most attitude of the other major distributions does not work for me. I want my cake and I’m gonna eat it too!

SSH Over Flaky Connection With MOSH

If you are like me and have to occasionally work while traveling, you may find yourself having to use Internet connections that are a bit on the flaky side. Sometimes I’m tethering to my phone while riding in a car or RV and pass through places where cell service is poor or non-existent. These situations are of little concern to web browsers and most email clients, as they’ll just continue when the connection resumes, but for SSH any interruption usually results in a disconnect. Well, thanks to MOSH (mobile shell) we can say goodbye to disconnects.

First we need to install mosh on the remote system and on our local system. On Gentoo systems this can be accomplished with 'emerge -av net-misc/mosh'. Here are my USE flags:

net-misc/mosh client examples mosh-hardening server -ufw utempter

On Debian/Ubuntu systems you can do an 'apt-get install mosh' to get going.

Next you’ll need to open some ports on the firewall. By default Mosh uses UDP on a port between 60000 and 61000 on the remote system. I personally open the full range, as I sometimes have a lot of simultaneous logins.
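On an iptables-based firewall, opening the full default range could look something like this:

iptables -A INPUT -p udp --dport 60000:61000 -j ACCEPT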

Finally, you can connect to the remote machine in very much the same way you would with SSH:

mosh <remote system name or IP>

Now you should be able to log in once and have the connection stay up despite losses of connectivity and even moving from one ISP to another. I even leave sessions running while suspending (sleeping) my laptop overnight.

Double SSH Tunnel, Port Forwarding To Access A Remote, Firewalled Webserver

Here’s the scenario: you need to access the webserver running on a UNIX machine that sits behind a firewall, but you have SSH access to a different machine on the same network. Not to fear, because SSH is here to the rescue!

First we need to be sure that we can reach the remote SSH machine, so check that now. Next we need to make sure that we can get to the destination machine from the remote SSH machine, so check that at the same time.

So how does all this work? It works by forwarding a local port to the remote SSH machine and then a second connection on the remote SSH machine will forward to our destination machine.

The command on the local machine:

ssh -L 127.0.0.1:1234:127.0.0.1:1234 <remote SSH machine>

The command on the remote SSH machine:

ssh -L 127.0.0.1:1234:127.0.0.1:80 <destination webserver>

Once both pieces are up and running all we have to do is point our web browser of choice to localhost:1234 and we’ll be accessing the destination webserver on port 80 as if we were on the same network, or thereabouts.
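As an aside, if you do not want to run a second command on the remote SSH machine, the same effect is possible in one hop, since -L can forward to any host the SSH server can reach (this assumes the remote SSH machine can reach the webserver directly; no SSH access on the destination is needed):

ssh -L 1234:<destination webserver>:80 <remote SSH machine>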

There really isn’t a limit, at least not one I’ve encountered, to how many times/machines you can tunnel through. This makes it ideal if you are trying to reach a location with multiple firewalls in between. That’s all there is to it; it’s fairly simple and straightforward.

Google Chrome and Touchscreen Linux

Google Chrome and Linux both work with touchscreens, but sometimes Google Chrome on Linux does not want to behave properly. This is easy to fix by changing the Google Chrome startup to specify your touch device:

/usr/bin/google-chrome-stable --touch-devices=10

In my case the device is "10" but yours may be different. You can determine yours with 'xinput list'. However, the device number sometimes changes, especially if a new device is connected at boot, so I created a script at /usr/bin/google-chrome with the following contents:

#!/bin/bash
/usr/bin/google-chrome-stable --touch-devices=$(xinput list | grep ELAN | sed 's/.*id=//' | sed 's/\[.*//' | awk '{print $1}')

In my system the name of the touchscreen is ELAN so you will need to adjust this if you have a different brand.

With that script in place you should have proper touch scrolling and button clicking.

A Quick SSH Tunnel For Bypassing A Webfilter/Firewall

I was recently traveling in the central part of the U.S., and while using the public WiFi at a local destination I found that a social website I frequent was blocked by a webfilter or firewall rule. On my home machine I have OpenVPN running on two different ports: on one port I can create a VPN connection that allows access to my home network, and on the other I get the same functionality plus the ability to route all my traffic across my home network. Unfortunately those ports were blocked at this location. A little research showed that outbound SSH was not blocked, and many of the higher ports above 1000 did not seem to be blocked either. So I did a few tests and found a combination that worked:

ssh -D 1234 -f -C -q -N me@homemachineip

What this does is create a SOCKS proxy on local port 1234 (-D 1234), fork the process to the background to free our terminal (-f), enable compression (-C), tell SSH to be quiet (-q), and tell SSH that no remote command will be sent (-N).

The next step is to configure our web browser of choice to use a SOCKS proxy on localhost port 1234 for all connections.

To test, do a Google search for “what’s my ip” and you should see that it comes back with your home IP now.
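You can also check from a terminal, assuming curl is installed (ifconfig.me is just one of several public IP-echo services):

curl --socks5-hostname localhost:1234 https://ifconfig.me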

If the firewall blocks SSH there is not much you can do. As a preemptive step I run SSH on a second, alternate port for places that block port 22.

Now you should be free to browse the web as if at home without the local webfilter restrictions!

Proper DNS and DHCP for your LAN

If you are like me, you don’t like the fact that most routers do a terrible job of providing DNS for the LAN side. Sure, routers are easy to set up and will get you up and going quickly, but most of them suck in the more advanced areas. I mean, is it too much to ask to be able to type in a hostname or IP address and have a consistent experience across all devices? And what about when I know an IP address but have no idea what device it belongs to? I don’t want to log in to the router and search the logs for a MAC address that I may or may not recognize, and I don’t want to waste time running nmap to try to fingerprint the system in hopes of identifying it. The router should provide reverse DNS lookups so I don’t have to! Oh, and don’t get me started on the crappy DNS servers that ISPs provide!

So what we will be doing here is setting up BIND and dhcpd for our local network. This will hand out IP addresses to our devices, register host (DNS) names, provide a local DNS server for queries, and give us reverse DNS.

Before we get started, make sure you install dhcpd (the ISC DHCP server) and BIND 9. You will probably also want to install bind-tools, or whatever your distro calls it.

Now we will configure dhcpd by editing /etc/dhcp/dhcpd.conf and setting the following options (snippet):

server-identifier 192.168.1.1;
authoritative;
option routers 192.168.1.1; # use main router
option domain-name-servers 192.168.1.1;
option domain-name "<YOUR DOMAIN>";
ddns-domainname "<YOUR DOMAIN>";
ddns-rev-domainname "in-addr.arpa";
ddns-update-style interim;
ddns-updates on;
allow client-updates;
update-conflict-detection false;
update-static-leases on;
include "/etc/bind/rndc.key";
zone <YOUR DOMAIN ZONE> {
  primary 127.0.0.1;
  key rndc-key;
}
zone 1.168.192.in-addr.arpa {
  primary 127.0.0.1;
  key rndc-key;
}
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.254;
  default-lease-time 259200;
  max-lease-time 518400;
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.1.255;
  allow unknown-clients;
  zone <YOUR DOMAIN> { primary 192.168.1.1; key rndc-key; }
  zone 1.168.192.in-addr.arpa { primary 192.168.1.1; key rndc-key; }
}

Next we will be editing /etc/bind/named.conf. Under acl "trusted" add the host’s IP address. Then under the zone section you will want to add two new ones:

zone "<YOUR DOMAIN>" IN {
  type master;
  file "pri/<YOUR FILE>.zone";
  allow-query { any; };
  allow-transfer { any; };
  notify yes;
  allow-update { key "rndc-key"; };
};

zone "1.168.192.in-addr.arpa" IN {
  type master;
  file "pri/rev.zone";
  allow-query { any; };
  allow-transfer { any; };
  notify yes;
  allow-update { key "rndc-key"; };
};

Create a normal BIND zone config file at /etc/bind/pri/<YOUR FILE>.zone, and also create /etc/bind/pri/rev.zone just like a normal zone file, except swap out the SOA domain with "1.168.192.in-addr.arpa" and set the origin to "$ORIGIN 1.168.192.in-addr.arpa." Other than that it should look like a standard BIND zone config.
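A minimal rev.zone along those lines might look like this (the serial, timers, and router name are illustrative):

$ORIGIN 1.168.192.in-addr.arpa.
$TTL 3600
@ IN SOA router.<YOUR DOMAIN>. hostmaster.<YOUR DOMAIN>. (
        2017010101 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        3600 )     ; negative cache TTL
@ IN NS router.<YOUR DOMAIN>.
1 IN PTR router.<YOUR DOMAIN>.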

At this point we can disable the DHCP and DNS on the existing router and start dhcpd and named on the new one. Be sure to test it out before calling it “good” and walking away.

router ~$ host foo
foo.<YOUR DOMAIN> has address 192.168.1.230

router ~$ host 192.168.1.230
230.1.168.192.in-addr.arpa domain name pointer foo.<YOUR DOMAIN>.

We are all set and can sleep soundly knowing that our network works correctly!

Transitioning Between LAN and WLAN By Bonding Ethernet and WiFi

Here’s the situation: you like to have the LAN cable plugged into your laptop when you are sitting at your desk to take advantage of the gigabit speeds, but you sometimes like to roam around by connecting to the WiFi. However, when switching between the two you don’t want to lose your connection or have to get a new IP address. The solution? Bond the ethernet and wireless connections to make a seamless transition back and forth.

I use Gentoo on my personal machines and this guide is written specifically for that distribution. I also think systemd is a pile of shit that is turning Linux into a binary blob OS – if I wanted to use a binary blob OS I’d run the original: Windows!

First make sure your wired and wireless connections already work!

Let’s create a new init for the bonded interface:

# cd /etc/init.d/ && ln -s net.lo net.bond0

Now remove net.eth0 and net.wlan0 from autostarting:

rc-update del net.eth0

rc-update del net.wlan0

We can also bring down our connections:

service net.eth0 stop

service net.wlan0 stop

Next edit /etc/conf.d/net:

config_eth0="null"
config_wlan0="null"
slaves_bond0="eth0 wlan0"
config_bond0="dhcp"

#
# Note: if the network gets hosed and restarting net.bond0 fails,
# you have to manually bring eth0 and wlan0 up with ifconfig.
#
preup() {
    if [[ ${IFACE} == "bond0" ]]; then
        # bring up the slave interfaces first; when eth0 isn't connected
        # the bond sometimes fails to bring anything up on its own
        /bin/ifconfig eth0 up
        /bin/ifconfig wlan0 up
        /usr/sbin/wpa_supplicant -iwlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -B
    fi
    return 0
}

postdown() {
    if [[ ${IFACE} == "bond0" ]]; then
        if [[ -S /var/run/wpa_supplicant/wlan0 ]]; then
            killall wpa_supplicant
            rm -f /var/run/wpa_supplicant/wlan0
        fi
    fi
    return 0
}

Also, if you use ifplugd you’ll want to disable/remove it or it may interfere with switching the active interface.

We should also set the bonded interface to autostart:

rc-update add net.bond0 default

Next we want to tell the kernel how the bonded interface should function. Specifically, we want eth0 to be primary and the wireless to be used only when the ethernet is down. We can do all of this by creating the file /etc/modprobe.d/bonding.conf:

options bonding mode=1 miimon=100 primary=eth0

Now let’s start the new bonded interface:

service net.bond0 start

If all goes well, one of the interfaces should be made active and you should be back on the network. If not, make note of any errors and see where things went awry.
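You can see which slave the bonding driver currently considers active (and the link state of both) at any time:

cat /proc/net/bonding/bond0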

The only issue I’ve had with this setup is getting the wireless to work if a new configuration is added after the system is already up and running. In that case, getting wpa_supplicant to pick up the new config without hosing bond0 can be trying!


Dual ISPs or How To Survive Out In The Sticks

So you are living in a remote area that has poor Internet connectivity options, and you are a nerd that can’t survive off of a single slow DSL connection. What can one do?! Well, with an older computer, three NICs, and a little help from our favorite Linux distro we can make a buffet of low-quality Internet connections seem like one semi-decent connection. What we want to get out of this is the download bandwidth of a satellite connection but the latency (ping) of a DSL connection. That way we can watch Netflix and YouTube on the satellite and play games, SSH, and do other latency-sensitive things on the DSL.

What I used specifically was a DSL connection from the local phone company, sold as 1.5 Mbit/s but typically showing speeds of around 800 Kbit/s on a good day, and a satellite connection from Exede. An old dual-CPU AMD Opteron with 3 gigabit NICs and an install of Gentoo (or your favorite Linux distro) will be our router. In this setup eth0 will be the LAN-connected NIC and the other two NICs go to the ISP modems; the script below uses eth2 for the DSL and eth1 for the satellite connection. Each of the connections was tested individually with my laptop to ensure it was functional and to get some speed tests for comparison. The WAN NICs get dynamically (DHCP) assigned IPs from the ISP-provided modems and eth0 has a static IP for our LAN.

Once you have everything ready, connect each ISP modem to its designated NIC one at a time (connect, test, then disconnect) so each connection can be verified real quick:

$ ping -c10 www.google.com

Now we will connect all of the ISP modems, verify everything with 'ifconfig' or 'ip addr', and check that each NIC has an appropriate IP address. You will want to note each IP address and which modem/ISP it belongs to; maybe even write it down on a piece of paper for quick reference.
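One more prerequisite: the routing script below refers to two named routing tables, dsl and sat, which have to be declared in /etc/iproute2/rt_tables before the ip route commands will accept them. The table numbers are arbitrary as long as they are unused:

echo "100 dsl" >> /etc/iproute2/rt_tables
echo "101 sat" >> /etc/iproute2/rt_tables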

If each one works, then let’s continue with creating a router script at /usr/local/bin/router.sh:

#!/bin/bash
# Set which interface is which
LAN=eth0
WAN0=eth2
WAN1=eth1
LAN_IP=192.168.1.1
WAN0_IP=192.168.0.3
WAN1_IP=162.72.156.86
LAN_GW=192.168.1.1
WAN0_GW=192.168.0.1
WAN1_GW=162.72.152.1

# SNAT packets going out WAN0 to DSL ISP
iptables -t nat -A POSTROUTING -o ${WAN0} -j SNAT --to-source ${WAN0_IP}

# SNAT packets going out WAN1 to SAT ISP
iptables -t nat -A POSTROUTING -o ${WAN1} -j SNAT --to-source ${WAN1_IP}

# chain which marks a packet (MARK) and its connection (CONNMARK) with MARK 1 for DSL ISP
iptables -t mangle -N MARK-DSL-ISP
iptables -t mangle -A MARK-DSL-ISP -j MARK --set-mark 1
iptables -t mangle -A MARK-DSL-ISP -j CONNMARK --save-mark
# icmp echo requests (ping)
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p icmp -j MARK-DSL-ISP
# ssh
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 22 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 22 -j MARK-DSL-ISP
# time (rdate and ntp)
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 37 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 37 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 123 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 123 -j MARK-DSL-ISP
# telnet, telnets, rtelnet
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 23 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 23 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 992 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 992 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 107 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 107 -j MARK-DSL-ISP
# dns (udp rule added; most DNS queries use udp)
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 53 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 53 -j MARK-DSL-ISP
# Star Trek Online (208.95.184.0/22 covers 208.95.184.0 through 208.95.187.255)
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp -d 208.95.184.0/22 -j MARK-DSL-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp -d 208.95.184.0/22 -j MARK-DSL-ISP
# chain which marks a packet (MARK) and its connection (CONNMARK) with MARK 2 for SAT ISP
iptables -t mangle -N MARK-SAT-ISP
iptables -t mangle -A MARK-SAT-ISP -j MARK --set-mark 2
iptables -t mangle -A MARK-SAT-ISP -j CONNMARK --save-mark
# http
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 80 -j MARK-SAT-ISP
# https
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 443 -j MARK-SAT-ISP
# smtp
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 25 -j MARK-SAT-ISP
# imap2
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 143 -j MARK-SAT-ISP
# pop2
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 109 -j MARK-SAT-ISP
# pop3
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 110 -j MARK-SAT-ISP
# imaps
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 993 -j MARK-SAT-ISP
# pop3s
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 995 -j MARK-SAT-ISP
# ftp
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 21 -j MARK-SAT-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 21 -j MARK-SAT-ISP
# ftp-data
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 20 -j MARK-SAT-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 20 -j MARK-SAT-ISP
# sftp
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 115 -j MARK-SAT-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 115 -j MARK-SAT-ISP
# rsync
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp --dport 873 -j MARK-SAT-ISP
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p udp --dport 873 -j MARK-SAT-ISP
#
# Special rules for certain hosts (they have static IPs)
#
# We want the Wii to use DSL all the time for playing online games
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp -s 192.168.1.95 -j MARK-DSL-ISP
#
# For the Samsung Blu-ray player using Netflix we want it to use the SAT
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate NEW -p tcp -s 192.168.1.90 -j MARK-SAT-ISP
# If a packet is not NEW, then there must be a connection for it somewhere,
# so go find the connection mark and apply it to the packet
# Packets from internal network
iptables -t mangle -A PREROUTING -i ${LAN} -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark

# add local routes too (the dsl and sat tables must be declared in /etc/iproute2/rt_tables)
ip route flush table dsl
ip route add table dsl default dev ${WAN0} via ${WAN0_GW}
ip route add table dsl 192.168.0.0/24 dev ${WAN0} src ${WAN0_IP}
ip route add table dsl 162.72.156.0/24 dev ${WAN1} src ${WAN1_IP}
ip route add table dsl 192.168.1.0/24 dev ${LAN} src ${LAN_IP}

# ditto
ip route flush table sat
ip route add table sat default dev ${WAN1} via ${WAN1_GW}
ip route add table sat 192.168.0.0/24 dev ${WAN0} src ${WAN0_IP}
ip route add table sat 162.72.156.0/24 dev ${WAN1} src ${WAN1_IP}
ip route add table sat 192.168.1.0/24 dev ${LAN} src ${LAN_IP}

# Now add rules to actually use them...
ip rule del from all fwmark 2 2>/dev/null
ip rule del from all fwmark 1 2>/dev/null
ip rule add fwmark 1 table dsl
ip rule add fwmark 2 table sat
ip route flush cache

# We need to allow packet forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

# Finally, make sure the rp_filter option is disabled on the router, otherwise it could drop packets!
for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > "$i"; done

# That's it!

In this script I route traffic based on port number, and I also included one example of how to route traffic for a game (Star Trek Online). That should be enough to get you up and running, with some examples to base your own customizations on.

The next step is to make the router script executable and run it:

# chmod +x /usr/local/bin/router.sh

# /usr/local/bin/router.sh

The last part is to add the script so it gets autoloaded by your system on boot, but that depends on the distribution you are using so consult with Google.

Now you have a semi-decent Internet connection despite being in a remote location and traffic should get routed to the most appropriate connection.

WAL-E on PostgreSQL with AWS S3

I run PostgreSQL on two AWS EC2 instances and have binary streaming replication between them for high availability. Since both VMs are in AWS, it only makes sense to use S3 for archiving and backups. I will assume you already have two working EC2 instances running PostgreSQL with replication and only wish to add WAL-E with S3 into the mix.

Before we get started you will need to make sure you have python pip and python virtualenv installed. I also needed to install zlib but that may have been because I started from a very minimal install.

I will go over the setup once, as my master and slave are configured 100% identically, with the exception that the master has recovery.conf named recovery.conf.use to keep it from becoming a slave.

We will need the AWS CLI tools before we get into WAL-E.

# pip install awscli

You will need an AWS account/IAM user that has permissions on the S3 bucket we will be using. If you haven’t set up an account or an S3 bucket, do that now. Both my EC2 instances and S3 bucket are in the US-West region, so be sure to adjust that if needed. Now we need to create config and credential files for user postgres:

$ sudo -u postgres -i

postgres ~ $ mkdir ~/.aws

postgres ~ $ echo -e "[default]\nregion = us-west-2\noutput = json" > ~/.aws/config

postgres ~ $ chmod 600 ~/.aws/config

postgres ~ $ echo -e "[default]\naws_secret_access_key = <REPLACE WITH YOUR KEY>\naws_access_key_id = <REPLACE WITH YOUR KEY ID>" > ~/.aws/credentials

postgres ~ $ chmod 600 ~/.aws/credentials

Now would be a good time to test your awscli install and your credentials:

postgres ~ $ aws s3 ls <REPLACE WITH YOUR BUCKET NAME>

If you get an error go back and check your credentials as well as your IAM permissions for the user and bucket.

Next we will install WAL-E:

# pip install wal-e

We need to make a directory to hold the configuration files:

# mkdir -p /etc/wal-e.d/env/

# chown root:postgres /etc/wal-e.d

# chmod 750 /etc/wal-e.d

# chown root:postgres /etc/wal-e.d/env

# chmod 750 /etc/wal-e.d/env

Now we need to create the configuration/credential files:

# echo "<REPLACE WITH YOUR KEY ID>" > /etc/wal-e.d/env/AWS_ACCESS_KEY_ID

# echo "us-west-2" > /etc/wal-e.d/env/AWS_REGION

# echo "<REPLACE WITH YOUR ACCESS KEY>" > /etc/wal-e.d/env/AWS_SECRET_ACCESS_KEY

# echo "s3://<REPLACE WITH YOUR S3 bucket URL>" > /etc/wal-e.d/env/WALE_S3_PREFIX

# chown postgres:postgres /etc/wal-e.d/env/*

# chmod 640 /etc/wal-e.d/env/*

Note that the S3 URL prefix must be a lowercase "s3://" or it will fail!

Now is a good place to stop and manually do a basebackup to test the WAL-E install, configuration, and credentials (be sure to adjust the path to match where your database resides on the filesystem):

postgres ~ $ envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.5/main

If everything goes properly it should automatically start a backup, copy the files, and finish. If you encounter any errors be sure to double check your credentials and paths.
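You can also confirm that the base backup actually landed in S3 with WAL-E's backup-list command:

postgres ~ $ envdir /etc/wal-e.d/env wal-e backup-list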

Next we will edit our postgresql.conf to use WAL-E for archiving (archive_mode must also be enabled, or the archive_command will never run):

archive_command = 'envdir /etc/wal-e.d/env /usr/bin/wal-e wal-push %p'
archive_mode = on

Then we will edit our recovery.conf to also use WAL-E:

restore_command = 'envdir /etc/wal-e.d/env /usr/bin/wal-e wal-fetch "%f" "%p"'

At this point we can restart postgresql to make use of the new configuration (be sure to adjust this command if you are using systemd or another init system):

service postgresql-9.5 restart

We also want our basebackups to go to S3 so we’ll create a cronjob for that:

postgres ~ $ crontab -e

Then add a line similar to this (adjust the path to match where your database resides on the filesystem; note that a per-user crontab has no user column):

0 2 * * * envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.5/main

We should be good to go at this point; you can watch your postgresql log file (postmaster.log for me) on both the master and the slave to make sure everything is running smoothly.

If you plan on keeping a lot of archive files but do not need quick access to them, you may want to consider an S3 bucket lifecycle policy that moves things to Glacier after a certain number of days. I have mine configured to move files older than one week onto a cheaper storage tier and then into Glacier after two weeks.

That’s about all there is to it. If you need further reading check out the WAL-E GitHub page, and of course Google is your friend when it comes to error messages!

PostgreSQL on a Raspberry Pi 3 64-bit with Binary Streaming Replication

In my home office I run PostgreSQL with binary streaming replication between two servers. I don’t need streaming replication in my home office, but it is a good way to learn the system, and it’s also nice to be able to switch which server is the master during upgrades. After a hardware failure in my VM server I decided it was really stupid to have both the master and slave on the same physical machine. My database needs are small, as I only use mediawiki, zabbix, and a few other minor things, so I decided to look at using a Raspberry Pi 3 in 64-bit mode as a database server. I was also curious whether PostgreSQL could replicate between two different architectures, aarch64 (ARM 64-bit) and amd64 (x86_64), since they share the same endianness and bitness. I’m always on the lookout for new projects involving Raspberry Pis, Linux, and other related things and thought this might be a fun thing to try. I run Gentoo on all of my Linux machines and I also refuse to use systemd, using OpenRC instead, which means you’ll need to adjust a few commands here and there if you use something else!

There is a Gentoo guide that covers installing on the Raspberry Pi 3 in 64-bit mode and is a separate beast from the normal install guide for other Raspberry Pis due to complications with getting the Pi3 into 64-bit mode.

Once you have your Pi3 up and running, the first thing we need to do is keyword the version of PostgreSQL we want to use by creating a file at /etc/portage/package.keywords/postgresql:

=app-eselect/eselect-postgresql-1.2.1 ~arm64
=dev-db/postgresql-9.5.5 ~arm64

Next we want to select the USE flags we want to incorporate, in a file under /etc/portage/package.use:

dev-db/postgresql nls pam python readline server ssl threads zlib

Gentoo has a great quickstart guide that you should look over before continuing to see if you need any additional settings.

Then install PostgreSQL as you normally would:

emerge -av =dev-db/postgresql-9.5.5

On my setup I have a slightly different datadir than the Gentoo default due to migration(s) from other distros and systems, so I had to edit /etc/conf.d/postgresql-9.5 and change the DATA_DIR line to DATA_DIR="/var/lib/postgresql/9.5/main". If you are doing a fresh install you will likely not need to change this.

Now we need to initialize the database:

emerge --config dev-db/postgresql:9.5

If you need to make changes to the default configs, now is the time to do so. For me I had existing configs to copy over.

For streaming replication I followed the official streaming replication guide and official binary replication guide to get it all up and running on my previous setup.
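For reference, the heart of that configuration is only a few lines on each side (the values here are illustrative; the slave's recovery.conf lives in its data directory, and pg_hba.conf on the master must allow the replication user):

# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3

# recovery.conf on the slave
standby_mode = 'on'
primary_conninfo = 'host=<master IP> port=5432 user=replication'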

Finally, we will set postgresql to start at boot and start the service:

rc-update add postgresql-9.5 default

service postgresql-9.5 start

We can check the status on the master with the following psql command:

postgres=# select * from pg_stat_replication;
  pid  | usesysid |   usename   | application_name |  client_addr   | client_hostname | client_port |         backend_start         | backend_xmin |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
-------+----------+-------------+------------------+----------------+-----------------+-------------+-------------------------------+--------------+-----------+---------------+----------------+----------------+-----------------+---------------+------------
 30872 |    19764 | replication | walreceiver      | 192.168.13.203 |                 |       38556 | 2017-04-12 03:52:50.486721-07 |              | streaming | 15C/A80133C0  | 15C/A80133C0   | 15C/A8012E38   | 15C/A8012E38    |             0 | async
(1 row)

That’s about all there is to it!