ping (ICMP) blocked? No problem, enter hping!

When traveling while working, we often find ourselves on networks with certain restrictions. For example, many free WiFi hotspots block ICMP echo requests (ping), which makes it difficult to troubleshoot connectivity problems, figure out where packet loss is occurring, and determine what is reachable and what is not. Enter hping, a TCP/IP utility that can do far more than the name suggests.

First an example using standard ping (ICMP):

ping -c1 www.google.com
PING www.google.com (172.217.14.228) 56(84) bytes of data.

--- www.google.com ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

As we can see in the above output, we never got a reply. Is the network down in between? Is there a firewall blocking our requests? We don't know. So let's take a look at using hping to send a TCP packet, instead of ICMP, to port 80 on a well-known website:

sudo hping -c 1 -S -p 80 www.google.com
Password:
HPING www.google.com (wlp2s0 172.217.14.228): S set, 40 headers + 0 data bytes
len=44 ip=172.217.14.228 ttl=127 id=13972 sport=80 flags=SA seq=0 win=11680 rtt=51.0 ms

--- www.google.com hping statistic ---
1 packets tramitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 51.0/51.0/51.0 ms

Ah-ha! We got a reply! So TCP connections to port 80, at least, are not blocked. We can work with that (hint: this is where having your own custom, configurable OpenVPN server comes in handy: configure it to answer on port 80 and you are back to work).

Breakdown of the options used: as with the standard ping tool, the "-c" flag tells hping to send only N requests; in this example we send one (1). Without this flag, hping (and ping) will run non-stop until you stop it with ctrl-c. The next flag is "-S", which sets the TCP SYN flag, since by default hping does not set it. The "-p" flag tells hping which port to send our request to; in the above example we use port 80, the standard non-SSL web server port. The last option is the hostname or IP address we want to interact with (www.google.com in our example).
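If you want to check several ports in one shot, a small wrapper around those same flags does the trick. This is only a sketch (it assumes hping is installed and that you can sudo): the hping_probe function below just prints the command line unless you pass "run" as a third argument, so you can see what it would do before letting it loose.

```shell
# Build (or optionally run) an hping SYN probe for one TCP port.
hping_probe() {
    cmd="hping -c 1 -S -p $1 $2"
    if [ "$3" = "run" ]; then
        sudo $cmd    # actually send the probe (needs root + hping installed)
    else
        echo "$cmd"  # dry run: just show what would be executed
    fi
}

# Check a few common ports on one host.
for port in 22 80 443; do
    hping_probe "$port" www.google.com
done
```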

I am going to try something new and use a terminal utility named asciinema to record the examples above. Please let me know what you think: Watch a video of the above example using asciinema!

Review: GL.iNet GL-AR750S-Ext Travel Router and EasyTether

I, along with my family, like to travel several times a year. The type of traveling varies – sometimes it is somewhere within a day or two driving distance and other times we take a flight to somewhere more distant and stay longer. This is possible because I am a mostly (99% of the time) remote worker and can work from most locations so long as I have power for my laptop and accessories and high speed Internet. When I am stationary I try to use the local WiFi if possible, but if I am traveling or at a location that does not have suitable WiFi then I will use my Android smart phone (Verizon service) to fill the gap. Using my phone is fine for one device, but what if I need to connect more than one? And when I'm at a hotel, do I really want to trust their WiFi? Well, it turns out the answer to both of these questions is to pair my phone with a specialized router.

First some background: most phone carriers (at least in the US) allow hot-spots on the phone, but they come with a catch: you are limited, per month, in how much bandwidth you are allowed to consume. In Verizon-land it is somehow acceptable to cap this at 15GB per month, which is a bit unrealistic in this day and age. The phone itself has unlimited data, but anything you wish to connect to the built-in hot-spot or otherwise tether to the phone is limited to 15GB per month. One day I got to thinking "so how can I tap into that unlimited side of the phone?" A bit of searching turned up a few options. Out of those options, I would say the top two are: PDANet+, which has been around for many years, and EasyTether, a more recent addition. On the phone side they are both paid apps on the Google Play Store, and while I own both, I have found EasyTether to work with more devices and it also seems to get closer to the phone's full network speed.

What this looks like then: on my Android phone I use a paid app named EasyTether, and on my laptop I use a free piece of software also named EasyTether. The creators of EasyTether provide client-side support for Linux, Windows, Mac, and some specialized devices. I personally have used it with Linux, Windows, and a few specialized routers. EasyTether can connect to the phone via Bluetooth or USB. The USB option is faster and will also trickle charge your phone. The Bluetooth option is good if you are unable to plug your phone in (i.e. you are also using it and having a cord is annoying) but are within 25 or so feet of your client end (i.e. laptop). However, Bluetooth 3 and 4 have a ~25 megabit/second limit, so you will run into a big bottleneck with Bluetooth, while with USB you will probably top out your cell network bandwidth before you hit the 5 gigabit/second limit of USB 3. These limits are of course on paper, and other factors come into play.

My initial thought was to use a Raspberry Pi and tether my phone to it, but that meant the access point would be software-based, which has never worked well in my experience. Then I got to thinking about specialized travel routers. After some searching and reading reviews I found the GL.iNet GL-AR750S-Ext Travel Router, hereafter referred to as the ar750s to keep things easy for me. The ar750s is a dual-band (2.4GHz and 5GHz) AC750-class router, with external antennas, running OpenWRT (Linux-based) firmware, and it has 3 gigabit Ethernet ports. The OpenWRT firmware can be customized in just about any way you could imagine. The ar750s has a built-in OpenVPN client and server, dual flash, external storage (micro-SD), and a bunch of included utilities. One of its features is the ability to connect to an existing WiFi as an extender, act as a broadband router, and the list goes on.

After I ordered the router from Amazon I expected to have to do a bit of hacking to make the pairing work, but it turns out GL.iNet and EasyTether already work together. GL.iNet has a nice EasyTether guide. The only difference for me was that the EasyTether package needed to be a newer (latest) version instead of the quite old one referenced in the guide. If for some reason that guide is no longer available, what you need to do is: extract the EasyTether archive on your computer, find the ipk under "*\ar71xx\generic", and scp it to the router; ssh to the router as root (using whatever password you set in the web interface); run "opkg update" and "opkg install <path/to/where/you/scp'd/the/ipk>". Once you have the ipk installed, run "easytether-usb" to set it up. Then edit /etc/config/network and add "config interface 'wan'", "option ifname 'tap-easytether'", and "option proto 'dhcp'" (each on its own line). Oh, and you will need to have USB debugging enabled on your phone.
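Spelled out, that /etc/config/network addition looks like this:

```
config interface 'wan'
        option ifname 'tap-easytether'
        option proto 'dhcp'
```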

This setup works pretty darn well, but it requires ssh'ing into the router each time you want to bring the connection up or whenever it dies. So I wrote a simple shell script, available from my GitHub. In case that link dies, here it is in all its glory:

#!/bin/sh

# Version
version=0.04

# A simple script that checks for connectivity (including working DNS resolution).
# If there is no connectivity, reset tethering.
#
# Requires easytether-usb to be installed and already set up/working.
# Requires manual (one time) setting of the USB device ID of the tether device (TETHERDEV).

# Find the tethering device with lsusb. Example (Samsung/Verizon kids tablet):
# Bus 001 Device 013: ID 04e8:6860 Samsung Electronics Co., Ltd Galaxy (MTP)
TETHERDEV=04e8:6860
# 04e8:6860 is Samsung's USB identification (both Note 10 and kids tablet)

# Some highly available website to check (www.google.com is backed by *lots* of servers)
CHECKWWW=www.google.com

# How long before we check again?
SLEEPITOFF=60

while true; do
    curl --connect-timeout 10 $CHECKWWW > /dev/null 2>&1

    if [ $? != 0 ]; then
        # No Internet
        echo "Network down, Houston we have a problem!" # FIXME: For debugging
        # Reset the USB device
        usbreset $TETHERDEV
        # Wait a few seconds for the device to be ready again
        sleep 5
        # (re)start easytether-usb to make a connection
        easytether-usb
    else
        # Internet working
        echo "Network up, all good in the hood!" # FIXME: For debugging
    fi
    sleep $SLEEPITOFF
done

I put this file in /usr/local/sbin and make sure it is executable. Then edit /etc/rc.local (just before the 'exit') and add "/usr/local/sbin/fixnet.sh &" so it starts at boot. Be sure to change TETHERDEV to match the USB ID of your phone (found with "lsusb") or it will not work. I use curl instead of ping because ICMP packets were filtered/blocked in my testing.
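For reference, the tail end of /etc/rc.local then looks like this (fixnet.sh being what I happened to name the script above):

```
# start the tether watchdog in the background at boot
/usr/local/sbin/fixnet.sh &

exit 0
```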

Once you have everything up and running you only need to enable EasyTether USB on your phone via the app and plug it into the USB port on the router. The router is very easy to use and once configured I rarely need to do anything other than power it on. Speaking of power, the ar750s runs great from a battery pack. The battery pack that I use to charge my phone on the go will also power the ar750s for several hours.

So that covers mobile data (traveling) and places that have no Internet, but what about when I’m at a hotel or using some other network (be it WiFi or wired) that cannot be trusted? Well, for those incidents, I prefer to have my devices behind another layer of isolation via the ar750s. The ar750s supports being a router via Ethernet as well as acting as a WiFi repeater. If the hotel WiFi is not encrypted (often the case) I like to use OpenVPN whenever possible to close the gap. When I am using the router without my phone, the above script will keep trying to reset USB and connect EasyTether to a non-existent device. This has not caused any problems for me. However, it might be a problem if you try to use the USB port for something else – so be warned!

John The Ripper compile error (Gentoo)

So I was trying to build John The Ripper 1.8.0 but it failed here:

In file included from /usr/include/string.h:494,
                 from c3_fmt.c:20:
In function 'strncpy',
    inlined from 'binary' at c3_fmt.c:173:2:
/usr/include/bits/string_fortified.h:106:10: warning: '__builtin_strncpy' specified bound 128 equals destination size [-Wstringop-truncation]
  106 |   return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
x86_64-pc-linux-gnu-gcc DES_fmt.o DES_std.o DES_bs.o DES_bs_b.o BSDI_fmt.o MD5_fmt.o MD5_std.o BF_fmt.o BF_std.o AFS_fmt.o LM_fmt.o trip_fmt.o dummy.o batch.o bench.o charset.o common.o compiler.o config.o cracker.o crc32.o external.o formats.o getopt.o idle.o inc.o john.o list.o loader.o logger.o math.o memory.o misc.o options.o params.o path.o recovery.o rpp.o rules.o signals.o single.o status.o tty.o wordlist.o unshadow.o unafs.o unique.o c3_fmt.o x86-64.o -Wl,-O1 -Wl,--as-needed -fopenmp -lcrypt -o ../run/john
/usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: john.o: in function `main':
john.c:(.text.startup+0x89): undefined reference to `CPU_detect'
/usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: john.c:(.text.startup+0xc0): undefined reference to `CPU_detect'
/usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: john.c:(.text.startup+0xed): undefined reference to `CPU_detect'
/usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/../../../../x86_64-pc-linux-gnu/bin/ld: john.c:(.text.startup+0x532): undefined reference to `CPU_detect'
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:835: ../run/john] Error 1
make[1]: Leaving directory '/var/tmp/portage/app-crypt/johntheripper-1.8.0/work/john-1.8.0/src'
make: *** [Makefile:184: linux-x86-64] Error 2
make: Leaving directory '/var/tmp/portage/app-crypt/johntheripper-1.8.0/work/john-1.8.0/src'
 * ERROR: app-crypt/johntheripper-1.8.0::gentoo failed (compile phase):
 *   emake failed
 *
 * If you need support, post the output of `emerge --info '=app-crypt/johntheripper-1.8.0::gentoo'`,
 * the complete build log and the output of `emerge -pqv '=app-crypt/johntheripper-1.8.0::gentoo'`.
 * The complete build log is located at '/var/tmp/portage/app-crypt/johntheripper-1.8.0/temp/build.log'.
 * The ebuild environment file is located at '/var/tmp/portage/app-crypt/johntheripper-1.8.0/temp/environment'.
 * Working directory: '/var/tmp/portage/app-crypt/johntheripper-1.8.0/work/john-1.8.0'
 * S: '/var/tmp/portage/app-crypt/johntheripper-1.8.0/work/john-1.8.0'

It turns out to be related to compiler options. Specifically I needed to NOT use -march=native and I needed to have “avx” in my CPU_FLAGS_X86. With those two changes I was able to compile (emerge) without further issues. After it finished compiling I switched my -march flag back to native.
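For reference, the relevant /etc/portage/make.conf lines during the build looked roughly like this. The exact CPU_FLAGS_X86 values are examples for my CPU; use what `cpuid2cpuflags` reports for yours:

```
# temporarily build with a generic -march instead of native
CFLAGS="-O2 -march=x86-64 -pipe"
CPU_FLAGS_X86="aes avx mmx mmxext popcnt sse sse2 sse3 sse4_1 sse4_2 ssse3"
```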

Nested (Nested (Nested SSH) SSH) SSH

There are occasions where I need to reach a server via SSH that is only reachable through multiple bastions. Sometimes this is because of security reasons and other times it is because the machines are on different networks with no direct route. One can of course SSH to the first bastion, then from there to the next, and so forth, but that is annoying to have to type each time. We can do this from the command line as well as in the SSH config.

An example from the command line (for scripting, not typing) using strung together commands:

ssh -t user@host1 ssh -t user@host2 ssh -t user@host3 … ssh user@destination

The '-t' flag tells SSH to allocate a pseudo-terminal on the remote machine. This is required if you intend to run a command, such as SSH itself, that expects to execute in a terminal rather than as a detached/background process. The final SSH command doesn't need the '-t' flag if you are aiming for a remote shell such as bash.

An example from the command line (again, for scripting) using the jump-host flag:

ssh -J user@host1,user@host2,user@host3,… user@destination

Okay so that’s pretty cool, but what if we want to make it a permanent setting in our SSH config? Well, we can do that too by adding these lines to our ~/.ssh/config:

# host1
Host host1
    HostName host1.fqdn
    User user

# host2
Host host2
    HostName host2.fqdn
    User user
    ProxyJump host1

# host3
Host host3
    HostName host3.fqdn
    User user
    ProxyJump host2

# destination
Host destination
    HostName destination.fqdn
    User user
    ProxyJump host3

Now we can use ‘ssh destination’ and SSH will handle the rest for us.

That covers the basics and should give you a glimpse of how chill SSH is with being nested, strung together, and so on.

Using AWS S3 as Primary Storage on Nextcloud

I have been testing/using Nextcloud for the last couple of months in hopes of getting rid of Dropbox, Google Drive, etc. I recently experimented with connecting external storage to it. That's all fine and dandy, but then I wondered: could an external storage be used as the primary storage? A little searching revealed I wasn't the first person to think of that. In fact, it is supported by Nextcloud and documented. To get started, create a bucket with the desired settings and an IAM user that has access to it.
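A minimal IAM policy for that user might look something like the following sketch (the bucket name is a placeholder; if you let Nextcloud autocreate the bucket you would also need s3:CreateBucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-nextcloud-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-nextcloud-bucket/*"
    }
  ]
}
```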

The official Nextcloud documentation gives this example:

'objectstore' => array(
        'class' => 'OC\\Files\\ObjectStore\\S3',
        'arguments' => array(
                'bucket' => 'nextcloud',
                'autocreate' => true,
                'key' => 'EJ39ITYZEUH5BGWDRUFY',
                'secret' => 'M5MrXTRjkyMaxXPe2FRXMTfTfbKEnZCu+7uRTVSj',
                'hostname' => 'example.com',
                'port' => 1234,
                'use_ssl' => true,
                'region' => 'optional',
                // required for some non amazon s3 implementations
                'use_path_style' => true
        ),
),

Based on my experience using AWS S3 as an external storage device, I ended up with this as my config:

'objectstore' => array(
        'class' => 'OC\\Files\\ObjectStore\\S3',
        'arguments' => array(
                'bucket' => '<my_bucket>',
                'key' => '<key>',
                'secret' => '<so_secret>',
                'use_ssl' => true,
                'region' => '<region>',
                // required for some non amazon s3 implementations
                'use_path_style' => true
        ),
),

Specifically, I found it necessary to specify the region (e.g. us-west-2) and SSL, otherwise I got errors.

I have been running this for a few days now and have not seen any issues.

Nextcloud, Docker, and upgrades

I have been running Nextcloud via a Docker image for a few months, and recently a new version of Nextcloud was released. This seemed like the perfect opportunity to test out upgrading to a newer Nextcloud Docker image while keeping my data. Since I mount a volume to keep the configuration data in, it will be a fairly easy upgrade.

The first step is to make sure we have backups and verify their integrity. The Nextcloud backups page details the instructions pretty well, but to cover the basics: you need to back up your data and database at a minimum. I also went ahead and grabbed a copy of config.php by itself and stored it outside of the container. Tip: I didn't initially know which volume store was the right one, so I entered the container with bash and created a temporary file named 'findmehere' that I could search for from the host.

Next we will stop the existing container by issuing 'docker stop <id>', where <id> is the container id listed in the output of 'docker ps'. Then pull the latest image ('docker pull nextcloud') and start a new container using the same command we did the first time. For me this looked like 'docker run -d -v nextcloud:/var/www/html -p 8181:80 nextcloud', but YMMV.

The occ script should detect the new version of Nextcloud and start the upgrade. Check the status by visiting your Nextcloud web page. Since we used a volume to keep our data in we should be all set!

Nextcloud and Docker with Apache Proxy

I decided I wanted to try migrating away from Dropbox, Amazon Drive, and Google Drive to my own server using open source tools. After a bit of research I determined Nextcloud would be the best fit for what I wanted to do right now and some optional features later on. Nextcloud can be installed via packages in the major distributions, but I wanted to use this opportunity to test drive Docker at the same time. One of the reasons I wanted to use a container was so that the installation is mostly isolated from the host install which is good for security purposes but also makes it easier if I want to migrate the whole thing to a different host later on. Now the host I want to use already has a web server, Apache, listening on ports 80 and 443, so we’ll configure it to act as a proxy between the web server in the Nextcloud Docker image and the client. This will also fit well with the SSL certificate the host has.

The first step is getting Docker installed and running. This is pretty easy for most distributions and is covered in detail on the official Docker Community Edition site for Ubuntu, CentOS, and the Gentoo Wiki even has instructions as well.

Once you've got Docker up and running, let's test it out first:

docker run hello-world

This should download the hello-world image and run it.

Now, let's get to Docker. First let me mention that I already have a database (PostgreSQL) available, so I'll skip that step here; if you don't already have a database, now would be the time to pause and get that resolved. Since data is not saved between Docker containers, we need to instruct Docker to create a mount point for the Nextcloud image that will keep our data safe. This is accomplished with the '-v' flag. We also need to tell Docker what port we want to open up, but in my case I don't want it using port 80 or 443, so we'll further instruct Docker to forward the port with the '-p' flag.

docker run -d -v nextcloud:/var/www/html -p 8181:80 nextcloud

In the above command I have selected port 8181 on the host to be forwarded to port 80 in the Nextcloud Docker image. Once the container loads completely you should be able to access it via http://your_ip:8181 and see the setup page, but before we do that let's set up our proxy.

On the host side we will create an Apache vhost with a few lines like this:

<VirtualHost *:80>
    ServerName nextcloud.my_domain.com
    Redirect permanent / https://nextcloud.my_domain.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName nextcloud.my_domain.com

    <Proxy *>
        Require host localhost
        Require all granted
    </Proxy>
    ProxyPass / http://localhost:8181/
    ProxyPassReverse / http://localhost:8181/

    <IfModule mpm_peruser_module>
        ServerEnvironment apache apache
    </IfModule>
    <IfModule mod_headers.c>
        Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
    </IfModule>

    SSLEngine on
    SSLCipherSuite HIGH:!aNULL:!eNULL:!EXP:!LOW:!MD5:!RC4
    SSLCertificateFile /etc/my_certificate.pem
    SSLCertificateKeyFile /etc/my_certificate-private.pem
    SSLCertificateChainFile /etc/my_certificate-full.pem
</VirtualHost>

What this configuration does is force all non-SSL connections to use SSL. Under the SSL configuration it proxies all connections to port 8181 on localhost, where Nextcloud is running. Finally, we use our SSL certificate from the host. Don't forget to set up DNS for your Nextcloud domain!

At this point we should be ready to continue with the setup via the web. Load https://nextcloud.my_domain.com in your web browser and follow the on-screen instructions. One of the pages should detect the proxy setup and ask for additional domain(s) to be configured, so be sure to add nextcloud.my_domain.com to the list in addition to host.my_domain.com:8181 (if desired).
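If you prefer to set the extra domains by hand, the corresponding entry in Nextcloud's config.php looks something like this (domains are the examples from above):

```
'trusted_domains' => array(
    0 => 'nextcloud.my_domain.com',
    1 => 'host.my_domain.com:8181',
),
```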

One final step that I suggest is setting up a cron job on the host (not inside the Nextcloud Docker image) to run Nextcloud's background tasks. I have mine set to run every 15 minutes. In order for this to work we need to install sudo in the container, so first enter the container:

docker exec -it 16765e565e25 bash

Update apt sources:

apt-get update

Install sudo:

apt-get install sudo

Now finally on the host (NOT container) create a crontab with this line:

*/15 * * * * docker exec -d <docker id> /usr/bin/sudo -u www-data /usr/local/bin/php /var/www/html/cron.php

Be sure to replace <docker id> with the real one, which you can find by running 'docker ps' on the host.

At this point you should have a fully functional Nextcloud server!

Client PPtP Connection From A VM

I encountered an issue recently when trying to make a PPtP connection from a Linux VM (as the client) to a remote commercial device or server: the GRE packets were being dropped. The same PPtP credentials worked on another server that is bare metal. This led me to speculate that the issue might be something between the routing devices and the client. After a bit of investigative work with wireshark I discovered the GRE packets were in fact getting to the virtualization host but not to the guest VM. I suspect this issue may be present with other types of virtualization software, but to be clear, this particular VM host is running KVM/QEMU.

It has been a while (read: years) since I've done much with PPtP beyond just using it. Adding a configuration that was working on another server to this particular system, I discovered the connection would not complete, much to my dismay. Looking at what ppp logged to the system log revealed it never got a proper GRE reply. Well, there were a lot of things in the log, but the one that stood out looked like this:

warn[decaps_hdlc:pptp_gre.c:204]: short read (-1): Input/output error

After a bit of Googling and reading the documentation for pptp-client, I decided to re-try the setup on the previously mentioned working system and watch the log closely for further clues. Where the second system was failing, the original system sailed right past and worked fine. My next step was to look at what connections the first system had open, which let me connect what the documentation/Googling had revealed: PPtP uses TCP port 1723 for the control channel and IP protocol 47 (GRE) for the tunnel traffic itself. Watching another attempt on the second system showed the outgoing GRE request but nothing coming back. Repeating the last test while watching for incoming GRE on the host showed that it was being received but not passed on to the guest VM. Looking at my options, I discovered that there is a whole set of kernel modules and a kernel configuration option to allow forwarding of PPtP.

The missing pieces to the puzzle include adding a line to your sysctl.conf:

net.netfilter.nf_conntrack_helper=1

Then loading these kernel modules:

nf_conntrack_proto_gre
nf_nat_proto_gre
nf_conntrack_pptp
nf_nat_pptp
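If your virtualization host uses systemd, the module loading can be made persistent across reboots with a modules-load.d fragment (the file name here is arbitrary):

```
# /etc/modules-load.d/pptp-forward.conf
nf_conntrack_proto_gre
nf_nat_proto_gre
nf_conntrack_pptp
nf_nat_pptp
```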

As soon as these were in place PPtP started working as expected in the guest VM. What started out as a mystery turned out to be a fairly simple solution. While there are probably not a lot of people still using PPtP these days, it is a better alternative to using a proprietary VPN client.

Placing A Buffer Between Your Cell and The World

This might be a familiar problem for some people: I've had the same personal cell phone number for 15+ years. During this time I have used my number for personal, business, personal business, and the list goes on. Over the years the number of telemarketers has increased to the point where it is sometimes multiple calls per day. This has been annoying, but I can usually deal with it by tapping decline on numbers I don't know. However, about a year ago I started getting text/SMS spam, and that is far more irritating to me. When this SMS spam reached multiple messages per day I decided it might be time to get a new phone number, but I didn't want the same problem to reappear. My solution is to make my own answering service, give out that number, and never my cell. This covers the phone calls, but what about texts? I wouldn't want to miss a legitimate message. Well, both aspects can be accomplished by using Twilio.

For those that do not know, Twilio is a programmable phone service with voice, SMS, Fax, and other features. Twilio has an API for several languages including Python, PHP, node.js, C#, Java, and Ruby. I already have a web server so for me it seemed easiest (quickest to setup) to use that to house some PHP and use Twilio to handle my automated voice and SMS messaging service.

So what does the end result look like? People (or automated telemarketers) can call my Twilio phone number and are greeted with a message of my choosing. Since I don’t want the automated calls leaving me messages I have created a phone (tree) menu that requires the caller to enter a specific number (extension) to leave me a message. Then for SMS, I have a PHP script set up that takes the message and sends a copy to my email then autoresponds and tells the sender that I’ll get back to them as soon as possible.

Let's start with the voice part, as that is the more involved piece in this setup. In the Twilio web console, under the section titled "Voice & Fax", I have it set to "Webhook" with a URL pointing to a specific location on my web server, something like https://mydomain.com/twilio/main.php. The contents of main.php are fairly simple:

<?php

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
$from = $_REQUEST['From'];
// email me every number that calls
mail('myemailaddress@gmail.com', 'Call System: call from '.$from, $from."\n", 'From: myemailaddress@gmail.com');
?>
<Response>
    <Say voice="woman" language="en">Hello. You may select from the following options.</Say>
    <Gather numDigits="1" action="main-handler.php" method="POST">
        <Say voice="woman" language="en" loop="1">
            To leave a message for Ron select one.
        </Say>
        <Pause length="15"/>
    </Gather>
</Response>

If the caller selects one they will be sent to main-handler.php, if they select anything else the message replays. In main-handler.php I have:

<?php

// if the caller pressed anything but these digits, send them back
if($_REQUEST['Digits'] != '1') {
    header("Location: main.php");
    die;
}

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>

<Response>
<?php if ($_REQUEST['Digits'] == '1') { ?>
    <Say voice="woman" language="en">Please leave a message for Ron. You may hang up when finished.</Say>
    <Record maxLength="90" transcribe="true" action="ron-recording.php" transcribeCallback="ron-recording-transcribe.php" />
<?php } ?>
</Response>

Once the caller's recording finishes, the flow gets sent to ron-recording.php:

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>
<Response>
    <Say voice="woman" language="en">Thank you for leaving a message for Ron.</Say>
</Response>

If the caller leaves a message, transcription is handled by ron-recording-transcribe.php:

<?php
$from = $_REQUEST['From'];

// email me
mail('myemailaddress@gmail.com', 'Call System: message for Ron from '.$from, $from."\n".$_REQUEST['TranscriptionText']."\n", 'From: myemailaddress@gmail.com');

?>

That covers the voice aspect of my Twilio setup; the last piece is handling SMS. In the Twilio web console under "Messaging" I have it set to Webhook with a URL that looks something like https://mydomain.com/twilio/incomingsms.php. This handles all SMS text messages sent to my Twilio number:

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
// email me
$from = $_REQUEST['From'];
mail('myemailaddress@gmail.com', 'Call System: SMS for Ron from '.$from, $from."\n".$_REQUEST['Body']."\n", 'From: myemailaddress@gmail.com');
?>
<Response>
    <Message>I am busy right now but will try to reply to your message as soon as possible.</Message>
</Response>

When a text is sent to my Twilio number, the contents of the text get sent to my email immediately and a message reading "I am busy right now but will try to reply to your message as soon as possible." is sent to the sender.

Well, that covers my simple Twilio setup for handling voice messages and SMS texts. Hopefully it proves useful in the years to come in reducing the number of telemarketer calls and spam texts that reach my cell phone.

Creating Your Own Encrypted File “Safe”

I often think about, no scratch that – I often worry about what would happen if my laptop was stolen or fell into “evil” hands. I mean there isn’t a lot on any of my machines that could be misused as most things are locked down. My Internet-based accounts such as my Google account require two factor authentication, important files are backed up, etc. However, there are special files, and here I’m specifically thinking about SSH private keys, that should never be out of my control. My solution is fairly simple: create an encrypted file that can be mounted as a loopback device.

The first step is deciding how much space we are going to need, as we cannot directly resize our encrypted file once it is created. If we later need more storage (or less), our only option is to create a new one and copy the contents of the old (mounted) safe into it. I use mine to store my entire ~/.ssh, ~/.gpg, and a few other files, so my needs are fairly small. All of my files together account for less than 100MB, but knowing that I might want to expand later I decided on 1GB.

If we are using ext2/3/4, xfs, or probably a few other filesystems, we can use fallocate to reserve our disk space. I say probably because I know of at least one filesystem it doesn't work on: zfs.

fallocate -l 1G safe.img

The next step is to create an encrypted device on our new blank image:

cryptsetup luksFormat safe.img

During this step you will be prompted for a password, and this is really the only weak spot (bugs notwithstanding) in the entire setup. Make sure your password is long enough to make brute force take unreasonably long, and make sure it cannot be aided by any of the known dictionaries floating around. I made mine 31 characters long because that is long enough to make brute force unprofitable.
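If you would rather use a random passphrase than invent one, something like this works (it assumes the openssl command-line tool is available; any trusted generator will do, and the result should live in a password manager, not a sticky note):

```shell
# Generate a random 32-character passphrase (24 random bytes, base64-encoded).
openssl rand -base64 24
```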

Once the encrypted data is written, we can proceed to opening the device:

cryptsetup open safe.img safe

You will be prompted to enter your password each time you open it so make sure you are using a trusted keyboard (i.e. not wireless).

The next step is to create a filesystem on our new safe:

mkfs.ext4 /dev/mapper/safe

Now, finally, we can mount it and start using it (create the /mnt/safe mount point first if it does not exist):

mount /dev/mapper/safe /mnt/safe

At this point you should be able to add files to your safe as if it were any other mounted device.

Once you are done using your safe, don’t forget to unmount it and close it so that no-one can access it:

umount /mnt/safe

cryptsetup close safe

So now we know how to create, open, and close the device, but what sorts of things are good for storing in there? Well, as previously mentioned, I store my entire ~/.ssh/ directory in my safe. I moved the directory into /mnt/safe/ and then created a symlink from there to ~/.ssh, which allows me to use everything I normally would (ssh, mosh, scp, etc.) without having to reconfigure anything.
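That move-and-symlink step can be wrapped in a tiny helper. This is just a sketch: the paths in the comments are examples, and the safe must already be opened and mounted before running it:

```shell
# Move a directory into the safe and leave a symlink in its place.
relocate_dir() {
    src=$1    # directory to protect, e.g. "$HOME/.ssh"
    safe=$2   # mount point of the opened safe, e.g. /mnt/safe

    mv "$src" "$safe/$(basename "$src")"
    ln -s "$safe/$(basename "$src")" "$src"
}

# Example usage (with the safe mounted): relocate_dir "$HOME/.ssh" /mnt/safe
```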

What to do next is up to you, but I do not trust the quality of the USB thumb drives out there these days, so I opted to keep my safe on my local hard drive and include it in my backup scheme.