
Filesystem data checksumming, pt. II

After my last post on filesystem data checksumming, it took me a while to convince myself to actually set up regular checks of all the (important) files on my filesystems. The "fileserver" is a somewhat older machine, and checksumming ~1.5TB of data takes almost 4 (!) days. Admittedly, the fact that I chose SHA-256 as the hashing algorithm seems to contribute to this long runtime. For a private file server, MD5 would probably have been more than enough.

But I wanted to know if this would really make a difference, so I wrote a small benchmark script testing different programs and different digests on a particular machine. As always, the results will differ greatly from machine to machine - the following results are from this PowerBook of mine:
$ time ./ test.img 30 2>&1 | tee out.log
=> This took 3.5 hours to complete!

$ grep ^TEST out.log | egrep -v 'rhash_benchmark|SKIPPED' | sort -nk7
TEST: coreutils / DIGEST: md5 / 58 seconds over 30 runs
TEST: openssl / DIGEST: sha1 / 64 seconds over 30 runs
TEST: rhash / DIGEST: sha1 / 64 seconds over 30 runs
TEST: openssl / DIGEST: md5 / 75 seconds over 30 runs
TEST: rhash / DIGEST: md5 / 84 seconds over 30 runs
TEST: perl / DIGEST: sha1 / 121 seconds over 30 runs
TEST: rhash / DIGEST: sha224 / 140 seconds over 30 runs
TEST: openssl / DIGEST: sha224 / 141 seconds over 30 runs
TEST: rhash / DIGEST: sha256 / 141 seconds over 30 runs
TEST: openssl / DIGEST: sha256 / 169 seconds over 30 runs
TEST: coreutils / DIGEST: sha1 / 177 seconds over 30 runs
TEST: rhash / DIGEST: ripemd160 / 305 seconds over 30 runs
TEST: openssl / DIGEST: ripemd160 / 447 seconds over 30 runs
TEST: perl / DIGEST: sha256 / 637 seconds over 30 runs
TEST: perl / DIGEST: sha224 / 641 seconds over 30 runs
TEST: coreutils / DIGEST: sha256 / 653 seconds over 30 runs
TEST: coreutils / DIGEST: sha224 / 657 seconds over 30 runs
TEST: perl / DIGEST: sha384 / 660 seconds over 30 runs
TEST: perl / DIGEST: sha512 / 661 seconds over 30 runs
TEST: rhash / DIGEST: sha512 / 693 seconds over 30 runs
TEST: openssl / DIGEST: sha384 / 694 seconds over 30 runs
TEST: rhash / DIGEST: sha384 / 695 seconds over 30 runs
TEST: openssl / DIGEST: sha512 / 696 seconds over 30 runs
TEST: coreutils / DIGEST: sha512 / 1513 seconds over 30 runs
TEST: coreutils / DIGEST: sha384 / 1515 seconds over 30 runs
Two entries stand out here:
  • Originally I used coreutils to calculate a SHA-256 checksum of each file. In the test run above this takes 11 times longer to complete than MD5 would have taken.
  • Even if I decide against MD5 and choose SHA-1 instead, I'd have to switch to openssl because for some reason coreutils takes almost 3 times longer to complete.
The outcome of these tests means that I'll probably switch to MD5 for my data checksums. This also means that I have to 1) re-generate an MD5 checksum for all files and 2) remove the now-obsolete SHA-256 checksum from all files :-\
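For those two steps, a minimal sketch could look like this - assuming the checksums are stored in extended attributes (which the "attaching" in Update 2 below suggests); the attribute names and the path are made up:
$ find /data -type f | while read -r f; do
    setfattr -x user.checksum.sha256 "$f" 2>/dev/null     # drop the obsolete checksum
    setfattr -n user.checksum.md5 -v "$(md5sum < "$f" | awk '{print $1}')" "$f"
done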

Update 1: I omitted cksum and sum from the tests above - as it turns out, they're not necessarily faster than the other checksum tools anyway:
$ n=30
$ for t in md5sum sum cksum openssl\ {md4,md5}; do
    START=$(date +%s)
    for a in `seq 1 $n`; do
        $t test.img > /dev/null
    done
    END=$(date +%s)
    echo "TEST: $t / $(echo $END - $START | bc -l) seconds over $n runs"
done | sed 's/ md/_md/' | sort -nk4
TEST: openssl_md4 / 56 seconds over 30 runs
TEST: md5sum / 58 seconds over 30 runs
TEST: sum / 75 seconds over 30 runs
TEST: openssl_md5 / 76 seconds over 30 runs
TEST: cksum / 78 seconds over 30 runs
But again: these tests would have to be repeated on different systems. It could very well be that cksum really is faster than everything else on another machine - or maybe not :-)

Update 2: And it helped indeed: removing the SHA-256 checksum and calculating & attaching the MD5 checksum on 1.5TB of data (88k files) took "only" 31 hours. That's still a lot, but much shorter than the almost 4 days we had with SHA-256 :-) Also, the next run won't have to remove the old checksums - it only has to do the verification step. What skewed this number even more was the fact that backups were running on the machine while it was re-calculating all the checksums, so hopefully the next run will be even shorter.

A kingdom for a music player, pt. II

For a long time I've been looking for a better music player for the desktop. After a while I got tired of how slow graphical music players became with my ~20k-song library. In the end I returned to the command line and couldn't be happier.

My most-used command for playing music is now:
$ find /mnt/nfs/media/MP3 ! -path "*/MP3/Hoerspiele/*" -type f | \
    sort -R | while read a; do mpg123 -v "$a"; done
Unknown files (e.g. cover pictures) are just skipped, and I can even ^C to skip a song or pause with ^Z. That's all I really wanted :-) It's even possible to skip the intro of a certain radio show:
$ mpg123 -k 2000 [...]
On MacOS X, sort doesn't support "-R", but we can use Perl for this:
$ find /mnt/nfs/media/MP3 -type f | tail -30 | \
    perl -MList::Util=shuffle -e 'print shuffle(<>);' | \
    while read a; do afplay -d "$a"; done

VirtualBox: switching to Host-only networking

There are many ways to provide network connectivity to VirtualBox guests. The most common ones, in short:

  • Network Address Translation (NAT): this is the default mode. VirtualBox will act as a DHCP server, providing guests with internal addresses and connectivity to the outside world. But no routing is provided and thus guests cannot be reached from the outside.
  • Bridged Networking: a virtual NIC is bridged to a physical NIC on the host; guests have full network connectivity and can be reached from the outside world. However, an external DHCP and DNS service may be needed.
  • Internal networking: similar to bridged networking, but guests can only talk to other guests on the same host - not even the host itself can reach them.
  • Host-only networking: a hybrid between bridged and internal networking. Guests can connect to each other, but no real NIC has to be present on the host. DHCP / DNS can be provided by VirtualBox or externally.
For a long time I've just used Bridged networking - it was easy to set up and worked like a charm. Of course, this incurred some administrative overhead: for every VM, a DNS name had to be registered. At home, dnsmasq is running on the router and can provide DHCP & DNS to the guests. With static names and IP addresses for the guests, a simple mapping scheme had to be implemented:

  • When a new VM gets created, modify its MAC address to fall into a certain range and match the last octet to its (future) IP address. E.g. for a guest whose future IP address ends in .31, set its MAC address to 08:00:27:e2:81:31.
  • Add both entries to /etc/ethers and the IP address / hostname mapping to /etc/hosts.
This worked very well for a long time, but it always depended on that dnsmasq installation being around: when connected to a different network, the guests cannot rely on this DHCP & DNS setup. Also, the physical NIC the guest network is bridged to may not be online. Think of laptops, sometimes connecting via Wifi, sometimes via ethernet.

And so I decided to take a look at Host-only networking. First we have to create (and configure) the host-only interface:
$ vboxmanage hostonlyif create 
Interface 'vboxnet0' was successfully created
$ vboxmanage hostonlyif ipconfig vboxnet0 --ip
Disable any VirtualBox DHCP servers:
$ vboxmanage list dhcpservers
$ vboxmanage dhcpserver remove --netname NetworkName
$ vboxmanage list hostonlyifs
Name:            vboxnet0
GUID:            607cb20b-9848-4313-b522-3ccd6cd01be9
DHCP:            Disabled
IPV6Address:     fe80:0000:0000:0000:0800:27ff:fe00:0000
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:00
MediumType:      Ethernet
Status:          Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0
With the virtual NIC in place, we have to configure the guests:
$ vboxmanage showvminfo vm1 | grep NIC\ 1
NIC 1:           MAC: 080027e28131, Attachment: Bridged Interface 'wlan0', ...

$ vboxmanage modifyvm vm1 --nic1 hostonly --hostonlyadapter1 vboxnet0
$ vboxmanage showvminfo vm1 | grep NIC\ 1
NIC 1:           MAC: 080027e28131, Attachment: Host-only Interface 'vboxnet0', ...
Although we could use the internal DHCP server from VirtualBox, it would not be able to implement our elaborate mapping scheme. Let's set up a small, local dnsmasq installation:
$ sudo apt-get install dnsmasq-base
$ tail -n2 /etc/{ethers,hosts}
==> /etc/ethers <==
08:00:27:e2:81:30       vm0
08:00:27:e2:81:31       vm1

==> /etc/hosts <==  vm0  vm1

$ grep ^[a-z] dnsmasq.conf 
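The relevant entries might look something like this minimal sketch (the DHCP range is an assumption and has to match the address configured on vboxnet0):
# only serve the host-only interface
interface=vboxnet0
bind-interfaces
# static MAC -> IP mappings come from /etc/ethers
read-ethers
dhcp-range=,,12h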
Note: we're not using dnsmasq as a DNS server on our host - our virtual machines only need to be reachable from localhost anyway, and we'll just use /etc/hosts. However, we cannot disable the DNS function in dnsmasq (by setting port=0), because then dnsmasq won't send DHCP offers for the matching MAC addresses. I was about to use port=2053 so that dnsmasq could run as a non-root user, but of course dnsmasq still needs to bind to port 67 to act as a DHCP server. Also, with port set to anything other than 53, guests would not be able to refer to other guests by name, because resolv.conf doesn't understand port numbers:
vm1$ dig vm0 -p 2053 @ | grep ^[a-z]
vm0.                  0       IN      A
Almost there. We can now start up the VM and it should get its assigned IP address via DHCP. We should be able to connect to the guest, but from inside the guest we can't seem to reach any destination except the local network. For that to work, we have to enable IP forwarding on the host.

Linux host

# iptables -A FORWARD -i vboxnet0 -s -m conntrack --ctstate NEW -j ACCEPT
# iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# iptables -A POSTROUTING -t nat -j MASQUERADE
# sysctl -qw net.ipv4.ip_forward=1
Now we should be able to connect to the guest with a static DNS name or IP address and we should be able to connect to the outside world from within the guest.

MacOS X host

On MacOS X, the magic commands would be:
# sysctl net.inet.ip.forwarding=1 net.inet.ip.fw.enable=1
net.inet.ip.forwarding: 0 -> 1
net.inet.ip.fw.enable: 1 -> 1

# grep ^net /etc/sysctl.conf
Enable NAT through pf.conf(5):
# grep -B1 nat\  /etc/pf.conf 
rdr-anchor "*"
nat on en1 from to any -> (en1)

# pfctl -f /etc/pf.conf
# pfctl -e
Note: the nat entry must follow the rdr-anchor entry; it cannot just be appended to the end of the file.

Homebrew: GitHub API rate limit exceeded

I'm a big MacPorts fanboy, but since Homebrew has been all the rage for a few years now, I decided to give it another look.
$ mkdir homebrew && cd homebrew
$ curl -L | \
         tar --strip 1 -xzvf -
$ cd .. && sudo mv homebrew /opt/homebrew && sudo chown -R root:wheel /opt/homebrew
$ sudo brew update
Initialized empty Git repository in /opt/homebrew
OK, so far - so good. Let's search for some packages, shall we?
$ brew search foo

$ brew search bar

$ brew search ssh
autossh      git-ssh      libssh       mpssh        pssh         ssh-copy-id  sshrc        sshuttle     tmux-cssh    zssh
csshx        gssh         libssh2      mussh        rssh         sshguard     sshtrix      stormssh     vassh
homebrew/fuse/sshfs                 homebrew/php/php54-ssh2             homebrew/php/php56-ssh2             Caskroom/cask/ssh-tunnel-manager
homebrew/php/php53-ssh2             homebrew/php/php55-ssh2             Caskroom/cask/bassshapes            Caskroom/cask/sshfs
Error: GitHub API rate limit exceeded for (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)
Try again in 59 minutes 32 seconds, or create an personal access token:
and then set it as HOMEBREW_GITHUB_API_TOKEN
Wait, wat? brew is asking the remote repo if a package is available? I've just run brew update:
   update   - Fetch the newest version of Homebrew and all formulae 
              from GitHub using git(1).
But indeed, the search command will perform an online search:
   search  - [...] The search for text is extended online to some popular taps.
Fortunately one can set HOMEBREW_NO_GITHUB_API=1 to stop this madness.
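E.g. to make that permanent for future shell sessions:
$ echo 'export HOMEBREW_NO_GITHUB_API=1' >> ~/.bash_profile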


This is just awesome:
$ sudo apt-get install ttf-ancient-fonts
$ export PS1="\u@\h🍔  "
The font was probably meant to render ancient symbols, but it somehow manages to implement U+1F354 (HAMBURGER) too:
$ printf 🍔 | od -x
0000000 9ff0 948d


Being an Alpine user, I have several rules and filters in place, especially for all those countless mailing lists I'm subscribed to. Specifically, I only want the mails of, say, the last 3 weeks kept in certain mail folders - I don't need the whole archive of lkml stored on my disk. There's a rule in my .pinerc to implement that:
patterns-filters2=LIT:pattern="/NICK=purge_old-threads/AGE=(21,INF)/FLDTYPE=SPEC/FOLDER={localhost\/user=dummy\/tls\/novalidate-cert}INBOX.Misc.lkml,{localhost\/user=dummy\/tls\/novalidate-cert}INBOX.Misc.bugtraq,[...]" action="/FILTER=1"
This is just an excerpt but maybe you get the idea: the filter is called purge_old-threads and it deletes mails older than 21 days. So far, so good.

But the filter stanza is actually quite long and hard to maintain, and it only gets triggered when I actually change into the mail folder and look at its contents. Alpine doesn't do any automagic housekeeping, so when I don't read lkml for a few weeks, the mail folder grows as incoming mails pile up. Then, when I get around to reading lkml again, the filter kicks in and has to crawl through ~20k messages and delete all the older ones, which might take a while to complete.

So, I wanted to know if there's a way to do this without these rather cryptic Alpine rule sets. During my search I came across IMAPExpire, a nice Perl script that uses IMAP::Client. That module looks a lot like Net::IMAP::Client (which has even been packaged for Debian), but it's not the same thing - we really need IMAP::Client here, so we'll install it from CPAN:
$ env | grep PERL
PERL_MB_OPT=--install_base "/home/dummy/.perl5"

$ cpan
cpan[1]> install IMAP::Client
IMAP::Client is up to date (0.13).

cpan[4]> i IMAP::Client 
Module id = IMAP::Client
    CPAN_USERID  CONTEB (Brenden Conte )
    CPAN_FILE    C/CO/CONTEB/IMAP-Client-0.13.tar.gz
    UPLOAD_DATE  2006-09-28
    MANPAGE      IMAP::Client - Advanced manipulation of IMAP services w/ referral support
    INST_FILE    /home/dummy/.perl5/lib/perl5/IMAP/
With that in place, should work now. Don't forget the --test switch when trying this out:
$ cat > ~/.imap-pw

$ ./ --test --user dummy --passfile ~/.imap-pw --age 21 \
        --debug 9 --folders INBOX.Misc.lkml
>> 0001 NOOP
<< 0001 OK NOOP completed.
>> 0002 LOGIN dummy s3cr3tpassw0rd
>> 0003 LIST "" "INBOX.Misc.lkml"
<< * LIST (\HasNoChildren) "." INBOX.Misc.lkml
<< 0003 OK List completed.
TEST  : You're running in test mode, so the deletions wont actually take place
ACTION: Delete mail which arrived before 20-Apr-2015 from: INBOX.Misc.lkml
>> 0004 SELECT "INBOX.Misc.lkml"
<< * FLAGS (\Answered \Flagged \Deleted \Seen \Draft NonJunk)
<< * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft NonJunk \*)] Flags permitted.
<< * 36 EXISTS
<< * 14 RECENT
<< * OK [UNSEEN 1] First unseen.
<< * OK [UIDVALIDITY 1204617147] UIDs valid
<< * OK [UIDNEXT 6464] Predicted next UID
<< * OK [HIGHESTMODSEQ 855] Highest
<< 0004 OK [READ-WRITE] Select completed (0.370 secs).
>> 0005 UID SEARCH BEFORE 3-May-2015
<< * SEARCH 6428 6429 6430 6431
<< 0005 OK Search completed (0.034 secs).
Deleting 4 messages from INBOX.Misc.lkml
This should be put into a script of course, running over every mailing-list folder I'm subscribed to. If we're confident enough that no real email folder will be purged (and our backup restores are working), a cronjob could be created too - see the sketch below :-)
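A minimal sketch of such a wrapper (folder names, paths and the schedule are made up; add whatever connection options your setup needs):
$ cat ~/bin/
#!/bin/sh
# purge mails older than 21 days from all mailing-list folders
for f in INBOX.Misc.lkml INBOX.Misc.bugtraq; do
    ~/bin/ --user dummy --passfile ~/.imap-pw --age 21 --folders "$f"
done

$ crontab -l | grep expire
30 5 * * * $HOME/bin/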


Inspired by httpdiff, the same can be done with curl and diff (in combination with colordiff):
$ diff -u  <(curl -sI \
           <(curl -sIL | colordiff 
--- /dev/fd/63  2015-04-02 16:28:18.000000000 -0700
+++ /dev/fd/62  2015-04-02 16:28:18.000000000 -0700
@@ -1,6 +1,12 @@
-HTTP/1.1 301 Moved Permanently
+HTTP/1.1 200 OK
 Date: Thu, 02 Apr 2015 23:28:18 GMT
 Server: Apache
-Content-Type: text/html; charset=iso-8859-1
+X-Powered-By: PHP/5.4.38
+X-Session-Reinit: true
+X-Blog: Serendipity
+Cache-Control: private, pre-check=0, post-check=0, max-age=0
+Expires: 0
+Pragma: no-cache
+Set-Cookie: s9y_54ff07474dc18d0b1f7=e1; path=/
+Content-Type: text/html; charset=UTF-8

El cheapo dynamic DNS

Ever since DynDNS stopped offering free accounts, I've used FreeDNS to dynamically update two hostnames. However, FreeDNS offers wildcard DNS records only for premium members, and while I could spend $5 per month for their service, I wanted to find out if there are other, free dynamic DNS providers out there.

Believe it or not, DMOZ is still online and has a list of Dynamic DNS Service Providers, but one has to click through every item to find out about the features of each provider. Keywords: free1), wildcard DNS, OS-agnostic update process (ideally a simple update URL via TLS/SSL).

There's also the Best Free Dynamic DNS Services list (last updated in 2014), evaluating quite a few providers, and I used this list (and the DMOZ list) to narrow down my provider of choice. In alphabetical order:

  • ChangeIP offers free dynamic DNS but did not offer wildcard support, IIRC.
  • Apparently CloudFlare is offering (free) dynamic DNS as well - but no, thanks :-\
  • DNSdynamic offers a simple API to update IP addresses, but no word on wildcard DNS support.
  • DNSExit supposedly offers wildcard DNS; I haven't signed up yet to find out if this applies to their dynamic DNS offers too. They also offer a dynamic DNS update URL, which is neat.
  • DtDNS does much more than dynamic DNS and has all the goods too: wildcard DNS, lots of update clients to choose from.
  • DuckDNS looks like a worthy contender: free, wildcard DNS, excellent documentation on how to do IP address updates. And a nice duck, too! :-)
  • duiaDNS offers a free package for personal use, with IPv6 address updates and two sub-subdomains (e.g. {foo,bar} under So, not really a wildcard but sufficient for my needs. Also, while they offer update clients for a fair share of operating systems, one can use a shell script to issue IP address updates.
  • Dynu offers wildcard DNS but I couldn't make out if they support an easy update URL for the IP address updates.
  • looks promising too: it's free as in beer and in speech and uses an update URL for IP address updates, but I could not find any information on whether they support DNS wildcards.
  • Hurricane Electric is a real beast, they offer much more than dynamic DNS and really seem to know what they're doing. However, they only seem to support dynamic DNS for your own domain name and don't seem to offer any placeholder domain names of their own. Too bad, really - but then again HE is not for n00bs who don't have a domain name to spare for their dynamic DNS setup :-)
  • I even tried no-ip once, but one has to update the hostname every 30 days (or it expires), and they charge for wildcard support - although I don't know if they even know what wildcard DNS is: their "purchase a wild card" link points to purchasing a wildcard certificate.
  • System NS offers free dynamic DNS during their beta phase. But they've been in beta since 2013 and nobody knows what happens when the beta phase ends. IP address updates are done via an update URL (but their certificate has been expired since 2014).
  • YDNS looks nice, has IPv6 support too and a simple update URL (and an update client), but there's no wildcard support.
  • Zonomi offers free dynamic DNS, and wildcard DNS too, but I don't know if this applies to their free offer as well. Also, their webserver doesn't really support SSL.
I'll take a closer look at the most promising entries in the next days and will report back with the winner :-)
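Most of these services implement the update as a simple authenticated HTTP GET over TLS. DuckDNS, for example, documents an update URL along these lines (domain and token are placeholders):
$ curl -s ""
OK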

1) Why free? No reason, really. I just wanted to see what's out there, and if a service is good I intend to donate - but I don't like having a contract for a service like this, that's all.

Fun with SSH ControlMaster

So, there was this user, wondering why a different group membership is displayed depending on the host name used in the SSH login process:
$ ssh mallory id; ssh id                 # Both point to the same machine!
uid=1000(dummy) gid=100(users) groups=100(users),16(dialout),33(video)
uid=1000(dummy) gid=100(users) groups=100(users),7(lp),16(dialout),33(video)
Huh? What happened here? After quite some digging, I found the following in the user's ~/.ssh/config:
Host *
        ControlMaster   no
        ControlPath     /home/dummy/tmp/ssh-%r@%h
And sure enough there was an active SSH connection and an active socket file in the ControlPath directory with a timestamp from a few weeks ago:
$ netstat -nl | grep tmp
unix  2      [ ACC ]     STREAM     LISTENING     16314    /home/dummy/tmp/ssh-dummy@mallory.Xnmcb2CghSke46qz
$ ls -l /home/dummy/tmp
srw-------. 1 dummy dummy 0 Jan 02 14:22 ssh-dummy@mallory
The use case for ControlMaster is, in short: establish an SSH connection once, then run subsequent SSH connections to the same host (as the same user) over the already established socket.
And a ControlMaster connection had been established - but only to mallory, not to (even though both addresses point to the same host). With ControlMaster=no set in ~/.ssh/config, new connections will 1) not try to set up a new ControlMaster, but 2) still try to use existing sockets in ControlPath.

And that's exactly what happened: "ssh mallory" used the existing socket, while "ssh" created a completely new connection.

Now, some time after the ControlMaster had been created (after January 2), the group membership of the user changed: the user was added to another group ("lp").

New SSH connections to the host are just that: new - and will therefore see the current state of things. SSH sessions going over the ControlMaster socket, however, are spawned off the already existing SSH process that has been in place since before the group membership changed, and so they have an old view of the system. This can be reproduced quite nicely, using two terminals:
1|dummy@fedora0$ ssh -o ControlMaster=yes localhost

2|dummy@fedora0$ ssh localhost id; ssh id
uid=1000(dummy) gid=1000(dummy) groups=1000(dummy)
uid=1000(dummy) gid=1000(dummy) groups=1000(dummy)
Now let's add dummy to a group and try again:
fedora0# usermod -a -G lp dummy
fedora0# id dummy
uid=1000(dummy) gid=1000(dummy) groups=1000(dummy),7(lp)

2|dummy@fedora0$ ssh localhost id; ssh id
uid=1000(dummy) gid=1000(dummy) groups=1000(dummy)
uid=1000(dummy) gid=1000(dummy) groups=1000(dummy),7(lp)
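By the way: instead of hunting down the process, an existing (and possibly stale) master connection can be checked for and shut down with ssh's -O option:
$ ssh -O check mallory
Master running (pid=1234)
$ ssh -O exit mallory
Exit request sent.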
I still don't know if this is a feature or a bug, but I found it interesting enough to document :-)

Getting rid of serendipity_spamblocklog

This Serendipity installation has a Spamblock feature which logs comment spam to a database table. Over time, this table grew quite a bit, and while I'd like to keep the data around, I don't need it right away. Since the machine the database is running on is low on memory anyway, I wanted to archive and then purge old records from the spamblock logging table.

This is where we are:
$ ls -lhgo serendipity_spamblocklog*
-rw-rw----. 1 8.7K Nov 21 20:28 serendipity_spamblocklog.frm
-rw-rw----. 1 1.2G Jan 25 07:29 serendipity_spamblocklog.MYD
-rw-rw----. 1  41M Jan 25 07:39 serendipity_spamblocklog.MYI

$ for a in {2008..2015}; do
     printf "year: $a       "
     mysql -B -N -D s9y -e "select count(*) from serendipity_spamblocklog \
           where year(from_unixtime(timestamp)) = $a;"
done
year: 2008      12
year: 2009      14901
year: 2010      93232
year: 2011      12332
year: 2012      4373
year: 2013      245002
year: 2014      1232742
year: 2015      131898
Yeah, 2014 was really the year-of-the-spam :-)

Export those into CSV files:
$ for a in {2008..2014}; do
     echo "year: $a"
     mysql -D s9y -e "select * from serendipity_spamblocklog \
        where year(from_unixtime(timestamp)) = $a into outfile \
        \"serendipity_spamblocklog-$a.csv\" fields terminated by ',' enclosed by '\"' \
        lines terminated by '\n';"
done
year: 2008
year: 2009
year: 2010
year: 2011
year: 2012
year: 2013
year: 2014
Which gives us:
$ ls -lhgo
total 1.2G
-r--------. 1 4.7K Jan 25 07:07 serendipity_spamblocklog-2008.csv
-r--------. 1 4.2M Jan 25 07:07 serendipity_spamblocklog-2009.csv
-r--------. 1  91M Jan 25 07:08 serendipity_spamblocklog-2010.csv
-r--------. 1 5.4M Jan 25 07:08 serendipity_spamblocklog-2011.csv
-r--------. 1 6.4M Jan 25 07:09 serendipity_spamblocklog-2012.csv
-r--------. 1 146M Jan 25 07:09 serendipity_spamblocklog-2013.csv
-r--------. 1 860M Jan 25 07:10 serendipity_spamblocklog-2014.csv
To count records, we can't just use "wc -l", because comments may contain newlines as well - so let's count timestamps instead:
$ grep -c '^\"1' *
Delete the exported records:
$ for a in {2008..2014}; do
     echo "year: $a"
     mysql -D s9y -e "delete from serendipity_spamblocklog \
        where year(from_unixtime(timestamp)) = $a;"
done
The size of the database files won't decrease until we run OPTIMIZE TABLE on the table:
$ mysqlcheck --optimize s9y serendipity_spamblocklog
$ ls -lhgo serendipity_spamblocklog.*
-rw-rw----. 1 8.7K Nov 21 20:28 serendipity_spamblocklog.frm
-rw-rw----. 1  88M Jan 25 08:15 serendipity_spamblocklog.MYD
-rw-rw----. 1 3.2M Jan 25 08:15 serendipity_spamblocklog.MYI
And we can still run some stats on the CSV files:
$ awk -F\",\" '/^\"1/ {print $2}' serendipity_spamblocklog-2013.csv | sort | uniq -c | sort -n
    30 "API_ERROR"
   119 "moderate"
 98045 "REJECTED"
146808 "MODERATE"

$ awk -F\",\" '/^\"1/ {print $8}' serendipity_spamblocklog-2014.csv | awk '{print $1}' | sort | uniq -c | sort -n | tail
   251 PHP/5.3.89
   252 PHP/5.3.59
   252 PHP/5.3.94
   261 PHP/5.2.19
   270 PHP/5.2.62
  1125 Opera/9.80
 30848 PHP/5.2.10
509019 Mozilla/4.0
646256 Mozilla/5.0
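And should the archived records ever be needed again, the CSV files can be loaded back with the mirror image of the export statement - a sketch (the path is made up, and the server needs to be able to read the file):
$ mysql -D s9y -e "load data infile '/var/tmp/serendipity_spamblocklog-2014.csv' \
     into table serendipity_spamblocklog fields terminated by ',' enclosed by '\"' \
     lines terminated by '\n';"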

gpg: rejected by import filter

Trying to import a GPG key failed:
$ gpg --verify SHA512SUMS.asc
gpg: Signature made Thu Dec  4 18:03:53 2014 PST using RSA key ID 15A0A4BC
gpg: Can't check signature: public key not found

$ gpg --recv-keys 15A0A4BC
gpg: requesting key 15A0A4BC from hkp server
gpg: key 3A06537A: rejected by import filter
gpg: Total number processed: 1
The thing is, 0x15A0A4BC points to a subkey, and GnuPG v1.4.18 has a problem when only the keyid of the subkey is specified. There are a few ways to tackle that:

Upgrade to at least GnuPG v2.0.26. While the 1.4 branch is said to be maintained, the release notes are not. If you're really using v1.4, make sure that commit d585527 is included, which fixes bug #1680.

If updating GnuPG is not an option, we can also download the key and import it locally:
$ curl
$ gpg --import 
gpg: key 3A06537A: public key "Mozilla Software Releases <>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:  18  signed:  11  trust: 0-, 0q, 0n, 0m, 0f, 18u
gpg: depth: 1  valid:  11  signed:   3  trust: 11-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2015-03-14
Or, now that we know the real keyid, we can delete the stale key and fetch it again:
$ gpg --yes --batch --delete-keys 0x15a0a4bc

$ gpg --recv-keys 3A06537A
$ gpg --verify SHA512SUMS.asc 
gpg: Signature made Thu Dec  4 18:03:53 2014 PST using RSA key ID 15A0A4BC
gpg: Good signature from "Mozilla Software Releases <>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 2B90 598A 745E 992F 315E  22C5 8AB1 3296 3A06 537A
     Subkey fingerprint: 5445 390E F5D0 C2EC FB8A  6201 057C C3EB 15A0 A4BC

SSH & the while loop

Somehow this loop stops after the first element:
$ head -2 hosts

$ head -2 hosts | while read a; do ssh $a "uname -n"; done
The best explanation I could find comes from a forum post*):
> The "cat list.txt" is providing stdin for the entire "while" loop. It is not 
> constrained to only the read statement. ssh is eating all of the input
> and feeding it to "date" as stdin. And date just ignores the input. Use
> the -n option to ssh to stop this from happening.
And indeed, this is working now:
$ head -2 hosts | while read a; do ssh -n $a "uname -n"; done
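Alternatively, redirecting ssh's stdin from /dev/null has the same effect:
$ head -2 hosts | while read a; do ssh $a "uname -n" < /dev/null; done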
Thanks, Perderabo!

*) copied w/o permission

On Password Strength

Password generation may seem trivial these days, as various password generators exist. But I wanted to know if they are any good, and I found two command-line programs that try to examine the strength of a password:
  • cracklib-check from the CrackLib project is a standalone program derived from the pam_cracklib module.

  • An alternative to pam_cracklib is passwdqc from the Openwall Project which also provides standalone programs to generate and check passwords.
Let's run these two programs against the output of a few password generators. For the sake of simplicity, we'll only generate random-looking passwords that are exactly 12 characters long. Nobody should have to type passwords these days anyway, and using a password manager is recommended.
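Both checkers read candidate passwords on stdin and print a verdict for each - something along these lines (a quick illustration; the exact messages may vary with version and dictionaries):
$ echo "CjgR1nC4t9t5" | /usr/sbin/cracklib-check
CjgR1nC4t9t5: OK

$ echo "hunter2" | pwqcheck -1 --multi
Bad passphrase (too short): hunter2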


pwgen (v2.06) is maybe the most popular one, and it's easy to use too. The last version was released in 2007 - let's see if it is any good:
$ time pwgen -s -1 12 100000 | /usr/sbin/cracklib-check | fgrep -c -v ': OK'

real    0m31.722s
user    0m20.072s
sys     0m13.736s
→ We generated 100k passwords in 32 seconds, 20 of them (0.02%) did not pass the cracklib test.
$ time pwgen -s -1 12 100000 | pwqcheck -1 --multi | fgrep -c -v OK:

real    2m42.557s
user    2m43.156s
sys     0m1.668s
→ We generated 100k passwords in 163 seconds, 2731 of them (2.7%) did not pass the pwqcheck test. Clearly, pwqcheck seems to be much stricter. It also takes much longer to check, but this should only matter when checking thousands of passwords, as we just did.


pwqgen (v1.3.0 from 2013) from the passwdqc project has a weird syntax, probably due to the fact that it's mostly used as a PAM module rather than as a standalone program. I couldn't figure out how to generate passwords of exactly 12 characters, and the following use of cut(1) would let passwords shorter than 12 characters slip through:
$ seq 1 100000 | while read a; do pwqgen; done | cut -c-12 | ...
So, for the sake of correctness, let's do this instead (although this takes ~3 times longer to complete):
$ i=0; time while [ $i -lt 100000 ]; do pwqgen | cut -c-12 | egrep -o '^.{12}$' && i=$((i+1)); done \
          | /usr/sbin/cracklib-check | fgrep -c -v ': OK'

real    6m4.756s
user    3m51.396s
sys     1m22.528s
→ We generated 100k passwords in 364 seconds, 9 of them (0.009%) did not pass the cracklib test. Clearly, pwqgen is generating much better passwords than pwgen, according to cracklib. And again with pwqcheck:
$ i=0; time while [ $i -lt 100000 ]; do pwqgen | cut -c-12 | egrep -o '^.{12}$' && i=$((i+1)); done \
          | pwqcheck -1 --multi | fgrep -c -v OK:

real    6m16.043s
user    6m0.292s
sys     1m8.708s
→ We generated 100k passwords in 376 seconds, 29083 of them (29%) did not pass the pwqcheck test. Wow. This even contradicts the finding above: while pwqgen does seem to generate better passwords than pwgen according to cracklib, when checked with pwqcheck, password quality seems to be much lower. Let's attribute that to our cut -c-12 hack and move on to another password generator:


apg (v2.2.3 from 2003) hasn't had a release in over 10 years and is kinda slow, since it's using /dev/random directly, for whatever reason:
$ time apg -a 1 -m 12 -x 12 -n 100000 | /usr/sbin/cracklib-check | fgrep -c -v ': OK'

real    4m28.997s
user    4m56.896s
sys     0m17.712s
→ We generated 100k passwords in 269 seconds, 67 of them (0.067%) did not pass the cracklib test. And again with pwqcheck:
$ time apg -a 1 -m 12 -x 12 -n 100000 | pwqcheck -1 --multi | fgrep -c -v OK:

real    5m15.415s
user    9m30.960s
sys     0m0.420s
→ We generated 100k passwords in 315 seconds, 291 of them (0.29%) did not pass the pwqcheck test.


gpw (v0.0.19940601 from 2006) attempts to produce pronounceable passwords, so our criteria for random passwords won't hold. Let's test it anyway and see what happens:
$ time gpw 100000 12 | /usr/sbin/cracklib-check | fgrep -c -v ': OK'

real    0m28.195s
user    0m19.640s
sys     0m10.756s
→ We generated 100k passwords in 28 seconds, 540 of them (0.54%) did not pass the cracklib test.
$ time gpw 100000 12 | pwqcheck -1 --multi | fgrep -c -v OK:

real    0m1.670s
user    0m1.768s
sys     0m0.016s
Wow - none of the passwords generated by gpw was accepted by pwqcheck! Execution time was very fast, though :-)


makepasswd (v1.10 from 2013) is a Perl program, and a very fast one too:
$ time makepasswd --chars=12 --count=100000 | /usr/sbin/cracklib-check | fgrep -c -v ': OK'

real    0m34.404s
user    0m32.624s
sys     0m13.428s
→ We generated 100k passwords in 34 seconds, 22 of them (0.022%) did not pass the cracklib test.
$ time makepasswd --chars=12 --count=100000 | pwqcheck -1 --multi | fgrep -c -v OK:

real    2m29.020s
user    2m40.328s
sys     0m0.024s
→ We generated 100k passwords in 149 seconds, 12742 of them (12.74%) did not pass the pwqcheck test.

So, in conclusion: use pwqcheck to check passwords, and apg or pwgen for password generation. To always run the generated passwords through a checker, use something like this:
$ pwgen_check() { pwgen "$@" | pwqcheck -1 --multi; }
$ pwgen_check -s 12 10
OK: CjgR1nC4t9t5
OK: iggW9u3hMAnd
OK: E7fAY7fjF5KJ
Bad passphrase (not enough different characters or classes for this length): FexzFoJRxpO5
OK: JnIcezRq39SY
OK: TUmzflKP3npZ
OK: pSkPzf0fHnlw
While the "benchmarks" above grew organically as I discovered more and more password generators, I later wrote a small script combining all of them, generating the following results:
$ time ./ 12 1000000
     pwgen - 148 passwords (0%) failed for cracklib, runtime: 239 seconds.
    pwqgen - 127 passwords (0%) failed for cracklib, runtime: 1605 seconds.
       apg - 635 passwords (0%) failed for cracklib, runtime: 3225 seconds.
       gpw - 5249 passwords (0%) failed for cracklib, runtime: 188 seconds.
makepasswd - 248 passwords (0%) failed for cracklib, runtime: 285 seconds.
   openssl - 175 passwords (0%) failed for cracklib, runtime: 5509 seconds.

     pwgen - 29523 passwords (2.00%) failed for pwqcheck, runtime: 1133 seconds.
    pwqgen - 290042 passwords (29.00%) failed for pwqcheck, runtime: 2248 seconds.
       apg - 3013 passwords (0%) failed for pwqcheck, runtime: 4082 seconds.
       gpw - 1000000 passwords (100.00%) failed for pwqcheck, runtime: 21 seconds.
makepasswd - 128036 passwords (12.00%) failed for pwqcheck, runtime: 1029 seconds.
   openssl - 100438 passwords (10.00%) failed for pwqcheck, runtime: 6417 seconds.

real    433m0.997s
user    352m16.577s
sys     120m52.305s

Fun with Debian DKMS

Running VirtualBox on Debian needs the virtualbox-dkms package installed. DKMS stands for Dynamic Kernel Module Support and is an attempt to build out-of-tree drivers against every installed kernel version, instead of offering multiple package versions of the same driver.

So, virtualbox-dkms was installed and all was good - until a change in the kernel sources broke the VirtualBox build and required a patch against the virtualbox-dkms sources. However, only recent kernel versions were affected; the patched virtualbox-dkms code would not run correctly with an older kernel.

On this box, two kernel versions are installed: linux-image-3.14-2-amd64 and linux-image-3.17.0-rc1+, the latter compiled from vanilla sources. A "dpkg-reconfigure virtualbox-dkms" would build virtualbox-dkms for both kernel versions, but for the reasons explained above, we can't do that now.

Let's rebuild virtualbox-dkms only for the kernel that needs the patched sources:
# rmmod vboxpci vboxnetadp vboxnetflt vboxdrv

# ls -lgo /var/lib/dkms/virtualbox/
total 4
drwxr-xr-x 5 4096 Aug 31 01:49 4.3.14
lrwxrwxrwx 1   26 Aug 31 01:49 kernel-3.14-2-amd64-x86_64 -> 4.3.14/3.14-2-amd64/x86_64
lrwxrwxrwx 1   25 Aug 31 01:37 kernel-3.17.0-rc1+-x86_64 -> 4.3.14/3.17.0-rc1+/x86_64

# dkms remove virtualbox/4.3.14 -k 3.17.0-rc1+/x86_64

# cd /usr/src/virtualbox-4.3.14
# patch -p0 < ~/virtualbox-alloc_netdev.diff
# dkms install virtualbox/4.3.14 -k 3.17.0-rc1+/x86_64
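To verify, dkms status should now report the module as installed for both kernels again - roughly like this (the exact output format varies between DKMS versions):
# dkms status virtualbox
virtualbox, 4.3.14, 3.14-2-amd64, x86_64: installed
virtualbox, 4.3.14, 3.17.0-rc1+, x86_64: installed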
And that should be all there is to it :)

On SSH ciphers, MACs and key exchange algorithms

Inspired by a question on StackExchange about the taxonomy of Ciphers/MACs/Kex available in SSH, I wondered what would be the fastest combination of Ciphers, MACs and KexAlgorithms that OpenSSH has to offer.

I've tested with OpenSSH 6.6 (released 2014-03-14) on a Debian/Jessie system (ThinkPad E431). Initially I ran these tests against an SSH server in a virtual machine, but realized that this server did not support the newer Cipher/MAC/KexAlgorithm combinations, so before running the actual benchmark I had an evaluation script test all working combinations first. Later on I ended up running the performance test against localhost, making the evaluation step obsolete. Still, I decided to keep it around, so that one can perform the benchmark in real-world situations where the remote SSH server is not located on localhost :-)

This OpenSSH version supports 15 different Ciphers, 18 MAC algorithms and 8 Key-Exchange algorithms - that's 2160 combinations to test. The benchmark script goes through the output of the evaluation script and transfers a certain amount of data from local /dev/zero to remote /dev/null. Connecting to localhost was fast, so I opted to transfer 4GB of data.
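The helper scripts aren't reproduced here, but the core of the benchmark boils down to something like the following sketch (ssh -Q needs OpenSSH 6.3+; the transfer size is a parameter, and combinations the server rejects will simply fail):
$ for c in $(ssh -Q cipher); do
    for m in $(ssh -Q mac); do
      for k in $(ssh -Q kex); do
        t0=$(date +%s)
        dd if=/dev/zero bs=1M count=4096 2>/dev/null | \
            ssh -o Ciphers=$c -o MACs=$m -o KexAlgorithms=$k localhost 'cat > /dev/null'
        echo "cipher: $c mac: $m kex: $k - $(( $(date +%s) - t0 )) seconds"
      done
    done
done | tee ssh-performance.log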

Before we get into the details, let's see the top-5 combinations of the results:
cipher: aes192-ctr mac: kex: ecdh-sha2-nistp256 - 6 seconds
cipher: aes192-ctr mac: kex: diffie-hellman-group1-sha1 - 6 seconds
cipher: aes128-ctr mac: kex: ecdh-sha2-nistp384 - 6 seconds
cipher: aes192-ctr mac: kex: ecdh-sha2-nistp384 - 6 seconds
cipher: aes192-ctr mac: kex: diffie-hellman-group-exchange-sha1 - 6 seconds
The UMAC message authentication code was introduced in OpenSSH 4.7 (released 2007-09-04) and is indeed the fastest MAC in this little contest. Looking at the results reveals that there is indeed some variation when it comes to different MAC or Kex choices. Iterating over all ciphers, we calculate the average run time of each combination:
$ for c in `awk '{print $4}' ssh-performance.log | sort | uniq`; do
     printf "cipher: $c  "
     grep -w $c ssh-performance.log | awk '{sum+=$(NF-1); n++} END {print sum/n}'
done | sort -nk3
cipher:  8.8125
cipher:  9.23611
cipher: aes128-ctr  15.6875
cipher: aes192-ctr  15.6944
cipher: aes256-ctr  16.1319
cipher: arcfour     20.2639
cipher: arcfour128  20.3403
cipher: arcfour256  20.5278
cipher: aes128-cbc  21.125
cipher: aes192-cbc  22.4583
cipher:  23.2361
cipher: aes256-cbc  23.9722
cipher: blowfish-cbc  55.6875
cipher: cast128-cbc  59.5139
cipher: 3des-cbc  200.854
So, the fastest cipher across all combinations is one of the AES-GCM modes introduced with OpenSSH 6.2 (released 2013-03-22), while 3des-cbc is indeed the slowest. While the major performance factor is still the choice of the cipher, both MAC and Kex still play a role. As an example, let's look at aes192-ctr: depending on the MAC and Kex chosen, the results range from 6 to 46 seconds:
cipher: aes192-ctr mac: kex: ecdh-sha2-nistp256 - 6 seconds
cipher: aes192-ctr mac: kex: ecdh-sha2-nistp256 - 46 seconds
Let's see how the MAC and Kex choices rank across all (15) ciphers. That is, we calculate the average time for each MAC:
$ for m in `awk '{print $6}' ssh-performance.log | sort | uniq`; do
     printf "mac: $m  "
     grep -w $m ssh-performance.log | awk '{sum+=$(NF-1); n++} END {print sum/n}'
done | sort -nk3
mac:  28.45
mac:  28.8167
mac:  29.8583
mac:  30.1
mac: hmac-sha1-96  33.4417
mac:  33.5167
mac:  33.6333
mac: hmac-sha1  33.7104
mac: hmac-md5-96  33.7792
mac:  33.8167
mac: hmac-md5  33.825
mac:  34.2
mac:  38.2333
mac: hmac-sha2-512  38.2833
mac:  43.775
mac: hmac-ripemd160  43.7792
mac: hmac-sha2-256  44.3792
mac:  44.45
And again for the key exchange algorithms:
$ for k in `awk '{print $8}' ssh-performance.log | sort | uniq`; do
     printf "kex: $k  "
     grep -w $k ssh-performance.log | awk '{sum+=$(NF-1); n++} END {print sum/n}'
done | sort -nk3
kex: ecdh-sha2-nistp256  35.2926
kex: diffie-hellman-group14-sha1  35.3148
kex: diffie-hellman-group1-sha1  35.4296
kex: diffie-hellman-group-exchange-sha256  35.563
kex: ecdh-sha2-nistp521  35.563
kex: diffie-hellman-group-exchange-sha1  35.6926
kex: ecdh-sha2-nistp384  35.8333
kex:  35.8667
The differences for Kex are in the sub-second range here, so even the recently added Curve25519 option would not have much of a performance impact.

So, what do we make of all this? Another StackExchange question suggests that SSH in general holds up pretty well security-wise, and even dismisses the problems with CBC. Assuming all of that is true, what can we do to get the most performance when transferring big files over SSH? Let's look at the defaults again, from ssh_config(5) of OpenSSH 6.6:
Ciphers: aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 [...]
MACs: [...]
KexAlgorithms: ecdh-sha2-nistp256 ecdh-sha2-nistp384 ecdh-sha2-nistp521 [...]
So, according to the results of this little contest, a faster set of defaults could be assembled for a recent version of OpenSSH - keeping in mind when the relevant options were introduced:
  • The GCM ciphers have been implemented with OpenSSH 6.2 (released 2013-03-22).
  • The EtM (Encrypt-then-MAC) modes and 128-bit UMAC variants have only been supported since OpenSSH 6.2 (released 2013-03-22).
  • The KexAlgorithms option has been added with OpenSSH 5.7 (released 2011-01-24).
As always, when it comes to benchmarks: other SSH implementations (e.g. HPN-SSH) or different setups will most certainly return different results. So please test yourself before drawing any conclusions from these results.
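If changing the configured defaults is not an option, a fast combination can also be forced for a single transfer - a sketch, using algorithm names from the results above (both ends have to support them):
$ scp -o Ciphers=aes128-ctr \
      -o \
      -o KexAlgorithms=ecdh-sha2-nistp256 bigfile user@host: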

Update: OpenSSH 6.7 (released 2014-10-06) disables the CBC ciphers by default, because of vulnerabilities (found in 2008) in the way SSH uses CBC.