Encrypted network block device

While backing up with Crashplan works fine most of the time (and one trusts their zero-knowledge promise), sometimes new software updates, power outages or other unplanned interruptions cause Crashplan to fail and either stop backing up or discard the whole archive and start backing up from scratch, uploading the whole disk again :-\

So yeah, it mostly works, but somehow I'd like to be a bit more in control of things. The easiest thing would be to order some disk space in the cloud and rsync all data off to a remote location - but of course we need to encrypt it first. But how? There are a few solutions I've come across so far (I'm sure there are others), so let's look at them briefly:

  • duplicity uses librsync to upload GnuPG-encrypted parts to the remote destination. I've heard good (and bad) things about it, but the thought of splitting data into small chunks, encrypting them and uploading thousands of small bits of random-looking data sounds cool and a bit frightening at the same time. Especially the restore scenario boggles my mind. I don't want to dismiss this entirely (and may even come back to it later on), but let's look for something saner for now.

  • Attic is a deduplicating backup program written in Python. I haven't actually tried this one either. It seems to support encryption and remote backup destinations, although the mention of FUSE mounts makes me a bit uneasy.

  • Obnam supports encrypted remote backups, again via GnuPG. I gotta check out whether this really works as advertised.

  • Burp uses librsync and supports something called "client side file encryption" - but that turns off "delta differencing", which sounds like it defeats the whole purpose of using librsync in the first place.

  • Rclone supports encrypted backups, but only to some pre-defined storage providers and not to arbitrary SSH-accessible locations.

  • BorgBackup has the coolest name (after Obnam :-)) and supports deduplication, compression and authenticated encryption - almost too good to be true. This should really be the go-to solution for my use case, and if my hand-stitched version doesn't work out, I'll come back to this for sure.

With that, let's see if we can employ a Network Block Device to serve our needs.
As an example, let's install nbd-server on the remote location and set up a disk that we want to serve to our backup client later on:
$ sudo apt-get install nbd-server

$ cd /etc/nbd-server/
$ grep -rv ^\# .
./config:[generic]
./config:       user = nbd
./config:       group = nbd
./config:       listenaddr = localhost
./config:       allowlist = true
./config:       includedir = /etc/nbd-server/conf.d
./conf.d/local.conf:[testdisk]
./conf.d/local.conf:    exportname = /dev/loop1
./conf.d/local.conf:    flush = true
./conf.d/local.conf:    readonly = false
./conf.d/local.conf:    authfile = /etc/nbd-server/allow
./allow:127.0.0.1/32
We will of course serve a real disk later on, but for now a loop device will do:
$ dd if=/dev/zero bs=1M count=10240 | pv | sudo dd of=/var/tmp/test.img
$ sudo losetup -f /var/tmp/test.img
With that, our nbd-server can be started and should listen on localhost only - we'll use SSH port-forwarding later on to connect back to this machine:
$ ss -4lnp | grep nbd
tcp LISTEN  0 10 127.0.0.1:10809 *:* users:(("nbd-server",pid=9249,fd=3))
The client side needs a bit more work. An SSH tunnel of course, but also the nbd kernel module and the nbd-client program. However, I noticed that the nbd-client version that ships with Debian/8.0 contained an undocumented bug that made it impossible to gain write access to the exported block device. And we really do want write access :-) Off to the source, then:
$ sudo apt-get install libglib2.0-dev
$ git clone https://github.com/NetworkBlockDevice/nbd.git nbd-git && cd nbd-git
While the repository appears to be maintained, the build system looks kinda archaic. And we don't want to install almost 200 MB in dependencies for the docbook-utils packages to provide /usr/bin/docbook2man to build man pages. So let's skip all that and build only the actual programs:
$ sed -r '/^make -C (man|systemd)/d' -i autogen.sh
$ sed    '/man\/nbd/d;/systemd\//d'  -i configure.ac

$ ./autogen.sh
$ ./configure --prefix=/opt/nbd --enable-syslog
$ make && sudo make install
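
Before the client can connect, we need the SSH tunnel mentioned earlier, forwarding the server's NBD port to the local machine (user and host names here are placeholders, of course):
$ ssh -fN -L 10809:localhost:10809 backup@remote.example.net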
The client's configuration file format has changed (again), so we'll just pass everything on the command line:
$ sudo modprobe nbd
$ sudo /opt/nbd/sbin/nbd-client -name testdisk localhost 10809 /dev/nbd0 -timeout 30 -persist
On the server side, the connection gets logged too:
nbd_server[9249]: Spawned a child process
nbd_server[9931]: virtstyle ipliteral
nbd_server[9931]: connect from 127.0.0.1, assigned file is /dev/loop1
nbd_server[9931]: Starting to serve
nbd_server[9931]: Size of exported file/device is 10737418240
We can now use /dev/nbd0 as if it were a local disk. We'll create a key, initialize dm-crypt and create a file system:
$ openssl rand 4096 | gpg --armor --symmetric --cipher-algo aes256 --digest-algo sha512 > testdisk-key.asc
$ gpg -d testdisk-key.asc | sudo cryptsetup luksFormat --cipher twofish-cbc-essiv:sha256 \
                  --hash sha256 --key-size 256 --iter-time=5000 /dev/nbd0
gpg: AES256 encrypted data
Enter passphrase: XXXXXXX
gpg: encrypted with 1 passphrase

$ gpg -d testdisk-key.asc | sudo cryptsetup open --type luks /dev/nbd0 testdisk
$ sudo file -Ls /dev/nbd0 /dev/mapper/testdisk
/dev/nbd0:            LUKS encrypted file, ver 1 [twofish, cbc-essiv:sha256, sha256] UUID: 30f41e4...]
/dev/mapper/testdisk: data

$ sudo cryptsetup status testdisk
/dev/mapper/testdisk is active.
  type:    LUKS1
  cipher:  twofish-cbc-essiv:sha256
  keysize: 256 bits
  device:  /dev/nbd0
  offset:  4096 sectors
  size:    20967424 sectors
  mode:    read/write

$ sudo mkfs.xfs -m crc=1,finobt=1 /dev/mapper/testdisk
$ sudo mount -t xfs /dev/mapper/testdisk /mnt/disk/
$ df -h /mnt/disk
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/testdisk   10G   33M   10G   1% /mnt/disk
Deactivate with:
$ sudo umount /mnt/disk 
$ sudo cryptsetup close testdisk
$ sudo pkill -f /opt/nbd/sbin/nbd-client
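
To avoid typing all of this every time, the activation steps could go into a small wrapper script; a rough sketch, with host and key path being assumptions:
#!/bin/sh
# mount-backup.sh - hypothetical wrapper around the steps above
set -e
ssh -fN -L 10809:localhost:10809 backup@remote.example.net    # tunnel to the nbd-server
sudo modprobe nbd
sudo /opt/nbd/sbin/nbd-client -name testdisk localhost 10809 /dev/nbd0 -timeout 30 -persist
gpg -d ~/testdisk-key.asc | sudo cryptsetup open --type luks /dev/nbd0 testdisk
sudo mount -t xfs /dev/mapper/testdisk /mnt/disk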
When mounted, disk speed is of course limited by the client's upload bandwidth, and by CPU speed too (for SSH and dm-crypt). Let's play with this for a while and see how it works out with rsync workloads. Maybe I'll come back to BorgBackup after all :-)

Weird CDROM formats

So, I came across these files:
$ ls -goh
-rw-r--r-- 1 526M Sep 29 12:58 file.bin
-rw-r--r-- 1  478 Sep 29 12:50 file.cue
Does anyone remember cue sheets? Luckily, even today there are tools out there to make sense of these and convert them into something usable:
$ bchunk -v file.bin file.cue file.iso
Reading the CUE file:

Track  1: MODE1/2352    01 00:00:00 (startsect 0 ofs 0)
Track  2: AUDIO     01 22:46:13 (startsect 102463 ofs 240992976)
Track  3: AUDIO     01 25:25:74 (startsect 114449 ofs 269184048)
Track  4: AUDIO     01 28:01:35 (startsect 126110 ofs 296610720)
Track  5: AUDIO     01 31:14:31 (startsect 140581 ofs 330646512)
Track  6: AUDIO     01 34:51:35 (startsect 156860 ofs 368934720)
Track  7: AUDIO     01 37:51:22 (startsect 170347 ofs 400656144)
Track  8: AUDIO     01 41:22:03 (startsect 186153 ofs 437831856)
Track  9: AUDIO     01 44:18:34 (startsect 199384 ofs 468951168)
Track 10: AUDIO     01 46:38:03 (startsect 209853 ofs 493574256)
Track 11: AUDIO     01 49:12:05 (startsect 221405 ofs 520744560)

Writing tracks:

 1: file.iso01.iso
 mmc sectors 0->102462 (102463)
 mmc bytes 0->240992975 (240992976)
 sector data at 16, 2048 bytes per sector
 real data 209844224 bytes
 200/200  MB  [********************] 100 %

 2: file.iso02.cdr
 mmc sectors 102463->114448 (11986)
 mmc bytes 240992976->269184047 (28191072)
 sector data at 0, 2352 bytes per sector
 real data 28191072 bytes
  26/26   MB  [********************] 100 %
 3: file.iso03.cdr
[...]
In this case, we don't care for the audio part of the image, so we could discard all the .cdr files later on and just use the ISO image:
$ ls -goh file.*
-rw-r--r-- 1 526M Sep 29 12:58 file.bin
-rw-r--r-- 1  478 Sep 29 12:50 file.cue
-rw-r--r-- 1 201M Oct 31 16:01 file.iso01.iso

$ sudo mount -t iso9660 -o loop,ro file.iso01.iso /mnt/cdrom
$ ls /mnt/cdrom
AUTORUN.INF  Data  Install  readme.txt  Setup.exe  Splash
Oh, yeah :-)
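By the way, had the audio tracks been worth keeping, bchunk can write them out as WAV files instead of raw CDR data:
$ bchunk -v -w file.bin file.cue file.iso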

Compression benchmarks

Some time has passed since the last compression benchmarks and new contenders entered the race, so let's do another round of benchmarks, shall we?

MacBook Pro 2009

This laptop ships with an Intel Core2 Duo P8700 processor, so these tests may take a while:
$ tar -cf test.tar /usr/share/ 
$ ls -goh test.tar
-rw-r--r--  1    384M Oct  6 08:00 test.tar
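
The compress-test.sh script itself isn't shown here; a hypothetical sketch of what it boils down to (the real script apparently also grew -n/-f/-r options, judging by the PowerBook run below):
#!/bin/bash
# compress-test.sh (sketch, not the original): time each compressor at its
# lowest and highest level plus decompression, printing "name/mode size seconds"
# lines - the format the awk summaries below expect in column 3.
f=$1
TIMEFORMAT=%R                 # make bash's `time` print plain seconds
for p in gzip pigz bzip2 pbzip2 xz lzma zstd pzstd; do
    for l in 1 9; do
        t=$( { time "$p" -$l -c "$f" > "$f.$p"; } 2>&1 )
        echo "$p/${l}c $(wc -c < "$f.$p") $t"
    done
    t=$( { time "$p" -d -c "$f.$p" > /dev/null; } 2>&1 )
    echo "$p/dc - $t"
    rm -f "$f.$p"
done
# (brotli takes -q N instead of -1/-9, glossed over here)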

$ time for i in {1..10}; do ~/bin/compress-test.sh test.tar | tee results_${i}.out; done
[...]
real    2046m5.142s
user    222m1.302s
sys     3m30.933s
So, 10 rounds of compressing and decompressing this tarball took 34 hours to complete. The results break down as follows:
$ for o in 9c 1c dc; do
   for p in gzip pigz bzip2 pbzip2 xz lzma zstd pzstd brotli; do
      awk "/"$p"\/"$o"/ {sum+=\$3} END {print \"$p/$o\t\", sum/10}" results_*.out
   done | sort -nk2; echo
done
pzstd/9c         19.7
zstd/9c          53.4
brotli/9c       234.5
pigz/9c         746.4
pbzip2/9c       764.6
gzip/9c         775.2
lzma/9c        1180.2
bzip2/9c       1563.9
xz/9c          3825

pzstd/1c          2.4
brotli/1c         4.7
zstd/1c           6.1
pigz/1c           6.2
gzip/1c          10.4
pbzip2/1c       752
xz/1c           778.7
lzma/1c         779.5
bzip2/1c       1532.3

pzstd/dc          0.8
zstd/dc           1.8
gzip/dc           2.4
pigz/dc           2.4
brotli/dc         2.9
pbzip2/dc         9.1
lzma/dc          10.2
xz/dc            10.8
bzip2/dc        748

Thinkpad E431

This machine comes with an i7-3632QM CPU and our test tarball is somewhat bigger:
$ tar -cf test.tar /usr/share/locale/ /usr/share/games/quake3/
$ ls -goh test.tar
-rw------- 1 978M Oct  8 22:38 test.tar

$ time for i in {1..10}; do ~/bin/compress-test.sh test.tar | tee results_${i}.out; done
[...]
real	420m39.764s
user	529m13.192s
sys	3m46.148s
After 7 hours, the results are in:
$ for o in 9c 1c dc; do
    for p in gzip pigz bzip2 pbzip2 xz lzma zstd pzstd brotli; do
       awk "/"$p"\/"$o"/ {sum+=\$3} END {print \"$p/$o\t\", sum/10}" results_*.out
    done | sort -nk2; echo
done
pzstd/9c         17.4
pigz/9c          17.5
pbzip2/9c        31.5
zstd/9c          70.4
gzip/9c          84.4
bzip2/9c        145.3
brotli/9c       260
xz/9c           612.4
lzma/9c         622.4

pzstd/1c          3.3
pigz/1c           7.2
brotli/1c         8
zstd/1c          10.2
pbzip2/1c        26
gzip/1c          27.8
bzip2/1c        141.6
lzma/1c         181.5
xz/1c           185.2

pzstd/dc          0.6
zstd/dc           2.1
brotli/dc         4.8
pigz/dc           5
gzip/dc           8
pbzip2/dc         8.8
xz/dc            36.5
lzma/dc          40.2
bzip2/dc         53.3

PowerBook G4

This (older) machine is still running 24/7, so let's see which compressor we should use in the future:
$ tar -cf test.tar /usr/share/doc/gcc-4.9-base/ /usr/share/perl5
$ ls -goh test.tar
-rw-r--r-- 1 41M Oct 15 02:53 test.tar

$ PROGRAMS="gzip bzip2 xz lzma brotli zstd" \
  ~/bin/compress-test.sh -n 10 -f test.tar | tee ~/r.log
$ ~/bin/compress-test.sh -r ~/r.log
### Fastest compressor:
### zstd/1c:      1.90 seconds / 63.300% smaller 
### brotli/1c:    2.20 seconds / 57.900% smaller 
### gzip/1c:      4.80 seconds / 58.800% smaller 
### zstd/9c:     11.30 seconds / 66.000% smaller 
### gzip/9c:     19.00 seconds / 62.500% smaller 
### bzip2/1c:    36.90 seconds / 63.800% smaller 
### lzma/1c:     37.80 seconds / 65.700% smaller 
### xz/1c:       40.20 seconds / 66.000% smaller 
### brotli/9c:   60.50 seconds / 66.800% smaller 
### bzip2/9c:    63.00 seconds / 66.000% smaller 
### xz/9c:      111.90 seconds / 68.000% smaller 
### lzma/9c:    115.90 seconds / 67.700% smaller 

### Smallest size:
### zstd/9c:     11.30 seconds / 66.000% smaller 
### zstd/1c:      1.90 seconds / 63.300% smaller 
### xz/9c:      111.90 seconds / 68.000% smaller 
### xz/1c:       40.20 seconds / 66.000% smaller 
### lzma/9c:    115.90 seconds / 67.700% smaller 
### lzma/1c:     37.80 seconds / 65.700% smaller 
### gzip/9c:     19.00 seconds / 62.500% smaller 
### gzip/1c:      4.80 seconds / 58.800% smaller 
### bzip2/9c:    63.00 seconds / 66.000% smaller 
### bzip2/1c:    36.90 seconds / 63.800% smaller 
### brotli/9c:   60.50 seconds / 66.800% smaller 
### brotli/1c:    2.20 seconds / 57.900% smaller 

### Fastest decompressor:
### zstd/dc:       .80 seconds
### brotli/dc:    1.20 seconds
### gzip/dc:      1.20 seconds
### xz/dc:        1.70 seconds
### lzma/dc:      3.20 seconds
### bzip2/dc:     7.20 seconds

Building NRPE for OpenWRT

In the last article we restored nrpe from backups to a running OpenWRT installation. After another power outage we have to do this again, but let's actually build nrpe this time and only restore its configuration from the backup.

The build process will happen in a VM running Debian/jessie(amd64), so missing utilities or header files will have to be installed via apt-get:
sudo apt-get install autoconf binutils build-essential gawk gettext git libncurses5-dev libssl-dev libz-dev ncurses-term openssl sharutils subversion unzip
We'll check out the source and switch to the v15.05.1 branch, because we'll need to build for the release that's currently running on the router. Since OpenWrt switched to musl last year, we cannot build trunk as the running Chaos Calmer is still linked against uClibc.
git clone https://github.com/openwrt/openwrt.git openwrt-git
cd $_
git checkout -b local v15.05.1
Fetch an appropriate .config and enter the configuration menu:
wget https://downloads.openwrt.org/chaos_calmer/15.05.1/ar71xx/generic/config.diff -O .config
make defconfig
make menuconfig
Here, we'll select our target profile and disable the SDK:
  • Target Profile => NETGEAR WNDR3700/WNDR3800/WNDRMAC
  • [_] Build the OpenWrt SDK (disabled)
Let's also disable all modular packages from the build and run the prerequisite check to verify that the configuration is still valid:
sed 's/=m$/=n/' -i.bak .config
make prereq
With that, we're ready to build and install the toolchain:
script -c "time make -j4 V=s tools/install && date && time make -j4 V=s toolchain/install" ~/build.log 
This will need some time (and disk space) to complete. Once it's done (check the build.log!), we can finally build our packages:
tar -C package/network/utils/ -xf ~/nrpe.tar
tar -C package/network/utils/ -xf ~/monitoring-plugins.tar
make oldconfig
script -c "time make -j4 V=s package/nrpe/compile" -a ~/p.log
script -c "time make -j4 V=s package/monitoring-plugins/compile" -a ~/p.log
When everything is built correctly, we should have two package files:
$ find . -type f -name "[nm]*.ipk"  | xargs ls -goh
-rw-r--r-- 1 703K Oct  2 18:50 ./bin/ar71xx/packages/base/monitoring-plugins_2.1.2-1_ar71xx.ipk
-rw-r--r-- 1  23K Oct  2 18:43 ./bin/ar71xx/packages/base/nrpe_3.0.1-1_ar71xx.ipk

$ file build_dir/target-mips*/*/src/nrpe
build_dir/target-mips_34kc_uClibc-0.9.33.2/nrpe-3.0.1/src/nrpe: ELF 32-bit MSB executable, MIPS, MIPS32 rel2 version 1, dynamically linked, interpreter /lib/ld-uClibc.so.0, not stripped
The installation should automatically install any dependencies, if needed:
router$ opkg install ./*.ipk
Installing monitoring-plugins (2.1.2-1) to root...
Installing nrpe (3.0.1-1) to root...

router$ /etc/init.d/nrpe enable
router$ /etc/init.d/nrpe start

router$ netstat -lnp | grep 5666
tcp 0 0 192.168.0.2:5666 0.0.0.0:* LISTEN 6771/nrpe
This was the easy part. The difficult part will be to get both packages upstream :-)

/bin/ls --wtf

So, I noticed this:
$ env -i /bin/bash                 # Clear the environment
$ touch foo bar\ baz               # Creates two files, "foo" 
                                   # and "bar baz"
$ ls -1
'bar baz'
foo
Why is ls(1) suddenly quoting filenames that contain spaces? After a bit of digging it turns out an upstream commit introduced this change into GNU/coreutils, but at least Debian is on the case and fixed it in their version:
$ ls
bar baz
foo

$ ls --quoting-style=shell
'bar baz'
foo
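
Newer coreutils versions reportedly also honor a QUOTING_STYLE environment variable, so the old behaviour can be restored without patches or aliases:
$ export QUOTING_STYLE=literal
$ ls
bar baz
foo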

Mediawiki Upgrade

Upgrading Mediawiki through Git seemed like a cool idea and worked quite well for a long time. But since Mediawiki 1.25 the update process changed considerably and just wasn't fun any more. As updates are a rare occurrence anyway, I decided to switch back to tarballs instead. Let's try this, for Mediawiki 1.27:

 curl https://www.mediawiki.org/keys/keys.txt | gpg --import
 wget https://releases.wikimedia.org/mediawiki/1.27/mediawiki-1.27.1.tar.gz{,.sig}
 gpg --verify mediawiki-1.27.1.tar.gz.sig
 
 export DOCROOT=/var/www/
 cd $DOCROOT/mediawiki
 tar --strip-components=1 -xzf ~/mediawiki-1.27.1.tar.gz
Perform the necessary (database) updates:
 cd $DOCROOT/mediawiki
 script -a -c "date; php maintenance/update.php --conf `pwd`/LocalSettings.php" ~/mwupdate.log 
While we're at it, re-generate the sitemap:
 cd $DOCROOT/mediawiki
 mkdir -p sitemap && chmod 0770 sitemap && sudo chgrp www-data sitemap
 sudo -u www-data MW_INSTALL_PATH=`pwd` php maintenance/generateSitemap.php \
     --conf `pwd`/LocalSettings.php --fspath `pwd`/sitemap --server https://www.example.net \
     --urlpath https://www.example.net/mediawiki/sitemap --skip-redirects
Remove/disable clutter:
 cd $DOCROOT/mediawiki
 rm -rf COPYING CREDITS FAQ HISTORY INSTALL README RELEASE-NOTES-1.27 UPGRADE
 chmod 0 docs maintenance tests
 sudo touch {cache,images}/index.html
Don't forget to upgrade the extensions as well:
 cd ../piwik-mediawiki-extension-git
 git checkout master && git pull && git clean -dfx
 git archive --prefix=piwik-mediawiki-extension/ --format=tar HEAD | tar -C $DOCROOT/mediawiki/extensions/ -xvf -
  
 cd ../MobileFrontend-git
 git checkout master && git pull && git clean -dfx
 git archive --prefix=MobileFrontend/ --format=tar origin/REL1_27  | tar -C $DOCROOT/mediawiki/extensions/ -xvf -
And with that, the new version should be online :-)
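A quick sanity check against Special:Version should confirm which version is live (URL assumed from the sitemap step above):
 curl -s 'https://www.example.net/mediawiki/index.php?title=Special:Version' | grep -io 'mediawiki 1\.[0-9.]*'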

Installing NRPE in OpenWRT

With at least OpenWRT 15.05, the NRPE package appears to be unmaintained. We should really build the package ourselves, but before we do that, let's install an older version from our backups. For example:
$ ( cd ../backup/router/ && find . -name "*nrpe*" -o -name "check_*" | xargs tar -cf - ) | \
    ssh router "tar -C / -xvf -"
This should restore the NRPE binary, its configuration files and init scripts and all the check_* monitoring plugins. Did I mention that backups are important? :-)
With that, we're almost there:
 $ ldd /usr/sbin/nrpe
        libssl.so.1.0.0 => not found
        libcrypto.so.1.0.0 => not found
        libwrap.so.0 => not found
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x77a64000)
        libc.so.0 => /lib/libc.so.0 (0x779f7000)
        ld-uClibc.so.0 => /lib/ld-uClibc.so.0 (0x77a88000)
Let's install the dependencies:
opkg install libopenssl libwrap
Add the nagios user:
echo 'nagios:x:50:' >> /etc/group
echo 'nagios:x:50:50:nagios:/var/run/nagios:/bin/false' >> /etc/passwd
echo 'nagios::16874:0:99999:7:::' >> /etc/shadow
Configure nrpe:
 $ grep ^[a-z] /etc/nrpe.cfg
 pid_file=/var/run/nrpe.pid
 server_port=5666
 server_address=192.168.0.1
 nrpe_user=nagios
 nrpe_group=nagios
 allowed_hosts=192.168.0.10,192.168.0.11
 dont_blame_nrpe=0
 debug=0
 command_timeout=60
 connection_timeout=300
 
 command[check_dummy]=/usr/libexec/nagios/check_dummy 0
 command[check_dns]=/usr/libexec/nagios/check_dns -H test.example.net -s localhost -w 0.1 -c 0.5
 command[check_entropy]=/root/bin/check_entropy.sh -w 1024 -c 512
 command[check_http]=/usr/libexec/nagios/check_http -H localhost -w 0.1 -c 0.5
 command[check_load]=/usr/libexec/nagios/check_load -w 4,3,2 -c 5,4,3
 command[check_ntp_time]=/usr/libexec/nagios/check_ntp_time -H 0.openwrt.pool.ntp.org -w 0.5 -c 1.0
 command[check_ssh]=/usr/libexec/nagios/check_ssh -4 router
 command[check_softwareupdate_opkg]=/root/bin/check_softwareupdate.sh opkg
 command[check_users]=/usr/libexec/nagios/check_users -w 3 -c 5
Let's try to start it, and enable it if it works:
 $ /etc/init.d/nrpe start
 $ ps | grep nrp[e]
 5320 nagios    2908 S    /usr/sbin/nrpe -c /etc/nrpe.cfg -d
 
 $ /etc/init.d/nrpe enable
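
From one of the allowed monitoring hosts, a quick test should report the NRPE version (the check_nrpe plugin path is distribution-dependent):
 monitor$ /usr/lib/nagios/plugins/check_nrpe -H 192.168.0.1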
And that's about it. Of course: since we're using an outdated NRPE version, we won't receive any (security) updates - so this setup should only be used in a trusted environment, i.e. not over the internet.

gpgkeys: HTTP fetch error 60: SSL certificate problem: Invalid certificate chain

After installing GnuPG from Homebrew, gpg was unable to connect to one of its key servers:
$ gpg --refresh-keys
gpg: refreshing 47 keys from hkps://hkps.pool.sks-keyservers.net
gpgkeys: HTTP fetch error 60: SSL certificate problem: Invalid certificate chain
[...]
The trick was to install their root certificate and mark it "trusted":
$ wget https://sks-keyservers.net/sks-keyservers.netCA.pem
$ open sks-keyservers.netCA.pem
	=> Trust always
Now the operation was able to complete:
$ gpg --refresh-keys
[...]
gpg: Total number processed: 47
gpg:              unchanged: 19
gpg:           new user IDs: 5
gpg:            new subkeys: 4
gpg:         new signatures: 1698
gpg:     signatures cleaned: 2
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:  19  signed:  12  trust: 0-, 0q, 0n, 0m, 0f, 19u
gpg: depth: 1  valid:  12  signed:   4  trust: 12-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2018-08-19
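
Alternatively, instead of trusting the certificate system-wide, gpg can be pointed at the CA file directly. With GnuPG 1.x/2.0, the keyserver helpers honor these options in ~/.gnupg/gpg.conf (the .pem path is of course local):
keyserver hkps://hkps.pool.sks-keyservers.net
keyserver-options ca-cert-file=/path/to/sks-keyservers.netCA.pem
With GnuPG 2.1 and later, the equivalent would be the hkp-cacert option in dirmngr.conf.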

MacOS Gatekeeper: Verifying...

There's VLC installed on this Mac via Homebrew Cask and every time VLC starts up, the dreaded Verifying... progress bar comes up:
VLC verifying...
Now, this message is of course generated by MacOS Gatekeeper, trying to do its job. Eventually the verification completes and VLC is started - but the process repeats every time VLC starts! And it's only happening for VLC; it doesn't appear for other applications installed with Homebrew Cask.

Fortunately, there's an easy workaround to stop that behaviour - we need to remove the com.apple.quarantine extended attribute:
$ xattr -l /Applications/BrewBundle/VLC.app
com.apple.quarantine: 0002;5123a312;Safari;4CC444EB-4444-44A4-4C44-4B444FBC4444

$ sudo xattr -d com.apple.quarantine /Applications/BrewBundle/VLC.app
Now VLC can be started w/o the verification delay :-)

XFS: Corruption warning: Metadata has LSN ahead of current LSN

This just happened again on a different machine, right after running xfs_repair:
$ sudo xfs_repair /dev/mmcblk0
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 0
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

$ echo $?
0

$ sudo mount -t xfs /dev/mmcblk0 /mnt/disk
mount: wrong fs type, bad option, bad superblock on /dev/mmcblk0,

$ sudo dmesg -t | tail 
XFS (mmcblk0): Mounting V5 Filesystem
XFS (mmcblk0): Corruption warning: Metadata has LSN (20:50596) ahead of current LSN (1:2). Please unmount and run xfs_repair (>= v4.3) to resolve.
XFS (mmcblk0): log mount/recovery failed: error -22
XFS (mmcblk0): log mount failed
What happened here? Apparently, with the introduction of the XFS v5 superblock format, the userspace tools (xfsprogs) changed as well.

And so it happened that xfs_repair version 3.2.1 tried to check an XFS file system that had already enabled its v5 superblock format. But that version is too old to handle v5 superblocks and left the file system in a corrupt state.

Luckily it's easy to fix:
 > Kernel v4.4 and later detects an XFS log problem which is only fixed by
 > xfsprogs v4.3 or later. If you have encountered the inability to mount an
 > xfs filesystem, please update to this version of xfsprogs and run
 > xfs_repair against the filesystem.
And indeed:
$ /opt/xfsprogs/sbin/xfs_repair -V
xfs_repair version 4.5.0

$ sudo /opt/xfsprogs/sbin/xfs_repair /dev/mmcblk0
[...]
Phase 7 - verify and correct link counts...
Maximum metadata LSN (20:50596) is ahead of log (1:2).
Format log to cycle 23.
done

$ sudo mount -t xfs /dev/mmcblk0 /mnt/disk
$ mount | tail -1
/dev/mmcblk0 on /mnt/disk type xfs (rw,relatime,attr2,discard,inode64,noquota)
Phew! :-)

Character collation

So, recently I came across this funny behaviour on a SLES11sp4 machine:
sles11$ netstat -ni | awk '/^[a-z]/' 
Kernel Interface table
Iface   MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0   1500   0     3562      0      0      0     1955      0      0      0 BMRU
lo    16436   0       20      0      0      0       20      0      0      0 LRU
Wait, what? Why is the (uppercase) string "Kernel" matched by the lowercase "[a-z]" search expression? The same command on a SLES12sp1 machine does the Right Thing:
sles12$ netstat -ni | awk '/^[a-z]/' 
eth0   1500   0      685      0      0      0      438      0      0      0 BMRU
lo    65536   0       12      0      0      0       12      0      0      0 LRU
Apparently, this is not an unknown problem and can indeed be fixed by providing another LC_COLLATE variable:
$ netstat -ni | LC_COLLATE=C awk '/^[a-z]/' 
eth0   1500   0     3711      0      0      0     2032      0      0      0 BMRU
lo    16436   0       20      0      0      0       20      0      0      0 LRU
While providing a different LC_COLLATE variable did help, this still smells like a bug in SLES11, as the configured locales were exactly the same:
sles11$ locale 
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

sles11$ locale -k LC_COLLATE
collate-nrules=4
collate-rulesets=""
collate-symb-hash-sizemb=2039
collate-codeset="UTF-8"

sles11$ locale | md5sum 
677d9b3dbdf9759c8b604f294accd102  -

sles12$ locale | md5sum 
677d9b3dbdf9759c8b604f294accd102  -
Interestingly enough, both installations differ greatly in the way they look up locale information:
sles11$ echo | strace -e open awk '/^[a-z]/' 
open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib64/libdl.so.2", O_RDONLY)     = 3
open("/lib64/libm.so.6", O_RDONLY)      = 3
open("/lib64/libc.so.6", O_RDONLY)      = 3
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3



sles12$ echo | strace -e open awk '/^[a-z]/' 2>&1 | grep -v ENOENT
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libm.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/en_US.utf8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3
open("/usr/lib/locale/en_US.utf8/LC_COLLATE", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/en_US.utf8/LC_MESSAGES", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/en_US.utf8/LC_NUMERIC", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/en_US.utf8/LC_TIME", O_RDONLY|O_CLOEXEC) = 3
open("/dev/null", O_RDWR)               = 3
+++ exited with 0 +++
Alas, no bug has been reported yet :-\

While this appears to be documented behaviour, it's still very confusing and may even violate the Principle of Least Surprise. FWIW, GNU/grep behaves as expected on both systems, no matter the collation:
$ echo Abc | egrep --color '[[:lower:]]'
Abc
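
The same trick works for awk, so a character class instead of a range should be the portable fix here:
$ echo Kernel | awk '/^[a-z]/'          # matches on the affected SLES11 box
Kernel
$ echo Kernel | awk '/^[[:lower:]]/'    # no match anywhere, "K" is never lowercase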

PS: I forgot to mention how cool SUSE Studio is - this SLE12 test VM was up & running in minutes and accessible via SSH too and I didn't even have to fire up my local VirtualBox instance! :-)

umask & symbolic links on MacOS X

This just annoyed me again:
$ umask 0022
$ touch foo
$ umask 0066
$ ln -s foo bar

$ ls -lgo foo bar
-rw-r--r--  1   0 Mar  9 14:17 foo
lrwx--x--x  1   3 Mar  9 14:17 bar -> foo

$ sudo -u nobody cat foo bar
$ 
OK, this seems to work (the permissions are checked on the target, not the symlink), but not so with directories:
$ umask 0022
$ mkdir -p foo/dir
$ umask 0066
$ ln -s foo bar

$ ls -ldgo foo bar
drwxr-xr-x  3   102 Mar  9 15:02 foo
lrwx--x--x  1     3 Mar  9 15:03 bar -> foo

$ sudo -u nobody ls -l bar
ls: bar: Permission denied
lrwx--x--x  1 admin  wheel  3 Mar  9 14:23 bar
Interestingly enough, it works if we append a slash to the symlink:
$ sudo -u nobody ls -lgo bar/
total 0
drwxr-xr-x  2  68 Mar  9 14:24 dir
This is annoying when a user has a more stringent umask for normal use, but temporarily elevates their privileges to install software without adjusting the umask first. To clean up the mess afterwards, we can re-create the affected symbolic links:
$ umask 0022
$ find . -type l ! -perm -g+r | while read l; do
   target=$(readlink "$l") && rm -f "$l" && ln -svf "$target" "$l"
done
./bar -> foo

$ ls -ld foo bar
drwxr-xr-x  4 admin  wheel  136 Mar  9 14:37 foo
lrwxr-xr-x  1 admin  wheel    3 Mar  9 14:38 bar -> foo
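
Alternatively, BSD chmod on MacOS can change the mode of a symlink itself via -h, so the links wouldn't have to be re-created:
$ chmod -h 0755 bar     # acts on the link, not on its target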
Note: this has been seen in MacOS 10.10.5 on a Journaled HFS+ file system.

OpenBSD & CVS

Every now and then I start up my OpenBSD VM to see how things are in BSD-land. And of course, after the VM has been asleep for a few months, I'd like to update the system too. As OpenBSD still uses CVS to manage their source repositories (for various reasons), we may have no other choice but to use it:
$ cd /usr/src/
$ time cvs -q up -rOPENBSD_5_8 -Pd
[...]
U usr.sbin/zic/zic.8
U usr.sbin/zic/zic.c
P usr.sbin/ztsscale/ztsscale.c
  158m51.96s real     0m16.85s user     7m34.07s system
The tree is about 780 MB in size, and the update took 2.6 hours to complete. And we haven't even started the build yet. Wat?

There's an unofficial Git tree for openbsd-src, but before we resort to that, let's try the recommended alternative, CVSync.

Let's look at the available repositories first:
$ cvsync cvsync://anoncvs.usa.openbsd.org/
Name: openbsd, Release: rcs
 Comment: OpenBSD CVS Repository
Name: openbsd-cvsroot, Release: rcs
Name: openbsd-ports, Release: rcs
Name: openbsd-src, Release: rcs
Name: openbsd-www, Release: rcs
Name: openbsd-x11, Release: rcs
Name: openbsd-xf4, Release: rcs
Name: openbsd-xenocara, Release: rcs
We're just interested in openbsd-src for now:
$ sudo mkdir -m0775 /cvs && sudo chgrp wsrc /cvs       # We're not using doas yet.
$ cat /etc/cvsync_openbsd.conf
config {
       hostname anoncvs.usa.openbsd.org
       base-prefix /cvs

       collection {
               name openbsd-src release rcs
               umask 002
       }
}

$ cvsync -c /etc/cvsync_openbsd.conf 
The initial sync took well over 3 hours to complete, but successive runs tend to complete in a few minutes, much less than updating with plain cvs.
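
Since successive runs are that cheap, the mirror can be kept up-to-date from cron, along these lines:
30 4 * * *    cvsync -c /etc/cvsync_openbsd.conf >/dev/null 2>&1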

However, the result isn't directly usable yet:
$ ls -1 /cvs/src/sys/arch/`uname -m`/conf          
Attic
GENERIC,v
GENERIC.MP,v
Makefile.i386,v
RAMDISK,v
RAMDISK_CD,v
files.i386,v
ld.script,v
No: we have to check out a local copy first, before we can start using it:
$ cd /usr/src
$ cvs -d /cvs checkout -P src
$ cvs -d /cvs up -Pd
Only now are we able to actually update the system. For comparison, the Git checkout was quick and so much less painful:
$ time git clone https://bitbucket.org/braindamaged/openbsd-src.git openbsd-src-git
[...]
real    12m57.329s
user    4m5.468s
sys     0m54.316s

$ cd $_
$ ls -1 sys/arch/`uname -m`/conf
files.i386
GENERIC
GENERIC.MP
ld.script
Makefile.i386
RAMDISK
RAMDISK_CD

Vacation pictures

The holidays are over and I had to dig through heaps of vacation pictures, wanting to create a little photo gallery for my fellow relatives to click through. After past experiments with Zenphoto and Piwigo, I wanted to switch to a much simpler solution: one that wouldn't require a database backend and maybe wouldn't break after a few update cycles.

Looking at static image gallery generators, I decided to try llgal again. The command line switches are harder to remember than tar's, but here we go:
llgal --www --sort revtime --ct %Y-%m-%d -a -d . -k --title "Pictures of Foo"
This will process pictures in the current directory, with the following options:
--www           make all llgal files world-readable
--sort revtime  sort pictures in reverse-mtime (oldest pictures on top)
--ct %Y-%m-%d   use image timestamps as captions, YYYY-mm-dd
-a              write image sizes under thumbnails on index page
-d              operate in directory <dir>
-k              use the image captions for the HTML slide titles
--title         title of the index of the gallery
So far, so good. But some obstacles had to be tackled first:
  • Each picture on the camera was ~3-5 MB and I didn't want to upload such large files to the gallery. So I resized the pictures with some photo program (not with GraphicsMagick), but that mangled the files' mtimes. GNU/touch was able to fix this.
  • The pictures were taken with two cameras. Unfortunately, one of the cameras had its system time off by two hours - this had to be fixed as well.
As all the pictures (from both cameras) are now in one directory, this is how it looked:
$ exiftool -s DSCN_001.jpg IMG_002.jpg | grep ^DateTimeOriginal
DateTimeOriginal                : 2015:12:23 18:01:00
DateTimeOriginal                : 2015:12:23 16:03:00
In reality, DSCN_001.jpg was taken at 16:01 and should be listed before IMG_002.jpg. Luckily exiftool is able to correct the EXIF data:
export delta="00:00:00 02:00:00"            # format is YY:mm:dd HH:MM:SS
ls DSCN* | while read f; do
  echo "FILE: $f"
  exiftool -P -ModifyDate-="$delta" -DateTimeOriginal-="$delta" -CreateDate-="$delta" "$f"
  touch -r "$f" -d '-2 hours' "$f"
done
Although we had corrected the mtimes already, some were still mangled by the earlier export step. Let's just extract the exact date from the EXIF data and set the mtime again:
ls *JPG | while read f; do
  echo "FILE: $f"
  TZ=PST8PDT touch -d "$(exiftool -d %Y-%m-%d\ %H:%M:%S -s "$f" | awk '/^DateTimeOriginal/ {print $3,$4}')" "$f"
done
After another llgal run, the pictures were now listed in their correct order and ready to be consumed :-)

RTNETLINK answers: No such process

A colleague of mine presented me with a weird routing problem today and it took me a while to understand what was going on. The task was simple: add a network route via a certain gateway that can only be reached via a certain network interface. Let's re-create the setup:
# ip addr change 10.10.0.3/24 dev eth2
# ip link set eth2 up
# ip addr show dev eth2 scope global
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 08:00:27:d0:34:51 brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.3/24 scope global eth2
Let's add a new route then:
# ip route add 10.20.0.0/24 via 10.10.0.1 dev eth2
RTNETLINK answers: No such process
Huh? Our eth2 is UP and should be able to reach 10.10.0.1, right? Let's look at the routing table1):
# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.56.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0
0.0.0.0         192.168.56.1    0.0.0.0         UG        0 0          0 eth0
Aha! For some reason the machine has lost its network route on the eth2 interface. Well, the machine has been online for a while and we don't know which admin did what and why. But although eth2 is configured and UP, it cannot reach its own network w/o a network route. Of course, the "ip addr change" does that automatically2) and we staged the whole thing for illustration purposes.
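
We could also have asked the kernel directly how it would reach the gateway. On a machine in this state, "ip route get" should fail along these lines:
# ip route get 10.10.0.1
RTNETLINK answers: Network is unreachable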

Let's add the missing route and try again:
# ip route add 10.10.0.0/24 dev eth2 
# netstat -rn 
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.10.0.0       0.0.0.0         255.255.255.0   U         0 0 0 eth2
192.168.56.0    0.0.0.0         255.255.255.0   U         0 0 0 eth0
0.0.0.0         192.168.56.1    0.0.0.0         UG        0 0 0 eth0

# ip route add 10.20.0.0/24 via 10.10.0.1 dev eth2
# netstat -rn 
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.20.0.0       10.10.0.1       255.255.255.0   UG        0 0 0 eth2
10.10.0.0       0.0.0.0         255.255.255.0   U         0 0 0 eth2
192.168.56.0    0.0.0.0         255.255.255.0   U         0 0 0 eth0
0.0.0.0         192.168.56.1    0.0.0.0         UG        0 0 0 eth0
Yay! :-)

1) Sometimes the output from the iproute2 tools is not as easy to parse, and I'll use good ol' net-tools again.
2) Unless we were to assign a /32 address to the interface, e.g. "ip addr change 10.10.0.0/32 dev eth2"