Encrypted network block device
While backing up with Crashplan works fine most of the time (provided one trusts their zero-knowledge promise), sometimes a software update, a power outage or some other unplanned interruption causes Crashplan to fail: it either stops backing up or discards the whole archive and starts to back up from scratch, uploading the entire disk again :-\
So yeah, it mostly works, but somehow I'd like to be a bit more in control of things. The easiest thing would be to order some disk space in the cloud and rsync all the data off to a remote location - but of course we need to encrypt it first. But how? Here are a few solutions I've come across so far. I'm sure there are others, but let's take a quick look at them:
- duplicity uses librsync to upload GnuPG-encrypted parts to the remote destination. I've heard good (and bad) things about it, but the thought of splitting data into small chunks, encrypting them and uploading thousands of small bits of random-looking data sounds cool and a bit frightening at the same time. Especially the restore scenario boggles my mind. I don't want to dismiss this entirely (and may even come back to it later on), but let's look for something saner for now.
- Attic is a deduplicating backup program written in Python. I haven't actually tried this one either; it seems to support encryption and remote backup destinations, although the mention of FUSE mounts makes me a bit uneasy.
- Obnam supports encrypted remote backups, again via GnuPG. I gotta check out whether this really works as advertised.
- Burp uses librsync and supports something called "client side file encryption" - but that turns off "delta differencing", which defeats the whole purpose of using librsync in the first place.
- Rclone supports encrypted backups, but only to some pre-defined storage providers and not to arbitrary SSH-accessible locations.
- BorgBackup has the coolest name (after Obnam :-)) and supports deduplication, compression and authenticated encryption - almost too good to be true. This should really be the go-to solution for my use case, and if my hand-stitched version doesn't work out, I'll come back to it for sure.
With that, let's see if we can employ a Network Block Device to serve our needs.
As an example, let's install nbd-server on the remote location and set up a disk that we want to serve to our backup client later on:
$ sudo apt-get install nbd-server
$ cd /etc/nbd-server/
$ grep -rv ^\# .
./config:[generic]
./config:    user = nbd
./config:    group = nbd
./config:    listenaddr = localhost
./config:    allowlist = true
./config:    includedir = /etc/nbd-server/conf.d
./conf.d/local.conf:[testdisk]
./conf.d/local.conf:    exportname = /dev/loop1
./conf.d/local.conf:    flush = true
./conf.d/local.conf:    readonly = false
./conf.d/local.conf:    authfile = /etc/nbd-server/allow
./allow:127.0.0.1/32

We will of course serve a real disk later on, but for now a loop device will do:
$ dd if=/dev/zero bs=1M count=10240 | pv | sudo dd of=/var/tmp/test.img
$ sudo losetup -f /var/tmp/test.img
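Note that losetup -f simply grabs the first free loop device, while the config above exports /dev/loop1 - so it's worth double-checking where the image really ended up:

$ sudo losetup -j /var/tmp/test.img

If it didn't land on /dev/loop1, adjust the exportname accordingly.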
With that, our nbd-server can be started and should listen on localhost only - we'll use SSH port-forwarding later on to connect back to this machine:
$ ss -4lnp | grep nbd
tcp  LISTEN  0  10  127.0.0.1:10809  *:*  users:(("nbd-server",pid=9249,fd=3))

The client side needs a bit more work. An SSH tunnel of course, but also the nbd kernel module and the nbd-client program. However, I noticed that the nbd-client version that comes with Debian/8.0 contained an undocumented bug that made it impossible to gain write access to the exported block device. And we do really want write access :-) Off to the source, then:

$ sudo apt-get install libglib2.0-dev
$ git clone https://github.com/NetworkBlockDevice/nbd.git nbd-git && cd nbd-git

While the repository appears to be maintained, the build system looks kinda archaic. And we don't want to install almost 200 MB in dependencies just for the docbook-utils packages to provide /usr/bin/docbook2man to build the man pages. So let's skip all that and build only the actual programs:
$ sed -r '/^make -C (man|systemd)/d' -i autogen.sh
$ sed '/man\/nbd/d;/systemd\//d' -i configure.ac
$ ./autogen.sh
$ ./configure --prefix=/opt/nbd --enable-syslog
$ make && sudo make install
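Before the client can connect, we need the SSH tunnel mentioned above, forwarding a local port to the nbd-server listening on localhost on the remote machine. A minimal sketch - user and backup.example.org are of course placeholders for the actual remote location:

$ ssh -f -N -L 10809:127.0.0.1:10809 user@backup.example.org

With that in place, the client can simply connect to localhost:10809 as if the nbd-server were running locally.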
The configuration file format changed (again), but all the options can be passed on the command line too:

$ sudo modprobe nbd
$ sudo /opt/nbd/sbin/nbd-client -name testdisk localhost 10809 /dev/nbd0 -timeout 30 -persist
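As an aside: since the server was configured with allowlist = true, nbd-client should also be able to list the available exports through the tunnel - a quick way to check that the connection works at all:

$ /opt/nbd/sbin/nbd-client -l localhost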
On the server side, this is noticed too:

nbd_server[9249]: Spawned a child process
nbd_server[9931]: virtstyle ipliteral
nbd_server[9931]: connect from 127.0.0.1, assigned file is /dev/loop1
nbd_server[9931]: Starting to serve
nbd_server[9931]: Size of exported file/device is 10737418240

We can now use /dev/nbd0 as if it were a local disk. We'll create a key, initialize dm-crypt and create a file system:
$ openssl rand 4096 | gpg --armor --symmetric --cipher-algo aes256 --digest-algo sha512 > testdisk-key.asc
$ gpg -d testdisk-key.asc | sudo cryptsetup luksFormat --cipher twofish-cbc-essiv:sha256 \
      --hash sha256 --key-size 256 --iter-time=5000 /dev/nbd0
gpg: AES256 encrypted data
Enter passphrase: XXXXXXX
gpg: encrypted with 1 passphrase

$ gpg -d testdisk-key.asc | sudo cryptsetup open --type luks /dev/nbd0 testdisk

$ sudo file -Ls /dev/nbd0 /dev/mapper/testdisk
/dev/nbd0:            LUKS encrypted file, ver 1 [twofish, cbc-essiv:sha256, sha256] UUID: 30f41e4...]
/dev/mapper/testdisk: data

$ sudo cryptsetup status testdisk
/dev/mapper/testdisk is active.
  type:    LUKS1
  cipher:  twofish-cbc-essiv:sha256
  keysize: 256 bits
  device:  /dev/nbd0
  offset:  4096 sectors
  size:    20967424 sectors
  mode:    read/write

$ sudo mkfs.xfs -m crc=1,finobt=1 /dev/mapper/testdisk
$ sudo mount -t xfs /dev/mapper/testdisk /mnt/disk/

$ df -h /mnt/disk
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/testdisk   10G   33M   10G   1% /mnt/disk
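And that's the rsync target we were after: everything is encrypted locally by dm-crypt before it ever leaves the machine. A quick sketch, with a made-up source directory:

$ sudo rsync -aHAX --delete /home/ /mnt/disk/home/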
Deactivate with:

$ sudo umount /mnt/disk
$ sudo cryptsetup close testdisk
$ sudo pkill -f /opt/nbd/sbin/nbd-client

When mounted, the disk speed is of course limited by the client's upload speed, and by the CPU too (for SSH and dm-crypt).
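To get a rough idea of what the tunnel and the encryption cost us, a simple dd run on the mounted disk gives a first data point (a sketch only - real rsync workloads will behave differently):

$ dd if=/dev/zero of=/mnt/disk/speedtest bs=1M count=100 conv=fsync
$ rm /mnt/disk/speedtest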
Let's play with this for a while and see how this works out with rsync workloads. Maybe I'll come back for BorgBackup after all :-)