
Fedora: where is bigint.pm?

Recently something like this happened:
$ perl -Mbigint -e 'print 1->is_zero()."\n"'
Can't locate bigint.pm in @INC (you may need to install the bigint module)
OK, but which package will provide bigint? (not to be confused with Math::BigInt!)

Debian has apt-file:
$ apt-file search bigint.pm
perl-modules-5.28: /usr/share/perl/5.28.1/bigint.pm
Arch Linux has Pacman:
$ pacman -F bigint.pm
core/perl 5.28.1-1 (base) [installed: 5.30.1-1]
    usr/share/perl5/core_perl/bigint.pm
openSUSE has zypper but its search function isn't returning much. However, bigint.pm is provided by their standard perl package:
$ rpm -qf `locate bigint`
perl-5.30.1-3.2.x86_64
And Fedora has dnf, but whatprovides doesn't return anything and search only returns slightly unrelated results:
$ dnf search bigint
texlive-bigints-doc.noarch
perl-Math-BigInt-GMP.x86_64
perl-Math-BigInt-FastCalc.x86_64
texlive-bigints.noarch
perl-Math-BigInt.noarch
php-pear-math-biginteger.noarch
But none of those actually provided bigint.pm. Thankfully, a comment in RHBZ#1286363 provided the command to install the correct Perl module:
$ sudo dnf install 'perl(bigint)'
With that in place, the missing bigint.pm gets installed and the command above executes just fine. Of course, this works for other pragmas just as well:
$ dnf install 'perl(threads)'
Package perl-threads-1:2.22-439.fc31.x86_64 is already installed.
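For future reference, dnf can also answer the original question directly when asked for the perl() capability instead of the file name - assuming the repoquery subcommand (from dnf-plugins-core) is available:
$ dnf repoquery --whatprovides 'perl(bigint)'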

SELinux is preventing dnsmasq from using the dac_override capability.

Trying to set log-facility=/var/log/dnsmasq.log in dnsmasq.conf resulted in an SELinux splat:
SELinux is preventing dnsmasq from using the dac_override capability.
[...]
Raw Audit Messages
type=AVC msg=audit(1583125188.633:22508): avc:  denied  { dac_override } for  pid=1501431 comm="dnsmasq" capability=1  scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:system_r:dnsmasq_t:s0 tclass=capability permissive=0

Hash: dnsmasq,dnsmasq_t,dnsmasq_t,capability,dac_override
This had been reported before (in 2018), but for /var/lib/dnsmasq/dnsmasq.leases; this time it was about /var/log/dnsmasq.log, and the file labels were already in place:
$  ls -lZ /var/log/dnsmasq.log 
-rw-r-----. 1 dnsmasq root system_u:object_r:dnsmasq_var_log_t:s0 79783 \
            Mar  1 20:59 /var/log/dnsmasq.log
Before granting dac_override to dnsmasq, we found this all explained in another blog post:
[...] The simple thing to do from an SELinux point of view would be to add the allow rule

allow dovecot_t self:capability dac_override;

But from a security point of view, this is lousy.  The much better solution would be to 'relax' the permissions on the socket by adding group read/write.
And indeed, this helped as expected:
$ chmod -c g+w /var/log/dnsmasq.log
mode of '/var/log/dnsmasq.log' changed from 0640 (rw-r-----) to 0660 (rw-rw----)
Now dnsmasq starts and is able to log to /var/log/dnsmasq.log.
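For the record, had relaxing the permissions not been an option, the denial could have been wrapped into a local policy module via the usual audit2allow route - a sketch, with a module name made up for the occasion:
$ sudo ausearch -c 'dnsmasq' --raw | audit2allow -M my-dnsmasq
$ sudo semodule -i my-dnsmasq.pp
But as the quoted post argues, granting dac_override that way is the lousy solution.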

Resize a NetBSD root disk

A NetBSD DomU (Xen) needed more disk space for its root disk. While this may not be worth mentioning in Linux land, where cfdisk or GNU parted make it easy, I hadn't done this yet on a NetBSD system. Hubert describes this in part on his blog, but he doesn't actually enlarge the partition - he adds and configures another partition on the disk. So, let's describe the whole process, including the resize_ffs part.

First, we need to resize the actual device of course. We're using LVM for our PV domain:
$ lvresize --size +4G vg0/netbsd-disk0
After starting the DomU, we can see the new disk size:
netbsd: xbd0: 4096 MB, 512 bytes/sect x 8388608 sectors
netbsd: xbd0: 8192 MB, 512 bytes/sect x 16777216 sectors
These sector numbers will be important in the next step, editing the disklabel:
$ disklabel xbd0
# /dev/rxbd0:
type: unknown
disk: disk0
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 8322
total sectors: 8388608
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

16 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a:   7863408         0     4.2BSD   2048 16384     0
 b:    525200   7863408       swap                   
 c:   8388608         0     unused      0     0
 d:   8388608         0     unused      0     0
We need to do a few things now:
  • Adjust the total sector count.
  • Adjust the d partition, the full disk on x86.
  • Adjust the c partition, the NetBSD part of the disk. As all partitions belong to this NetBSD installation, its size will be equal to that of the d partition.
  • Adjust the offset of the swap partition.
  • And finally, adjust the size of our root partition, a in our case.
Let's do all that in one go:
$ disklabel -e xbd0                         
# /dev/rxbd0:
type: unknown
disk: disk0
label: 
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 8322
total sectors: 16777216
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0 

16 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a:  16252016         0     4.2BSD   2048 16384     0
 b:    525200  16252016       swap                   
 c:  16777216         0     unused      0     0
 d:  16777216         0     unused      0     0
With 16777216 as the new total sector count, and an unchanged swap size of 525200 sectors, this leaves 16252016 sectors for the root partition.
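As a quick sanity check, the numbers add up:
$ echo $((16777216 - 525200))
16252016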

In Linux, the kernel would need to re-read the new partition table, and I decided to reboot the VM to make this happen. Only afterwards did I learn about disklabel -i -r: "Read the on-disk label for sd0, edit it using the built-in interactive editor and reinstall in-core as well as on-disk".

Now that the disklabel has been adjusted, we still need to resize the file system:
$ resize_ffs -p -v /dev/rxbd0a
It's required to manually run fsck on file system before you can resize it

 Did you run fsck on your disk (Yes/No) ? Yes
Growing fs from 1965852 blocks to 4063004 blocks.

$ df -h /
Filesystem         Size       Used      Avail %Cap Mounted on
/dev/xbd0a         3.6G       3.4G       8.5M  99% /
Hm, still nothing. So, one more reboot of the VM - but now fsck was unhappy:
Starting root file system check:
/dev/rxbd0a: BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST ALTERNATE

/dev/rxbd0a: UNEXPECTED INCONSISTENCY; RUN fsck_ffs MANUALLY.
Automatic file system check failed; help!
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
[1]   Terminated              (stty status "^T...
Enter pathname of shell or RETURN for /bin/sh:

# fsck_ffs -f /dev/rxbd0a
** /dev/rxbd0a
BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST ALTERNATE
** File system is already clean
** Last Mounted on /
** Root file system
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
ALTERNATE SUPERBLK(S) ARE INCORRECT
SALVAGE? [yn] y

SUMMARY INFORMATION BAD
SALVAGE? [yn] y

BLK(S) MISSING IN BIT MAPS
SALVAGE? [yn] y

158187 files, 1806197 used, 99638 free (654 frags, 12373 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****
While we're in this rescue shell, let's try resize_ffs once more:
# resize_ffs -p -v /dev/rxbd0a
It's required to manually run fsck on file system before you can resize it

 Did you run fsck on your disk (Yes/No) ? Yes
Growing fs from 1965852 blocks to 4063004 blocks.
Another reboot later and now the system is able to see its new disk space:
$ df -h /
Filesystem         Size       Used      Avail %Cap Mounted on
/dev/xbd0a         7.5G       3.4G       3.7G  48% /

From autofs to systemd.automount

The venerable autofs mechanism to automatically mount and unmount network shares still works with today's systems, but lately I noticed that NFS and CIFS shares would hang when I unplugged my laptop from the local network and connected at another site (e.g. work, or a random coffee shop) where the usual network shares are not reachable. More and more processes hang (waiting for the network resource to re-appear) and eventually the machine becomes almost unusable, to the point where only a reboot helps.

Of course one could configure a VPN to make these resources available all the time, but I don't really need these network shares and I'm already running a VPN when I'm out and about, so this would be unnecessary and overly complicated. With the reign of systemd it is now possible to have systemd handle automounting via the systemd.automount unit, so let's see if it handles these situations better.

autofs

While several tutorials on how to implement this already exist, let's recap first how autofs works. The main configuration file is /etc/auto.master, containing nothing more than:
+dir:/etc/auto.master.d
+auto.master
In /etc/auto.master.d the real map files are referenced:
$ cat /etc/auto.master.d/local.autofs 
/mnt/smb /etc/auto.cifs
/mnt/nfs /etc/auto.nfs
These map files will contain the share definitions:
# auto.cifs
win0  -fstype=cifs,vers=3.0,fsc,guest,rw,nodev,nosuid,noexec ://smb/win0
win1  -fstype=cifs,vers=3.0,fsc,guest,ro,nodev,nosuid,noexec ://smb/win1

# auto.nfs
data0  -fstype=nfs,rw,nodev,nosuid,noexec,bg,intr,sec=sys,acl,fsc nfs:/mnt/data0
data1  -fstype=nfs,ro,nodev,nosuid,noexec,bg,intr,sec=sys,acl,fsc nfs:/mnt/data1
Once autofs.service is reloaded, the shares should be accessible.

systemd.automount

But let's dismantle all that and now turn to systemd.automount. For each (network) share we will need a .mount and also a .automount unit file:
$ cat /usr/local/etc/mnt-nfs-data0.mount 
[Unit]
Description=NFS data0

[Mount]
What=nfs:/mnt/data0
Where=/mnt/nfs/data0
Type=nfs4
Options=rw,nodev,nosuid,noexec,bg,intr,sec=sys,acl,fsc

[Install]
WantedBy=multi-user.target
$ cat /usr/local/etc/mnt-nfs-data0.automount 
[Unit]
Description=Automount NFS data0

[Automount]
Where=/mnt/nfs/data0

[Install]
WantedBy=multi-user.target
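Not needed for this setup, but good to know: the [Automount] section also accepts TimeoutIdleSec=, which unmounts an idle share again after the given time, e.g.:
[Automount]
Where=/mnt/nfs/data0
TimeoutIdleSec=300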
Link both unit files to /etc/systemd/system, repeat for each network share as needed:
sudo ln -s /usr/local/etc/mnt-nfs-data0.mount     /etc/systemd/system/
sudo ln -s /usr/local/etc/mnt-nfs-data0.automount /etc/systemd/system/
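Note that systemd expects the unit file name to be derived from the mount point; when in doubt, systemd-escape generates the correct name:
$ systemd-escape -p --suffix=mount /mnt/nfs/data0
mnt-nfs-data0.mount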
The .mount unit files only need to be linked; the .automount files need to be enabled and started:
sudo systemctl enable mnt-nfs-data0.automount
sudo systemctl start  mnt-nfs-data0.automount
With that, the share should be accessible:
$ mount | grep -m1 mnt/nfs
systemd-1 on /mnt/nfs/data0 type autofs (rw,relatime,fd=48,pgrp=1,timeout=0[...]
This configuration has now been running on my laptop for a few months, and it feels like it behaves better when these network resources go away: the machine isn't locking up any more. Yay \o/

conditional name resolving with dnsmasq

For some reason I needed to install a lightweight DNS forwarder on my local machine. A hosts file would not be sufficient; I really needed some kind of local DNS machinery that allows specific queries to be answered by certain DNS servers. But looking more closely, there were already two DNS servers running on that machine:
$ sudo netstat -lnpu | grep :53\ 
udp   127.0.0.53:53     0.0.0.0:*  3845/systemd-resolved
udp   192.168.122.1:53  0.0.0.0:*  3189/dnsmasq  
The first one is systemd-resolved, which mainly seems to care about which resolv.conf to use and provides only some basic configuration parameters - not sufficient for what's needed here.

The second one is from libvirtd, running on the default address of the virtual virbr0 interface. And indeed, some parameters could be adjusted and the following actually worked:
$ sudo virsh net-edit --network default
[...]
  <dns>
    <forwarder domain='example.com' addr='1.2.3.4'/>
    <forwarder domain='foobar.net'  addr='2.2.2.2'/>
    <forwarder addr='5.5.5.5'/>
  </dns>

$ sudo virsh net-destroy --network default
$ sudo virsh net-start   --network default
This would forward queries for example.com to 1.2.3.4 and all queries not listed here to 5.5.5.5. That was kind of what was needed, but these rules had to be updated from time to time, and editing XML stanzas for DNS entries felt somewhat unnatural. Also, with that setup I would depend on libvirt to always be installed and in working condition. If, for some reason, the libvirt setup breaks and its dnsmasq instance doesn't come up, the system would have no DNS services at all. But since dnsmasq was installed anyway, let's just use that.

After disabling systemd-resolved, the DNS part of libvirt's dnsmasq instance needed to be disabled too:

  <dns enable='no'/>
With that, port 53 was free to use and a new dnsmasq instance could be spawned.
$ cat /etc/dnsmasq.d/local.conf
interface=lo
listen-address=127.0.0.1
log-queries
server=5.5.5.5

server=/example.com/1.2.3.4
server=/foobar.net/2.2.2.2
All unspecified queries will go to 5.5.5.5. We could also omit that line: in the absence of a no-resolv directive, dnsmasq forwards all unspecified queries to the name servers listed in /etc/resolv.conf. That way we can have distinct (private) name servers for certain domains, and a stable fallback for everything else. Neat :-)
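A quick check with dig against the local instance (domains from the example config above) shows the split in action; with log-queries enabled, the chosen upstream shows up in the log as well:
$ dig +short @127.0.0.1 www.foobar.net   # forwarded to 2.2.2.2
$ dig +short @127.0.0.1 www.iana.org     # forwarded to 5.5.5.5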

iSCSI fun

Long time no blog post, I know. Maybe this has to do with the fact that all these posts have always been mental notes to myself first and bits of lessons learned for everybody else second - and most of my mental notes end up in a wiki installation of mine, which gets updated way more often than this blog :-\

Anyway, the other day I was trying to build Android for a Sony phone but I didn't have enough disk space on my laptop to do so. Checking out all the sources and the build takes almost 200 GB of space - but luckily I had an (encrypted) external disk available that I could use just for this. Once it was plugged in and decrypted, I realized that I really wanted the build environment to match the proposed requirements as closely as possible. Running a Fedora 28 desktop, let's use an Ubuntu 18.04 virtual machine for the actual build:
$ vboxmanage createhd disk --filename /opt/vm/generic/disk2.vdi --size 204800 --variant Fixed
$ vboxmanage storageattach ubuntu0 --storagectl SATA --device 0 --port 2 --type hdd --medium /opt/vm/generic/disk2.vdi
Note that we use a fixed disk image to prevent some nasty I/O errors within that virtual machine. With all that in place, we could start the machine and begin building AOSP. With regards to the aforementioned nasty I/O errors, I really must give credit to btrfs in this scenario, with its built-in data checksums. I know, ZFS has had this feature for ages, but as ZFS still isn't available upstream, btrfs is the next best thing.

So, while this was all fine and dandy, I still had this disk enclosure attached to my laptop and thus couldn't move around with it as I usually do. Talk about #firstworldproblems! :-) So, why not attach the enclosure to my "server" instead and see if I could somehow access it over WiFi? For some reason the first thing that popped into my head was NBD, and while I had a network block device setup going in the past, it wasn't much fun and would fail too often. So let's use iSCSI instead.

Once the disk enclosure was attached to the "server", the block devices needed to be passed on to the Xen DomU that was supposed to present them as an iSCSI target:
$ lsblk /dev/sd[de]
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdd    8:48   0 931.5G  0 disk 
sde    8:64   0 931.5G  0 disk 
 
$ xl block-attach virt2 'format=raw, vdev=xvdd, access=rw, target=/dev/sdd'
$ xl block-attach virt2 'format=raw, vdev=xvde, access=rw, target=/dev/sde'
In the virtual machine, tgt needed to be installed and configured:
$ cat /etc/tgt/conf.d/md0.conf 
default-driver iscsi

# RAID-0
<target iqn.example.local:virt2-sdd>
    backing-store /dev/sdd
    initiator-address 10.0.0.15
</target>
With that in place, we can go back to our laptop again and attach the disk:
$ iscsiadm -m discovery -t sendtargets -p virt2
10.0.0.24:3260,1 iqn.example.local:virt2-sdd

$ iscsiadm -m node --targetname "iqn.example.local:virt2-sdd" --portal 10.0.0.24:3260 --login

$ lsblk /dev/sdb 
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  1.8T  0 disk
This worked :-)

Now we can decrypt the disk (sdb) again and make it available to our VirtualBox VM. It may not be the fastest setup (WiFi being the bottleneck), but it worked and now I can move around with my laptop again, with the disk enclosure sitting somewhere else. Yay! :-)
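For completeness: once the disk is no longer needed, the iSCSI session can be torn down again:
$ iscsiadm -m node --targetname "iqn.example.local:virt2-sdd" --portal 10.0.0.24:3260 --logout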

How to copy a DVD

The other day my neighbor came over and asked me if it was possible to make a copy of a DVD, and then burn the copy to a blank DVD, so that it can be played in a DVD player. I rarely use DVDs and my current computer doesn't even have a DVD drive anymore, so I used an older MacBook Pro for this task.

Looking back, the easiest way to do this would have been to get a copy of the movie from somewhere else and then find out how to make a playable DVD out of it, but why not go the whole way and figure out the extraction part as well?

DVD Ripping

The proper term here seems to be Ripping, and in the past I sometimes used the wonderful HandBrake to do just that. HandBrake can then convert the ripped copy to other formats, but we're not quite there yet.

Usually it was sufficient to play a DVD once with VLC, which would then use libdvdcss to store the CSS key in ~/.dvdcss, in turn allowing HandBrake to decrypt the same - which is essential for the rip to complete. And while this worked before™, this time the resulting video was all distorted and felt like watching an old, mangled VHS tape, so something wasn't right.

The internet was full of similar reports and suggestions too, the main theme being "Just install libdvdcss to the correct location for HandBrake to find and it should just work". Well, instead of just relying on VLC to do the decryption once, I installed libdvdcss via Homebrew, hoping that HandBrake would be able to find it:

$ otool -L /usr/local/lib/libdvdcss.dylib
/usr/local/lib/libdvdcss.dylib:
        /usr/local/opt/libdvdcss/lib/libdvdcss.2.dylib (compatibility version 5.0.0, current version 5.0.0)
        /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1259.20.0)
        /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)

And lo and behold, HandBrake did find the library, but then crashed reliably when trying to open the DVD. Bummer. Only the closing comment in a matching bug report shed some light on this, suggesting that 1) libdvdcss is not recommended anyway and 2) another tool named "MakeMKV" would help here.
And off we are, still trying to complete the DVD ripping part, when we could have just downloaded the movie from somewhere else :-) Luckily MakeMKV really did the trick, was easy enough to use, and offers a 60-day trial version, which is just fine for this one-off experiment.

DVD Creation

MakeMKV produced a ~3 GB file (MPEG-2 Video, AC3 Audio) which now needed to be converted into the DVD-Video format and finally burned onto a blank DVD-R.

I had Burn installed on this Mac, and while it is able to burn DVD-Video, it wouldn't understand the .mkv container format. The interwebs are full of recommendations for something called "iSkysoft DVD Creator", which is offered from so many shady-looking websites, and under so many different alternate names, that it's hard not to suspect something sinister. At least on first sight, the installer does not present itself as malware, so maybe it's safe enough to try? After removing my tin foil hat and installing it, this DVD creator was indeed able to parse the .mkv file and burn a DVD-Video disc in the correct format. But, as I was using the trial version, the whole movie was overlaid with a huge visible watermark. Lacking official documentation regarding this limitation, I should have expected this to happen. Hm, so...what else is out there?

Digging through the Homebrew-Cask database I found DVDStyler, which should be up to the task as well. And it's released as Open Source software, cool beans!

The DVDStyler interface felt a bit awkward, but never look a gift horse in the mouth (I can't believe that this is a real proverb in the English language!) and a few mouse clicks and a coffee later, a DVD-Video copy was produced. Yay!

Next time I must remember to direct my neighbor to the next video-on-demand platform instead of ever fumbling with DVD copies again :-)


Signal Desktop on Fedora

Signal Desktop was released some time ago, and while a native application may have its advantages, it also takes time and effort until it becomes available for other platforms.

Binary installation

The install routine for "Debian-based Linux" instructs us to do the following:

 > curl -s https://updates.signal.org/desktop/apt/keys.asc | sudo apt-key add -
 > echo "deb [arch=amd64] https://updates.signal.org/desktop/apt xenial main" | \
 >   sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
 > sudo apt update && sudo apt install signal-desktop
With only a Fedora distribution around, we could of course use alien to install the package, but:

$ sudo dnf install alien
[...]
Transaction Summary
===============================
Install  70 Packages
...let's not, and instead do this manually. Luckily, their download directory structure adheres to the Debian Repository Format, so with a bit of fiddling we can produce the necessary URLs:

$ curl -sLO https://updates.signal.org/desktop/apt/dists/xenial/InRelease
$ gpg --recv-keys D980A17457F6FB06

$ gpg --verify InRelease
gpg: Signature made Wed 20 Dec 2017 11:43:08 AM PST
gpg:                using RSA key D980A17457F6FB06
gpg: Good signature from "Open Whisper Systems " [unknown]
Primary key fingerprint: DBA3 6B51 81D0 C816 F630  E889 D980 A174 57F6 FB06
The InRelease file is signed and contains checksums for the Packages file:

$ curl -sLO https://updates.signal.org/desktop/apt/dists/xenial/main/binary-amd64/Packages
$ sha256sum Packages 
121c0e796cef911240bb39b6d5ebed747202e9be8261808ecbf3fc4641da9e7b  Packages

$ grep 121c0e796cef911240bb39b6d5ebed747202e9be8261808ecbf3fc4641da9e7b InRelease 
121c0e796cef911240bb39b6d5ebed747202e9be8261808ecbf3fc4641da9e7b     2578 main/binary-amd64/Packages
Let's look at the Packages file for the actual packages available for download:

$ egrep '^(Package|SHA256|File|$)' Packages 
Package: signal-desktop
Filename: pool/main/s/signal-desktop/signal-desktop_1.1.0_amd64.deb
SHA256: 74ee408fa5c7047b1f2a7faa2a9fe0d5947f7f960bd7776636705af69a6b1eec

Package: signal-desktop
Filename: pool/main/s/signal-desktop/signal-desktop_1.0.41_amd64.deb
SHA256: 9cf87647e21bbe0c1b81e66f88832fe2ec7e868bf594413eb96f0bf3633a3f25

Package: signal-desktop-beta
Filename: pool/main/s/signal-desktop-beta/signal-desktop-beta_1.1.0-beta.6_amd64.deb
SHA256: a38eb35001618019affba7df4e54ccbb36581d232876e0f1af9622970b38aa12
We decide to use signal-desktop-beta and continue:

$ curl -sLO https://updates.signal.org/desktop/apt/pool/main/s/signal-desktop-beta/signal-desktop-beta_1.1.0-beta.6_amd64.deb
$ sha256sum signal-desktop-beta_1.1.0-beta.6_amd64.deb 
a38eb35001618019affba7df4e54ccbb36581d232876e0f1af9622970b38aa12  signal-desktop-beta_1.1.0-beta.6_amd64.deb
To extract the package, we'll need the dpkg package:

$ sudo dnf install dpkg
$ dpkg -x signal-desktop-beta_1.1.0-beta.6_amd64.deb deb
Check if all libraries are installed:

$ ldd deb/opt/Signal\ Beta/signal-desktop-beta | grep not
Looks good - let's "install" the package in /opt now:
sudo mv deb/opt/Signal\ Beta /opt/
sudo chown -R root:root /opt/Signal\ Beta/
sudo ln -s /opt/Signal\ Beta/signal-desktop-beta /usr/local/bin/signal-desktop-beta
Create desktop shortcut and icons:

mv deb/usr/share/applications/signal-desktop-beta.desktop ~/.local/share/applications/signal-desktop-beta.desktop
rsync -av deb/usr/share/icons/hicolor/ ~/.local/share/icons/hicolor/
The .desktop file should contain something like this:

$ cat  ~/.local/share/applications/signal-desktop-beta.desktop
[Desktop Entry]
Name=Signal Desktop Beta
Comment=Private messaging from your desktop
Exec="/opt/Signal Beta/signal-desktop-beta" %U
Terminal=false
Type=Application
Icon=signal-desktop-beta
With all that in place, Signal Desktop Beta should be ready to go. Don't forget to migrate the data from the old installation!
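Depending on the desktop environment, the menu may need a nudge to pick up the new entry - update-desktop-database (from desktop-file-utils) usually does the trick:
$ update-desktop-database ~/.local/share/applications/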

Build from source

Building from source may need a ton of dependencies, so it may or may not be desirable to install all that on a desktop system. The short version of the install routine would be:

git clone https://github.com/WhisperSystems/Signal-Desktop.git Signal-Desktop-git
cd Signal-Desktop-git

yarn config set cache-folder /usr/local/src/tmp/yarn/
npm config set cache /usr/local/src/tmp/npm/
TMPDIR=/usr/local/src/tmp/ npm install
So far, so good, but then there's some grunt breakage:

$ node_modules/grunt-cli/bin/grunt 
Loading "sass.js" tasks...ERROR
>> Error: ENOENT: no such file or directory, scandir '../node_modules/node-sass/vendor'
Loading "sass.js" tasks...ERROR
>> Error: ENOENT: no such file or directory, scandir '../node_modules/node-sass/vendor'
Warning: Task "sass" not found. Use --force to continue.

Aborted due to warnings.
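I haven't verified it for this tree, but the usual suggestion for this particular node-sass error is to rebuild its native bindings and run grunt again:
$ npm rebuild node-sass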
...TBD :-\

Compression benchmarks 2017

As I had to send a disk-based backup to another machine on the local network, I wanted to compress the backup data before sending it over the wire, of course. And as the last benchmark was done a year ago, it was time for another one anyway :-)

$ ls -hgo disk.img
 -rw------- 1 861M Oct 25 23:02 disk.img

$ ./compress-test.sh -n 3 -f disk.img | tee foo.out

$ ./compress-test.sh -r foo.out
### Fastest compressor:
### pzstd/1c:      .75 seconds / 61.900% smaller 
### pigz/1c:      3.25 seconds / 60.100% smaller 
### zstd/1c:      3.25 seconds / 62.000% smaller 
### bro/1c:       3.50 seconds / 59.500% smaller 
### pzstd/9c:     6.00 seconds / 67.700% smaller 
### pigz/9c:     11.00 seconds / 63.800% smaller 
### gzip/1c:     12.00 seconds / 59.900% smaller 
### zstd/9c:     17.50 seconds / 68.000% smaller 
### pbzip2/9c:   18.00 seconds / 67.400% smaller 
### pbzip2/1c:   27.50 seconds / 64.900% smaller 
### lzma/1c:     53.50 seconds / 68.700% smaller 
### xz/1c:       54.75 seconds / 68.700% smaller 
### gzip/9c:     65.00 seconds / 63.700% smaller 
### bzip2/1c:    66.00 seconds / 64.900% smaller 
### bzip2/9c:    66.00 seconds / 67.500% smaller 
### bro/9c:      83.25 seconds / 70.500% smaller 
### lzma/9c:    240.50 seconds / 76.700% smaller 
### xz/9c:      243.75 seconds / 76.700% smaller 

### Smallest size:
### xz/9c:      243.75 seconds / 76.700% smaller 
### lzma/9c:    240.50 seconds / 76.700% smaller 
### bro/9c:      83.25 seconds / 70.500% smaller 
### xz/1c:       54.75 seconds / 68.700% smaller 
### lzma/1c:     53.50 seconds / 68.700% smaller 
### zstd/9c:     17.50 seconds / 68.000% smaller 
### pzstd/9c:     6.00 seconds / 67.700% smaller 
### bzip2/9c:    66.00 seconds / 67.500% smaller 
### pbzip2/9c:   18.00 seconds / 67.400% smaller 
### pbzip2/1c:   27.50 seconds / 64.900% smaller 
### bzip2/1c:    66.00 seconds / 64.900% smaller 
### pigz/9c:     11.00 seconds / 63.800% smaller 
### gzip/9c:     65.00 seconds / 63.700% smaller 
### zstd/1c:      3.25 seconds / 62.000% smaller 
### pzstd/1c:      .75 seconds / 61.900% smaller 
### pigz/1c:      3.25 seconds / 60.100% smaller 
### gzip/1c:     12.00 seconds / 59.900% smaller 
### bro/1c:       3.50 seconds / 59.500% smaller 

### Fastest decompressor:
### pzstd/dc:      .25 seconds
### zstd/dc:      1.00 seconds
### bro/dc:       2.25 seconds
### pigz/dc:      2.75 seconds
### gzip/dc:      4.25 seconds
### pbzip2/dc:    5.00 seconds
### lzma/dc:     14.25 seconds
### xz/dc:       15.25 seconds
### bzip2/dc:    20.75 seconds
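Given these numbers, pzstd is the obvious pick for this one-off transfer - a sketch of the copy over the wire (host name and remote path assumed):
$ pzstd -c disk.img | ssh backuphost 'pzstd -dc > /var/tmp/disk.img'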
Now the only thing left is to extend our little benchmark scripts to actually compare these to last year's results...

letsencrypt.sh

So, while this site cannot be equipped with any kind of TLS certificates (don't ask), I'm using Let's Encrypt certificates for some other web sites. But as much as I like everything the EFF does, I despise their official LE client, for reasons that will become apparent:

$ sudo apt-get install certbot
[....]
The following NEW packages will be installed:
  certbot python-acme python-certbot python-cffi-backend python-configargparse python-configobj python-cryptography python-enum34 python-funcsigs python-idna python-ipaddress
  python-mock python-openssl python-parsedatetime python-pbr python-pyasn1 python-requests python-rfc3339 python-six python-tz python-urllib3 python-zope.component
  python-zope.event python-zope.hookable python-zope.interface
0 upgraded, 25 newly installed, 0 to remove and 3 not upgraded.
No, thank you :-\ Luckily, the ACME protocol allows for many other clients to choose from. After some experiments, I almost settled on letsencrypt.sh, but ran into unfixed bugs and ended up with a fork of it. With that, I wanted to cover two use cases:

Local server

In this scenario, the letsencrypt.sh client is requesting (and validating) certificates for the same machine it's running on. This is also the machine where our account key resides. The workflow is basically:
## Needs to be done only once:
$ letsencrypt.sh register -a letsencrypt-account-key.pem \
      -e webmaster@example.org

$ umask 0022
$ letsencrypt.sh sign -a letsencrypt-account-key.pem \
      -k letsencrypt-example-key.pem \
      -w /var/www/.well-known/acme-challenge/ \
      -c letsencrypt-$(date -I)-example-cert.pem www.example.org mail.example.org
The well-known path should be writable by the user executing the letsencrypt.sh script and readable by the webserver. That way, we don't have to play funky games with our webserver configuration trying to generate the responses dynamically; instead, we (temporarily) create actual files in that directory to be validated in the process. This may not work when renewing certificates for different domains, though.

We may need the full certificate chain (and some DH parameters too), so let's concatenate them all:
$ cat letsencrypt-$(date -I)-example-cert.pem \
      letsencrypt-$(date -I)-example-cert.pem_chain \
      dhparams-2048.pem \
      > letsencrypt-$(date -I)-example-cert-combined.pem
The resulting file can then be installed as SSLCertificateFile or ssl_certificate or ssl_cert - whatever directive the service in use expects.
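For Apache, for example, this boils down to something like (paths assumed):
SSLCertificateFile    /etc/ssl/letsencrypt-2017-01-01-example-cert-combined.pem
SSLCertificateKeyFile /etc/ssl/letsencrypt-example-key.pem
Apache 2.4.7+ will also pick up the DH parameters appended to SSLCertificateFile.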

Remote server

While the letsencrypt.sh client seems small enough, we still don't want to install it on every server that needs a certificate. Instead, we'll use letsencrypt.sh to issue certificates for a remote machine. However, as the certificates are domain-validated, we need a way to transfer the validation token to the remote server. Luckily, this version of letsencrypt.sh is able to do just that:
$ letsencrypt.sh sign -a letsencrypt-account-key.pem \
      -k letsencrypt-foobar-key.pem \
      -P /usr/local/bin/push-response-ssh \
      -c letsencrypt-$(date -I)-foobar-cert.pem foobar.net www.foobar.net
Here, a hook script is transferring the token to the remote server (configured in the same script). On the remote side, another hook will read the validation token from stdin and install it in its own well-known location (TOKEN_DIR). This can all be configured with SSH key authentication:
foobar$ cat ~www-data/.ssh/authorized_keys 
command="/usr/local/sbin/push-response-ssh-remote" ssh-ed25519 AAAAC3[...] admin@local

foobar$ grep ^TOKEN_DIR /usr/local/sbin/push-response-ssh-remote 
TOKEN_DIR="/var/www/.well-known/acme-challenge"
The resulting certificate is still installed locally and needs to be transferred to the remote side. Since the remote www-data account only allows the hook script to be executed, we adjusted that script somewhat so the certificate can be installed by the same user. Since we're now abusing the original hook script (and multiple command directives for a single key are not supported), our deployment command looks somewhat convoluted:
$ cat letsencrypt-foobar-key.pem \
      letsencrypt-$(date -I)-foobar-cert.pem{,_chain} | \
      ssh -i ~/.ssh/letsencrypt-key www-data@foobar.net \
      installkey \
      aol.com \
      $(openssl rand -hex 32 | cut -c-43) \
      $(letsencrypt.sh thumbprint -a letsencrypt-account-key.pem | awk '{print $NF}')
So, the new installkey parameter tells the remote hook script what to do. The aol.com and the random value are just placeholders for a valid domain name and for something that looks like a validation token, respectively - push-response-ssh-remote expects all these arguments. The deployment would be much easier if we 1) used a different user or key, or 2) rewrote the remote hook to allow for a simpler deployment :-)

With that in place, the certificate for the remote side has been saved to whatever is configured in push-response-ssh-remote and can now be used in the respective services. Yay! \o/

Of character and block devices

While playing around with an OpenBSD system, I came across the different representation of disk devices in BSD systems again:

$ ls -l /dev/{r,}wd0c
crw-r-----  1 root  operator   11,   2 Jun 28 02:22 /dev/rwd0c
brw-r-----  1 root  operator    0,   2 Jun 28 02:22 /dev/wd0c

$ pv -Ss 200m < /dev/wd0c > /dev/null  
 200MiB 0:00:11 [17.5MiB/s] [==================>] 100%

$ pv -Ss 200m < /dev/rwd0c > /dev/null 
 200MiB 0:00:03 [56.4MiB/s] [==================>] 100%
The FreeBSD Architecture Handbook documents this quite nicely:

  > Block devices are disk devices for which the kernel provides
  > caching. This caching makes block-devices almost unusable,
  > or at least dangerously unreliable. The caching will reorder
  > the sequence of write operations, depriving the application of
  > the ability to know the exact disk contents at any one instant
  > in time.
In short: don't use block devices on BSD systems but use their raw (character) devices instead, at least when accessing them directly.

Ext4 on MacOS X

With the new Raspberry Pi 3 Model B at hand and Raspbian already running, I wanted to see if the AArch64 port of Arch Linux would run as well. As I didn't have a real computer available at that time, I tried to get the image onto the microSD card on macOS.

First, let's unmount (but not eject) the microSD card:
$ diskutil umountDisk disk2
Unmount of all volumes on disk2 was successful
Create two partitions on the device:
$ sudo fdisk -e /dev/rdisk2
fdisk: 1> erase
fdisk:*1> edit 1
Partition id ('0' to disable)  [0 - FF]: [0] (? for help) 0B
Do you wish to edit in CHS mode? [n] 
Partition offset [0 - 31116288]: [63] 
Partition size [1 - 31116225]: [31116225] 204800

fdisk:*1> edit 2
Partition id ('0' to disable)  [0 - FF]: [0] (? for help) 83
Do you wish to edit in CHS mode? [n] 
Partition offset [0 - 31116288]: [204863] 
Partition size [1 - 30911425]: [30911425] 

fdisk:*1> p
Disk: /dev/rdisk2       geometry: 1936/255/63 [31116288 sectors]
Offset: 0       Signature: 0xAA55
         Starting       Ending
 #: id  cyl  hd sec -  cyl  hd sec [     start -       size]
------------------------------------------------------------------------
 1: 0B    0   1   1 - 1023 254  63 [        63 -     204800] Win95 FAT-32
 2: 83 1023 254  63 - 1023 254  63 [    204863 -   30911425] Linux files*
 3: 00    0   0   0 -    0   0   0 [         0 -          0] unused      
 4: 00    0   0   0 -    0   0   0 [         0 -          0] unused      
fdisk:*1> write
Writing MBR at offset 0.
fdisk: 1> quit
Create a file system on each partition (we'll need e2fsprogs to create an ext4 file system):
$ sudo newfs_msdos -v boot /dev/rdisk2s1
$ sudo /opt/local/sbin/mkfs.ext4 /dev/rdisk2s2 
As MacOS is able to read FAT-32, we should be able to mount it right away:
$ diskutil mount disk2s1
Volume BOOT on disk2s1 mounted

$ df -h /Volumes/BOOT
Filesystem     Size   Used  Avail Capacity  Mounted on
/dev/disk2s1  100Mi  762Ki   99Mi     1%    /Volumes/BOOT
Mounting an ext4 file system turned out to be more difficult, and there are several solutions available.

ext2fuse

ext2fuse is said to provide ext2/ext3 support via FUSE, but it segfaults on our newly created ext4 file system:
$ sudo /opt/local/bin/ext2fuse /dev/disk2s2 /mnt/disk
/dev/disk2s2 is to be mounted at /mnt/disk
fuse-ext2fs: Filesystem has unsupported feature(s) while trying to open /dev/disk2s2
Segmentation fault: 11

$ mount | tail -1
/dev/disk2s2 on /mnt/disk (osxfuse, synchronous)

$ df -h /mnt/disk
Filesystem     Size   Used  Avail Capacity  Mounted on
/dev/disk2s2    0Bi    0Bi    0Bi   100%    /mnt/disk

$ touch /mnt/disk/foo
touch: /mnt/disk/foo: Device not configured
Maybe ext4 is just too new for ext2fuse; let's try with ext2 instead:
$ sudo /opt/local/sbin/mkfs.ext2 /dev/rdisk2s2
$ sudo /opt/local/bin/ext2fuse /dev/disk2s2 /mnt/disk
/dev/disk2s2 is to be mounted at /mnt/disk
fuse-ext2 initialized for device: /dev/disk2s2
block size is 4096
ext2fuse_dbg_msg: File not found by ext2_lookup while looking up "DCIM"
ext2fuse_dbg_msg: File not found by ext2_lookup while looking up "VSCAN"
ext2fuse_dbg_msg: File not found by ext2_lookup while looking up "DCIM"
ext2fuse_dbg_msg: File not found by ext2_lookup while looking up ".Spotlight-V100"
ext2fuse_dbg_msg: File not found by ext2_lookup while looking up ".metadata_never_index"
[...]
This command never completes but can be terminated with ^C. The same happens with an ext3 file system.

ext4fuse

ext4fuse aims for ext4 support via FUSE; let's see how that goes:
$ sudo /opt/local/sbin/mkfs.ext4 /dev/rdisk2s2
$ sudo /opt/local/bin/ext4fuse /dev/disk2s2 /mnt/disk 
$ mount | tail -1
ext4fuse@osxfuse0 on /mnt/disk (osxfuse, synchronous)

$ df -h /mnt/disk
Filesystem          Size   Used  Avail Capacity  Mounted on
ext4fuse@osxfuse0    0Bi    0Bi    0Bi   100%    /mnt/disk

$ sudo touch /mnt/disk/foo
touch: /mnt/disk/foo: Function not implemented
So close! :-) But there's no write support in ext4fuse yet.

fuse-ext2

There's another option, called fuse-ext2 which appears to feature (experimental) write support. We'll need FUSE for macOS again and then build fuse-ext2 from scratch:
$ sudo port install e2fsprogs
$ git clone https://github.com/alperakcan/fuse-ext2.git fuse-ext2-git
$ cd $_
$ ./autogen.sh && LDFLAGS="-L/opt/local/lib" CFLAGS="-I/opt/local/include" \
    ./configure --prefix=/opt/fuse-ext2
$ make && sudo make install
So, let's try:
$ sudo /opt/fuse-ext2/bin/fuse-ext2 /dev/rdisk2s2 /mnt/disk -o rw+
Rats - a window pops up with:
FUSE-EXT2 could not mount /dev/disk2s2
at /mnt/disk/ because the following problem occurred:
But the error description is empty, and there's nothing in the syslog either. After some digging I decided to reboot, and this time it worked:
$ sudo /opt/fuse-ext2/bin/fuse-ext2 /dev/rdisk2s2 /mnt/disk -o rw+
$ mount | tail -1
/dev/rdisk2s2 on /mnt/disk (osxfuse_ext2, local, synchronous)

$ df -h /mnt/disk/
Filesystem      Size   Used  Avail Capacity  Mounted on
/dev/rdisk2s2   15Gi  104Mi   14Gi     1%    /mnt/disk

$ sudo touch /mnt/disk/foo
$ ls -l /mnt/disk/foo
-rw-r--r--  1 root  wheel  0 Mar  5 14:29 /mnt/disk/foo
That should be enough for us to finally install the Arch Linux image on that microSD card:
$ tar -C /Volumes/BOOT/ -xzf ArchLinuxARM-rpi-3-latest.tar.gz boot
$ mv /Volumes/BOOT/{boot/*,} && rmdir /Volumes/BOOT/boot
And for the root file system:
$ sudo tar --exclude="./boot" -C /mnt/disk/ -xvzf ArchLinuxARM-rpi-3-latest.tar.gz 
x ./bin
x ./dev/: Line too long
tar: Error exit delayed from previous errors.
Apparently bsdtar has trouble when the --exclude switch is used, so let's try without it and remove the superfluous /boot contents later:
$ sudo tar -C /mnt/disk/ -xzf ArchLinuxARM-rpi-3-latest.tar.gz
$ sudo rm -r /mnt/disk/boot/*
This took quite a while, but completed eventually. Of course, all this could have been avoided if I had used another operating system in the first place :-)
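Before pulling the card, remember to unmount both file systems cleanly:
$ sudo umount /mnt/disk
$ diskutil eject disk2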

tr: Bad String

Trying to mangle some characters resulted in a weird error message:
$ echo hello | tr [:lower:] [:upper:]
Bad string
Huh? Before debugging any further, a search of the interwebs returns quite a few results, of course - so let's look at our options:

$ type tr
tr is /usr/bin/tr

$ find /usr -type f -perm -0500 -name tr -ls 2>/dev/null
32054   11 -rwxr-xr-x   1 root bin  9916 Jan 23  2005 /usr/ucb/tr
16674   19 -r-xr-xr-x   1 root bin 18540 Jan 23  2005 /usr/xpg6/bin/tr
  410   20 -r-xr-xr-x   1 root bin 19400 Jan 23  2005 /usr/bin/tr
75251   19 -r-xr-xr-x   1 root bin 18520 Jan 23  2005 /usr/xpg4/bin/tr
Besides our default from SUNWcsu, we have three other versions of tr(1) available. The UCB version tries to do...something:

$ echo hello | /usr/ucb/tr [:lower:] [:upper:]
heuup
Apparently it replaces each character (position) literally, but fails to recognize the bracket expressions. Since the UCB tools were removed in later versions anyway, let's skip that for now. The two X/Open versions seem to manage:

$ echo hello | /usr/xpg6/bin/tr [:lower:] [:upper:]
HELLO

$ echo hello | /usr/xpg4/bin/tr [:lower:] [:upper:]
HELLO
But why wouldn't it work with the SUNWcsu version? truss(1) reports a missing file, but this turns out to be a red herring:

$ echo hello | truss -elfda tr [[:lower:]] [[:upper:]]
Base time stamp:  1481011767.7308  [ Tue Dec  6 09:09:27 MET 2016 ]
26125/1:         0.0000 execve("/usr/bin/tr", 0xFFBFFC9C, 0xFFBFFCAC)  argc = 3
26125/1:         argv: tr [[:lower:]] [[:upper:]]
26125/1:         envp: LC_MONETARY=en_GB.ISO8859-15 TERM=xterm SHELL=/bin/bash
26125/1:          LC_NUMERIC=en_GB.ISO8859-15 LC_ALL=en_US.UTF-8
26125/1:          LC_MESSAGES=C LC_COLLATE=en_GB.ISO8859-15 LANG=en_US.UTF-8
26125/1:          LC_CTYPE=en_GB.ISO8859-1 LC_TIME=en_GB.ISO8859-15
[...]
26125/1:         0.0061 stat64("/usr/lib/locale/en_US.UTF-8/libc.so.1", 0xFFBFE8D0) Err#2 ENOENT
26125/1:         0.0063 open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.mo", O_RDONLY) Err#2 ENOENT
26125/1:         0.0064 fstat64(2, 0xFFBFEA38)                          = 0
Bad string
26125/1:         0.0064 write(2, " B a d   s t r i n g\n", 11)          = 11
26125/1:         0.0065 _exit(1)
(Un)fortunately I had my share of weird experiences with character encodings and the like. And indeed, if we use a single-byte locale, /usr/bin/tr works just fine:

$ echo $LC_ALL
en_US.UTF-8

$ echo hello | LC_ALL=en_US tr [[:lower:]] [[:upper:]]
HELLO
Another workaround would be to use a different expression, if possible:

$ echo hello | tr [a-z] [A-Z]
HELLO
In newer SunOS versions, /usr/bin/tr has been fixed and works as expected.
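Unrelated to the locale issue, it's a good habit to quote the character classes anyway, so the shell cannot expand them as globs should matching files exist:
$ echo hello | tr '[:lower:]' '[:upper:]'
HELLO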

Encrypted network block device

While backing up with Crashplan works fine most of the time (and one trusts their zero-knowledge promise), sometimes new software updates, power outages or other unplanned interruptions cause Crashplan to fail and either stop backing up or discard the whole archive and start backing up from scratch, uploading the whole disk again :-\

So yeah, it mostly works, but somehow I'd like to be a bit more in control of things. The easiest thing would be to order some disk space in the cloud and rsync all data off to a remote location - but of course we need to encrypt it first. But how? There are a few solutions I've come across so far - I'm sure there are others, but let's look at them real short:

  • duplicity uses librsync to upload GnuPG-encrypted parts to the remote destination. I've heard good (and bad) things about it, but the thought of splitting the data into small chunks, encrypting them and uploading thousands of small bits of random-looking data sounds cool and a bit frightening at the same time. Especially the restore scenario boggles my mind. I don't want to dismiss this entirely (and may even come back to it later on), but let's look for something saner for now.

  • Attic is a deduplicating backup program written in Python. I haven't actually tried this one either; it seems to support encryption and remote backup destinations, although the mention of FUSE mounts makes me a bit uneasy.

  • Obnam supports encrypted remote backups, again via GnuPG. I gotta check whether this really works as advertised.

  • Burp uses librsync and supports something called "client side file encryption" - but that turns off "delta differencing", which sounds like the whole purpose of using librsync in the first place is then gone.

  • Rclone supports encrypted backups, but only to some pre-defined storage providers and not to arbitrary SSH-accessible locations.

  • BorgBackup has the coolest name (after Obnam :-)) and supports deduplication, compression and authenticated encryption - almost too good to be true. This should really be my go-to-solution for my usecase and if my hand-stitched version isn't working out, I'll come back to this for sure.

With that, let's see if we can employ a Network Block Device to serve our needs.
As an example, let's install nbd-server on the remote location and set up a disk that we want to serve to our backup client later on:
$ sudo apt-get install nbd-server

$ cd /etc/nbd-server/
$ grep -rv ^\# .
./config:[generic]
./config:       user = nbd
./config:       group = nbd
./config:       listenaddr = localhost
./config:       allowlist = true
./config:       includedir = /etc/nbd-server/conf.d
./conf.d/local.conf:[testdisk]
./conf.d/local.conf:    exportname = /dev/loop1
./conf.d/local.conf:    flush = true
./conf.d/local.conf:    readonly = false
./conf.d/local.conf:    authfile = /etc/nbd-server/allow
./allow:127.0.0.1/32
We will of course serve a real disk later on, but for now a loop device will do:
$ dd if=/dev/zero bs=1M count=10240 | pv | sudo dd of=/var/tmp/test.img
$ sudo losetup -f /var/tmp/test.img
With that, our nbd-server can be started and should listen on localhost only - we'll use SSH port-forwarding later on to connect back to this machine:
$ ss -4lnp | grep nbd
tcp LISTEN  0 10 127.0.0.1:10809 *:* users:(("nbd-server",pid=9249,fd=3))
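The port forwarding itself is a plain local forward on the client (host name assumed):
$ ssh -N -f -L 10809:127.0.0.1:10809 user@backupserver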
The client side needs a bit more work. An SSH tunnel of course, but also the nbd kernel module and the nbd-client program. However, I noticed that the nbd-client version that comes with Debian/8.0 contained an undocumented bug that made it impossible to gain write access to the exported block device. And we really do want write access :-) Off to the source, then:
$ sudo apt-get install libglib2.0-dev
$ git clone https://github.com/NetworkBlockDevice/nbd.git nbd-git && cd nbd-git
While the repository appears to be maintained, the build system looks kinda archaic. And we don't want to install almost 200 MB of dependencies for the docbook-utils package just to provide /usr/bin/docbook2man to build man pages. So let's skip all that and build only the actual programs:
$ sed -r '/^make -C (man|systemd)/d' -i autogen.sh
$ sed    '/man\/nbd/d;/systemd\//d'  -i configure.ac

$ ./autogen.sh
$ ./configure --prefix=/opt/nbd --enable-syslog
$ make && sudo make install
The configuration file format changed (again), so the options are passed on the command line instead:
$ sudo modprobe nbd
$ sudo /opt/nbd/sbin/nbd-client -name testdisk localhost 10809 /dev/nbd0 -timeout 30 -persist
On the server side, this is noticed too:
nbd_server[9249]: Spawned a child process
nbd_server[9931]: virtstyle ipliteral
nbd_server[9931]: connect from 127.0.0.1, assigned file is /dev/loop1
nbd_server[9931]: Starting to serve
nbd_server[9931]: Size of exported file/device is 10737418240
We can now use /dev/nbd0 as if it were a local disk. We'll create a key, initialize dm-crypt and create a file system:
$ openssl rand 4096 | gpg --armor --symmetric --cipher-algo aes256 --digest-algo sha512 > testdisk-key.asc
$ gpg -d testdisk-key.asc | sudo cryptsetup luksFormat --cipher twofish-cbc-essiv:sha256 \
                  --hash sha256 --key-size 256 --iter-time=5000 /dev/nbd0
gpg: AES256 encrypted data
Enter passphrase: XXXXXXX
gpg: encrypted with 1 passphrase

$ gpg -d testdisk-key.asc | sudo cryptsetup open --type luks /dev/nbd0 testdisk
$ sudo file -Ls /dev/nbd0 /dev/mapper/testdisk
/dev/nbd0:            LUKS encrypted file, ver 1 [twofish, cbc-essiv:sha256, sha256] UUID: 30f41e4...]
/dev/mapper/testdisk: data

$ sudo cryptsetup status testdisk
/dev/mapper/testdisk is active.
  type:    LUKS1
  cipher:  twofish-cbc-essiv:sha256
  keysize: 256 bits
  device:  /dev/nbd0
  offset:  4096 sectors
  size:    20967424 sectors
  mode:    read/write

$ sudo mkfs.xfs -m crc=1,finobt=1 /dev/mapper/testdisk
$ sudo mount -t xfs /dev/mapper/testdisk /mnt/disk/
$ df -h /mnt/disk
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/testdisk   10G   33M   10G   1% /mnt/disk
Deactivate with:
$ sudo umount /mnt/disk 
$ sudo cryptsetup close testdisk
$ sudo pkill -f /opt/nbd/sbin/nbd-client
When mounted, the disk speed is of course limited by the client's upload speed, and by the CPU too (for SSH and dm-crypt). Let's play with this for a while and see how it works out with rsync workloads. Maybe I'll come back for BorgBackup after all :-)