
A kingdom for a music player!

The quest for my perfect music player continues. I won't go into the details of why iTunes sucks so much; others have done that already.

A long time ago, I started with Songbird. They were crazy enough to take Mozilla's XULRunner runtime and turn it into a music player. But it still had the browser built in and came with lots of features, making the whole thing quite bloated. Maintaining ~20k songs became very unpleasant, and CPU usage was definitely too high for a music player.

Then there's Cog, which is neat, but was more of an ad-hoc player and not meant to maintain a music library.

Later on, Unixhaus suggested using Clementine. Cross-platform, open-source, a nice interface and a Nyanalyzer - what more could one ask for? I stuck with it for quite a while, but there were still 20k songs to manage and CPU usage was rather high. 15% CPU usage on a MacBook Pro just for playing music? And often enough CPU usage spiked even higher for a few minutes, doing something, and then dropped to 15% again. Very annoying.

OK, what else is out there? Winamp! No, seriously, there's Winamp for Mac. Yeah, I tried it, I admit. Again with the 20k songs, Winamp performed quite well...but: it feels kinda creepy having Winamp on a Mac. Try it and feel for yourself :-\

Another contender was Ecoute. It has a 15-day free trial; after that it's US$8 in the Mac App Store. It's got a very nice UI, but after a while it kept freezing again and again and became unusable. Good thing they offered a trial version!

I haven't tried Enqueue yet. It's US$ 10 in the Mac App Store but unfortunately there's no trial version. It looks a lot like iTunes, exactly what I'm trying to avoid.

Someone mentioned Vox - their tagline is "The Lightweight Music App for Mac OS X". OK, I haven't tried it yet, but from looking at their screenshots they seem to have a different understanding of lightweight.

So, what now? Clementine is the player I used most of the time, but I gave up on it some months ago to try out other alternatives.

A few days ago I thought "Hm, I wonder what happened to Songbird?". They are at version 2.x now and I felt like giving it another shot. They still have this browser stuff underneath, but the application feels a lot faster now and they cut down on the bloated plugins in the initial installation (though there are plenty of addons to choose from). Like Clementine, it's cross-platform and open-source (sans the Nyanalyzer), and for now it's very usable for me and working just fine.

I could end this entry with "Stay tuned for updates" but I hope that I won't need to update this post and that Songbird continues to stay usable.

Remove U3 from a USB flash disk

This "SanDisk Cruzer" USB flash drive has this nasty U3 thingy included. Every time it connects to a host, this U3 partition reappears and gets in the way, trying to do super smart stuff.

So, how to get rid of this malware feature? Overwriting the whole device did not help. There's a removal tool...for Windows. There's a removal tool for Macs too, but only up to MacOS 10.6 :-\

For Linux, there's u3_tool. This hasn't been updated in a while but let's hope we won't need tools like this in the future. Here's a short description to get u3_tool going:

 svn co u3-tool-svn
 cd u3-tool-svn
 automake --add-missing
 ./configure --prefix=/opt/u3-tool
 make && sudo make install
After the build completed, let's use it:
$ /opt/u3-tool/sbin/u3-tool -i -v /dev/sdb
Total device size:   1.88 GB (2017525760 bytes)
CD size:             16.00 MB (16777216 bytes)
Data partition size: 1.86 GB (2000748544 bytes)

$ /opt/u3-tool/sbin/u3-tool -p 0 -v /dev/sdb

WARNING: Loading a new cd image causes the whole device to be wiped. This INCLUDES
 the data partition.

Are you sure you want to continue? [yn] y

$ echo $?
And it worked :-) Good riddance, U3!
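To double-check that the U3 CD partition is really gone, one could parse u3-tool's info output again; here's a small sketch (the cd_size_bytes helper name is made up):

```shell
# Parse the byte count out of "u3-tool -i" output (hypothetical helper);
# after a successful "-p 0" run, the CD size should come back as 0 bytes.
cd_size_bytes() {
    grep '^CD size' | sed 's/.*(\([0-9][0-9]*\) bytes).*/\1/'
}
# usage: /opt/u3-tool/sbin/u3-tool -i /dev/sdb | cd_size_bytes
```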

Repair broken MP3 files

Some application (which shall remain nameless) complained about "invalid MP3 formatted files", but gave no clue about what exactly was "invalid". The files played just fine and their ID3 tags were displayed too, so this application appeared to be overly picky about these files. Let's have a closer look:

$ file foo.mp3
foo.mp3: Audio file with ID3 version 2.3.0, contains: MPEG ADTS, layer II, v1, 192 kbps, 44.1 kHz, Stereo
Aha! Although the file was named .mp3, it really was an MP2 file - MPEG-1 Audio Layer II, which is what "layer II, v1" in the output above means. Of course, most programs can play MP2 just fine, but this application refused to do so.

The ever-so-faithful lame was quick to help:
$ mv foo.{mp3,mp2}
$ lame --mp2input foo.mp2 foo.mp3
Also, the ID3 version said "2.3.0", which is perfectly valid, but maybe there was something else wrong with these files, so I needed some magic program to check (and repair) the files' ID3 tags. mid3iconv (from python-mutagen) is supposed to do just that:
$ mid3iconv -d foo.mp3
Updating foo.mp3
Now our file looked like this:
foo.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 128 kbps, 44.1 kHz, JntStereo
...and the application was happy to process this file :-)
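To find more mislabeled files in a collection, one could combine file(1) with a small classifier. A sketch - the check_layer helper is made up, and the actual rename/re-encode step is left commented out:

```shell
# Classify a file(1) description: MPEG-1 Layer II means the file is
# really an MP2, Layer III is a proper MP3 (helper name is made up)
check_layer() {
    case "$1" in
        *"layer II,"*)  echo mp2 ;;
        *"layer III,"*) echo mp3 ;;
        *)              echo unknown ;;
    esac
}

# Scan the current directory for .mp3 files that are really MP2
for f in *.mp3; do
    [ -e "$f" ] || continue
    if [ "$(check_layer "$(file -b "$f")")" = mp2 ]; then
        echo "mislabeled: $f"
        # mv "$f" "${f%.mp3}.mp2" && lame --mp2input "${f%.mp3}.mp2" "$f"
    fi
done
```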

smartctl & external disks

When monitoring S.M.A.R.T. values of disks in a Unix system, smartmontools is usually the way to go.

Unfortunately, monitoring external disk enclosures may be difficult or not possible at all. I haven't seen a FireWire enclosure that supports SMART commands yet.

USB enclosures tend to work, but I noticed that the scheduled self-tests would not complete. For example, for the following disk a short (S) self-test is scheduled every day at 3am and a long (L) self-test every Saturday at 6am:

$ cat /etc/smartd.conf
/dev/disk/by-id/scsi-SSAMSUNG_HD103UJ -d sat -a -o on -S on \
                           -s (S/../.././03|L/../../6/06) -I 190 -I 194 -W 5
But so far, not a single test completed:
$ smartctl -d sat -l selftest /dev/sdb
SMART Self-test log structure revision number 0
Warning: ATA Specification requires self-test log structure revision number = 1
Num  Test_Description    Status           Remaining  LifeTime(hours)
# 1  Extended offline    Aborted by host      00%     25980
# 2  Short offline       Aborted by host      00%     25973
Someone else had a similar problem and suggested running "smartctl -a /dev/disk..." every few seconds while the self-tests are in progress, so that the disk would not shut down. Preliminary tests showed that this helped in my case as well.

From now on the self-test schedule in smartd.conf will be accompanied by some cronjob doing just this:
while smartctl -d sat -l selftest /dev/sdb 2>&1 | \
               grep -q "Self-test routine in progress"; do
      sleep 30
done
The crontab(5) entry for the schedule above:
# m h  dom mon  dow   command
0   3    *   *    *   script /dev/disk/by-id/scsi-SSAMSUNG_HD103UJ
0   6    *   *    6   script /dev/disk/by-id/scsi-SSAMSUNG_HD103UJ
We might add some fuzziness to this "script" of course, so that it still works when the actual self-test starts a bit late.
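The "script" referenced in the crontab could look like the following sketch; the SMARTCTL and POLL_INTERVAL variables are parameterized here for illustration and are not from the original setup:

```shell
# Poll the disk via smartctl while a self-test is in progress, so the
# enclosure does not spin the disk down mid-test (sketch)
SMARTCTL=${SMARTCTL:-smartctl}
POLL_INTERVAL=${POLL_INTERVAL:-30}

wait_selftest() {    # usage: wait_selftest /dev/disk/by-id/...
    while $SMARTCTL -d sat -l selftest "$1" 2>&1 | \
            grep -q "Self-test routine in progress"; do
        sleep "$POLL_INTERVAL"
    done
}
# e.g.: wait_selftest /dev/disk/by-id/scsi-SSAMSUNG_HD103UJ
```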

SSH/HTTPS multiplexer

Hm, this nmap scan looked funny:

22/tcp  open  ssh     OpenSSH 5.2 (protocol 2.0)
80/tcp  open  http    Gatling httpd 0.13
443/tcp open  ssh     OpenSSH 5.2 (protocol 2.0)
SSH listening on :443, yet the site was serving a website there? Looking around a bit I came across a few SSH/HTTP/HTTPS multiplexers. There are even binary packages out there for a few distributions, nice! So, how is it done?
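The basic trick behind these multiplexers: an SSH client sends its identification string right away, while a TLS ClientHello starts with byte 0x16, so peeking at the first bytes of a connection is enough to route it. A rough sketch of that decision - the classify helper is made up, and real tools like sslh additionally apply a timeout for clients that wait for the server to speak first:

```shell
# Route a connection based on its first bytes: SSH clients announce
# themselves with "SSH-...", a TLS handshake record starts with 0x16
classify() {    # usage: classify "<first bytes read from the client>"
    case "$1" in
        SSH-*)               echo ssh ;;
        "$(printf '\026')"*) echo tls ;;
        *)                   echo unknown ;;
    esac
}
```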


When using ssh-https.c, the ports are hardcoded:
$ grep execl ssh-https.c
                execl("/bin/nc", "/bin/nc", "localhost", "8443", NULL);
                execl("/bin/nc", "/bin/nc", "localhost", "22", NULL);

$ gcc -o ssh-https ssh-https.c
$ mv ssh-https /usr/local/sbin/
SSH will continue to listen on :22, the webserver will have to listen on :8443 and ssh-https will listen on :443:
$ grep ssh-https /etc/inetd.conf
https   stream  tcp  nowait  nobody  /usr/sbin/tcpd /usr/local/sbin/ssh-https


sslh is a bit more flexible, as ports can be passed on the command line:
$ grep sslh /etc/inetd.conf
https   stream  tcp  nowait  sslh  /usr/sbin/tcpd /usr/sbin/sslh \
       --listen --inetd --ssh localhost:22 --ssl localhost:8443
In any case, we should now have 3 listening ports:
$ netstat -anptu | grep LISTEN
tcp    0      0*   LISTEN    2211/dropbear
tcp    0      0*  LISTEN    6510/inetd
tcp    0      0* LISTEN    6012/lighttpd
And it's even working :-)
$ ssh-keyscan -p 443
# foo SSH-2.0-dropbear_2012.55

$ wget -qO-
Hello, world :-)

MacOS X disk I/O

While we're at it, some more "benchmarks". On a MacBook Pro with a Crucial m4 SSD inside:

$ dd if=/dev/rdisk0 bs=1024k count=2048 2>/dev/null | pv > /dev/null
   2GiB 0:00:09 [ 215MiB/s]
Oddly enough, the block device node of the same disk had much worse performance:
$ ls -lgo /dev/{r,}disk0
brw-r-----  1    14,   0 Dec 14 08:13 /dev/disk0
crw-r-----  1    14,   0 Dec 14 08:13 /dev/rdisk0

$ dd if=/dev/disk0 bs=1024k count=2048 2>/dev/null | pv > /dev/null
   2GiB 0:01:06 [30.8MiB/s]
On the same machine, Debian/wheezy was running in a VirtualBox virtual machine; there, we still get about half the performance:
vm$ dd if=/dev/sda bs=1024k count=2048 2>/dev/null | pv > /dev/null
   2GB 0:00:18 [ 109MB/s]
And inside this virtual machine, another Debian/wheezy installation was running as a Xen domU virtual machine; performance halves again:
vm|domU$ dd if=/dev/xvda1 bs=1024k count=2048 2>/dev/null | pv > /dev/null
   2GB 0:00:29 [69.4MB/s]
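For repeated runs, the dd/pv pipeline above can be wrapped in a tiny helper; a sketch (the helper name is made up, it falls back to plain dd when pv is not installed, and the count defaults to the 2 GiB used above):

```shell
# Sequential read test: read <MiB> MiB from a device and throw it away
read_bench() {    # usage: read_bench /dev/rdisk0 [MiB]
    if command -v pv >/dev/null 2>&1; then
        dd if="$1" bs=1024k count="${2:-2048}" 2>/dev/null | pv > /dev/null
    else
        dd if="$1" bs=1024k count="${2:-2048}" of=/dev/null
    fi
}
```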

dm-crypt benchmarks

While setting up encrypted swap on yet another Linux machine, I wondered what the "best" crypto algorithm would be.

There are plenty of combinations of ciphers, modes, hash algorithms and keysizes to choose from with cryptsetup(8); let's see if we can find a fast, yet sufficiently "secure" one.

Before testing these combinations I wanted to find out which combinations were actually possible. E.g. setting up a dm-crypt device with aes-cbc-plain and a keysize of 128 or 256 bit is possible - but any larger keysize is rejected. There were many "invalid" combinations, for reasons rooted deeply in their mathematical properties. So, let's find the valid combinations first:

cryptsetup -c $CIPHER -s $KEYSIZE -d /dev/urandom create test /dev/sdc 2>/dev/null
if [ $? = 0 ]; then
      echo "Valid combination: cipher $CIPHER size $KEYSIZE"
      cryptsetup remove test      # tear down, so the name can be reused
else
      echo "Invalid combination: cipher $CIPHER - size $KEYSIZE"
fi
After quite some iterations over a predefined set of combinations (12 ciphers, 7 modes, 14 hashing algorithms, 5 keysizes), there were 1125 valid combinations left. Yeah, testing took a while :-)
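The iteration over the predefined set can be sketched as nested loops feeding the validity check above; the cipher/mode/IV/keysize lists here are a tiny sample for illustration, not the full 12x7x14x5 grid:

```shell
# Emit candidate cipher-spec/keysize pairs to feed into the
# cryptsetup validity check (sample lists only)
gen_combinations() {
    for cipher in aes twofish serpent; do
        for mode in cbc ctr xts; do
            for iv in plain essiv:sha256; do
                for size in 128 256 512; do
                    echo "$cipher-$mode-$iv / $size"
                done
            done
        done
    done
}
gen_combinations | head -3
```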

Now we wanted to see which combinations performed "best". As stated above, the usecase was a blockdevice for encrypted swap - so "fast, yet pretty secure" were the criteria to look for. As a (very) simple test, the following was done for each newly set up crypto blockdevice:
i=0
while [ $i -lt 30 ]; do
      # Empty all caches, including filesystem buffers
      sysctl -qw vm.drop_caches=3
      dd if=/dev/mapper/test of=/dev/null bs=1M 2>/dev/null
      i=$((i+1))
done

The results, summarized:
  • cipher_null* is the fastest - but it's absolutely insecure, because...well, it's a NULL cipher :-)
  • Interestingly, PCBC mode is sometimes very slow. As per the Wikipedia article, this mode is not very common anyway, so we'll not choose this one.
  • As expected, Twofish is very fast, along with AES and Blowfish.
Ruling out some of the obvious combinations and omitting some of the "exotic" algorithms, these are my winners:
$ egrep -v 'cipher_null|-(ecb|lrw|pcbc)-|-plain|md[45]|rmd|tgr|crc32|sha1' results | \
    grep ' / 256' | sort -nk5  | head -10
twofish-ctr-essiv:sha256 / 256 : 66            <== 66 seconds for 30 runs. Lower is better.
twofish-cbc-essiv:sha256 / 256 : 69
twofish-xts-essiv:sha256 / 256 : 73
aes-xts-essiv:sha256 / 256 : 79
blowfish-cbc-essiv:sha256 / 256 : 81
blowfish-ctr-essiv:sha256 / 256 : 86
aes-ctr-essiv:sha256 / 256 : 90
aes-cbc-essiv:sha256 / 256 : 91
camellia-xts-essiv:sha256 / 256 : 103
serpent-ctr-essiv:sha256 / 256 : 103

Now my /etc/crypttab probably looks like this:
swap /dev/sda2 /dev/urandom swap,cipher=twofish-xts-essiv:sha256,size=256,hash=sha512
Word of caution: this is a benchmark - some arbitrary test for a very special usecase, executed on one machine and one machine only (Fedora 18 in an ESX virtual machine, equipped with 2 AMD Opteron 848 processors). Before applying these results to your environment, run the benchmark yourself or, better yet: write your own benchmark for your usecase!

Mediawiki restore

The other day this box went down and I could not access my Mediawiki installation. The box was meant to come back online later on, but I really wanted to read the wiki, now. Luckily I had somewhat fresh backups of the Mediawiki installation - so why not use them? If only to see how long a full restore would take.

So I fired up an openSUSE 12.1 installation, running in a virtual machine. The base system was already installed, a few more packages were needed now:

zypper install nginx mysql-community-server php5-fpm php5-mysql php5-intl php5-gd
Note: Mediawiki likes to utilize object caching, such as xcache or APC. However, PHP modules for openSUSE like php-APC or php5-xcache are only available via extra repositories. For the sake of simplicity, let's skip those now.

With these packages installed, their configuration comes next. This may be a bit openSUSE centric and other distributions may work differently.

For PHP-FPM, only the following parts were changed from its original configuration:
 $ cp -p /etc/php5/fpm/php-fpm.conf{.default,}
 $ cat /etc/php5/fpm/php-fpm.conf
 pid       = /var/run/
 error_log = /var/log/php-fpm.log
 listen    = /var/run/php5-fpm.sock
 user  = nobody
 group = nobody
 pm = dynamic
 pm.max_children = 50
 pm.start_servers = 20
 pm.min_spare_servers = 5
 pm.max_spare_servers = 35
Enable and start PHP-FPM:
chkconfig php-fpm on && service php-fpm start
Next up is nginx. A very basic configuration:
 $ cat /etc/nginx/nginx.conf
 user                    nginx;
 worker_processes        1;
 error_log       /var/log/nginx/error.log;
 pid             /var/run/;
 events {
        worker_connections      1024;
        use                     epoll;
 }
 http {
        include         mime.types;
        default_type    application/octet-stream;
        access_log      /var/log/nginx/access.log;
        # This will access our PHP-FPM installation via a socket
        upstream php5-fpm-sock {
                server          unix:/var/run/php5-fpm.sock;
        }
        server {
                listen          80;
                server_name     suse0.local;
                root            /var/www;
                index           index.html index.php;
                autoindex       on;
                access_log      /var/log/nginx/suse0.access.log;
                location ~ \.php?$ {
                        try_files $uri =404;
                        include fastcgi_params;
                        fastcgi_pass php5-fpm-sock;
                        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                        fastcgi_intercept_errors on;
                }
        }
 }
Enable and start nginx (and MySQL):
  $ chkconfig nginx on && service nginx start
  $ chkconfig mysql on && service mysql start
By now we should have a working webserver, able to serve PHP pages. Now to the actual Mediawiki installation:
  $ cd /var/www
  $ wget{,.sig}
Import their GPG keys and verify the signature:
  $ wget -O - | gpg --import 
  $ gpg --verify mediawiki-1.20.0.tar.gz.sig 
  $ tar -xzf mediawiki-1.20.0.tar.gz 
  $ ln -s mediawiki-1.20.0 mediawiki
  $ cd mediawiki
With that in place, we could go to http://suse0.local/mediawiki/ and use the install wizard to install a basic, but empty Mediawiki. Once this is done, we restore a few things from our backup:
  $ tar -C ../backup/mediawiki/ -cf - LocalSettings.php extensions images | tar -xvf -
  $ bzip2 -dc ../backup/DB_wikidb.sql.bz2 | mysql -D wikidb
In our case, a few modifications to LocalSettings.php had to be made:
 # We have not yet set up any rewrite rules so short urls
 # won't work for now
 ## $wgArticlePath               = "/wiki/$1";
 # Disable this one for now
 ## $wgServer              = "";
 # Our database details are different of course:
 $wgDBtype           = "mysql";
 $wgDBserver         = "";
 $wgDBname           = "wikidb";
 $wgDBprefix         = "mw_";          # Our original database used a table prefix!
 $wgDBuser           = "root";
 $wgDBpassword       = "s3cr3t";
 # no APC/Xcache for openSUSE just now
 ## $wgMainCacheType       = CACHE_ACCEL;
Also: check those extensions, or disable them if things don't work as expected.

Now, let's run the update.php script, to address any version differences of our new Mediawiki instance:
  $ php maintenance/update.php --conf `pwd`/LocalSettings.php
Done! The Mediawiki installation should now work. If it doesn't, try to set a few more things in LocalSettings.php:
 # At the very top:
 error_reporting( E_ALL | E_STRICT );
 ini_set( 'display_errors', 1 );
 $wgShowExceptionDetails        = true;
 $wgShowSQLErrors               = true;
Good luck! :-)

DVD Region Code Hell

So, I got this DVD and wanted to play it on a MacBook Pro (Mid 2009) - but what's this? A Drive Region chooser pops up, prompting me to set the "correct" DVD region for this DVD. And indeed, I've seen this chooser before and remember setting it to region 2 (Europe) a long time ago. Now this Drive Region window tells me that I have only 4 changes left; after that, the region code cannot be changed any more.

What? I don't play a lot of DVDs on this computer, and while I've heard of "region codes" I thought this was a thing of the past and that nowadays it was considered impractical to enforce or even honor this ridiculous craziness. Apparently not.

Of course, in most cases one can just kill the Drive Region chooser and use VLC to play the DVD, but this time VLC just couldn't play it. To put this stupidity to an end I decided to resort to more drastic measures. After all, I bought this DVD and I intended to buy even more DVDs, from all over the world - and I don't want to have to deal with shenanigans like "DVD region codes".

Searching the net on how to accomplish that brings quite a few results, but many of them are horribly dated (PowerPC Macs, remember those? :-)) or stop halfway through explaining how to actually get rid of this nonsense. After reading through a few related posts, here's how I did it.

We have to patch the firmware of the DVD drive our Mac was equipped with:

$ system_profiler -detailLevel mini | grep -A4 ^Disc
Disc Burning:
      Firmware Revision: KB19
      Interconnect: ATAPI
Luckily, able people have already done the hard work and provided tools and firmware images. Unfortunately, their links often lead to strange places and are not always accurate (any more). For your (and my) convenience, I set up a small mirror site for these tools.

Grab and unzip it. The newer version (2.01) did not work and exited with:
  fatal: Selected drive (null) does not appear to be a matshita device
We also need the correct firmware for our device. In my case, I needed the one called UJ-868_KB19_Stock_RPC1 for this "MATSHITA DVD-R UJ-868" DVD drive. Grab it and unzip it.

With all that in place, we can finally start flashing. Make sure there is no disk in the drive!
$ unzip
$ cd MatshitaFlasher\
$ unzip ~/

$ sudo ./simple_flash 0 UJ-868_KB19_Stock_RPC1/KB19_rpc1.dat
compiled at Apr 14 2011 22:33:18
Selected device: MATSHITADVD-R   UJ-868  KB1
Continue? y

$ echo $?
The flashing took about 30 seconds to complete. Afterwards (and without a reboot!), the DVD could be played without the nag screen, just as I would've expected in the first place.

Backing up a Windows host with rsnapshot

After setting up this Windows box I thought I could skip HardlinkBackup this time and configure its backups with rsnapshot, my favourite backup solution anyway:

  • Install an SSH server for Windows. Copssh is no longer a free product, but an older version was made available for our convenience. Note: using a stale version also means that no security updates will be available - probably not a good idea if the box is directly connected to an untrusted network. Download & install, and SSH should be good to go. You may want to open a firewall port. After public key authentication has been set up, the sshd_config(5) could be tweaked a bit:

    +Protocol 2
    +PasswordAuthentication no
    +AllowAgentForwarding no
    +AllowTcpForwarding no
    +AllowUsers      Administrator
    Don't forget to restart the SSH service after adjusting the configuration: net stop "Openssh SSHD" (and then "start" again)

  • For rsnapshot to work, we'll need a Windows version of rsync: cwRsync. Again, this is no longer a free product, but they also made an older version available. Neat. Download & install should be an easy clickfest. However, rsync.exe may not be in our PATH when logging in via SSH, so let's add a symlink:

    $ ln -s /cygdrive/c/Program\ Files/ICW/cwRsync/bin/rsync.exe /bin/rsync.exe
  • With all that in place only rsnapshot is left to be configured. Here's a (shortened) configuration file:

    snapshot_root   /mnt/backup/rsnapshot/windows/
    cmd_rsync       /usr/bin/rsync
    interval        daily   7
    interval        weekly  4
    interval        monthly 2
    verbose         2
    loglevel        3
    logfile /var/log/rsnapshot/rsnapshot-windows.log
    lockfile        /var/run/
    rsync_short_args -rlptDzv
    rsync_long_args  --delete --numeric-ids --delete-excluded --relative
    exclude_file     /etc/rsnapshot/rsnapshot-windows.exclude
    link_dest       1
    backup  Administrator@windows:/cygdrive/c/Documents?and?Settings/   windows/
    backup  Administrator@windows:/cygdrive/c/Program?Files/ICW/CopSSH/ windows/
    Note that I did not use the usual -a option for rsync, because the Windows ownerships (-go) could not be mapped to a Unix user and files would get transferred over and over again because of this. The same goes for ACLs (-A) and EAs (-X).

  • Happy recovering :-)
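For completeness, rsnapshot itself still has to be driven by cron on the backup host. A hypothetical crontab(5) sketch matching the daily/weekly/monthly intervals above - the times and config path are made up:

```shell
# m h  dom mon dow  command
30 3   *   *   *    rsnapshot -c /etc/rsnapshot/rsnapshot-windows.conf daily
0  4   *   *   1    rsnapshot -c /etc/rsnapshot/rsnapshot-windows.conf weekly
30 4   1   *   *    rsnapshot -c /etc/rsnapshot/rsnapshot-windows.conf monthly
```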