
Wow, Miro needs a lot of memory for holding ~20k media files in its database:

$ top -o rsize -U `whoami` -n 5 -l 1 | cut -c-90
755-  Miro            0.0  08:42.11 20  3   232+   1678+  727M+  63M+ 772M+ 780M+ 1641M+ 
592   firefox-bin     0.0  33:52.33 27  1   297+   1785+  316M+ 118M+ 529M+ 500M+ 3673M+
594   thunderbird-bin 0.0  04:09.21 27  1   218+    907+  176M+  87M+ 285M+ 273M+ 2839M+
668   firefox-bin     0.0  17:52.92 43  1   371+   1135+  146M+  92M+ 257M+ 400M+ 3603M+
1342- songbird        0.0  19:59.40 14  1   213+    984+   68M+  56M+ 147M+ 142M+  876M+
Indeed, Songbird, having indexed the very same directory, is significantly lighter on memory requirements, at least in this particular case. Unfortunately, both feel rather sluggish; guess I'll have to look into mpg321 again :-\

Online backups with CrashPlan

When I came across Evaluating Online Backup Services the other day, I remembered that I had looked into that topic a year or so ago as well. And I was surprised to see that the finalists were the same ones. In alphabetical order:

  • Arq - unlimited storage, per-user encryption; their backup client costs $29, but they offer the restore client for free. Also, data integrity seems to be one of their key benefits.
  • Backblaze - unlimited storage at an incredibly low price, per-user encryption but unfortunately no Linux client.
  • CrashPlan - unlimited storage at an incredibly low price, per-user encryption and a Java client, which is good enough.
  • Jungle Disk - unlimited storage, per-user encryption, Linux client, not sure about their security approach though.
  • SpiderOak - per-user encryption, Linux client, Android client coming soon, but unfortunately no unlimited storage plans.
With unlimited storage and a capable backup client, and because I already had a trial account from last year, I went with CrashPlan. Since I did not want to back up from my workstation, I went for the headless client installation, but with a few modifications.

On this Ubuntu 10.04 system (ia32), the Java JRE was already installed:
$ dpkg -l | grep java
ii  java-common      0.34                     Base of all Java packages
ii  sun-java6-bin    6.24-1build0.10.04.1     Sun Java(TM) Runtime Environment (JRE) 6
ii  sun-java6-jre    6.24-1build0.10.04.1     Sun Java(TM) Runtime Environment (JRE) 6
Also, we wanted to run CrashPlan as a different user:
# useradd -d /opt/crashplan -m -s /bin/false crashplan
# id crashplan 
uid=1002(crashplan) gid=1002(crashplan) groups=1002(crashplan)

# su -s /bin/bash - crashplan 

$ tar -xzf /tmp/CrashPlan_3.0.3_Linux.tgz 
$ cd CrashPlan-install/
$ ./ 
Would you like to start CrashPlanDesktop? (y/n) [y] n
Now that CrashPlan is installed (notice that we did not start the GUI), we'll fix a few permissions and file ownerships:
cd /opt/crashplan
rm -rf .bash_history CrashPlan-install
chown -R root:root .
chmod 0750 .

mkdir tmp
chown :crashplan . 
chown -R crashplan:crashplan .crashplan
chown -R :crashplan log/ conf/  lang/ tmp/ cache/ manifest/
chmod -R g+rw       log/ conf/  lang/ tmp/ cache/ manifest/
Almost done. A few quirks are still left:
sed 's/TARGETDIR\/\${NAME}\.pid/TARGETDIR\/log\/\${NAME}\.pid/' \
               -i.bak bin/CrashPlanEngine
sed 's/SRV_JAVA_OPTS=\"/&\/opt\/crashplan\/tmp /' \
               -i.bak bin/run.conf
sed 's/…/…/' \
               -i.bak conf/
The first command makes sure the PID file gets written into log/, which is writable for our CrashPlan user. The second one fixes a Java exception we got because /tmp is mounted with noexec here. Finally, we reduce the loglevel so our logfiles don't get spammed with debug information; you might want to postpone this change until things are up & running. With all that in place, we can start the CrashPlan engine:
# su -s /bin/sh -c "/opt/crashplan/bin/CrashPlanEngine start" crashplan
The engine will listen on localhost:4243 - that's where our desktop client has to connect. We'll use the desktop client only to configure and schedule the backups. The actual backup jobs are run by the engine on the server, and we can close the desktop client at any time. Enjoy! But make sure your restores are working too! :-)

Update: Bonus points for extra leetness: CrashPlan offers to ship your initial backup to them. Incredibly useful for bigger backups!

gnu/stubs-32.h: No such file or directory

While trying to compile for ia32 on an x86-64 system, this happened:

$ gcc -m32 file.c -o file.exe
In file included from /usr/include/features.h:378,
                 from /usr/include/stdio.h:28,
                 from file.c:10:
/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory
Hm, let's see where to get gnu/stubs-32.h from:
$ apt-file search gnu/stubs-32.h
libc6-dev-i386: /usr/include/gnu/stubs-32.h
However, installing libc6-dev-i386 was not enough:
$ gcc -m32 file.c -o file.exe
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/libgcc.a \
                  when searching for -lgcc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/libgcc.a \
                  when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
collect2: ld returned 1 exit status
Turns out we needed GCC Multilib support:
$ sudo apt-get install gcc-multilib
(installs gcc-multilib, depends on lib32gcc1 and libc6-dev-i386)

$ gcc -m32 file.c -o file.exe
$ file file.exe
file.exe: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), \
             dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped

Mediawiki & SimpleSecurity

With the latest release of MediaWiki, the SimpleSecurity extension exhibits a really nasty bug: with every request to an article page, the php-cgi process handling the request allocates as much memory as possible (until it hits the configured memory limit). With multiple requests to the site, the machine begins to swap heavily and will eventually run out of memory.

In the meantime I decided to disable the extension via LocalSettings.php - but how to protect all the articles SimpleSecurity was supposed to protect? Luckily there weren't that many articles with read restrictions, and most of them were categorized too. Instead of protecting these articles on the wiki layer, the webserver now has to handle that task. Since we're running lighttpd, the following directives were added to the configuration:

  $HTTP["url"] =~ "^/phpmyadmin|^/wiki/(Category|Special|User)" {
     auth.require = ( "" =>
         ("method" => "digest", "realm" => "Restricted", "require" => "valid-user"))
  }
However, that alone would not protect against calling articles via index.php?title=Special. I was surprised to see that lighttpd can match on the querystring too:
  $HTTP["querystring"] =~ "title\=(Category|Special|User)" { ...
It's not as nice as the SimpleSecurity configuration, and one needs to restart the webserver for every change to the protected pages, but as long as #29960 is not fixed, this could be the only way to go here.
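The combined effect of both matches can be sanity-checked offline by running sample request URLs (made up for this sketch) through essentially the same regular expressions with grep -E:

```shell
# Hypothetical request URLs filtered through the URL and querystring patterns
# used above; only the ones that would trigger the auth requirement are printed.
printf '%s\n' \
    '/wiki/Category:Internal' \
    '/wiki/Main_Page' \
    '/index.php?title=Special:Export' \
    '/index.php?title=Main_Page' \
  | grep -E '^/(phpmyadmin|wiki/(Category|Special|User))|title=(Category|Special|User)'
# -> /wiki/Category:Internal
# -> /index.php?title=Special:Export
```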

MacOS X Lion Boot Disc

So, apparently MacOS X Lion has been released and it's (currently) only available via an AppStore download which allows for an in-place upgrade from a running MacOS X Snow Leopard system to the newest one. While this may be the way to go in 2011, I still wanted to be able to boot from an optical medium - just in case. Here is how I did it:

Use AppStore to download the Mac OS X Lion installer. Unfortunately, on this particular system AppStore would freeze after clicking the Buy! button. A workaround suggested enabling Spotlight - and indeed, Spotlight was disabled on this system because I don't need/use it. But how to enable it again?

  # mdutil -E /Applications
  Spotlight server is disabled.
Hm. Where is that magic .plist file to enable Spotlight again? Ah, here it is:
  # launchctl load -w /System/Library/LaunchDaemons/
Now that Spotlight was enabled, I tried again. This time, AppStore would not freeze after clicking the Buy! button, but instead came up with an error message:
  Your request is temporarily unable to be processed. Please try again later.
Trying again (later) did not help, searching the interwebs did:
  On the iTunes toolbar click Store and then click Authorize this computer.
After doing that, fiddling around with iTunes a bit, agreeing to the iTunes Terms & Conditions and restarting AppStore, it finally started to download the MacOS X Lion installer. But where to? iftop revealed that port 53218 was busy downloading data via HTTP:
  # lsof -li :53218
  storeagen 2126  TCP local:53218->[...] (ESTABLISHED)

  # lsof -ln -p 2126 | sort -nk7 | tail -3
  storeagen 2126 [...] /private/var/db/dyld/dyld_shared_cache_x86_64
  storeagen 2126 [...] ../Library/Application Support/AppStore/444303913/mzm.stuhjljp.pkg
  storeagen 2126 [...] ../Library/Application Support/AppStore/444303913/mzm.stuhjljp.pkg

  # xar -t -f ../mzm.stuhjljp.pkg  | grep dmg
Ah, there it is :-)

After the download completes, we can just follow the initial instructions and burn InstallESD.dmg to a DVD via Disk Utility.

mount: warning: /mnt seems to be mounted read-write

When trying to create read-only bind mounts, this happens:

$ mkdir a b
$ mount -o bind,ro a b
mount: warning: b seems to be mounted read-write.
$ touch b/1
$ mount -o remount,ro b
$ touch b/2
touch: cannot touch `b/2': Read-only file system
This is even documented in the manpage:
   Note that the filesystem mount options will remain the same as those on the 
   original mount point, and cannot be changed by passing the -o option along 
   with --bind/--rbind. The mount options can be changed by a separate remount 
   command, for example:

        mount --bind olddir newdir
        mount -o remount,ro newdir
Interestingly enough, mount(2) states:
    Up until Linux 2.6.26, mountflags was also ignored (the bind mount has the
    same mount options as the underlying mount point).  Since Linux 2.6.26, 
    the MS_RDONLY flag is honored when making a bind mount.
FWIW, let's see how other systems handle that:
  • NetBSD handles this via its mount_null filesystem and gets the duplicated subtree read-only right away:
    $ mkdir /mnt/a /mnt/b
    $ mount -t null -o ro /mnt/a /mnt/b
    $ touch /mnt/b/1
    touch: /mnt/b/1: Read-only file system
    $ touch /mnt/a/2
    $ ls -l /mnt/b/*
    -rw-r--r--  1 root  wheel  0 Jul 13 15:57 /mnt/b/2
  • Same for FreeBSD, only here it's called mount_nullfs.

  • OpenBSD removed mount_nullfs years ago.

  • Solaris has lofs(7FS):
    $ mkdir /mnt/a /mnt/b
    $ mount -F lofs -o ro /mnt/a /mnt/b
    $ touch /mnt/b/1
    touch: cannot create /mnt/b/1: Read-only file system
    $ touch /mnt/a/2
    $ ls -l /mnt/b/*
    -rw-r--r--   1 root  root   0 Jul 13 16:53 /mnt/b/2

Oh no, IPv6!

Now that IPv6 is up & running, several other things are not, though.

Go, go, IPv6!

This article reminded me that I'm still (!) not connected to IPv6. While waiting for my SixXS account, I gave GoGo6 (formerly known as Hexago) a try:
$ apt-get install gogoc
$ cat /etc/gogoc/gogoc.conf

$ /etc/init.d/gogoc start 
Starting IPv6 TSP Client: gogoc
Not starting gogoc - no server key ... (warning).
The init script refuses to start without a server key. Since we never connected to any Freenet6 server, we don't have any keys yet. The quick solution is to disable the keyfile check:
$ grep ^[A-Z] /etc/default/gogoc 
However, gogoc still would not work. Well, the process was running, spinning wildly in fact, but not setting up an IPv6 tunnel. What was going on?
$ strace -p `pgrep gogoc` 2>&1 | grep -v read
write(1, "(Y/N) is a"..., 4096) = 4096
write(1, " is an unknown host,"..., 4096) = 4096
write(1, " an unknown host, do you want to"..., 4096) = 4096
write(1, "t, do you want to add its key?? "..., 4096) = 4096
Apparently there's a public key needed too:
$ gogoc -n is an unknown host, do you want to add its key?? (Y/N) Y

$ ls -lgo /var/lib/gogoc/ 
-rw-r----- 1 607 Jul  5 19:55 /var/lib/gogoc/
With that in place*, gogoc was running just fine and IPv6 connectivity was established - yay :-)

*and a few more tweaks for the webserver