
gnu/stubs-32.h: No such file or directory

While trying to compile for ia32 on x86-64 system, this happened:
$ gcc -m32 file.c -o file.exe
In file included from /usr/include/features.h:378,
                 from /usr/include/stdio.h:28,
                 from file.c:10:
/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory
Hm, let's see where to get gnu/stubs-32.h from:
$ apt-file search gnu/stubs-32.h
libc6-dev-i386: /usr/include/gnu/stubs-32.h
However, installing libc6-dev-i386 was not enough:
$ gcc -m32 file.c -o file.exe
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/libgcc.a \
                  when searching for -lgcc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/libgcc.a \
                  when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
collect2: ld returned 1 exit status
Turns out we needed GCC Multilib support:
$ sudo apt-get install gcc-multilib
(installs gcc-multilib, depends on lib32gcc1 and libc6-dev-i386)

$ gcc -m32 file.c -o file.exe
$ file file.exe
file.exe: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), \
             dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
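Above, file(1) confirms the binary is 32-bit; when file isn't handy, the same information can be read straight from the binary's header. A small sketch (the elf_class helper is a made-up name; byte 5 of the ELF header, EI_CLASS, is 01 for 32-bit and 02 for 64-bit):

```shell
# Sketch: report an ELF binary's class by reading the EI_CLASS byte
# (offset 4) of its header: 01 = 32-bit, 02 = 64-bit.
elf_class() {
    case "$(od -An -tx1 -j4 -N1 "$1" | tr -d ' ')" in
        01) echo "32-bit" ;;
        02) echo "64-bit" ;;
        *)  echo "not an ELF binary?" ;;
    esac
}

elf_class /bin/sh
```

On a stock x86-64 installation this should report 64-bit for /bin/sh, and 32-bit for the file.exe built with -m32 above.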

Mediawiki & SimpleSecurity

With the latest release of MediaWiki, the SimpleSecurity extension reveals a really nasty bug: with every request to an article page, the php-cgi process handling the request allocates as much memory as possible (until it hits the configured memory limit). With multiple requests to the site, the machine begins to swap heavily and will eventually run out of memory.

In the meantime I decided to disable the extension via LocalSettings.php, but how to protect all the articles ''SimpleSecurity'' was supposed to protect? Luckily there weren't that many articles with read restrictions, and most of them were categorized too. Instead of protecting these articles on the wiki layer, the webserver now has to handle that task. Since we're running lighttpd, the following directives were added to the configuration:
  $HTTP["url"] =~ "^/phpmyadmin|^/wiki/(Category|Special|User)" {
     auth.require = ( "" => 
         ("method" => "digest", "realm" => "Restricted", "require" => "valid-user"))
  }
However, that alone would not protect against accessing articles via index.php?title=Special. I was surprised to see that lighttpd can match on the querystring too:
  $HTTP["querystring"] =~ "title\=(Category|Special|User)" { ...
It's not as nice as the SimpleSecurity configuration and one needs to restart the webserver for every change to the list of protected pages, but as long as #29960 is not fixed, this may be the only way to go here.
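Since lighttpd conditionals on different fields can't easily be combined into one block (as far as I can tell), both match rules end up with their own conditional. A combined sketch, with the same regexes and auth settings as above, not tested against a live server:

```
$HTTP["url"] =~ "^/wiki/(Category|Special|User)" {
   auth.require = ( "" =>
       ("method" => "digest", "realm" => "Restricted", "require" => "valid-user"))
}
$HTTP["querystring"] =~ "title=(Category|Special|User)" {
   auth.require = ( "" =>
       ("method" => "digest", "realm" => "Restricted", "require" => "valid-user"))
}
```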

Mac OS X Lion Boot Disc

So, apparently Mac OS X Lion has been released and it's (currently) only available via an AppStore download, which allows for an in-place upgrade from a running Mac OS X Snow Leopard system to the newest one. While this may be the way to go in 2011, I still wanted to be able to boot from an optical medium - just in case. Here is how I did it:

Use the AppStore to download the Mac OS X Lion installer. Unfortunately, on this particular system the AppStore would freeze after clicking the Buy! button. A workaround suggested enabling Spotlight. Indeed, Spotlight was disabled on this system, because I don't need/use it. But how to enable it again?
  # mdutil -E /Applications
  Spotlight server is disabled.
Hm. Where is that magic .plist file again, to enable Spotlight? Ah, here it is:
  # launchctl load -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
Now that Spotlight was enabled, I tried again. This time the AppStore would not freeze/hang but instead came up with an error message:
  Your request is temporarily unable to be processed. Please try again later.
Trying again (later) did not help, searching the interwebs did:
  On the iTunes toolbar click Store and then click Authorize this computer.
After doing that and a bit of fiddling around with iTunes, agreeing to the iTunes Terms & Conditions and restarting the AppStore, it finally started to download the Mac OS X Lion installer. But where to? iftop revealed that port 53218 was busy downloading data via HTTP:
  # lsof -li :53218
  COMMAND  PID  NODE NAME
  storeagen 2126  TCP local:53218->[...].akamaitechnologies.com:http (ESTABLISHED)

  # lsof -ln -p 2126 | sort -nk7 | tail -3
  storeagen 2126 [...] /private/var/db/dyld/dyld_shared_cache_x86_64
  storeagen 2126 [...] ../Library/Application Support/AppStore/444303913/mzm.stuhjljp.pkg
  storeagen 2126 [...] ../Library/Application Support/AppStore/444303913/mzm.stuhjljp.pkg

  # xar -t -f ../mzm.stuhjljp.pkg  | grep dmg
  InstallMacOSX.pkg/InstallESD.dmg
Ah, there it is :-)

After the download has completed, we can just follow the initial instructions and burn the InstallESD.dmg on a DVD via Disk Utility.

mount: warning: /mnt seems to be mounted read-write

When trying to create readonly bind-mounts, this happens:
$ mkdir a b
$ mount -o bind,ro a b
mount: warning: b seems to be mounted read-write.
$ touch b/1
$ mount -o remount,ro b
$ touch b/2
touch: cannot touch `b/2': Read-only file system
This is even documented in the manpage:
   Note that the filesystem mount options will remain the same as those on the 
   original mount point, and cannot be changed by passing the -o option along 
   with --bind/--rbind. The mount options can be changed by a separate remount 
   command, for example:

        mount --bind olddir newdir
        mount -o remount,ro newdir
Interestingly enough, mount(2) states:
    Up until Linux 2.6.26, mountflags was also ignored (the bind mount has the
    same mount options as the underlying mount point).  Since Linux 2.6.26, 
    the MS_RDONLY flag is honored when making a bind mount.
FWIW, let's see how other systems handle that:
  • NetBSD handles this via its mount_null filesystem and gets the duplicated subtree read-only right away:
    $ mkdir /mnt/a /mnt/b
    $ mount -t null -o ro /mnt/a /mnt/b
    $ touch /mnt/b/1
    touch: /mnt/b/1: Read-only file system
    $ touch /mnt/a/2
    $ ls -l /mnt/b/*
    -rw-r--r--  1 root  wheel  0 Jul 13 15:57 /mnt/b/2
    
  • Same for FreeBSD, only here it's called mount_nullfs.

  • OpenBSD removed mount_nullfs years ago.

  • Solaris has lofs(7FS):
    $ mkdir /mnt/a /mnt/b
    $ mount -F lofs -o ro /mnt/a /mnt/b
    $ touch /mnt/b/1
    touch: cannot create /mnt/b/1: Read-only file system
    $ touch /mnt/a/2
    $ ls -l /mnt/b/*
    -rw-r--r--   1 root  root   0 Jul 13 16:53 /mnt/b/2
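On Linux, the two-step dance from the manpage is easy to forget; it can be wrapped in a tiny helper. A sketch (bind_ro is a made-up name, and the commands need root):

```shell
# Sketch: read-only bind mount on Linux, wrapping the two-step approach
# from the mount(8) manpage quoted above: bind first, then remount ro.
bind_ro() {
    mount --bind "$1" "$2" && mount -o remount,ro "$2"
}
```

With that, `bind_ro olddir newdir` behaves much like the one-step NetBSD/Solaris variants above.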
    

Oh no, IPv6!

Now that IPv6 is up & running, several things are not, though:

Go, go, IPv6!

This article reminded me that I'm still (!) not connected to IPv6. While waiting for my SixXS account, I tried with GoGo6 (formerly known as Hexago):
$ apt-get install gogoc
$ cat /etc/gogoc/gogoc.conf
[...]
userid=foo
passwd=bar
server=authenticated.freenet6.net
auth_method=any
if_prefix=eth0
log_file=1
log_rotation_size=1024

$ /etc/init.d/gogoc start 
Starting IPv6 TSP Client: gogoc
Not starting gogoc - no server key ... (warning).
The init script complains about a missing server key: since we never connected to any Freenet6 server, we don't have any keys yet. The quick workaround is to disable the keyfile check:
$ grep ^[A-Z] /etc/default/gogoc 
CHECK_KEYFILE="no"
However, gogoc still would not work. Well, the process was running, spinning wildly in fact - but not setting up an IPv6 tunnel. What was going on?
$ strace -p `pgrep gogoc` 2>&1 | grep -v read
write(1, "(Y/N) montreal.freenet6.net is a"..., 4096) = 4096
write(1, "freenet6.net is an unknown host,"..., 4096) = 4096
write(1, " an unknown host, do you want to"..., 4096) = 4096
write(1, "t, do you want to add its key?? "..., 4096) = 4096
Apparently gogoc was waiting for someone to accept the server's key interactively:
$ gogoc -n
montreal.freenet6.net is an unknown host, do you want to add its key?? (Y/N) Y

$ ls -lgo /var/lib/gogoc/gogockeys.pub 
-rw-r----- 1 607 Jul  5 19:55 /var/lib/gogoc/gogockeys.pub
With that in place*, gogoc was running just fine and IPv6 connectivity was established - yay :-)

*and a few more tweaks for the webserver
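To avoid tripping over the missing key again, a small pre-flight check could be dropped into a local script before starting gogoc. A sketch only (the path is the Debian one seen above):

```shell
# Sketch: warn before starting gogoc when the Freenet6 server key has not
# been accepted yet (run 'gogoc -n' once to accept it interactively).
KEYFILE=/var/lib/gogoc/gogockeys.pub
if [ ! -s "$KEYFILE" ]; then
    echo "no server key in $KEYFILE - run 'gogoc -n' once to accept it"
fi
```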

gzip vs. pigz vs. bzip2 vs. pbzip2

Shortly after the last benchmark, I came across pigz (parallel gzip) and a bigger (real-world) task to complete:
$ time gzip -c file.tar > file.tar.gz
real     41m52.636s
user     33m58.392s
sys       2m26.903s

$ time pigz -c file.tar > file.tar.pigz
real     18m34.894s
user     54m07.784s
sys       3m47.910s

$ time bzip2 -c file.tar > file.tar.bz2
real    838m47.771s
user    830m48.621s
sys       2m18.429s

$ time pbzip2 -c file.tar > file.tar.pbz2
real     58m06.466s
user   1748m17.785s
sys       4m49.537s

$ ls -lhgo
-rw-r--r--   1  15G Jun 24 02:03 file.tar
-rw-r--r--   1 598M Jun 24 22:10 file.tar.gz
-rw-r--r--   1 600M Jun 24 21:02 file.tar.pigz
-rw-r--r--   1 304M Jun 25 12:44 file.tar.bz2
-rw-r--r--   1 306M Jun 25 13:42 file.tar.pbz2
Hardware: Sun SPARC Enterprise T5120, 1.2GHz 8-Core SPARC V9, 4GB RAM
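The numbers above come from one 15G tarball on that one machine; for anyone wanting to repeat this on smaller inputs, a mini-benchmark sketch (file names are stand-ins, tools that aren't installed are simply skipped, and timing is left to time(1) as above):

```shell
# Sketch: run each available compressor over a small synthetic tarball
# and compare the resulting file sizes. Prefix each tool with time(1)
# to also compare wall-clock times, as in the benchmark above.
dd if=/dev/zero of=sample.bin bs=1024 count=512 2>/dev/null
tar -cf sample.tar sample.bin
for tool in gzip pigz bzip2 pbzip2; do
    command -v "$tool" >/dev/null 2>&1 || continue   # skip missing tools
    "$tool" -c sample.tar > "sample.tar.$tool"
done
ls -l sample.tar*
```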