
Ignoring unknown extended header keyword `'

While extracting a tarball, GNU/tar told me:

Ignoring unknown extended header keyword `'
Ignoring unknown extended header keyword `SCHILY.ino'
Ignoring unknown extended header keyword `SCHILY.nlink'
Hm, what does that remind me of? Ah, star! Digging in the bits still left on BerliOS, there seems to be something that looks like a POSIX proposal:
$ cat star/README.posix-2001
        Star supports the following fields in the extended header:
        Vendor unique:
        "SCHILY.devmajor" "SCHILY.devminor"     (create/extract)

        In -dump mode (a preparation for incremental dumps) star archives:

        ""            The field stat.st_dev   - the filesys indicator
        "SCHILY.ino"            The field stat.st_ino   - the file ID #
        "SCHILY.nlink"          The field stat.st_nlink - the hard link count
        "SCHILY.filetype"       The real file type      - this allows e.g.
Wow. Alas, these headers remain unrecognized in GNU/tar, hence the (benign) warnings.

test -w

Learn something new every day: I'm trying to test if a directory on a different filesystem is writable. Instead of really writing to it (e.g. by using touch(1)), I wanted to test with -w:

$ ls -ld /mnt/usb/foo
drwx------ 13 root root 4096 Jan  1 15:38 /mnt/usb/foo

$ [ -w /mnt/usb/foo ]; echo $?
1
OK, /mnt/usb/foo is not writable, because /mnt/usb was mounted read-only at this point. But look what dash(1) thinks of this:
$ [ -w /mnt/usb/foo ]; echo $?
0
Huh? But the manpage explains:
-w file  True if file exists and is writable. True indicates only that the write flag is on.
         The file is not writable on a read-only file system even if this test indicates true.
...whereas bash(1) only states:
-w file  True if file exists and is writable.
Zsh and ksh93 behave just like bash - that is, they return 1 when the file is not writable, even though its permission bits would allow writes. Note that /usr/bin/test is shell-specific as well! -- Not anymore? Let's try again:
$ mount | grep /mnt
/dev/loop0 on /mnt type ext2 (rw,relatime,block_validity,barrier,user_xattr,acl)

$ bash -c "[ -w /mnt ]"; echo $?

$ dash -c "[ -w /mnt ]"; echo $?

$ bash -c "/usr/bin/test -w /mnt"; echo $?

$ dash -c "/usr/bin/test -w /mnt"; echo $?
And now for the read-only mount:
$ mount | grep /mnt
/dev/loop0 on /mnt type ext2 (ro,relatime,block_validity,barrier,user_xattr,acl)

$ bash -c "[ -w /mnt ]"; echo $?

$ dash -c "[ -w /mnt ]"; echo $?

$ bash -c "/usr/bin/test -w /mnt"; echo $?

$ dash -c "/usr/bin/test -w /mnt"; echo $?
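Given how the answers of the builtin and /usr/bin/test vary across shells and mounts, the only check that can't lie is actually attempting a write. A minimal sketch (is_writable and the probe file name are made up here):

```shell
# Sketch: test writability by actually creating (and removing) a probe
# file, instead of relying on test -w's permission-bit semantics.
is_writable() {
    dir=$1
    probe="$dir/.write_test.$$"       # made-up probe file name
    if : 2>/dev/null > "$probe"; then # try to create the file, silently
        rm -f "$probe"
        return 0
    fi
    return 1                          # open failed: not writable
}
```

is_writable /mnt/usb/foo should then return 1 on the read-only mount above, regardless of what the permission bits say.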

patchadd: Not enough space in /var/run to copy overlay objects.

When pca wanted to install 144500-19, patchadd aborted with:

Running patchadd
Validating patches...
Loading patches installed on the system...
Loading patches requested to install.
Checking patches that you specified for installation.

Unable to install patch. Not enough space in /var/run to copy overlay objects.
 401MB needed, 220MB available.

Failed (exit code 1)
Well, this Sun Enterprise 250 only has 768 MB of memory - not too much these days. Let's add some virtual memory then:
# mkfile 1g /var/tmp/swap.tmp
# swap -a /var/tmp/swap.tmp
/var/tmp/swap.tmp: Invalid operation for this filesystem type
Oh, right - we're on ZFS already. Let's try again:
# rm /var/tmp/swap.tmp
# zfs create -V 1gb rpool/tmpswap
# swap -a /dev/zvol/dsk/rpool/tmpswap
# df -h /var/run
Filesystem             size   used  avail capacity  Mounted on
swap                   1.4G   107M   1.3G     8%    /var/run
Now we should be good to go :-)

Oh, and regarding those "overlay objects in /var/run" mentioned above: once patchadd(1M) is running, take a look:
# df -h | grep -c /var/run

VirtualBox & SysRq

Sometimes I need to send sysrq keys to a Linux virtual machine and I always forget how to do this, so here it is:

VBoxManage controlvm <VM> keyboardputscancode 1d 38 54 <PRESS> <RELEASE> d4 b8 9d
The PRESS and RELEASE values are derived from the scancodes: The PRESS value is the bare scancode, the RELEASE value is the PRESS value plus 0x80.

So, to send a "s" (to Sync the filesystems), the scancode would be 0x1F. And 0x1F + 0x80 equals 9F, this would be the scancode for releasing the key. Putting this all together, sending sysrq-s to the virtual machine goes like this:
VBoxManage controlvm <VM> keyboardputscancode 1D 38 54 1F 9F D4 B8 9D
Note: Be sure to set kernel.sysrq = 1 in your Linux guest machine, so that sysrq-keycodes are actually honored by the guest.

This can also be used to switch to a different terminal (if you have a VirtualBox console window open):
VBoxManage controlvm <VM> keyboardputscancode 1d 38 3b
This is the equivalent of Ctrl-Alt-F1 and would switch to the first terminal. Iterate 0x3b up to 0x42 to get Ctrl-Alt-F1 through Ctrl-Alt-F8.
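The press/release arithmetic is easy to wrap in a small shell function. A sketch only - vm_sysrq is a made-up name, not the released wrapper script:

```shell
# Sketch: send a single sysrq key to a VirtualBox VM. "1d 38 54" presses
# Ctrl-Alt-SysRq, "d4 b8 9d" releases those keys again; the release
# scancode of the sysrq key itself is its press scancode plus 0x80.
vm_sysrq() {
    vm=$1
    press=$2                                      # e.g. 1f for sysrq-s
    release=$(printf '%02x' $((0x$press + 0x80)))
    VBoxManage controlvm "$vm" keyboardputscancode 1d 38 54 "$press" "$release" d4 b8 9d
}
```

With that, vm_sysrq myvm 1f is equivalent to the sysrq-s invocation shown above.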

Update: I finally released the wrapper script to send sysrq keycodes to a VirtualBox VM.

Error: Protected multilib versions

Did I say that I don't like yum? I think I did and others did, too.

So this yum upgrade failed due to insufficient diskspace and yum exited with:

/usr/sbin/build-locale-archive: cannot add to locale archive: No such file or directory
could not write to ts_done file: [Errno 28] No space left on device
Error unpacking rpm package imsettings-libs-1.2.6-1.fc16.x86_64
error: gtk2-2.24.8-2.fc16.x86_64: install failed
error: unpacking of archive failed on file \
  /usr/lib64/;4ed44d97: cpio: write
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yum/", line 444, in callback
    self._instCloseFile(  bytes, total, h )
  File "/usr/lib/python2.7/site-packages/yum/", line 507, in _instCloseFile
  File "/usr/lib/python2.7/site-packages/yum/", line 246, in _scriptout
    self.base.history.log_scriptlet_output(data, msgs)
  File "/usr/lib/python2.7/site-packages/yum/", line 871, in log_scriptlet_output
  File "/usr/lib/python2.7/site-packages/yum/", line 640, in _commit
    return self._conn.commit()
sqlite3.OperationalError: database or disk is full
error: python callback > failed, aborting!
OK, no big deal. I just resized the root partition so that enough space is available now and tried again:

$ yum upgrade
There are unfinished transactions remaining. You might consider running \
yum-complete-transaction first to finish them.
Error: Protected multilib versions: glibc-2.14.90-19.x86_64 != glibc-2.14.90-14.i686
This didn't go so well. Let's try yum-complete-transaction then, as suggested:
$ yum-complete-transaction
Loaded plugins: langpacks, presto, refresh-packagekit
There are 1 outstanding transactions to complete. Finishing the most recent one
The remaining transaction had 87 elements left to run
Package glibc-common-2.14.90-19.x86_64 already installed and latest version
--> Processing Dependency: for package: elfutils-0.152-1.fc16.x86_64
--> Processing Dependency: /bin/sh for package: kernel-3.1.2-1.fc16.x86_64
This goes on for quite a while, hours even. I went to bed at this time, only to see the next morning that yum got killed by the out-of-memory killer:
Out of memory: Kill process 1457 (yum-complete-tr) score 782 or sacrifice child
Killed process 1457 (yum-complete-tr) total-vm:1643616kB, anon-rss:830884kB, file-rss:0kB
The VM has 1 GB RAM and 512 MB swap - not too much, but certainly enough for doing an upgrade, I assumed. OK, so how to go on from here? With yum-complete-transaction failing, I decided to cleanup any old transactions and start from scratch:
$ yum-complete-transaction --cleanup-only
Cleaning up unfinished transaction journals
$ yum-complete-transaction
No unfinished transactions left.
But now the upgrade would stop with:
$ yum upgrade
Error: Protected multilib versions: glibc-2.14.90-19.x86_64 != glibc-2.14.90-14.i686
Using --setopt=protected_multilib=false (and --skip-broken) brought us only a little further:
$ yum upgrade --setopt=protected_multilib=false --skip-broken
Transaction Check Error:
  file /usr/share/doc/glibc-2.14.90/NEWS conflicts between attempted
  installs of glibc-2.14.90-14.i686 and glibc-2.14.90-19.x86_64
Moving /usr/share/doc/glibc-2.14.90/NEWS out of the way did not help in this case. What did help was to "remove" the conflicting package from the package database. Of course, we could not really delete glibc, since it's needed for pretty much everything:
$ rpm --erase --nodeps --noscripts --justdb glibc-2.14.90-14.x86_64
$ yum upgrade --setopt=protected_multilib=false --skip-broken
This went through successfully and the system is now properly updated - it even survived a reboot. Just in case it's still not clear from these notes: I find it unacceptable that yum has such a hard time figuring out how to do the Right Thing™ after a failed transaction. And yes, I've been using apt-get for years now - I never had anything remotely similar to this mess. Incredible; I cannot understand how people can work with that. I mean, really work. I'm using Fedora only for playing around, and while I really like some of the approaches Fedora is going for, this yum crap is a major show stopper for me ever adopting any rpm-based distribution. I'd rather do ports :-\

Oh, apparently there's still one thing left to clean up:
$ yum check
Loaded plugins: changelog, langpacks, presto, refresh-packagekit
glibc-common-2.14.90-14.x86_64 has missing requires of glibc = ('0', '2.14.90', '14')
glibc-common-2.14.90-19.x86_64 is a duplicate with glibc-common-2.14.90-14.x86_64
In my case, "package-cleanup --cleandupes" solved this one. Sigh...

Mediawiki & MySQL & SQLite

I got a MediaWiki instance up & running with MySQL as its database backend. Now, I wanted to play around with this wiki in a VM running Fedora 16. Let's prepare the VM for running MediaWiki:

  $ yum install httpd php links mediawiki mediawiki-Cite
  $ grep ^Alias /etc/httpd/conf.d/mediawiki.conf 
  Alias /wiki/skins /usr/share/mediawiki/skins
  $ cp -a /var/www/wiki/ /var/www/html/
  $ systemctl enable httpd.service
  $ systemctl start httpd.service
Note that we did not install a MySQL server here: I did not want to run yet another service in this small virtual machine.

After that, the MediaWiki instance can be accessed & setup via http://fedora.local/wiki/ - be sure to choose SQLite for the database backend.

Now MediaWiki is up & running with an empty SQLite database. But how can I convert my original MySQL database into SQLite?

There's mysql2sqlite, a shell script using mysqldump and awk to do the job. And it did the job pretty well so far:
  $ mysql2sqlite -u admin -p mw_wiki > wiki.sqlite.raw
I could have piped the whole thing through sqlite already, but I had to alter the output a bit: my original database had $wgDBprefix set to "mw_", but somehow this parameter seems to be ignored when an SQLite database is used. So let's cut the prefix out of our preliminary dump whenever a table is created, indexed or inserted 1) into:
  sed -e '/[INDEX|INTO|TABLE] \"mw_/s/mw_//g' -i wiki.sqlite.raw
Now we can generate our SQLite database as simply as:
  sqlite wiki.sqlite < wiki.sqlite.raw
Point $wgDBname to this filename and off we go: the wiki should now be up & running with the original data visible. Yay ;-)

Great - but we could not update pages or create new articles 2):
  INSERT INTO text (old_id,old_text,old_flags) VALUES (NULL,...
  Database returned error "19: text.old_id may not be NULL". 
Hm, shouldn't old_id be set to AUTOINCREMENT? Let's look at our wiki.sqlite.raw again:
  "old_id" int(10)  NOT NULL ,
  "old_text" mediumblob NOT NULL,
  "old_flags" tinyblob NOT NULL,
  PRIMARY KEY ("old_id")
Again: this is the output of mysql2sqlite and it looks pretty sane to me. But somehow old_id wasn't treated as a PRIMARY KEY (aka "AUTOINCREMENT") when parsed by sqlite (v3.7.7.1). Moving "PRIMARY KEY" right before the "NOT NULL" statement for old_id alone didn't help either. We also had to replace the "int(10)" with INTEGER and remove the superfluous space on the same line. Now it reads:
  "old_id" INTEGER PRIMARY KEY NOT NULL,
  "old_text" mediumblob NOT NULL,
  "old_flags" tinyblob NOT NULL
We have to modify all (19) occurrences of PRIMARY KEY in our wiki.sqlite.raw. Be sure to omit the INTEGER keyword when the field is declared varbinary. With all that in order, wiki.sqlite.raw can be fed into sqlite again. FWIW, this is what sqlite makes of the statement above:
  $ sqlite3 data/wikidb.sqlite 
  sqlite> .schema text
    old_id INTEGER PRIMARY KEY NOT NULL,
    old_text BLOB NOT NULL,
    old_flags BLOB NOT NULL
Now we should be able to update pages and create articles. Phew, what a ride :-)
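The underlying SQLite rule can be checked in isolation: a column auto-assigns ids on NULL inserts only when it is declared exactly INTEGER PRIMARY KEY, which makes it an alias for the rowid - int(10) has INTEGER affinity but does not qualify. A sketch (database file and table names are made up):

```shell
# A column declared exactly INTEGER PRIMARY KEY aliases the rowid, so a
# NULL insert auto-assigns the next id:
sqlite3 /tmp/rowid-demo.db <<'EOF'
CREATE TABLE demo ("old_id" INTEGER PRIMARY KEY, "old_text" BLOB NOT NULL);
INSERT INTO demo (old_id, old_text) VALUES (NULL, 'hello');
SELECT old_id FROM demo;
EOF
# prints: 1

# The failing case, for comparison: int(10) is not a rowid alias, so the
# same NULL insert hits the NOT NULL constraint on demo2.old_id.
sqlite3 /tmp/rowid-demo.db <<'EOF'
CREATE TABLE demo2 ("old_id" int(10) NOT NULL PRIMARY KEY, "old_text" BLOB NOT NULL);
INSERT INTO demo2 (old_id, old_text) VALUES (NULL, 'hello');
EOF
```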

Update: Right after finishing this article, I came across this knowledge base article on how to do this with scripts provided by the MediaWiki installation. In short:
  $ php maintenance/dumpBackup.php --full --uploads --conf `pwd`/LocalSettings.php > wiki.xml
Then, in the VM again:
  $ cd /var/www/html/wiki
  $ php /usr/share/mediawiki/maintenance/importDump.php wiki.xml
  $ php /usr/share/mediawiki/maintenance/rebuildrecentchanges.php 
This takes quite a lot of time - around 7 minutes for a 13 MB .xml dump. The resulting database was written into the configured $wgDBname, or data/wiki.sqlite in our case. All pages were in place; only the MainPage was overwritten with its initial version. Going back one version in the page history revealed the most current version of the (imported) article.

1) Is this really the correct statement? It works, but I thought it should use () instead of [] for the OR statement.
2) We had to set $wgShowExceptionDetails, $wgShowSQLErrors, $wgDebugDumpSql to true for this to be shown.

Growl 1.3.1

Update Available - A newer version of Growl is available online. Click here to download it right now. What's up with the latest Growl update? Apparently it costs money now? Not that $1.99 for this useful piece of software would be too much - but was there any announcement about this change? Why not? Users are already upset about this change and I really fail to see why they did this w/o any prior notification.

For the brave and able, Growl 1.3.1 can still be built from source - good luck fighting all the build errors then :-\

Schneierfacts fortune cookies

A (long) while ago I stumbled upon the incredible Schneierfacts and thought "I must have these snippets of wisdom as a fortune(6) file!" Here's how I did that:

  mkdir schneierfacts && cd schneierfacts
  f=1
  while [ $f -lt 1610 ]; do
      echo "fact: $f"
      wget -q "$baseurl/$f"   # $baseurl: the fact URL prefix on schneierfacts.com
      sleep 1
      f=$((f + 1))
  done

  grep 'p class="fact"' * | \
          sed 's/.*fact\">//;s///;s/\&quot\;/\"/g' | \
          sort -u > ../schneierfacts.txt
  sed G ../schneierfacts.txt | sed 's/^$/%/' > ../schneierfacts

  strfile -r ../schneierfacts
  "../schneierfacts.dat" created
  There were 1600 strings
  Longest string: 24713 bytes
  Shortest string:  142 bytes
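The sed pipeline above produces the format strfile(8) expects - facts separated by lines containing only a "%". A quick demonstration on two sample facts (facts.txt is a made-up sample file):

```shell
# "sed G" appends a blank line after every input line, and "s/^$/%/"
# turns each blank line into the % separator strfile wants.
printf 'fact one\nfact two\n' > facts.txt
sed G facts.txt | sed 's/^$/%/'
# fact one
# %
# fact two
# %
```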
With that in place, we can install our new fortune file:
  sudo mkdir -p /usr/local/share/games/fortunes
  sudo cp ../schneierfacts{,.dat} /usr/local/share/games/fortunes/

  $ fortune schneierfacts
  When Bruce Schneier does modulo arithmetic, there are no remainders. Ever.
Note: There are currently over 1600 facts - be kind when downloading them! (i.e. use sleep(1) after every wget(1) call.)

TCP: Peer unexpectedly shrunk window 4197231805:4197240525 (repaired)

From time to time our Linux 2.6 kernel generates the following message:

  TCP: Peer unexpectedly shrunk window \
          4197231805:4197240525 (repaired)
It was a bit scarier back then when it read something like this:
  TCP: Treason uncloaked! Peer shrinks window \
          3166327388:3166327393. Repaired.
The message was changed in December 2008. While there are quite a lot of posts and articles on this topic, a pretty good explanation can be found in commit 2ad4106: Clear stale pred_flags when snd_wnd changes.