whois: Invalid charset for response

As if MacOS didn't have enough charset problems, here's another one:
$ /usr/bin/whois denic.de
% Error: 55000000013 Invalid charset for response
Although the problem was reported to DENIC years ago, they still send out UTF-8 data if a handle contains e.g. umlauts.

But why can't the MacOS version of whois(1) handle UTF-8 data? A quick look at the binary reveals:
$ strings /usr/bin/whois
[...]
de.whois-servers.net
-T dn,ace -C US-ASCII %s
So, the -T dn,ace -C US-ASCII seems to be hardcoded, as we can see in the source:
#define GERMNICHOST	"de.whois-servers.net"
[...]
if (strcmp(hostname, GERMNICHOST) == 0) {
	fprintf(sfo, "-T dn,ace -C US-ASCII %s\r\n", query);
There's no -C switch to pass to whois(1) to change this behaviour. Experimenting with the LC_ALL environment variable did not help either.

What did help was passing options directly to their whois server:
$ /usr/bin/whois -h whois.denic.de -- "-T dn,ace denic.de"
This way, -C US-ASCII is skipped and the (UTF-8) output can be displayed just fine.
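To save some typing, this can be wrapped in a small shell function - the name whois_de is made up, and ~/.profile is just one place to put it:
# Query whois.denic.de directly, bypassing the hardcoded "-C US-ASCII"
whois_de() {
    /usr/bin/whois -h whois.denic.de -- "-T dn,ace $1"
}

$ whois_de denic.de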

Of course, we could also install whois from MacPorts; it seems to handle UTF-8 data just fine (although it had a similar problem years ago):
$ sudo port install whois

$ /opt/local/bin/whois denic.de | file -
/dev/stdin: UTF-8 Unicode text

$ /opt/local/bin/whois denic.de
[...]
[Tech-C]
Type: PERSON
Name: Business Services
Organisation: DENIC eG
Address: Kaiserstraße 75-77

Mozilla defaults

Every now and then I come across a new machine I've never logged in to before and start Firefox for the first time. And then I always have to make my way through oh so many preference knobs and about:config entries just to get it into a usable state.

So, while I knew the configuration could be tweaked via user.js, I never got around to actually creating this file and adding some sensible defaults to it. Well, that's been done now. And with site-wide defaults, it's even more fun!

In short:
  • Create local-settings.js in defaults/pref/ underneath the Firefox installation directory.
  • Create firefox.cfg in the Firefox installation directory.
  • Create user.js inside your profile directory and fill it with some sensible defaults.
Of course, we can skip the first two steps and just fill user.js with the contents of firefox.cfg, but then we have to replace defaultPref and lockPref entries with user_pref.
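For the record, here's a minimal sketch of all three files, created from the shell. The installation directory, the profile path and the actual preferences are only examples and will differ from system to system:
# Adjust these to the actual Firefox installation and profile directories
FFDIR="/path/to/firefox"
PROFILE="$HOME/.mozilla/firefox/xxxxxxxx.default"

# 1) local-settings.js tells Firefox to load firefox.cfg (unobscured)
cat > "$FFDIR/defaults/pref/local-settings.js" <<'EOF'
pref("general.config.obscure_value", 0);
pref("general.config.filename", "firefox.cfg");
EOF

# 2) firefox.cfg holds the site-wide defaults; its first line must be a comment
cat > "$FFDIR/firefox.cfg" <<'EOF'
// site-wide defaults
defaultPref("browser.startup.homepage", "about:blank");
lockPref("app.update.auto", false);
EOF

# 3) user.js in the profile uses user_pref() instead of defaultPref/lockPref
cat > "$PROFILE/user.js" <<'EOF'
user_pref("browser.startup.homepage", "about:blank");
EOF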

You don't exist, go away!

After opening my laptop today, the first thing was of course to log in to various systems, as I usually do. But this time I couldn't, and instead was greeted with:
  $ ssh foobar
  You don't exist, go away!
At first I thought the remote system was at fault, but ssh would print the same message for every other system I was trying to log in to. This had been reported by others already, and after doing nothing more than clicking those links I tried again - this time ssh was able to log in without a problem. So, while this was only a temporary issue, let's recap and dig into it once again.

Apparently, the error message is generated by the client:
$ strings `which ssh` | grep away
You don't exist, go away!
It's right there in ssh.c:
pw = getpwuid(original_real_uid);
if (!pw) {
	logit("You don't exist, go away!");
	exit(255);
}
So, the call to getpwuid() failed. Now, why would it do that? In the manpage it says:
   These functions obtain information from DirectoryService(8),
   including records in /etc/passwd
And /etc/passwd was there all the time (hah!), so maybe DirectoryService(8) screwed up? Let's see if we find something in /var/log/system.log:
14:59:57 coreservicesd[54]: _scserver_ServerCheckin: client uid validation failure; getpwuid(502) == NULL
14:59:58 loginwindow[376]: resume called when there was already a timer
14:59:58 coreservicesd[54]: _scserver_ServerCheckin: client uid validation failure; getpwuid(502) == NULL
There it is. Now, restarting coreservicesd (or securityd) would have helped, but by now the system had fully woken up from sleep and getpwuid() was able to do what it does - and ssh was working again, too. If it happens again and won't recover by itself - we know what to do :-)
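For next time, here's a quick way to check whether the uid still resolves (502 is just the uid from the log above) and, assuming launchd respawns it, one blunt way to restart coreservicesd:
$ dscacheutil -q user -a uid 502      # should print name, dir, shell, ...
$ sudo killall coreservicesd          # launchd should bring it right back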

Zero padding shell snippets

I was looking for a way to zero-pad a number sequence in bash. While the internet was helpful as usual, one particular post had lots of examples in its comments - very neat stuff.

Of course, with so many different approaches, this called for a benchmark! :-)
$ time bash padding_examples.sh bash41 1000000 > /dev/null
real    7m38.238s
user    3m7.056s
sys     0m7.884s

$ time   sh padding_examples.sh printf 1000000 > /dev/null
real    1m39.314s
user    0m41.244s
sys     0m2.064s

$ time   sh padding_examples.sh    seq 1000000 > /dev/null
real    0m10.883s
user    0m5.016s
sys     0m0.040s
So, seq(1) is of course the fastest - if it's not installed, use printf.
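The script itself is not reproduced here, but the two fastest variants probably looked something like this - the field width of 7 is arbitrary:
# seq(1) with a printf-style format, or -w to equalize widths automatically
seq -f '%07g' 1 1000000
seq -w 1 1000000

# portable printf(1) loop for systems without seq(1)
i=1
while [ "$i" -le 1000000 ]; do
    printf '%07d\n' "$i"
    i=$((i + 1))
done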

Update: with bash-4.0, the following is also possible:
$ time echo {01..1000000} > /dev/null
real    0m38.852s
user    0m14.948s
sys     0m0.260s
However, this will consume a lot of memory:
  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
23468 dummy     25   5  189m 186m 1380 R  43.1 14.9  0:28.48  bash

rTorrent: Hash check on download completion found bad chunks, consider using "safe_sync"

rTorrent would not complete a download, printing the following:
* file.foo
* [OPEN]  506.3 /  785.3 MB Rate: 0.0 / 0.0 KB Uploaded: 248.8 MB  [T R: 0.49]
* Inactive: Hash check on download completion found bad chunks, consider using "safe_sync".
Initiating a check of the torrent's hash (^R) succeeded and then rTorrent tried to download the remaining part of the file - only to fail again, printing the same message :-\

Setting safe_sync (which got renamed to pieces.sync.always_safe) did not help. There's an old, longish ticket that got closed as "invalid". While this might have been the Right Thing™ to do (see the LKML discussion related to that issue), there was another hint: decreasing max_open_files (which got renamed to network.max_open_files) to a lower value, say 64. Needless to say, this didn't help either, so maybe there's something else going on here.
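For reference, both knobs go into .rtorrent.rc, roughly like this with the renamed options (older rTorrent versions use the old names and a slightly different syntax):
# ~/.rtorrent.rc - neither of these actually helped in this case
pieces.sync.always_safe.set = yes
network.max_open_files.set = 64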

strace might be able to shed some light on this, so let's give it a try. After several hours (and a good night's sleep) a 2 GB strace(1) logfile was waiting to be analyzed. I only needed the part of the logfile up to where the error message first occurred - and from there I'd search upwards for negative return values, as they denote some kind of error. And lo and behold, there it was:
    mmap2(NULL, 292864, PROT_READ, MAP_SHARED, 13, 0x31100) = -1 ENOMEM (Cannot allocate memory)
Before we try to find out why it failed, let's see how much memory rTorrent tried to allocate here. mmap2() is supposed to "map files or devices into memory":
    void *mmap2(void *addr, size_t length, int prot, int flags, int fd, off_t pgoffset);
In our case, length is 292864 bytes and pgoffset is 0x31100. However, this offset is given in units of the page size. So, what is our page size?
$ uname -rm && getconf PAGE_SIZE
3.9.0-rc4 ppc
4096
Let's calculate the size rTorrent was trying to mmap2() here:
$ bc
obase=16
4096
1000

ibase=16
obase=A
1000 * 31100                <== PAGE_SIZE * 0x31100
823132160

ibase=A
823132160 + 292864          <== add size_t
823425024
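The same calculation as a one-liner, since bash's arithmetic expansion understands the 0x prefix:
$ echo $(( 0x31100 * 4096 + 292864 ))
823425024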
So, 823425024 bytes are about 785 MB - we have 1.2 GB RAM on this machine and some swap space too. Not too much, but this box has mmap()'ed larger files than that before - why would mmap2() fail with ENOMEM here?

Maybe the "reduce max_open_files" hint tipped me off, but now I remembered playing around with ulimit(3) a while ago. Maybe those limits were too tight?

And they were! Setting ulimit -v ("per process address space") to a larger value made the ENOMEM go away and rTorrent was able to complete the download:
$ ls -lgo file.foo
-rw-r----- 1 823425024 Apr  1 11:38 file.foo
...with the exact same size mmap2() was trying to allocate. Btw, we could have checked the file size even before rTorrent completed the download, because it's a sparse file anyway.
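For completeness, the limit can be checked and raised in the shell that starts rTorrent; pick whatever value is comfortably above the expected file sizes:
$ ulimit -v              # current per-process address space limit, in kilobytes
$ ulimit -v unlimited    # or a suitably large value
$ rtorrent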

Update: while raising the ulimit(3) certainly resolved the ENOMEM issue, the torrent would still not complete successfully. Turns out it was a kernel bug after all, but it was resolved rather quickly.