
Mounting VirtualBox VDI images on a MacOS X host

During all this VirtualBox hackery I came across an interesting blogpost on how to mount a VirtualBox VDI in MacOS X. That is, we don't really want to mount it, we merely want to access the VDI file via a block device. In GNU/Linux or Solaris one would use losetup or lofiadm, respectively, to attach any file to a block device.

In MacOS X there's hdid. By default, hdid not only assigns a block device to the file, it also tries to mount it. We don't want this, so we use -nomount:

$ file linux.vdi
linux.vdi: VDI Image version 1.1 (<<< Oracle VM VirtualBox Disk Image >>>), \
           2147483648 bytes

$ hdid -nomount linux.vdi 
hdid: attach failed - not recognized
Still, hdid failed. The blogpost above helped: we have to use the magic .img extension for the filename, oh well:
$ ln linux.vdi linux.img
$ hdid -nomount linux.img
However, we're still not entirely satisfied. Our linux.vdi contains a whole virtual disk (partition table + data), so let's apply the blogpost above to our disk. Read the post again to understand what we do here:
$ hdiutil detach disk5
$ hexdump -C linux.vdi | grep -m1 ^00000150
00000150  00 4e 88 00 00 10 00 00  00 50 10 00 00 00 00 00  |.N.......P......|

$ echo 'obase=16; 512; ibase=16; 00015000 / 200' | bc
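The bc line above yields the sector number for hdid's -section flag; as a sketch, the same calculation can be done with plain shell arithmetic (the 0x15000 data offset comes from the header dump above):

```shell
# byte offset of the disk data inside the VDI, divided by the
# 512-byte sector size, gives the sector for hdid -section
printf '0x%x\n' $(( 0x15000 / 512 ))   # prints 0xa8
```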
Now that we have the offset to our disk, we can instruct hdid to just attach this disk (minus the VDI header):
$ hdid -section 0xa8 -nomount linux.img 
/dev/disk5             GUID_partition_scheme
/dev/disk5s1           EFI

$ file -Ls /dev/disk5*
/dev/disk5:   x86 boot sector
/dev/disk5s1: Linux rev 1.0 ext4 filesystem data, [...]
Now we could even fsck our virtual Linux partition from MacOS, hey! :-)

Update: Two mindful readers noted that my calculation was incorrect. This should now be fixed in the article.

Virtualbox: How to resize a VDI disk

Resizing virtual disks (VDI, Virtual Disk Image) in Virtualbox is still not possible*). There are several rather long tutorials out there on how to do this; here's the short version:

  • Create a new VDI disk of desired size. We've created a 2GB deb02.vdi, as our 1GB deb01.vdi was too small.

  • Create a new VM, attach both the old (too small) and new (bigger, but still empty) disk to the VM, boot from a bootable CD, e.g. grml.

  • Once booted, we transfer the old disk (sda) to the new, bigger one (sdb):
      $ dd if=/dev/sda of=/dev/sdb bs=1M
      $ sfdisk -R /dev/sdb
    Yes, that's right. We're just copying the whole disk (with its partition table!) to the new disk. I tried to copy just the partition and enlarge it with GNU parted, but it kept barking about unsupported flags on the ext4 partition (sdb1) and whatnot, so I gave up quickly. Anyway, now we have a 2GB sdb with the partition table from sda, that is: sdb1 is still 1GB in size and 1GB is unallocated space.

  • Luckily our disk layout was easy enough (and we had a simple MS-DOS partition table). Thus, we just started cfdisk, deleted sdb1 and created a new sdb1, this time filling the whole disk (2GB).

  • $ sfdisk -R /dev/sdb again to re-read the partition-table.

  • Now that our partition is in good shape, we need to enlarge the peni^W filesystem as well:
       $ e2fsck -vf /dev/sdb1
       $ resize2fs -p /dev/sdb1
    We might have to mount /dev/sdb1 for this, I don't remember.

If all goes well, we should now have a perfectly good sdb, so we can go on and replace the small deb01.vdi VDI disk with the bigger one, deb02.vdi. I did this a few days ago and I already forgot whether I had to re-install the bootloader. But I'm sure you'll find out if you have to :-)

*) as opposed to e.g. VMware, where it should be possible to resize a virtual disk. I've even done it once :-)

Migrating from VMware Server via OVF

After manually migrating a VMware VM to Virtualbox and all the hackery involved (although it was fun to learn), we need to remember that we should be able to accomplish the same with the help of OVF, the Open Virtualization Format. With that, things are a lot easier. Let's export that WindowsXP VMware-Server VM again, so that I can deploy it to a VMware ESX Server later on:

# ls -lgho *vmx* *vmdk
-rwxr-xr-x 1 2.0K 2010-06-22 21:54 winxp.vmx
-rw-r--r-- 1  278 2010-05-15 00:32 winxp.vmxf
-rw-r--r-- 1 6.0G 2010-06-08 00:22 winxp-flat.vmdk
-rw-r--r-- 1  435 2010-06-07 23:44 winxp.vmdk

# time ovftool winxp.vmx winxp.ovf
Opening VMX source: winxp.vmx
Opening OVF target: winxp.ovf
Target: winxp.ovf
Disk Transfer Completed         
Completed successfully

real    13m25.328s
user    7m56.998s
sys     1m32.942s

# ls -lgho *vmx* *vmdk
-rw-r--r-- 1 3.1G 2010-06-22 22:07 winxp-disk1.vmdk
-rw-r--r-- 1 4.4K 2010-06-22 22:07 winxp.ovf
-rw-r--r-- 1  123 2010-06-22 22:07
Note that our 6GB winxp-flat.vmdk has been converted to a 3.1GB winxp-disk1.vmdk:
# file winxp-flat.vmdk winxp-disk1.vmdk
winxp-flat.vmdk:      x86 boot sector, Microsoft Windows XP MBR
winxp-disk1.vmdk:     VMware4 disk image
Now we can log on to our ESX Server and deploy the winxp.ovf. We should be able to import the same VM into VirtualBox (supported since v2.2.0), though I did not try it. So yeah, OVF FTW, hm? :)

Migrating from VMware Server to VirtualBox

Even though VMware Server was working fine with Ubuntu 10.04 (apart from random lockups without a backtrace in sight to debug with), I was kinda unhappy with all the hoops one has to go through just to get a virtual machine going. The kernel modules might break on the next upgrade and are tainting the kernel unnecessarily. Fortunately today we have a few virtualization options to pick from and I chose VirtualBox for this particular setup, as it seemed to be the easiest migration path. Let's begin with installing the prerequisites:

# apt-get install virtualbox-ose virtualbox-ose-dkms qemu
Then we had to convert our 2GB-split VMware VMDK files into a single VMDK file, otherwise qemu-img would produce empty raw files in the 2nd step:

# vmware-vdiskmanager -r orig/test.vmdk -t 2 test.vmdk
# qemu-img convert -O raw test-flat.vmdk test.raw

# VBoxManage convertfromraw test.raw test.vdi
Converting from raw image file="test.raw" to file="test.vdi"...
Creating dynamic image with size 2147483648 bytes (2048MB)...

# ls -lgo *vmdk *raw *vdi
-rw------- 1 2147483648 2010-06-05 18:17 test-flat.vmdk
-rw-r--r-- 1 2147483648 2010-06-05 18:28 test.raw
-rw------- 1 1676681728 2010-06-06 12:50 test.vdi
-rw------- 1        432 2010-06-05 18:17 test.vmdk
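Incidentally, the dynamic VDI only stores blocks that were actually written; from the sizes in the listing above, a quick back-of-the-envelope check of the savings:

```shell
# sizes taken from the ls output above
raw=2147483648   # test.raw, the full flat image
vdi=1676681728   # test.vdi, the dynamic VDI
echo "$(( (raw - vdi) * 100 / raw ))% saved"   # prints "21% saved"
```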
Somehow VBoxManage cannot convert VMDK images directly, hence the qemu-img step. All these conversions will take a while, depending on image size and disk speed. There's no progress bar, so just be patient. With our VDI image now in place, we can register it with VirtualBox:
# VBoxManage openmedium disk test.vdi
# VBoxManage list hdds
UUID:       ddaaf826-3d25-48d6-9b2a-1afefdd3350f
Format:     VDI
Location:   /data/vbox-vm/test/test.vdi
Accessible: yes
Type:       normal
Now for the actual virtual machine creation. It's important to create the new machine with the same or similar hardware as the initial VMware instance was configured with, so that the guest OS won't be too surprised about the "new" hardware, i.e. storage or network controllers.
# VBoxManage createvm --ostype Debian --register --name "test" \
   --basefolder `pwd`
# VBoxManage modifyvm test --memory 128 --audio none \
   --boot1 disk --clipboard disabled
# VBoxManage modifyvm test --pae off --hwvirtex off \
  --hwvirtexexcl off --nestedpaging off --vtxvpid off
# VBoxManage modifyvm test --nic1 bridged --bridgeadapter1 eth1 \
  --nictype1 Am79C970A --macaddress1 000c291ac243
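Note that VBoxManage wants the MAC address without colons; the value above is simply the VMware guest's MAC with the colons stripped:

```shell
# MAC address as listed in the VMware .vmx, colons removed for VBoxManage
echo '00:0c:29:1a:c2:43' | tr -d ':'   # prints 000c291ac243
```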
I've disabled any kind of hardware virtualization features, as the host-CPU is too old and doesn't support it anyway. Also, I used the MAC address of the VMware VM, so that the guest-OS will (hopefully) receive its known DHCP address. Now for the storage devices. Again, try to use the same controller as configured in the VMware server (see the .vmx file of the old VMware instance). Also, we're attaching the virtual harddisk from above to our virtual machine.
# VBoxManage storagectl test --name "SCSI Controller" \
   --add scsi --controller LsiLogic
# VBoxManage storageattach test --storagectl "SCSI Controller" \
   --port 0 --device 0 --type hdd --medium ddaaf826-3d25-48d6-9b2a-1afefdd3350f
Having done that, it should look like this:
# VBoxManage list -l vms | egrep 'Control|MAC'
Storage Controller Name (0):            SCSI Controller
Storage Controller Type (0):            LsiLogic
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0):  16
Storage Controller Port Count (0):      16
SCSI Controller (0, 0): /data/vbox-vm/test/test.vdi 
        (UUID: ddaaf826-3d25-48d6-9b2a-1afefdd3350f)
NIC 1:           MAC: 000C291AC243, Attachment: Bridged Interface \
                    'eth1', Cable connected: on, Trace: off (file: none), \
                    Type: Am79C970A, Reported speed: 0 Mbps
Now our virtual machine should be able to start just fine:
# VBoxHeadless -s test
You probably want to remove the VMware tools from the guest, and tweak your start scripts to start your VM during bootup. Oh, and if the machine just won't start up, we can still cheat and install the VirtualBox GUI:
# apt-get install virtualbox-ose-qt tightvncserver xfonts-base wm2
Update: Migrating a WindowsXP VM from VMware to Virtualbox was equally straightforward, but I could not get the NIC type right. Neither Am79C970A (PCnet-PCI II) nor Am79C973 (PCnet-FAST III) seemed to match the VMware Accelerated AMD PCNet Adapter. So I had to use the VirtualBox GUI again, as VirtualBox OSE does not ship with RDP support to connect to. Also, the Ubuntu/Lucid version does not ship with VNC support yet. Here are the commands for the WindowsXP VM again:
# VBoxManage createvm --ostype WindowsXP --register --name winxp --basefolder `pwd`
# qemu-img convert -O raw ../../vmware-vm/winxp/winxp-static-flat.vmdk winxp.raw
# VBoxManage convertfromraw winxp.raw winxp.vdi
# VBoxManage openmedium disk winxp.vdi
# VBoxManage modifyvm winxp --memory 256 --audio none --boot1 disk \
                     --clipboard disabled --pae off --hwvirtex off --hwvirtexexcl off \
                     --nestedpaging off --vtxvpid off --nic1 bridged \
                     --bridgeadapter1 eth1 --nictype1 Am79C970A \
                     --macaddress1 000c11b9c19c
# VBoxManage storagectl winxp --name "IDE Controller" --add ide --controller PIIX4
# VBoxManage storageattach winxp --storagectl "IDE Controller" --port 0 --device 0 \
                       --type hdd --medium a6723e4d-2caa-433d-91ec-f67238ff36a9

iStat Menus alternative?

For quite some time now I've been using iStat Menus (now by Bjango). With its latest version 3, it's a paid app and one is urged to upgrade for $16. I don't mind the price so much, but the only reason (for me!) to upgrade would be a fix for one particular bug; the rest is just bloat I won't need anyway. That being the case, I'm now looking for alternative programs covering the features I currently use:

  • smcFanControl - displays temperature and fan speed in the menubar. It even offers to tweak the fan speed (why would I want to do this??) but it doesn't display all the other sensors available. However, it's open source, so a big plus here!

  • MenuMeters - displays CPU and network load (also disk and memory, but I don't need that). Seems clean and simple enough. And it's free (not only as in "beer") too!

The only feature left is the clock from iStat Menus where you can have different timezones displayed and a calendar on top. But maybe I finally have to make friends with the dashboard now. Oh well...

Exim4 with clamd

Either my Xen DomU is getting slower or my MTA keeps getting busier. But looking at the stats I could see that a lot of clamscan processes were being spawned on every fetchmail run. Nothing unusual, this is how it always worked. But to be honest, the setup was rather inefficient, to say the least: for every incoming mail, maildrop spawns a clamscan process, sometimes more than one in parallel. ps(1) shows, for just one process:

 8749 12.8 164500 49081 196324 clamscan
So, one process needs 12.8% of the system's memory; with just 5 processes we're at 64% - and the box was indeed swapping heavily. So I finally got around *) to moving the virus scanning to Exim and letting it talk to clamd instead:
  • /etc/exim4/conf.d/main/02_exim4-config_options
     +av_scanner = clamd:/var/run/clamav/clamd.ctl
  • /etc/exim4/conf.d/acl/40_exim4-config_check_data
         +  warn
         +    message = X-Virus-Status: Infected
         +    demime  = *
         +    malware = *
    Note: I chose warn over deny here - I still want to receive those viruses, I just want them annotated :-)

  • /etc/clamav/clamd.conf
         User clamav
         AllowSupplementaryGroups true
         LocalSocketGroup Debian-exim
         LocalSocketMode 0660
    For Debian/5.0, I also had to:
    # usermod -G Debian-exim clamav
    # mkdir -m0770 /var/spool/exim4/scan
    # chown Debian-exim:Debian-exim /var/spool/exim4/scan
With all this in place (plus disabling the clamscan directives in .mailfilter), the box is far less loaded now. According to ps(1), our single clamd now sometimes goes up to 16%, but that's still just one process and much better than the >60% before.

Btw, if you want to test your email AV setup and your mail provider doesn't even allow sending the EICAR test file, try this instead.
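For a purely local test, the standard 68-byte EICAR string is public and can be generated on the spot (single quotes keep the $ and \ literal):

```shell
# emit the standard EICAR anti-virus test string; it should count 68 bytes
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' | wc -c
```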

Update: And it helped indeed, see the loadavg going down after changing the configuration to use clamd. Phew, now I wonder why I hadn't done this earlier...

*) I hate MTA configurations, I really do :-\

svn: Repository moved permanently; please relocate

Apparently, ispCP has changed its repository URL (why :800? Think of the children^Wfirewalls!), leading to:

$ svn update
svn: Repository moved permanently to '' ; \
please relocate
Luckily, svn switch is here to help; the magic command to resolve this one was:
$ svn switch --relocate \ .
$ svn info | grep -A1 ^URL
Repository Root:

That's When I Reach For My Resolver

So, the primary nameserver is down but luckily /etc/resolv.conf has been equipped with a secondary nameserver entry - great! And nslookup works like a charm too, heh! But all the other useful tools wait for ages until they get a response from the backup server - why is that?

$ time ping eve
eve is alive
real    0m30.045s
user    0m0.007s
sys     0m0.018s
Other than e.g. nslookup, normal applications have to use the resolver(4) to get their name requests answered. Now, we could cheat and put our backup server before the faulty one, but let's see if we can tackle this from a different angle. resolv.conf(4) was most helpful, of course:
options
   Allows certain internal resolver variables to be modified.
timeout:n / retrans:n
   Sets the amount of time the resolver will wait for a response from a remote
   name server before retrying the query by means of a different name server.
   Measured in seconds, the default is RES_TIMEOUT.
attempts:n / retry:n
   Sets the number of times the resolver will send a query to its name
   servers before giving up and returning an error to the calling application.
   The default is RES_DFLRETRY.
In our resolv.h (Solaris 10) we have:
$ egrep 'RES_TIMEOUT|RES_MAXRETRANS|RES_DFLRETRY' /usr/include/resolv.h
#define RES_TIMEOUT         5      /* min. seconds between retries */
#define RES_MAXRETRANS     30      /* only for resolv.conf/RES_OPTIONS */
#define RES_DFLRETRY        2      /* Default #/tries. */
So, let's tweak those options:
$ grep options /etc/resolv.conf
options timeout:1 retry:1
$ time ping eve
eve is alive
real    0m7.794s
user    0m0.007s
sys     0m0.018s
Whooha, not bad.
Note: in Linux the retry: parameter is called attempts:
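So on a Linux box the equivalent /etc/resolv.conf line would presumably read:

```
options timeout:1 attempts:1
```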

Let's tweak the retry: parameter a bit more:
$ grep options /etc/resolv.conf
options timeout:1 retry:0
$ time ping eve
eve is alive
real    0m2.100s
user    0m0.007s
sys     0m0.018s
Even better. Of course, one has to realize that with zero retries the resolver will jump to the next nameserver on the first failure - so, if our backup server is a bit sleepy, we won't get a reply at all. If you enable nscd, subsequent requests to the same name will be answered instantly:
$ sudo svcadm enable svc:/system/name-service-cache
$ time ping eve
eve is alive
real    0m3.218s
user    0m0.007s
sys     0m0.018s
$ time ping eve
eve is alive
real    0m0.198s
user    0m0.007s
sys     0m0.017s