
Error while creating the raw disk VMDK

I had attached raw disks to VirtualBox before, but I missed one particular detail: apparently one cannot attach disks read-only. That's exactly what I wanted to do though, as I did not want the VM to be able to alter the disk in any way:

$ ls -l ../disk3.img
-r--------  1 christian  staff  3965190144 Nov 26 12:18 ../disk3.img

$ hdid -nomount ../disk3.img
/dev/disk3              FDisk_partition_scheme          
/dev/disk3s1            DOS_FAT_32 

$ diskutil list disk3
/dev/disk3
   #:                       TYPE NAME              SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                   *4.0 GB     disk3
   1:                 DOS_FAT_32 CANON_DC           4.0 GB     disk3s1

$ ls -l /dev/disk3*
br--r-----  1 christian  staff   14,   9 Nov 28 18:20 /dev/disk3
br--r-----  1 christian  staff   14,  10 Nov 28 18:20 /dev/disk3s1

$ VBoxManage internalcommands createrawvmdk -filename ../disk3.vmdk \
             -rawdisk /dev/disk3 -partitions 1 -register
ERROR: VMDK: could not open raw partition file '/dev/disk3s1'
Error code VERR_ACCESS_DENIED at \
/Users/vbox/tinderbox/3.2-mac-rel/src/VBox/Devices/Storage/VmdkHDDCore.cpp(3661) \
in function int vmdkCreateRawImage(VMDKIMAGE*, VBOXHDDRAW*, uint64_t)
Error while creating the raw disk VMDK: VERR_ACCESS_DENIED
The raw disk vmdk file was not created
Even making the block device and/or its backing store writable (with the disk still attached to VirtualBox) did not help. The trick is to unregister the disk, eject it from the OS, make it writable and then reattach it to VirtualBox again:
$ diskutil eject disk3
Disk disk3 ejected

$ chmod u+w ../disk3.img
$ hdid -nomount ../disk3.img

$ ls -l /dev/disk3*
brw-r-----  1 christian  staff   14,   9 Nov 28 18:42 /dev/disk3
brw-r-----  1 christian  staff   14,  10 Nov 28 18:42 /dev/disk3s1

$ VBoxManage internalcommands createrawvmdk -filename ../disk3.vmdk \
             -rawdisk /dev/disk3 -partitions 1 -register
RAW host disk access VMDK file ../disk3.vmdk created successfully.
To make the disk read-only after all, we can mark it immutable:
$ VBoxManage modifyhd ../disk3.vmdk --type immutable
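Strictly speaking, an immutable disk isn't attached read-only: writes go to an automatically created differencing image that is discarded when the VM powers off, so the underlying raw disk stays untouched, which is good enough here. To double-check, showhdinfo should now report the disk's type as immutable:

$ VBoxManage showhdinfo ../disk3.vmdk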

Cannot unregister the machine 'foo' because it has 1 snapshots

So, I dabbled with this VirtualBox VM a bit, but the guest OS was broken anyway, so I decided to get rid of the VM:

$ VBoxManage unregistervm foo --delete
ERROR: Cannot unregister the machine 'foo' because it has 1 snapshots
Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component 
            Machine, interface IMachine, callee nsISupports
Context: "UnregisterMachine(uuid, machine.asOutParam())" at line 164 
              of file VBoxManageMisc.cpp
Oh, right. The VM in question had a snapshot attached to it as well, so let's delete that first:
$ VBoxManage showvminfo foo | less
[...]
Snapshots:
   Name: snap1 (UUID: 8e2914fa-de59-4842-9391-3afac42e0125)

$ VBoxManage snapshot foo delete 8e2914fa-de59-4842-9391-3afac42e0125
0%...FAILED
Error: snapshot operation failed. Error message: Hard disk '../deb0.vdi' has 
       more than one child hard disk (2)
Hm, I remember now: the VM was using another VM's disk. Kinda weird setup, no wonder I wanted to get rid of it :-) So let's find out which VM also uses "deb0.vdi":
$ VBoxManage list hdds
[...]
UUID:        03b1afd9-4ce4-4e42-9347-226b55cba657
Parent UUID: base
Format:      VDI
Location:    ../deb0.vdi
State:       created
Type:        normal
Usage:       foo (UUID: 3c57773a-de6a-4714-9149-407a98f85ae7)
          [snap1 (UUID: 8e2914fa-de59-4842-9391-3afac42e0125)]

UUID:        bc3b45a9-db44-41a4-822d-52987f2734c8
Parent UUID: 03b1afd9-4ce4-4e42-9347-226b55cba657
Format:      VDI
Location:    ../foo/Snapshots/{bc3b45a9-db44-41a4-822d-52987f2734c8}.vdi
State:       created
Type:        normal

UUID:        06280b86-8389-40c4-8bfb-3562a3e206df
Parent UUID: 03b1afd9-4ce4-4e42-9347-226b55cba657
Format:      VDI
Location:    ../debian/Snapshots/{06280b86-8389-40c4-8bfb-3562a3e206df}.vdi
State:       created
Type:        normal
Usage:       debian (UUID: bdbf5c46-aefe-4004-acea-ac521eaedb2e)
So, "deb0.vdi" was used by VM foo and debian an also by a snapshot. Maybe I could just edit VirtualBox.xml manually, but that wouldn't be that much fun, right? So let's detach 06280b86-8389-40c4-8bfb-3562a3e206df from debian, then we should be able to delete the snapshot, right?
$ VBoxManage showvminfo debian | grep 06280b86-8389-40c4-8bfb-3562a3e206df
SATA Controller (1, 0): ../debian/Snapshots/{06280b86-8389-40c4-8bfb-3562a3e206df}.vdi
                                      (UUID: 06280b86-8389-40c4-8bfb-3562a3e206df)

$ VBoxManage storageattach debian --storagectl "SATA Controller" \
                                  --port 1 --device 0 --medium none


$ VBoxManage snapshot foo delete 8e2914fa-de59-4842-9391-3afac42e0125
0%...FAILED
Error: snapshot operation failed. Error message: Hard disk '../deb0.vdi' has
       more than one child hard disk (2)
Huh? OK, I'll say it again: the VM's setup was seriously braindamaged, so maybe VirtualBox got a little confused. A somewhat related ticket suggested restoring the VM to its current snapshot. I did that and went on to detach the disks from the VM, but now things got a bit out of hand:
$ VBoxManage snapshot 3c57773a-de6a-4714-9149-407a98f85ae7 restorecurrent
$ VBoxManage showvminfo foo | grep SCSI

SCSI Controller (0, 0): ../foo/Snapshots/{95000117-e5e0-46dc-bbbd-4929afd9b88c}.vdi 
                                   (UUID: 95000117-e5e0-46dc-bbbd-4929afd9b88c)
SCSI Controller (1, 0): ../foo/Snapshots/{f209322c-b784-4ec3-b0af-e0374444b349}.vdi 
                                   (UUID: f209322c-b784-4ec3-b0af-e0374444b349)

$ VBoxManage storageattach foo --storagectl "SCSI Controller" \
                               --port 0 --device 0 --medium none
$ VBoxManage storageattach foo --storagectl "SCSI Controller" \
                               --port 1 --device 0 --medium none

$ VBoxManage showvminfo foo | grep ^SCSI
SCSI Controller (1, 0): ../foo/Snapshots/{f209322c-b784-4ec3-b0af-e0374444b349}.vdi 
                                   (UUID: f209322c-b784-4ec3-b0af-e0374444b349)
Huh? I just detached the disk from the VM, how come it's still attached? Shortly after, both disks were attached to the SCSI controller again. This did not look right, so I felt like cheating a bit:
$ pkill VBoxSVC
$ mv ../Machines/foo/Snapshots/* ~/trash/
Meanwhile, I had 4 (!) disks referring to 03b1afd9-4ce4-4e42-9347-226b55cba657 now, I wonder why:
$ VBoxManage list hdds | egrep '^(UUID|Parent)'
[...]
UUID:        03b1afd9-4ce4-4e42-9347-226b55cba657
Parent UUID: base

UUID:        bc3b45a9-db44-41a4-822d-52987f2734c8
Parent UUID: 03b1afd9-4ce4-4e42-9347-226b55cba657

UUID:        06280b86-8389-40c4-8bfb-3562a3e206df
Parent UUID: 03b1afd9-4ce4-4e42-9347-226b55cba657

UUID:        95000117-e5e0-46dc-bbbd-4929afd9b88c
Parent UUID: 03b1afd9-4ce4-4e42-9347-226b55cba657

$ VBoxManage closemedium disk bc3b45a9-db44-41a4-822d-52987f2734c8
$ VBoxManage closemedium disk 06280b86-8389-40c4-8bfb-3562a3e206df
$ VBoxManage closemedium disk 95000117-e5e0-46dc-bbbd-4929afd9b88c
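Just to be sure they were really gone (after closing all three children, the only line still containing that UUID should be the base disk's own):

$ VBoxManage list hdds | grep -c 03b1afd9-4ce4-4e42-9347-226b55cba657
1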
Also, I removed all the storage controllers attached to VM foo:
$ VBoxManage storagectl foo --name "IDE Controller"  --remove
$ VBoxManage storagectl foo --name "SATA Controller" --remove
$ VBoxManage storagectl foo --name "SCSI Controller" --remove
$ VBoxManage storagectl foo --name "SAS Controller"  --remove
This did the trick, apparently:
$ VBoxManage snapshot foo delete 8e2914fa-de59-4842-9391-3afac42e0125
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

$ VBoxManage unregistervm foo --delete
Re-attaching the disks to the other VM, and we're done:
$ VBoxManage storageattach debian --storagectl "SATA Controller" \
                 --port 0 --device 0 --type hdd --medium ../debian/deb0.vdi 
$ VBoxManage storageattach debian --storagectl "SATA Controller" \
                 --port 1 --device 0 --type hdd --medium ../debian/deb1.vdi 

Balsamic vinegar, unleaded please!

Today I came across the following sign while shopping for aceto balsamico:

   CALIFORNIA PROPOSITION 65 WARNING
   The red wine vinegars and balsamic vinegars on these shelves contain lead,
   a chemical known to the state of California to cause birth defects or other
   reproductive harm.
Wait, what? Of course, by now I've searched the tubes and these signs have been around for quite a while. It's widely discussed too, and people go on ranting about these bad vinegars and whether there are good vinegars and so on - but I did not find the answer to the question: why? Why would anyone still buy lead-containing vinegar when it's clearly stated that it's harmful to the (human) body? Well, for Californian bodies, that is. As if lead were not toxic in other states :-)

I don't get it. At all.

Ubuntu 10.10 & btrfs

Ubuntu 10.10 (Maverick) now allows for a btrfs root filesystem. Even before the final release, performance problems with dpkg had been spotted. The short answer was:

sudo apt-add-repository ppa:brian-rogers/btrfs
sudo apt-get update
sudo apt-get upgrade
And although I've experienced a similar performance impact before, it's somewhat better now, though I'm not sure why: this is still Linux 2.6.35, and dpkg hasn't received any upgrades since then either.
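To verify that last claim, a quick look at the installed dpkg version is enough (dpkg-query just prints whatever is currently installed):

# dpkg-query -W -f='${Version}\n' dpkg

Anyway, here's the before/after with dropped caches: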
# ls -lgoh ./openoffice.org-core_*.deb
-rw-r--r-- 1 27M Sep 30 08:05 ./openoffice.org-core_1%3a3.2.1-7ubuntu1_amd64.deb
# echo 3 > /proc/sys/vm/drop_caches
# LC_ALL=C time -p /usr/bin/dpkg -i ./openoffice.org-core_*.deb
real 47.61
user 2.09
sys 7.52

# dpkg -i ./dpkg_1.15.8.4ubuntu3-nosync1_amd64.deb 
# dpkg --force-all -P openoffice.org-core
# echo 3 > /proc/sys/vm/drop_caches 
# LC_ALL=C time -p /usr/bin/dpkg -i ./openoffice.org-core_*.deb
real 31.34
user 2.58
sys 6.50

XDMCP

OK, it's been a while since I tried this, but somehow I was curious whether it's still possible to log in to a remote system via XDMCP. Well, good thing I checked, because e.g. Ubuntu has been shipping a GDM with broken XDMCP since 10.04.

And I have a feeling that they will continue to ship GDM with XDMCP disabled, so I did what the Release Notes told me: install XDM. Note that XDMCP has to be enabled in xdm's configuration too: comment out the requestPort line in xdm-config and make sure Xaccess allows remote hosts:

$ apt-get install xdm
$ grep DisplayManager.requestPort /etc/X11/xdm/xdm-config 
!DisplayManager.requestPort:    0

$ grep ^\* /etc/X11/xdm/Xaccess 
*                                  #any host can get a login window

$ service xdm restart
Now I was able to connect from my Mac OS X client:
$ X :1 -query 192.168.0.119
Apparently, Debian's GDM3 has been fixed in version 2.30.2-3.
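For the record, even with a fixed GDM/GDM3, XDMCP still has to be switched on explicitly in its configuration; the snippet below is from memory, and the file lives under /etc/gdm (GDM 2.x) or /etc/gdm3 (GDM3), so adjust the path as needed:

# grep -A1 '^\[xdmcp\]' /etc/gdm/custom.conf
[xdmcp]
Enable=true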

smb_maperr32: no direct map for 32 bit server error (0xc0000225)

Every now and then, this gets logged on this Mac OS X 10.6.4 installation:

 kernel[0]: smb_maperr32: no direct map for 32 bit server error (0xc0000225)
Now, the message stems from smb_maperr32() in smb_subr.c, but it's still unclear to me why this is logged.

The last part of the message seems to be the error number; here's how often each error got logged:
      32 (0xc0000185)
      45 (0xc00000c9)
    3181 (0xc0000225)
According to ntstatus.h, this deciphers to:
  (0xc0000185) - STATUS_IO_DEVICE_ERROR
  (0xc00000c9) - STATUS_NETWORK_NAME_DELETED
  (0xc0000225) - STATUS_NOT_FOUND
The server side is some samba-3.2.5, and when the client logs the message, no error gets logged on the server. Also, I have no problem accessing the shares, it's all working perfectly, I'm just wondering about the error message...
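For the record, the per-error tally above is just a grep/sort/uniq over the system log, something along these lines (older, rotated logs are compressed and would need zgrep/bzgrep):

$ grep smb_maperr32 /var/log/system.log | awk '{print $NF}' | sort | uniq -c | sort -n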

Error creating thumbnail: Unable to run external programs in safe mode.

After uploading an image to this Mediawiki installation, the thumbnail wouldn't be displayed:

   Error creating thumbnail: Unable to run external programs in safe mode.
Hm, it probably tried to run ImageMagick to generate that thumbnail, but safe_mode was off anyway, so what gives? As it turned out, the passthru() function was disabled. After removing it from disable_functions, thumbnail generation was working again. However, I'm not sure about the security implications yet...
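For reference, the relevant php.ini settings looked roughly like this (the path is the Debian-style one and the exact function list will differ per setup):

# grep -E '^(safe_mode|disable_functions)' /etc/php5/apache2/php.ini
safe_mode = Off
disable_functions = exec,passthru,shell_exec

Taking passthru out of that list and reloading the web server brought the thumbnails back.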

Solaris 10 Netinstall

Ah, finally I got around to doing this. This E250 needed to be reinstalled. Well, there's Disk #1 of Solaris 10/08 inserted right now, but no one is there to play the "insert next disk to continue" game. Furthermore, installing from optical media is soooo last century :-)

Yes, there are ways to install Solaris via LAN (even via WAN!), but I did not want to set up a Jumpstart server (mainly because I don't have a 2nd Solaris machine running atm), and a WAN setup was out of the question too, as our connection to the outside world is not that fast. Also, instead of letting "Jumpstart" do the magic, I wanted to do things on my own. After all, it's just getting this box to boot, and then we just need an NFS share to get our installation files from, right? Let's begin:

In this example, our server will be 192.168.0.1/24 (Linux, Ubuntu 10.04, x86); our client (an UltraSPARC E250, on which we want to install Solaris 10/09) will be 192.168.0.5/24.

We'll set up rarpd, tftpd and bootparamd to get the E250 (sun4u) started, and NFS to share the installation media later on. The installation media is basically just the downloaded .iso, loop-mounted somewhere on our Linux system:

 mount -t iso9660 -o loop -o ro sol-10-u9-ga-sparc-dvd.iso /mnt/cdrom
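If the needed daemons aren't installed yet, on Ubuntu they should come from packages along these lines (names from memory; the tftpd here is the inetd-driven one):

 # apt-get install rarpd bootparamd tftpd openbsd-inetd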
For rarpd to work, we add our client's MAC address to /etc/ethers:
 # grep 192.168.0.5 /etc/ethers
 08:00:10:A1:B2:C3      192.168.0.5
For tftp we need to create a bootfile for our install-client (192.168.0.5) to be found:
 # grep tftp /etc/inetd.conf 
 tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /data/tftpboot

 # cd /data/tftpboot
 # printf %02x 192 168 0 5 | tr [:lower:] [:upper:]
 C0A80005
 # cp -p /mnt/cdrom/Solaris_10/Tools/Boot/platform/sun4u/inetboot .
 # ln -s inetboot C0A80005.SUN4U
 # ln -s inetboot C0A80005
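A quick way to check that tftpd really hands out the boot file is to fetch it once from another box:

 $ tftp 192.168.0.1
 tftp> get C0A80005.SUN4U
 tftp> quit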
For bootparamd, /etc/bootparams should look something like this:
 # grep -v ^\# /etc/bootparams
 e250 root=192.168.0.1:/mnt/cdrom/Solaris_10/Tools/Boot \
      install=192.168.0.1:/mnt/cdrom/Solaris_10 \
      rootopts=192.168.0.1:rsize=32768:nfsvers=2:vers=2
Somehow it's important that all these parameters are prefixed with the server's address (192.168.0.1), otherwise the client may not find the requested files.

With all that in place, we still have to export the installation directory via NFS. As Solaris 10 still has problems with a Linux NFS server, we're starting nfsd with:
 # grep RPCNFSDCOUNT /etc/default/nfs-kernel-server 
 RPCNFSDCOUNT="16 --no-nfs-version 3 --no-nfs-version 4"
...and exporting our share now:
 # exportfs -v -i -o ro,no_root_squash 192.168.0.0/24:/mnt/cdrom
 exporting 192.168.0.0/24:/mnt/cdrom
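A quick showmount confirms the export is visible:

 # showmount -e localhost
 Export list for localhost:
 /mnt/cdrom 192.168.0.0/24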
Now we should be able to boot the box:
{0} ok boot net -s -v - install
ChassisSerialNumber 12341010
Initializing    1 megs of memory at addr          2feca000
Initializing    1 megs of memory at addr          2fe00000
Initializing    2 megs of memory at addr          2fc02000
Initializing  192 megs of memory at addr          23c02000
Initializing  572 megs of memory at addr                 0
Rebooting with command: boot net - install
Boot device: /pci@1f,4000/network@1,1  File and args: - install
Using Onboard Transceiver - Link Up.
3a000
Server IP address: 192.168.0.1
Client IP address: 192.168.0.5
Using Onboard Transceiver - Link Up.
ramdisk-root ufs-file-system
Depending on your interface speed, this will take a long time to complete. With tcpdump on the install server we can watch that the client is still fetching stuff, but we have to be very patient here. What we could try is to boot with different boot options (though this wasn't supported by our E250):
{0} ok boot net:speed=100,duplex=full -s -v - install
After the ramdisk is loaded, booting continues:
Loading: /platform/SUNW,Ultra-250/kernel/sparcv9/unix
Loading: /platform/sun4u/kernel/sparcv9/unix
SunOS Release 5.10 Version Generic_142909-17 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
os-io Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface hme0...
Configured interface hme0
ERROR: bpgetfile unable to access network
/sbin/install-discovery: information: not found
This might happen because Solaris assumes a wrong netmask for our interface. We'll fix this with:
 # pkill /sbin/dial             # Kill the spinning cursor :)
 # ifconfig hme0 192.168.0.5 netmask 255.255.255.0 broadcast 192.168.0.255
 # exit
The exit drops us back into the installation process, which carries on until:
System identification is completed.
System identification complete.
Starting Solaris installation program...
Searching for JumpStart directory...
not found
Warning: Could not find matching rule in rules.ok
Press the return key for an interactive Solaris install program...

Executing JumpStart preinstall phase...
Searching for SolStart directory...
Checking rules.ok file...
Using begin script: install_begin
Using finish script: patch_finish
Executing SolStart preinstall phase...
Executing begin script "install_begin"...
Begin script install_begin execution completed.
So, pressing Enter continues: the interactive installation screen appears and we can click through a few screens, until the process is interrupted again:
There were problems loading the media from /cdrom.
Solaris installation program exited.
# 
Wait, what? We're doing a network install, so why the hell does it look for installation files in /cdrom? Never mind, we can do that too and start the install process again:
 # mount -F nfs -o ro 192.168.0.1:/mnt/cdrom /cdrom
 # /sbin/install-solaris
After a few screens we will be asked where our installation media resides:
 192.168.0.1:/mnt/cdrom
At this point, the installation should finally continue, without further interruptions. Yeah, right :-)

X11 fails to start after Security Update 2010-002

A little late, but that's what happened here today:

$ /Applications/Utilities/X11.app/Contents/MacOS/X11.bin
Dyld Error Message:
  Library not loaded: /usr/X11/lib/libpixman-1.0.dylib
  Referenced from: /Applications/Utilities/X11.app/Contents/MacOS/X11.bin
  Reason: Incompatible library version: X11.bin requires version 15.0.0 or later,
          but libpixman-1.0.dylib provides version 13.0.0
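otool shows what the installed library actually reports; the first entry it lists is the library's own ID, including the compatibility/current version dyld is complaining about, which should jump from 13.x to 15.x once a newer X11 is installed:

$ otool -L /usr/X11/lib/libpixman-1.0.dylib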
As I could not back out Security Update 2010-002, I just headed over to the XQuartz project on Mac OS Forge, installed XQuartz 2.5.3, logged out and back in, and X11 is working again :-)