Last modified on 01/19/11 17:20:18

This page documents the steps involved in deploying an ESXi-incompatible VirtualBox OVA file on an ESXi server. The OVA/OVF format is supposed to be cross-platform, but vendors may support only specific OVF/OVA versions, which can introduce incompatibilities if care is not taken during OVA/OVF creation. The following sections describe the errors we received as well as the brute-force method we used to make this work.

VMware way

Use VMware ESXi's deploy OVF template method

The first error reported by ESXi was an incompatible/unsupported hardware error. I tried to resolve this by editing the OVF file wherever VirtualBox was mentioned, but that produced new errors related to incompatible drivers. Unfortunately I couldn't find any resources on how to resolve these issues. The offending element in the OVF descriptor was the VirtualSystemType:

     <Info>Virtual hardware requirements for a virtual machine</Info>
     <System>
       <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
       <vssd:InstanceID>0</vssd:InstanceID>
       <vssd:VirtualSystemIdentifier>Islandora</vssd:VirtualSystemIdentifier>
       <vssd:VirtualSystemType>virtualbox-2.2</vssd:VirtualSystemType>
     </System>
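For reference, the edit we attempted can be scripted. The sketch below is hypothetical (the file names are illustrative, and `vmx-07` is the ESXi 4.x virtual hardware family we assumed): an OVA is a plain tar archive, so the descriptor can be edited in place, but the `.mf` manifest stores a SHA1 digest per file and must be regenerated or the import fails its checksum test.

```shell
# Hypothetical sketch; a stand-in descriptor replaces `tar xf Islandora.ova`
# so the example is self-contained.
set -e
cd "$(mktemp -d)"
printf '<vssd:VirtualSystemType>virtualbox-2.2</vssd:VirtualSystemType>\n' > Islandora.ovf
# Swap the VirtualBox hardware family for an ESXi one (vmx-07 = ESXi 4.x):
sed -i 's/virtualbox-2.2/vmx-07/' Islandora.ovf
# Regenerate the descriptor digest in the manifest:
printf 'SHA1(Islandora.ovf)= %s\n' "$(sha1sum Islandora.ovf | cut -d' ' -f1)" > Islandora.mf
# Repackage; the descriptor must come before the other members:
tar cf Islandora-esxi.ova Islandora.ovf Islandora.mf
```

In our case this kind of edit got past the hardware-family check but then failed with the driver errors mentioned above, which is why we fell back to the brute-force approach.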

Use VMware Converter for deployment

The VMware Converter gave an error saying it 'could not parse' the OVA file.
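We did not track this failure down, but one common cause of OVA parse errors is member ordering: an OVA is a tar archive whose first member must be the `.ovf` descriptor, and archives produced or repacked by other tools sometimes violate this. A quick way to inspect it (the archive names below are illustrative, with the check demonstrated on throwaway files):

```shell
# Self-contained illustration: build a mis-ordered and a well-ordered OVA and
# inspect them; a compliant archive lists the .ovf descriptor first.
set -e
cd "$(mktemp -d)"
touch disk1.vmdk Islandora.ovf Islandora.mf
tar cf bad.ova disk1.vmdk Islandora.ovf Islandora.mf   # descriptor not first
tar cf good.ova Islandora.ovf Islandora.mf disk1.vmdk  # descriptor first
tar tf bad.ova | head -n1    # prints "disk1.vmdk" -- strict parsers may reject this
tar tf good.ova | head -n1   # prints "Islandora.ovf"
```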


Brute-force way

In this method we create a new VM on ESXi that matches the VirtualBox VM (32/64-bit processor, OS, disk size). The first step is to get the original VM running in the VirtualBox environment and make a block-level copy of its disks using the dd command. The second step is to copy these dd images onto the ESXi VM's disks. The following sections detail the steps involved:

Examine and copy disks

  • Examined the disk partitioning on the Islandora VM running in VirtualBox:
    # sfdisk -l /dev/sda 
    Disk /dev/sda: 2610 cylinders, 255 heads, 63 sectors/track
    Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

       Device Boot Start     End   #cyls    #blocks   Id  System
    /dev/sda1   *      0+     12      13-    104391   83  Linux
    /dev/sda2         13    2609    2597   20860402   8e  Linux LVM
    /dev/sda3          0       -       0          0    0  Empty
    /dev/sda4          0       -       0          0    0  Empty
  • Created a copy of sda1 and sda2:
    dd if=/dev/sda1 | ssh pavgi@nas01 "dd of=islandora-sda1.dump"
    dd if=/dev/sda2 | ssh pavgi@nas01 "dd of=islandora-sda2.dump"
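Plain dd over ssh works but is slow for ~20 GB of disk, and gives no integrity check. A larger block size plus on-the-wire compression helps, and comparing the data afterwards confirms the copy. The sketch below replaces the ssh hop with a local pipe so it is self-contained; on the real hosts, keep the `ssh pavgi@nas01 "..."` stage as in the commands above.

```shell
# Sketch of the same dd pipeline with a bigger block size and gzip compression;
# a stand-in image file replaces /dev/sda1 so the example can run anywhere.
set -e
cd "$(mktemp -d)"
dd if=/dev/urandom of=sda1.img bs=1M count=4 2>/dev/null   # stand-in for /dev/sda1
dd if=sda1.img bs=1M 2>/dev/null | gzip -c | gzip -dc | dd of=islandora-sda1.dump bs=1M 2>/dev/null
cmp sda1.img islandora-sda1.dump && echo "copy verified"   # prints "copy verified"
```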

Create new VM in ESXi environment with above disk images

  • Created a new VM on the ESXi server with the hard disk type set to SCSI.
  • Started the system using a CentOS 5.5 LiveCD.
  • Created disk partitions matching those of the Islandora VM running in VirtualBox.
  • Copied the dd images of sda1 and sda2:
    ssh pavgi@nas01 "dd if=islandora-sda1.dump" | dd of=/dev/sda1
    ssh pavgi@nas01 "dd if=islandora-sda2.dump" | dd of=/dev/sda2
  • Then we started the VM and tried to boot from the hard disk. On boot we got the following errors, which indicate a driver issue — most likely the initrd built under VirtualBox lacks the driver for the ESXi VM's SCSI controller:
    ### Kernel panic after dd-ing of islandora-vbox VM. 
    Unable to access resume device (/dev/VolGroup01/LogVol01)
    Creating root device
    Mounting root filesystem
    mount: could not mount filesystem '/dev/root'
    setting up other filesystems
    setting up new root fs
    setuproot: moving /dev failed . No such file or directory. 
    no fstab.sys, mounting internal defaults
    setuproot : error mounting /proc : No such file or directory 
    setuproot : error mounting /sys : No such file or directory 
    Switching to new root and running init
    unmounting old /dev
    unmounting old /proc
    unmounting old /sys
    switchroot : mount failed : No such file or directory 
    Kernel panic - not syncing : Attempted to kill init!

  • The next step was to boot the VM again using the LiveCD and resolve the driver issue by creating a new initrd image.
  • The following steps were needed to mount the LVM partitions present on the physical disks:
    [root@livecd ~]# pvscan 
      PV /dev/sda2   VG VolGroup01   lvm2 [19.88 GB / 0    free]
      Total: 1 [19.88 GB] / in use: 1 [19.88 GB] / in no VG: 0 [0   ]
    [root@livecd ~]# ls /dev/mapper/
    control  live-osimg-min  live-rw
    [root@livecd ~]# vgscan 
      Reading all physical volumes.  This may take a while...
      Found volume group "VolGroup01" using metadata type lvm2
    [root@livecd ~]# vgchange -ay
      2 logical volume(s) in volume group "VolGroup01" now active
    [root@livecd ~]# ls /dev/
    Display all 214 possibilities? (y or n)
    [root@livecd ~]# ls /dev/VolGroup01/LogVol0
    LogVol00  LogVol01  
    [root@livecd ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/live-rw   4.0G  2.1G  2.0G  52% /
    tmpfs                 250M     0  250M   0% /dev/shm
    /dev/hdc              695M  695M     0 100% /mnt/live
    [root@livecd ~]# ls /dev/mapper/
    control  live-osimg-min  live-rw  VolGroup01-LogVol00  VolGroup01-LogVol01
  • Mount the LVM partitions and chroot into the system in order to create a new initrd image:
    [root@livecd ~]# mkdir /mnt/sysimage
    [root@livecd ~]# mount /dev/mapper/VolGroup01-LogVol00 /mnt/sysimage
    [root@livecd ~]# ls /mnt/sysimage/
    bin  boot  dev  etc  home  lib  lib64  lost+found  media  misc  mnt  net  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var
    [root@livecd ~]# mount /dev/sda1 /mnt/sysimage/boot
    [root@livecd ~]# mount --bind /sys /mnt/sysimage/sys
    [root@livecd ~]# mount --bind /dev /mnt/sysimage/dev
    [root@livecd ~]# mount --bind /proc /mnt/sysimage/proc
    [root@livecd ~]# chroot /mnt/sysimage
    [root@livecd /]# ls
    bin  boot  dev  etc  home  lib  lib64  lost+found  media  misc  mnt  net  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var
    [root@livecd /]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup01-LogVol00
                           16G  4.6G   11G  32% /
    /dev/sda1              99M   19M   75M  21% /boot
    [root@livecd /]#
  • Create a new initrd:
    # cd /boot
    # cp initrd- initrd-
    # mkinitrd -f /boot/initrd- 
  • Install the grub boot loader:
    # grub-install /dev/sda
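The initrd file names in the steps above lost their version suffixes in rendering. A hedged reconstruction of the initrd step, run inside the chroot, is sketched below; the `$KVER` derivation and the `mptspi` module name are assumptions (on CentOS 5, ESXi's LSI Logic SCSI controller is driven by mptspi, and mkinitrd reads the `scsi_hostadapter` alias from /etc/modprobe.conf when choosing which drivers to include).

```shell
# Hypothetical reconstruction; run inside the chroot. KVER discovery and the
# module name are assumptions, not the exact commands from this page.
KVER=$(ls /lib/modules | head -n1)            # kernel version installed in the VM
# Ensure the initrd picks up the ESXi SCSI controller driver:
grep -q mptspi /etc/modprobe.conf || \
    echo "alias scsi_hostadapter mptspi" >> /etc/modprobe.conf
cd /boot
cp "initrd-$KVER.img" "initrd-$KVER.img.bak"  # keep the old image around
mkinitrd -f "/boot/initrd-$KVER.img" "$KVER"
```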