Monday, May 14, 2012

Oracle Virtual Machine (OVM) LAB on VirtualBox

Oracle Virtual Machine (OVM) 3.1.1 was released on 8th May 2012, and it is finally supported on VirtualBox.  This is great news for anyone who wants to give installing an OVM lab a go.

I gave it a go.

I created 3 VirtualBox guests:
  1. Openfiler for iSCSI, with a 40GB virtual disk for chopping up into iSCSI LUNs.
  2. Oracle Linux 6.0, on which I installed OVM 3.1.1 in Demo mode.  This guest has 4GB of RAM assigned to it and a 25GB HD.  Probably a bit big, but OVM is a large application with an Oracle XE database and Oracle WebLogic services installed, so I thought better safe than sorry.
  3. Oracle Virtual Server (OVS) 3.1.1.  This guest had just a 4GB HD and 1536MB of RAM, which I figured would be just enough to get one virtual machine up and running on it.
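For anyone scripting this rather than clicking through the GUI, the OVS guest above could be created with VBoxManage along these lines (the VM name, disk path and controller name are illustrative, not the exact ones I used):

```shell
# Hypothetical sketch: create the OVS guest from the command line.
# Sizes match the lab above: 4GB disk, 1536MB RAM.
VBoxManage createvm --name ovs311 --ostype Oracle_64 --register
VBoxManage modifyvm ovs311 --memory 1536 \
    --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage createhd --filename ovs311.vdi --size 4096   # size in MB
VBoxManage storagectl ovs311 --name SATA --add sata
VBoxManage storageattach ovs311 --storagectl SATA \
    --port 0 --device 0 --type hdd --medium ovs311.vdi
```

The other two guests follow the same pattern with their own memory and disk sizes.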
My VirtualBox is configured with a couple of host only networks:

vboxnet0 => Management network.  I also have dnsmasq configured on this network to serve IP addresses via DHCP to clients on this network.  Dnsmasq is installed on my laptop for this specific purpose.  See previous blog post...
vboxnet1 => Used for the storage network.
vboxnet2 => Production network for virtual machines.
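The host-side setup for the management network can be sketched as follows.  The addresses here are illustrative (VirtualBox's default host-only range), not necessarily the ones from my setup; see the previous blog post for the full dnsmasq configuration.

```shell
# Hypothetical sketch: create a host-only interface and serve DHCP
# on it with dnsmasq so management-network guests get leases.
VBoxManage hostonlyif create                          # creates vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip
sudo dnsmasq --interface=vboxnet0 --bind-interfaces \
     --dhcp-range=,,12h
```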

So I set up Openfiler to offer two 8GB LUNs, one 2GB LUN for my server pool's cluster data, and a 10GB NFS share for the repository.

The installation of OVS and OVM on Oracle Linux all went by without a hitch.  I made sure that each VirtualBox guest registered its own hostname with dnsmasq using the DHCP_HOSTNAME parameter in /etc/sysconfig/network.  This meant that all guests resolve nicely in DNS, which is a prerequisite for a successful OVM lab.
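On each guest the relevant file looks something like this ("ovs1" is an illustrative hostname): DHCP_HOSTNAME makes the DHCP client send the guest's name to dnsmasq, which then answers DNS queries for it.

```shell
# /etc/sysconfig/network (per guest, hostname is illustrative)
NETWORKING=yes
HOSTNAME=ovs1
DHCP_HOSTNAME=ovs1
```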

The new OVM 3.1.1 GUI is actually quite nice.  It's more polished and intuitive than 3.0.3, and more of the right mouse button menus are enabled.

I also made sure that SELinux and iptables were disabled in both the OVS and OVM guests.
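For reference, disabling both on Oracle Linux of that era goes roughly like this (a sketch of the standard commands, run as root on each guest):

```shell
# Disable SELinux permanently (takes effect after reboot)...
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# ...and immediately for the running system.
setenforce 0
# Stop iptables now and keep it off across reboots.
service iptables stop
chkconfig iptables off
```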

Server discovery went fine.  No problems at all.  Just enter the agent password and hostname of the OVS server.

Storage discovery was a little tricky.  The NFS share (filesystem share) was no problem.  OVM was able to find and mount it without any difficulty.  The only trouble I had there was making sure that Openfiler was configured correctly, but that's not really relevant to the OVM lab.

The iSCSI discovery had me stumped for quite some time.  I could discover the iSCSI target presented by Openfiler without issue, but could not get OVM to show me any LUNs (physical disks).  After much trial and error I finally figured out that my OVS server did not have the multipathd service running.  Once I enabled and started multipathd (no change to the default configuration), I was able to refresh the iSCSI storage in OVM and the physical disks were presented.  To recap: first perform iSCSI storage discovery using the OVS guest as the storage manager, then start multipathd on the OVS guest, then refresh the iSCSI storage in OVM and your LUNs will appear.
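The fix on the OVS guest, as shell commands (default multipath.conf, no edits):

```shell
# Start multipathd now and enable it across reboots.
service multipathd start
chkconfig multipathd on
# The Openfiler LUNs should now show up as multipath devices...
multipath -ll
# ...then refresh the iSCSI storage array in the OVM GUI.
```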

I was able to create a server pool, assign the repository, assign the 2GB iSCSI LUN to the pool for its cluster configuration data and actually configure a virtual machine.  I configured an Ubuntu VM with just 512MB of RAM and one iSCSI LUN.

Finally, after so much success, my lab failed me at the most crucial stage: starting the virtual machine.  It would not start.  It turns out that VirtualBox does not expose hardware virtualization to its guests, so my OVS guest was running without any hardware acceleration in its virtual CPU and OVM was not able to start the VM.  What a shame, but there you go.
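A quick way to see the problem is to check whether the CPU advertises the VT-x/AMD-V flags.  Inside a VirtualBox guest of that era this count comes back as 0, which is why OVS cannot run hardware-virtualized VMs there:

```shell
# Count the hardware virtualization flags visible to this (virtual) CPU.
# vmx = Intel VT-x, svm = AMD-V; 0 means no hardware acceleration.
egrep -c '(vmx|svm)' /proc/cpuinfo
```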

The good news is that many of the minor irritations in OVM 3.0.3 seem to have been cleared up, and folks can now use VirtualBox for their labs, although they can't start OVM virtual machines if their OVS is itself a VirtualBox guest.

Next step... Commandeer an old server somewhere, install OVS, bind it to the right LAN and carry on.
