VMworld 2009 Location and Dates!

From an email I received from VMworld for the upcoming VMworld 2009!

Save the Date for VMworld 2009

Mark your calendar for the industry’s leading virtualization event – VMworld 2009, August 31 – September 3, 2009 at The Moscone Center, San Francisco.

Today’s economy demands we do more with less—a key value proposition of virtualization. Attend VMworld 2009 to hear about the latest industry trends and learn how virtualization can help maximize your organization’s current and future IT investments.

Linux Physical to Virtual

While VMware provides great tools for managing the conversion of Windows-based workloads from physical hardware to virtual machines, no comparable tool is provided to aid in the conversion of Linux systems to virtual workloads.

This lesson will detail one method for completing this conversion. The procedure will vary slightly depending upon the distribution and version being converted.

This example was conducted using CentOS 4.7; Red Hat Enterprise Linux 4.x will be nearly identical, and other distributions will vary.

Overview of Physical to Virtual (P2V) Process

The overall P2V process – the conversion of a physically installed workload to a virtual machine – has three distinct steps.
— Driver Injection
— Image Transfer
— Tools Installation

Depending on your distribution, the first two steps may be easier to perform in a specific order. In this example we will perform the steps in the order above.

Driver Injection

The most difficult (and critical) part of the P2V process is ensuring that the migrated workload will actually boot, and has the drivers needed to access the virtual hardware once it has been transferred to the virtual machine.

The hardest piece of this is giving the workload the ability to access the virtual machine’s hard disk – without that, it cannot boot at all.

We want the target (virtual) machine to use a SCSI adapter, because while VMware’s hosted products support IDE disks, the ESX (and ESXi) products do not.

Under CentOS (and RHEL) Linux, the drivers needed for hard disk access are provided as kernel modules. Getting the transferred machine to boot is simply a matter of getting the right modules loaded into the kernel. That raises two questions – which are the right modules? And how do we get them loaded?

First we need to understand that the boot loader loads the kernel and its boot-time configuration from the initial RAM disk, aka ‘initrd’ … so we need to get the modules, and the configuration to load them, into the initrd.

The inclusion and loading of the relevant kernel modules is controlled by /etc/modprobe.conf (or /etc/modules.conf on RHEL 3). This file needs to include lines to load the drivers for the SCSI adapter provided by the virtual environment.

On Linux distributions using 2.6.x kernels we want the LSI Logic SCSI drivers; on 2.4.x kernels we will want to use a BusLogic adapter.

Make a backup, then edit the /etc/modprobe.conf file to include the lines shown.


Kernel Modules for LSI SCSI driver shown.
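As a sketch, the 2.6-kernel entries for the LSI Logic adapter might look like the following (mptbase and mptscsih are the stock LSI Logic driver module names; treat the exact names as an assumption to verify against your kernel’s module tree):

```
# /etc/modprobe.conf – illustrative entries for the VMware LSI Logic SCSI adapter
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih
```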

The initrd is generated using the mkinitrd script; again the examples shown are for CentOS 4.7, your distribution may vary.

To generate an initrd, use the following command:

mkinitrd <initrd-file> <kernel version>

This will generate a new initrd as shown.

Again, make a copy, or rename the old initrd so that you can fall back to it if you need to.
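A sketch of the backup-and-rebuild sequence on CentOS 4.x (the paths assume a stock /boot layout, and the -f flag tells mkinitrd to overwrite the existing file):

```
# Back up the current initrd so we can fall back to it, then rebuild it
# for the running kernel with the new module configuration included.
cd /boot
cp initrd-$(uname -r).img initrd-$(uname -r).img.orig
mkinitrd -f initrd-$(uname -r).img $(uname -r)
```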


Once the initrd has been generated you’re ready to transfer the image to a virtual machine.

NOTE: If your source machine is using IDE disks and not using LVM (so filesystems are mounted as /dev/hda#), you will need to update your fstab. The same holds if your system is using disk-by-id or UUID-based mount entries. In either case, the simplest approach is to convert the entries to /dev/sda#.
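A minimal sketch of that fstab rewrite, run against a throwaway sample file rather than the real /etc/fstab (the device names and mount points are illustrative):

```shell
# Build a sample fstab with IDE-style device names (illustrative content).
cat > fstab.sample <<'EOF'
/dev/hda1  /boot  ext3  defaults  1 2
/dev/hda2  /      ext3  defaults  1 1
EOF

# Rewrite /dev/hda# entries to /dev/sda# – the same edit you would make
# to the real /etc/fstab before transferring the image.
sed 's|/dev/hda|/dev/sda|g' fstab.sample > fstab.converted
cat fstab.converted
```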

ALSO NOTE: If for some reason your physical machine does not have a C compiler and the kernel headers installed, now would be a good time to install those packages – if only temporarily; it will make installing VMware Tools later a simpler proposition.

Image Transfer

Like most things, there is more than one way to skin this particular cat. What we’re after is to move the system ‘image’ from the source physical machine to the target virtual machine. You could use Ghost, Portlock Storage Manager, Altiris, DriveImage, or any number of commercial utilities. You could also use partimage or other open source utilities. You could even use tar or dump & restore if you’re so inclined. Use something you’re comfortable with.

An important note here is the issue of Logical Volume Managers – LVM – which abstract the filesystems from the physical disk structure. The challenge LVM introduces is that if you’re using a tool like Ghost which wants to image filesystems, few (if any) such tools understand the LVM use of the disk. This means we either need to use an “above the filesystem” tool, such as tar or dump & restore, or a tool which can perform sector/block level imaging. The downside of sector imaging is that the target hard disk needs to be the same size as the source hard disk. If your source physical machine has a 300GB disk, your target VM will also have a 300GB disk. Dump & restore, or even tar, will get you around this problem, but will require you to spend more time installing boot loaders and the like.

For this example, we’ll use the old staple ‘dd’ and a Linux live CD to facilitate the transfer. This sector-based technique transfers the boot loader, partition table, and Linux LVM partitions, so we don’t have to piece our image back together after the transfer.

Capture Source Image

Ok, yes, you can get creative with SSH and avoid creating the intermediate image ‘file’ – but for simplicity’s sake (and to avoid the SSH encryption overhead and the resulting performance hit) we’re going to create an image file with dd, and then restore it.

Step 1 – Boot the source system from the Linux live CD.

Step 2 – Mount a file-space somewhere with enough free space to store the image. Remember this will be the same size as the source machine’s hard disk.
— for an NFS mount use: mount -t nfs file.server:/path/to/mount /mountpoint (example: mount -t nfs myserver.mydomain:/exports/scratchspace /mnt)
— for a Windows/SMB/CIFS mount the command takes a couple of forms, depending on your distro. Choose the one that works.
mount -t cifs -o lfs,username=<userid> "//server.domain/share" /mountpoint (example: mount -t cifs -o lfs,username=kenf "//myserver.mydomain/scratchspace" /mnt)
mount -t smbfs -o lfs,user=<userid> "//server.domain/share" /mountpoint (example: mount -t smbfs -o lfs,user=kenf "//myserver.mydomain/scratchspace" /mnt)

Step 3 – create the image with the ‘dd’ command. The format of the command is dd if=<source> of=<destination> bs=<I/O block size in bytes>
— example: dd if=/dev/hda of=/mnt/image-file.dd bs=65535

An example of the mount and image creation follows.

Note: if your live CD isn’t the same as your installed distro, the device mappings might not be the same – your installed distro might address your IDE disk as /dev/hda while the live CD addresses it as /dev/sda. Check your device names before imaging to avoid unpleasant surprises.


Restore the image in the VM

Now that we have our image captured, we’re done with the source machine. If you’re not concerned about the data on the machine changing, you can reboot it normally and let it return to service.

At this point we want to restore the image. To do so we need to –
1.) Create a new virtual machine with appropriate memory, network interfaces, and – critically – a hard disk of exactly the same size as the source machine’s.
2.) Boot the new virtual machine from our Linux live CD.
3.) Mount the file space in the same way we did earlier (NFS, CIFS, SMBFS…).
4.) Restore the image with the ‘dd’ command again. To do this we reverse the ‘if=’ and ‘of=’ parameters. Note that if your source was an IDE disk (/dev/hd*) and our target is SCSI (/dev/sd*), we need to make the appropriate adjustments. An example is shown.
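The capture-and-restore round trip can be exercised safely with ordinary files standing in for the disks (on the real systems, the source would be a device such as /dev/hda and the restore target a device such as /dev/sda):

```shell
# Stand-in for the source disk: 4 x 64 KB of random data (illustrative).
dd if=/dev/urandom of=source.disk bs=65536 count=4 2>/dev/null

# Capture: image the "disk" into a file, as done from the live CD.
dd if=source.disk of=image-file.dd bs=65536 2>/dev/null

# Restore: reverse the if= and of= parameters to write the image back
# onto the target "disk".
dd if=image-file.dd of=target.disk bs=65536 2>/dev/null

# Verify the target is a byte-for-byte copy of the source.
cmp -s source.disk target.disk && echo "disks match"
```

On a real conversion, the final command would be run from the live CD booted inside the VM, with of= pointing at the virtual machine’s disk device.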


Reboot the converted machine

Now that the image has been restored, we’re ready to reboot the machine. If all has gone well, it will boot the converted operating system.

This is a good place to note that some Linux distributions have daemons and services which will modify or even overwrite the modules.conf or modprobe.conf file we edited earlier. Red Hat is one such distro. If you are working on one of these, it’s important to get the process that manages the .conf file to recognize the new drivers – otherwise, every time your initrd is regenerated you’ll find a nice panic message after the reboot.

After the machine boots, we need to move on to installing VMware tools.

Install VMware Tools

What about the network adapter? I’m glad you asked…

Under VMware, the tools package provides a number of drivers and services to make virtual machines better guests. One of the drivers provided by the tools is an enhanced network adapter. Therefore, it’s not really worthwhile to ‘solve’ the network adapter problem separately – installing the tools will resolve it for us.

That said, there is another potential problem to be aware of – some distros lock their network config to the MAC address of the adapter. This process assigns a new MAC to the machine, so the virtual machine’s MAC will not be the same as the physical machine’s. You may need to update the network config of the converted machine to address this. In the case of RHEL and CentOS, edit the files under /etc/sysconfig/network-scripts/ – ifcfg-eth0 (and ifcfg-eth1, ifcfg-eth2, etc. if you have more than one NIC). You can either comment out the line reading HWADDR= or update it with the new MAC address.
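A sketch of that edit against a throwaway copy of the interface config (the values shown, including the MAC address, are illustrative):

```shell
# Sample RHEL/CentOS interface config (illustrative values).
cat > ifcfg-eth0.sample <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
HWADDR=00:0C:29:AA:BB:CC
ONBOOT=yes
EOF

# Comment out the HWADDR line so the config no longer pins the old
# physical MAC address (alternatively, set it to the VM's new MAC).
sed -i 's/^HWADDR=/#HWADDR=/' ifcfg-eth0.sample
grep HWADDR ifcfg-eth0.sample
# → #HWADDR=00:0C:29:AA:BB:CC
```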

At this point we will install VMware Tools the ‘standard’ way. Right-click the VM, select “Install VMware Tools”, then mount the virtual CD-ROM device in the VM. Install the tools per the usual method for the distribution you’re running. The RPM-based install is shown.
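A sketch of the RPM-based install from the shell (the mount point and package filename pattern are assumptions; they vary by distribution and tools version):

```
# Mount the VMware Tools virtual CD-ROM, then install the RPM from it.
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
rpm -Uvh /mnt/cdrom/VMwareTools-*.rpm
```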


Once the tools package is installed you will need to configure it; do so by running ‘vmware-config-tools.pl’ and following the prompts. If you don’t have the C compiler and kernel headers installed on your machine, it may be necessary to install them now to complete the tools configuration. The compiler and header files are used to build kernel modules for (among other things) the enhanced network driver.

The configuration of a basic CentOS tools install is shown.


Answer the questions asked and complete the install process.


Notice the reference to needing to restart networking. You can follow the instructions displayed or simply reboot the virtual machine at this point.

Conversion Complete

If all has gone well, your workload conversion is done: your machine is running in a virtual machine with its configuration and data intact.

Happy Virtualizing!

How good is your backup solution?

Most organizations believe that their data is well protected because they have a backup system in place, or because they use a RAID array to protect their data against disk failure. The truth is very few entities have complete protection.
A complete data protection solution requires protection in several areas. Hardware resiliency protects against component failure. Point in time protection ensures that data can be recovered from some point in the past, whether seconds, minutes, hours or days. Geographic protection prevents loss of data in case of some sort of site wide failure. Many organizations also require some sort of long term retention of critical data for compliance purposes or in case of legal action.

The following list describes some of the strategies used for each type of data protection:

Hardware Resilience – Protects against hardware component failure:

    RAID Controller cards
    Software RAID
    External Storage Arrays

Point-in-time Protection – Protects against data loss or corruption due to hardware or software failure, user error or deliberate actions:

    Enterprise Backup Utilities
    Software or Hardware Based Snapshots
    Continuous Data Protection Tools

Geographic Protection – Protects against site wide failures:

    Off-site Tape Storage
    Disk based backup replication
    Replication Software
    Storage Array Based Replication

Long-Term Protection – Used when business or legal policies require retention of data beyond standard backup retention:

    File System and E-Mail Archiving

Data Domain – New Features

Some Data Domain New Features in Software version 4.5.3:

    10 Gb Ethernet
    Dual path connectivity to expansion shelves (in available systems)
    Retention Lock (for compliance purposes)
    Automatic detection of tape markers (useful when multiple backup software tools are used)
    Enhanced CIFS ACLs

Rare HP c7000 Blade Chassis Power Supply Issue

Customer alert for anyone who has an HP c7000 blade enclosure with the 2250W Hot Plug Power Supplies manufactured prior to March 20th, 2008.

It is recommended to get these replaced under warranty support to avoid any possible issues (even though they are rare).

Here’s the link to the HP Support document:

HP PCC c7000 Power Supply Replacement Program:

Install, Enable and Use Storage vMotion GUI in vCenter

Storage vMotion (also called svmotion) with vCenter 2.5 and ESX 3.5 is currently a feature available only via the command line. Svmotion allows you to move a virtual machine’s virtual disk from one datastore to another while the virtual machine is live (powered on). A few programmers from the VMware Community forums have created a SourceForge project that provides a vCenter plugin for svmotion, removing the need for a command line interface. This lesson will show you how to enable that plugin and perform a Storage vMotion through vCenter.

Initial Install – Obtain svmotion Plugin


Browse to http://sourceforge.net/project/showfiles.php?group_id=228535 to download the Storage vMotion GUI plugin for vCenter.

Install Storage vMotion Plugin


When you’ve downloaded the file, double-click it to begin the installation. It is a very simple installation where you can accept the defaults for any prompts.

In vCenter, Manage Plugins


Start the VI Client (connecting to vCenter) and browse to the Plugins area by selecting “Plugins” from the toolbar and then select “Manage Plugins”.

Enable svmotion Plugin in VI Client


Typically all you will need to do is select the “Installed” tab, check the box under the svmotion plugin entry to enable the plugin, and then select the “OK” button. If the svmotion plugin is not an option under the “Installed” tab, I’ve seen cases where you’ll need to go to the “Available” tab and “install” it from there, then return to the “Installed” tab and follow these instructions.

Migrating Storage through vCenter VI Client


If the plugin was enabled successfully, then you will have an option to “Migrate Storage” when you right click on a Virtual Machine. Select that option if you are ready to perform a Storage vMotion migration.

Drag and Drop Virtual Disk to New Location


A window will appear that looks like the screenshot. The virtual machine’s virtual disk will be shown under the datastore it is currently located on. To move it to a new datastore, drag and drop the virtual disk onto a different datastore in the list. In our example above, we’ll drag and drop the virtual disk from the datastore called “SAN” to the new datastore called “dnvmes-vol2-nonrepl”.

Verify and Submit Operation


If this location is where you want to move the virtual disks to, select the “Apply” button, which will begin the Storage vMotion operation. You can monitor the task under the Tasks area in the VI Client. Once it is completed, the virtual disk will be located on the new datastore thanks to Storage vMotion!