Moving a VMware ESXi 4.0 Guest From One Host to Another

Moving an ESXi 4.0 guest is not all that hard – but you must be aware of several things along the way. Taken one step at a time, it won’t be difficult. In this discussion, we assume that you are moving from one ESXi 4.0 host to another, both with the same architecture. (Anything other than that gets much more complicated.)

First, make sure there are no snapshots. Snapshots are not compatible with this process and must be eliminated altogether.
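
If you prefer to check from the ESXi command line rather than the GUI snapshot manager, something like the following should work (a sketch; the exact vim-cmd subcommands can vary slightly by build, and <vmid> is the id reported by the first command):

vim-cmd vmsvc/getallvms                  # list registered guests and their vmids
vim-cmd vmsvc/snapshot.get <vmid>        # check whether the guest has any snapshots
vim-cmd vmsvc/snapshot.removeall <vmid>  # remove (consolidate) all snapshots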

Then, shut down the guest system. We don’t want the guest changing as it is copied across.
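
If the guest has VMware Tools running, it can also be shut down cleanly from the command line; as a hedged sketch, with <vmid> taken from vim-cmd vmsvc/getallvms:

vim-cmd vmsvc/power.shutdown <vmid>   # ask the guest OS for a clean shutdown
vim-cmd vmsvc/power.getstate <vmid>   # confirm the guest is powered off before copying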

The next step is to copy the guest files from the original host to the destination host. This is the longest step, since you probably have gigabytes of data to transfer, and it is done from the ESXi command line.

I would normally use rsync, but it doesn’t exist on an ESXi 4.0 system, so use scp to copy the files. The files for the guest should be located in /vmfs/volumes/datastoreX/guest/, where datastoreX is the datastore containing the guest and guest is the name of the guest. Note that if you renamed the guest in one of the GUIs such as vSphere Client, this directory will still reflect the original name.

Make a directory in the remote host (using the ESXi command line interface) in one of the data stores, and then use commands like these from the original host:

cd /vmfs/volumes/datastoreX/guest
scp * remotehost:/vmfs/volumes/datastoreY/guest/

This will copy the files to the remote host.

However, copying is not enough. Log into the remote host and go to the directory you copied the files to. Check the file ending in .vmx for any disk references that must be changed: convert paths that point at the original host’s datastores to paths on the local datastore. You will probably need the long hexadecimal form of the datastore path, so list it before you start editing; if you cd into the directory containing the guest, the long path appears in the shell prompt.
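
For example, on the destination host something like this (a sketch, assuming the guest was copied to datastoreY) lists the hexadecimal datastore paths and the disk references that may need editing:

ls -l /vmfs/volumes/                                   # datastore names are symlinks to the long hexadecimal identifiers
grep -i vmdk /vmfs/volumes/datastoreY/guest/guest.vmx  # show the disk references in the guest configuration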

Next, register the guest so the new host knows about it. Use this command at the ESXi command line:

vim-cmd solo/registervm /vmfs/volumes/datastoreY/guest/guest.vmx

Now start the guest from vSphere Client. The client will ask a question about where the guest came from: on the guest’s Summary tab, select I copied it (which should be the default) and click OK.

The guest will start up – and discover that the MAC address of its network interface has changed. For Linux, this means a new Ethernet interface, and the configuration of the old interface is ignored, so there will be no network connectivity. Open the console and change the old configuration from eth0 to eth1 (or whatever is appropriate; find out with ifconfig -a). Where this change is made varies by Linux distribution; for Ubuntu, the configuration is in /etc/network/interfaces.
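
As a rough illustration for Ubuntu (the interface names are examples only), the edit to /etc/network/interfaces looks something like this:

# old entry, which no longer matches any interface:
#   auto eth0
#   iface eth0 inet dhcp
# new entry, using the name reported by ifconfig -a:
auto eth1
iface eth1 inet dhcp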

While you don’t have to reboot, it doesn’t hurt to do so after this change – it verifies that the system comes back up cleanly.

Now don’t forget to remove the original. Using vSphere Client on the original host, right-click on the guest and select Delete from Disk. This removes the guest entirely from the system and deletes all of its files. If you want to retain the files, instead select Remove from Inventory, which essentially unregisters the guest so that the system no longer manages it – but the disk files remain.
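
The command-line equivalent of Remove from Inventory is vim-cmd again (a sketch; <vmid> comes from vim-cmd vmsvc/getallvms on the original host):

vim-cmd vmsvc/unregister <vmid>   # unregister the guest but leave its files on disk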

Making the Case for Partitioning

What is it about partitioning? The old-school rule was to have separate partitions for /, /usr, /home, /var, and /tmp. In fact, default server installations (including Solaris) still use this partitioning setup.

Has partitioning outlived its usefulness?

This question has come up before. There are negative and positive aspects to partitioning, and the case for partitioning might not be as strong as it once was.

Partitioning means that you may end up with no space in one partition and plenty in another. This, in fact, is the most common argument against partitioning. However, using LVM or ZFS, where volumes can be grown dynamically, makes this a moot point: you can expand a volume and its filesystem any time you need to.
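
As a quick illustration with LVM (hypothetical volume group and logical volume names, assuming an ext3/ext4 filesystem), growing a filesystem is a two-command job:

lvextend -L +10G /dev/vg00/lvol_var   # add 10 GB to the logical volume
resize2fs /dev/vg00/lvol_var          # grow the ext3/ext4 filesystem to match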

However, this still means that the system will require maintenance – but that is what administrators are for, right? If a filesystem fills – or is about to fill – it is up to a system administrator to find the disk space and allocate it to that filesystem.

Another argument against partitioning says “Disk is cheap.” Well, if that is true, then why do companies still balk at getting terabytes of disk into their SANs? The phrase is trite but doesn’t hold up: in truth, 144 GB disks are not bought in bulk. Companies still have to watch the budget, and getting more disk space is not necessarily going to be a high priority until disk actually runs out.

So, what are the benefits to partitioning disks? There are many.

Each partition can be treated separately – so the /usr filesystem can be mounted read-only, and /home can be mounted with the noexec and nosuid options, which makes for a more secure and more robust system.
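
A hypothetical /etc/fstab fragment illustrates the idea (the device names and filesystem type are placeholders):

/dev/vg00/lvol_usr    /usr    ext3    ro                       0  2
/dev/vg00/lvol_home   /home   ext3    defaults,noexec,nosuid   0  2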

Also, if there are disk errors, a single partition can be affected rather than the entire system. Thus, on a reboot, the system still comes up instead of being blocked because the root filesystem is trashed. In the same vein, if a filesystem requires a check, going through a 144 GB filesystem check can take a very long time – whereas a 10 GB partition takes far less time, and the system comes back up that much faster.

Backups – and restores – are also simplified by having multiple partitions. For example, when backing up an HP-UX system using make_tape_recovery, you specify which partitions to back up to tape; those partitions are then restored when the tape is booted. If you used a single partition for everything (data, home, and so on), you probably could not make this sort of backup at all.
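
A typical invocation looks something like this (a sketch; the tape device and the volume groups to include will vary by system):

make_tape_recovery -A -v -a /dev/rmt/0mn -x inc_entire=vg00   # archive the root volume group to tape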

One of the nicest reasons to partition is the ability to separate user data from system data. This allows the system to be reinstalled while user data (and application data) is left untouched, which saves time and effort. I recently installed Ubuntu Server in place of Red Hat Enterprise Linux, and since the system was a single partition, there was no way to install Ubuntu Server without wiping out 200 GB of application data and restoring it afterwards – which took around nine hours each way on a gigabit network link with nothing else sharing the network. By contrast, when I converted my OpenSUSE laptop to Xubuntu, I was able to keep all of my user settings because /home was on a separate partition. Keeping that server on a single partition cost the company on the order of a full day’s worth of time and effort – how much money would the company have saved by having a separate partition for /var/lib/mysql?

Performance is another reason for partitioning – but only when the partitions sit on separate disks. If you have multiple disks, you can give each its own partitions, so a disk that is heavily used for one purpose can be dedicated to that purpose – your database accesses won’t slow down because of system log writes, for example. This problem can be reduced, or made moot, by moving the data to a striped volume, and possibly by adding disk cache as well. Yet as long as there are disk accesses, do you want them competing with your database?

Having said all of this, how does using a virtual machine change these rules? Do you need partitioning in a virtual machine?

A virtual machine makes the arguments relating to physical disks largely moot – performance on a virtual disk does not correlate strongly with the physical hardware unless the virtual machine host is set up with segregated physical disks. Even then, the disks may actually be separate LUNs created on the same RAID array.

However, the ability to secure a filesystem (such as /usr), to shorten filesystem check times, to prevent excessive /home usage, and so on suggests that the case for partitions is still valid.

Installing an ISO for Use in VMware ESXi 4

Having a CDROM available for use in a VMware ESXi server can be very useful. But what if there is no way to get the CDROM into the host? Or what if you just want to avoid the hassle of going back and forth inserting and pulling CDROMs?

There is a way to get an ISO onto the VMware ESXi server and make it available to a virtual machine. First, you put the ISO on the server itself; second, you make the ISO available to the virtual machine. That’s it.

First step: copy the ISO to the ESXi server. You could do this with scp (if you’ve activated ssh on the server) or by creating the ISO on the fly from the original disk. You could also run scp on the ESXi server itself to pull an ISO onto the server without activating ssh.

The best destination is probably a directory in the datastore you set up during installation of the ESXi server. If the datastore is datastore1, then the location for ISOs could be /vmfs/volumes/datastore1/ISO Images/. You’ll have to create the ISO Images directory yourself. Note that “datastore1” (or whatever your datastore name is) is really just a friendly name that translates to a long hexadecimal string, so don’t be alarmed when you see the hexadecimal name instead of the datastore name you expect.
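
For example (the ISO file name and host names are placeholders), on the ESXi server:

mkdir "/vmfs/volumes/datastore1/ISO Images"   # create the directory for ISO images

Then, from the machine holding the ISO (with ssh enabled on the ESXi server); note the escaped space so the remote shell sees a single path:

scp ubuntu-10.04-server-amd64.iso "root@esxiserver:/vmfs/volumes/datastore1/ISO\ Images/"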

Lastly, make the ISO available to the virtual machine. Using vSphere Client, you can get to the appropriate place (“Edit Settings”) in a couple of ways: right-click on the virtual machine in the list of machines (either in the tree on the left or in the list on the Virtual Machines tab) and select “Edit Settings”, or click on the virtual machine on the left, then on the Summary tab, and click “Edit Settings” there.

In Edit Settings, click on the CD/DVD Drive 1 entry, then select Datastore ISO File (on the right) and click Browse. In the drop-down menu at the top, look for “Datastores” (at the bottom). Your datastores will be shown when you click on Datastores, and the directory you created will be in the datastore you used earlier. Select the ISO you want, click Open, and then OK.

Once the CDROM is chosen, you can then install from it, or use it in any way you like from the virtual machine. Don’t forget: this is just like putting a new CDROM into the machine – so whatever your OS needs to have happen, you have to do it after “putting the CDROM” into the drive.

Installing VMware vSphere CLI 4.0 in Ubuntu 10.04 LTS

Installing the VMware vSphere Command Line Interface (CLI) has the potential for problems. In my case, it generated an error – a three-year-old error. Perl returns the error:

undefined symbol: Perl_Tstack_sp_ptr

Not only has this error been around for three years, it also has shown up in numerous other instances. Ed Haletky wrestled with the error in VMware vSphere CLI back in June of 2008. The error surfaced in Arch Linux in 2008, both in running their package manager and in running cpan itself. This error also came up (again in 2008) in attempting to build and run Zimbra. (The response from Zimbra support was cold and unwavering: we don’t support that environment and won’t discuss it. How unfortunate.) The error also affected the installation of Bugzilla according to this email thread from 2009.

On the Perl Porters mailing list, there is an in-depth response as to what causes this error. From reading these messages, it appears that there are two related causes:

  • Using modules compiled for Perl 5.8 with Perl 5.10
  • Using modules compiled against a threaded Perl with an unthreaded Perl

One recommended solution is to recompile the modules using the cpan utility:

cpan -r

That may or may not be enough; it depends on whether there were other errors. In attempting to run the vSphere CLI, I got this error:

IO::Compress::Gzip version 2.02 required--this is only version 2.005

To fix this, I ran cpan this way:

cpan IO::Compress::Gzip

In my case, that loaded IO::Compress::Gzip version 2.033.

I also loaded the libxml2-dev package; I don’t know if that was necessary or not:

apt-get install libxml2-dev

Whenever I use cpan, I wonder how it affects my packaged installations and whether it installs modules for all users or just me (and how to control that) – but I’ve never had any problems, and installs done as root seem to go into /usr/local, which makes sense.

Having done all this, I can now use the vSphere CLI to activate SNMP on the ESXi 4 servers. For the record, the SNMP agent is an integral part of ESXi 4 and supports SNMP polling as well as traps – previously, only SNMP traps were supported. Certainly a nice improvement.
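
As a rough example of activating SNMP with the vSphere CLI (the server name, community string, and trap target are placeholders, and the command may be installed as vicfg-snmp or vicfg-snmp.pl depending on platform):

vicfg-snmp --server esxi01 --username root -c public -E          # set a community string and enable the agent
vicfg-snmp --server esxi01 --username root -t nms01@162/public   # add a trap target
vicfg-snmp --server esxi01 --username root --show                # display the resulting configuration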

Administrator Experiences with VMware and SUSE Linux

Recently, VMware announced a partnership with Novell under which it will support Novell SUSE Linux Enterprise Server (SLES) directly on VMware vSphere. Neil over at VirtuallyNil wrote about his experiences with SLES and VMware ESXi. Unfortunately, he had some problems with VMware’s additions to SLES.

To enhance the experience with virtual machines, virtual environment vendors add tools to the guest environments – and VMware is no different. For SLES there are tools available that permit advanced operations directly from the virtual machine manager. With ESXi, these are available for SLES 10 and SLES 11 – but not for SLES 11 SP1.

This means that you must either build your own tools for SLES 11 SP1 or forgo upgrading SLES 11 to the most recent patch level. This is unfortunate.

I have experienced this before with an application that required a particular version of Red Hat Linux (7.1, if I remember rightly) even though that version was no longer supported by Red Hat itself.

Neil also points to two other sites with first-hand looks at the new VMware-supported SLES. One first look comes from vcritical.com (a blog by Eric Gray, a VMware employee); the other comes from Jase McCarty at Jase’s Place.

Canonical Kills Ubuntu Maverick Meerkat (10.10) for Itanium (and SPARC)

It wasn’t long ago that Red Hat and Microsoft announced that they would no longer support Itanium (with Red Hat Enterprise Linux and Windows, respectively). Now Canonical has announced that Ubuntu 10.04 LTS (Long Term Support) will be the last supported Ubuntu release not only on Itanium, but on SPARC as well.

Itanium has thus lost three major operating systems (Red Hat Enterprise Linux, Windows, and Ubuntu Linux) over the past year. For HP Itanium owners, this means that Integrity Virtual Machines (IVMs) running Red Hat Enterprise Linux or Microsoft Windows Server will no longer have support from HP, since the operating system vendors have ceased support.

The only bright spot for HP’s IVM is OpenVMS 8.4, which is supported under an IVM for the first time. However, response to OpenVMS 8.4 has been mixed.

Martin Hingley has an interesting article arguing that the dropping of RHEL and Windows Server from Itanium will not affect HP; I disagree. For HP’s virtual infrastructure – based on the IVM product – the two biggest guest environments besides HP-UX are no longer available. An interesting survey would be to find out how many IVMs are in use and which operating systems they are running now and will run in the future.

The loss of Red Hat and Microsoft – and now Canonical’s Ubuntu – leaves that many fewer options for IVMs, and thus fewer reasons to use an HP IVM. OpenVMS could pick up the slack, as many shops may be looking for a way to take OpenVMS off the bare metal and free the hardware for other uses.

If HP IVMs are used less and less, this could affect the Superdome line as well, as running Linux has always been a selling point for this product. As mentioned before, this may be offset by OpenVMS installations.

This also means that Novell’s SUSE Linux Enterprise Server becomes the only supported mainstream Linux environment on Itanium – on the Itanium 9100 processor at least.

From the other side, HP’s support for Linux seems to be waning: this statement can be found in the fine print on their Linux on Integrity page:

HP is not planning to certify or support any Linux distribution on the new Integrity servers based on the Intel Itanium processor 9300 series.

Even if HP doesn’t feel the effect of these defections, its IVM product family (and Superdome) probably will.

Red Hat Drops Xen for KVM in Red Hat Enterprise 6

With the introduction of Red Hat Enterprise 6 Beta, Red Hat has changed direction in their choice of virtualization: they have dropped Xen entirely in favor of KVM.

This is not entirely a surprise, since Red Hat bought Qumranet, the company behind the original development of KVM.

What does this mean for us as administrators? It means that any Xen virtual machines will have to be converted to KVM if they are to be supported by Red Hat; alternatively, support for Xen will have to come from Citrix. Either way there is a cost: internal costs (labor, downtime, and so on) to migrate from Xen to KVM, or external costs in adding Citrix support for Xen on top of Red Hat Enterprise Linux.

With this in mind, even if we do not have Xen virtual machines, we need to learn a new virtual environment before we are called on to support it in-house. When the company calls on you to support a KVM virtual machine, you will be ready.