4.4 The Xen Provisioning Adapter

This section includes the following information:

  • Section 4.4.1, Configuration Policies Used by the Xen Provisioning Adapter

  • Section 4.4.2, Provisioning Actions Supported by the Xen Provisioning Adapter

  • Section 4.4.3, Additional Xen Provisioning Adapter Information

  • Section 4.4.4, Known Configuration Limitations for Xen VMs

4.4.1 Configuration Policies Used by the Xen Provisioning Adapter

The following table provides information about the policies associated with the xen provisioning adapter job, which manages the Xen hosts and VMs in the grid. The policy settings are applied to all the VMs in the grid.

Table 4-4 Virtual Machine Management Policies for Xen

Policy Name: xen
Explanation: Contains the policy settings for the xen provisioning adapter.
Additional Details: By default, the optimal values are configured for the job and joblets in the policy.

Policy Name: xenDiscovery
Explanation: Contains the settings required to discover the Xen host machines, including the default installation path of the Xen server.
Additional Details: If the Xen server is not installed in the default path, edit this policy to provide the correct information.

Policy Name: xenPA
Explanation: Contains the constraints used to check whether the Xen host is registered with the Orchestration Server and is up and running.
Additional Details: Do not edit this policy.

4.4.2 Provisioning Actions Supported by the Xen Provisioning Adapter

The following table lists the VM provisioning actions supported by the Orchestration Console for the xen provisioning adapter job.

Table 4-5 Provisioning Actions Supported by the Xen Provisioning Adapter

Cloud Manager Orchestration   SLES 9   SLES 10   RHEL 4   RHEL 5   Other Linux   Windows
Managed VM Action             Guest    Guest     Guest    Guest    Guest         Guest
----------------------------------------------------------------------------------------
Provision                     X        X         X        X        X             X
Clone                         X        X         X        X        X             X
Shutdown                      X        X         X        X        X             X
Destroy                       X        X         X        X        X             X
Suspend                       X        X         X        X        X             X
Pause                         X        X         X        X        X             X
Resume                        X        X         X        X        X             X
Create Template               X        X         X        X        X             X
Move Disk Image 1             X        X         X        X        X             X
Hot Migrate 2                 X        X         X        X        X             X
Checkpoint                    X        X         X        X        X             X
Restore                       X        X         X        X        X             X
Install Orchestration Agent   X        X         X        X        X
Make Standalone               X        X         X        X        X             X
Check Status                  X        X         X        X        X             X
Personalize                   X        X         X        X
Save Config                   X        X         X        X        X             X
Cancel Action                 X        X         X        X        X             X
Check Host Assignment         X        X         X        X        X             X
Build                         ?        ?         ?        ?        ?             ?
Launch Remote Desktop         X        X         X        ?        X             X

1 A “move” is the relocation of a VM's disk images between two storage devices while the VM is not running (this includes VMs that are suspended with a checkpoint file). This function does not require shared storage; the move is between separate repositories.

2 A “hot migrate” (also called a “live migrate”) is the migration of a running VM to another host, where it resumes running with minimal downtime (measured in milliseconds). This action requires shared storage.
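
For orientation, a hot migrate corresponds at the Xen toolstack level to the xm migrate command. The following is a minimal sketch of the equivalent manual operation; the VM name, host name, and port are placeholders, and the provisioning adapter performs additional Orchestration Server bookkeeping that this sketch omits.

  # The destination host must accept relocation requests; this is controlled in
  # /etc/xen/xend-config.sxp, for example:
  #   (xend-relocation-server yes)
  #   (xend-relocation-port 8002)
  # Shared storage for the VM's disk images is required.

  # Live-migrate the running VM "myvm" to the host "xenhost02" (placeholder names).
  xm migrate --live myvm xenhost02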

4.4.3 Additional Xen Provisioning Adapter Information

Other behaviors of the xen provisioning adapter that you should be aware of include the following:

  • The VM Builder running on a SLES 11 VM host now supports three additional VM OS types:

    • openSUSE 11

    • SLED 11

    • SLES 11

    These OS types are not available for building with the VM Builder on a SLES 10 SP2 VM host; they can be built and provisioned only on a SLES 11 VM host.

  • VMs that were supported on SLES 10 SP2 are also supported on SLES 11; that is, if a VM was built on SLES 10 SP2, the provisioning adapter that supports the VM on SLES 10 SP2 also supports it when the VM runs on a SLES 11 VM host.

  • Although VM migration from a SLES 11 VM host to a SLES 10 SP2 VM host is not supported, VM migration is supported in the following scenarios:

    • VM on a SLES 10 SP2 VM host migrating to a SLES 11 VM host

    • VM on a SLES 10 SP2 VM host migrating to a SLES 10 SP2 VM host

    • VM on a SLES 11 VM host migrating to a SLES 11 VM host

  • Virtual machines built on SLES 10 SP2 can be provisioned on either SLES 10 SP2 or SLES 11 VM hosts.

  • Virtual machines (regardless of OS type) built on SLES 11 cannot be provisioned on a SLES 10 SP2 VM host.

  • RHEL 5 VMs managed by the xen provisioning adapter must have the following characteristics:

    • They must not use LVM

      or

    • If LVM is used on the VM, its volume groups (VGs) must have unique (that is, non-default) names.

    To illustrate what can happen when you use a default LVM configuration, consider the following example:

    You create two RHEL 5 VMs, accepting the default disk configuration (which uses LVM), so the two VMs have identical VG names. If these VMs are located on the same Xen host, a naming conflict occurs when they are discovered concurrently by the Orchestration Server, and one of the VMs is not discovered properly.

    HINT: As a general rule, we do not recommend using LVM for VM disks. For a sketch of how to check or change a VM's volume group name, see the example that follows this list.
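
The following is a minimal sketch of how you might check a RHEL 5 VM's volume group name and, if necessary, give it a unique, non-default name. The target name shown is a placeholder; setting a unique name at installation time is preferable, because renaming the root volume group afterward also requires updating /etc/fstab, the boot loader configuration, and the initrd.

  # Run inside the RHEL 5 VM as root. The installer's default volume group name
  # is VolGroup00, so two VMs built with the defaults collide when they are
  # discovered on the same Xen host.
  vgs --noheadings -o vg_name

  # Rename the volume group to a unique, non-default name (placeholder shown).
  # If this is the root volume group, update /etc/fstab, the GRUB configuration,
  # and the initrd before rebooting.
  vgrename VolGroup00 vg_rhel5vm01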

For information about the configuration policies that are applied to the VMs managed by this provisioning adapter, see Section 4.4.1, Configuration Policies Used by the Xen Provisioning Adapter.

4.4.4 Known Configuration Limitations for Xen VMs

The following list describes the known limitations you might encounter when configuring Xen VMs in the Orchestration Server.

  • The Checkpoint and Restore actions of the xen provisioning adapter only suspend and resume the specified VM; Xen does not support normal snapshots in the way that other hypervisors do. At the Xen toolstack level, these actions correspond roughly to saving and restoring the VM's state through a checkpoint file (see the first example at the end of this section).

  • The Cloud Manager Orchestration Server configures the “netfront” paravirtualized driver in paravirtualized and fully virtualized guest VMs.

    Previous releases of the Orchestration Server allowed for 16 disks on both paravirtualized and fully virtualized VMs. However, if the paravirtualized drivers are not installed in a fully virtualized VM, you can configure only 4 disks; if the number of disks exceeds 4 in this scenario, Xen fails to start the VM.

  • If you use virt-manager to create a VM in the default repository and then copy its config file from /etc/xen/vm/<VM_name> to a new location in a different repository, both config files point to the same physical disk image. If you do not remember to use the xm delete command on the original VM, the Discovery action finds the same VM in both the original repository and the new repository, and the provisioning adapter cannot reliably determine which repository holds the VM image you want to use. To avoid this issue, we recommend that you use the Orchestration Console to move any VMs you create manually; the console properly deletes and re-creates the VM's configuration. For an illustration of the manual cleanup step, see the second example at the end of this section.

  • When the xendConfig job is used during the discovery of a very large number of Xen VM hosts (that is, Xen resources where you have installed the Orchestration Agent), the xendConfig job can take an unnecessarily long time to complete. This happens because, by default, an instance of the xendConfig job is started for every VM host discovered, which can result in a very large number of queued xendConfig jobs.

    By default, the xendConfig job is constrained to allow only one instance of the job to run at a time, causing all the other xendConfig job instances to queue.

    The following constraint from the xendConfig.policy file causes all the xendConfig job instances to run one at a time, rather than concurrently.

     <constraint type="start">
        <lt fact="job.instances.active"
            value="1"
            reason="Only 1 instance of this job can run at a time" />
     </constraint>

    If you need to work around this issue to accommodate a large Xen environment, you can temporarily remove or comment out this constraint from the xendConfig policy, but you must ensure that no other Orchestration Server administrator runs this job at the same time. Otherwise, the /etc/xen/xend-config.sxp file might be corrupted, because two xendConfig job instances could attempt to modify this config file concurrently.
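
As noted in the first item in this list, the Checkpoint and Restore actions suspend and resume a VM by using a checkpoint file. The following minimal sketch shows the equivalent manual operations at the Xen toolstack level; the VM name and the checkpoint file path are placeholders, and the provisioning adapter performs additional Orchestration Server bookkeeping that the sketch omits.

  # Suspend the VM and write its state to a checkpoint file (names are placeholders).
  xm save myvm /var/lib/xen/save/myvm.chkpt

  # Resume the VM from the checkpoint file.
  xm restore /var/lib/xen/save/myvm.chkpt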
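
Similarly, for the manually copied virt-manager configuration described earlier in this list, the cleanup step that item refers to looks roughly like the following; the VM name is a placeholder, and using the Orchestration Console to move the VM remains the recommended approach.

  # List the VM definitions that xend currently manages.
  xm list

  # Remove the original VM definition so that the Discovery action does not
  # find the same VM in two repositories (VM name is a placeholder).
  xm delete myvm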