4.2 Configuring the xen Provisioning Adapter and Xen VMs

This section includes the following information:

  • Section 4.2.1, “Configuring Policies for Xen”

  • Section 4.2.2, “Known Configuration Limitations for Xen VMs”

4.2.1 Configuring Policies for Xen

The following table provides detailed information about the policies associated with the xen provisioning adapter that are used to manage the Xen hosts and VMs in the grid. The policy settings are applied to all the VMs in the grid.

Table 4-4 Virtual Machine Management Policies for Xen 3.0 Server

Policy Name: xen

Explanation: Contains the policy settings for the xen provisioning adapter.

Additional Details: By default, optimal values are configured for the job and joblets in this policy.

Policy Name: xenDiscovery

Explanation: Contains the settings required to discover the Xen host machines, including the default installation path of the Xen server.

Additional Details: If the Xen server is not installed in the default path, edit this policy to provide the correct path.

Policy Name: xenPA

Explanation: Contains the constraints used to verify that the Xen host is registered with the Orchestrate Server and is up and running.

Additional Details: Do not edit this policy.
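For illustration only, an installation-path fact inside the xenDiscovery policy might look similar to the following. The fact name and value shown here are hypothetical; check your actual xenDiscovery policy file for the real fact names before editing:

```xml
<policy>
  <!-- Hypothetical fact name and value shown for illustration.
       Edit the corresponding fact in your actual xenDiscovery
       policy to match where the Xen server is installed. -->
  <fact name="xen.server.installpath"
        type="String"
        value="/usr/sbin" />
</policy>
```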

4.2.2 Known Configuration Limitations for Xen VMs

The following list describes the known limitations that you might encounter when configuring Xen VMs in PlateSpin Orchestrate:

  • The checkpoint and restore actions of the xen provisioning adapter only suspend and resume the specified VM; Xen does not support taking true snapshots as some other hypervisors do.

  • PlateSpin Orchestrate configures the “netfront” paravirtualized driver in paravirtualized and fully virtualized guest VMs.

    Previous releases of Orchestrate allowed only 4 disks on a fully virtualized VM; PlateSpin Orchestrate 2.6 allows for 16 disks on both paravirtualized and fully virtualized VMs. However, if the paravirtualized drivers have not been installed within a fully virtualized VM, you can configure only 4 disks. In this scenario, if the number of disks exceeds 4, Xen fails to start the VM.

  • If you find that the VM is not obtaining proper connectivity or if you encounter some other networking error, check the vnic.type custom fact for the vNIC that is associated with the VM. Make sure the value for this fact matches the type of network adapter required by the VM.

  • If you use virt-manager to create a VM in the default repository and then copy its configuration file from /etc/xen/vm/<VM_name> to a new location on a different repository, both configuration files point to the same physical disk image. If you do not run the xm delete command on the original VM, the Discovery action finds the same VM in both the original and the new repository, and the provisioning adapter cannot determine which repository holds the VM image you want to use. To avoid this issue, use the Orchestrate Development Client to move any VMs that you create manually; the client properly deletes and re-creates the VM configuration.

  • When the xendConfig job is used during the discovery of a very large number of Xen VM hosts (that is, Xen resources where you have installed the Orchestrate Agent), the xendConfig job can take an unnecessarily long time to complete. This happens because, by default, an instance of the xendConfig job is started for every VM host discovered, which can result in a very large number of queued xendConfig jobs.

    By default, the xendConfig job is constrained to allow only one instance of the job to run at a time, causing all other xendConfig job instances to queue.

    The following constraint from the xendConfig.policy file causes all xendConfig job instances to run one at a time, rather than concurrently:

    <constraint type="start">
      <lt fact="job.instances.active"
          value="1"
          reason="Only 1 instance of this job can run at a time" />
    </constraint>
    

    If you need to work around this issue to accommodate a large Xen environment, you can temporarily remove or comment out this constraint from the xendConfig policy, but you must ensure that no other Orchestrate administrator runs this job at the same time. Doing so might result in corruption of the /etc/xen/xend-config.sxp file because two xendConfig job instances could attempt to concurrently modify this config file.