The following table lists each supported virtual machine technology (hypervisor), the host operating systems for that technology, the guest operating systems (also known as virtual machines (VMs) or “workloads”) it supports, and the provisioning adapter job available in the Cloud Manager Orchestration Server that is used to provision and manage the life cycle of the VMs.
More information about RHEL 6 VM support in Cloud Manager is also provided in this section.
For more detail about the life cycle management capabilities of Cloud Manager Orchestration, see Configuring Orchestration Provisioning Adapters in the NetIQ Cloud Manager 2.1.5 Orchestration Installation Guide.
Table 4-1 VM Technologies with Supported Host Operating Systems, Guest Operating System, and Provisioning Adapter
Hypervisor or Virtualization Technology | Host Operating System (that is, “VM Hosts”) | Guest Operating System (that is, “VMs” or “Workloads”) | Orchestration Provisioning Adapter
---|---|---|---
VMware vSphere | Subject to the VMware support matrix | | vsphere
Citrix XenServer 5.6, latest SP | Citrix XenServer | | xenserv
Citrix XenServer 6 Free Edition | Citrix XenServer | | xenserv
Microsoft Hyper-V⁴ | Windows Server 2008 R2 with Hyper-V enabled | | hyperv
Xen | | | xen
Kernel-based Virtual Machine for Linux (KVM) | SLES 11 SP1 or SP2 running libvirt 0.7.6 or greater | Subject to the published KVM support matrix | kvm
¹ For more information about RHEL 6 VM support, see RHEL 6 VM Support, below.
² Windows VMs running on the Xen hypervisor require a VM host CPU with Intel VT or AMD-V technology available and enabled.
⁴ A complete listing of guest OS support for the Hyper-V hypervisor is available at the Microsoft TechNet Web site and at the Windows Server 2008 Hyper-V product page. This matrix shows only the guest OSs supported by Cloud Manager.
RHEL 6 VM Support

You need to be aware of the following limitations of Red Hat Enterprise Linux 6 VMs in the NetIQ Cloud Manager environment:
Although RHEL uses LVM partitioning by default, we recommend that you do not use LVM; you need to change the partitioning method manually when you install the VM.
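For example, if you install the RHEL 6 workload by using a kickstart file, a non-LVM partitioning section might look similar to the following. (The kickstart approach, mount points, and sizes shown here are illustrative assumptions, not requirements; adjust them for your environment.)

part /boot --fstype=ext4 --size=500
part swap --size=2048
part / --fstype=ext4 --grow --size=1

If you choose the ext4 file system for the VM, the SLES 11 host considerations described in the next limitation apply.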
SLES 11 hosts can mount the ext4 file system if you load the proper kernel module on the host. You can do this by entering the following command at the command line of the SLES 11 host:
modprobe --allow-unsupported ext4
To allow the ext4 module to be loaded at boot time:
1. Edit the /etc/modprobe.d/unsupported-modules file and set allow_unsupported_modules to 1.
2. Edit /etc/sysconfig/kernel and add ext4 to the MODULES_LOADED_ON_BOOT variable.
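For example, after these edits the relevant lines in the two files might look similar to the following. (These paths are standard on SLES 11; the settings shown are a sketch, and the surrounding file content varies by system.)

In /etc/modprobe.d/unsupported-modules:

allow_unsupported_modules 1

In /etc/sysconfig/kernel:

MODULES_LOADED_ON_BOOT="ext4"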
This procedure works only with the SLES 11 kernel, not the SLES 10 kernel.
Making these changes could make the system ineligible for support. The unsupported-modules text file states:
Every kernel module has a ‘supported’ flag. If this flag is not set, loading this module taints your kernel. You will not get much help with a kernel problem if your kernel is marked as tainted. In this case you firstly have to avoid loading of unsupported modules.
Discovered RHEL 6 VMs show appropriate fact values. For example, the value for the resource.os.type fact is rhel6. The value for resource.os.vendor.string is Red Hat Enterprise Linux Server release 6.0 (Santiago) and the value for resource.os.vendor.version is 6.
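Displayed as simple name = value pairs (the display format varies by tool; the values here are the ones given above), these facts look like this:

resource.os.type = rhel6
resource.os.vendor.string = Red Hat Enterprise Linux Server release 6.0 (Santiago)
resource.os.vendor.version = 6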
RHEL 6 uses the udev service, which testing has shown renames the network interfaces on a cloned VM and causes configuration errors. To turn off this behavior of the udev service so that network configuration can work with personalization:

1. In the file structure of the template VM, open the /etc/udev/rules.d/70-persistent-net.rules file and remove all of its lines.
2. In the file structure of the template VM, open the /lib/udev/write_net_rules file and comment out (that is, add a # sign preceding the code) the line that looks similar to this:
write_rule "$match" "$INTERFACE" "$COMMENT"
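After you comment out the line, it looks like this:

# write_rule "$match" "$INTERFACE" "$COMMENT"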
NOTE: Editing the template VM files ensures that all of its clones will work properly.