A.2 Sample Cluster Deployment on SLES 12 SP2

A.2.1 Prerequisites

  • Two servers running SLES 12 SP2 64-bit for nodes

  • One server running SLES 12 SP2 64-bit for iSCSI Server

  • SLES 12 SP2 64-bit HA Extension ISO image file

  • Six static IPs:

    • Two static IP addresses for each node.

    • One static IP address for the cluster. This IP address is dynamically assigned to the node currently running eDirectory.

    • One static IP address for the iSCSI Server.

A.2.2 Installation Procedure

This section explains how to install and configure the following components to set up the cluster environment. For more information about configuring the SLES High Availability Extension, see the SUSE Linux Enterprise High Availability Extension guide.

Configuring the iSCSI Server

An iSCSI target is a device that is configured as common storage for all nodes in a cluster. It is a virtual disk created on the Linux server that an iSCSI initiator can access remotely over an Ethernet connection. An iSCSI initiator is any node in the cluster that is configured to contact the target (iSCSI) for services.

The iSCSI target must always be up and running so that any host acting as an initiator can contact it. Before installing the iSCSI target on the iSCSI server, ensure that the iSCSI target has sufficient space for the common storage. Install the iSCSI initiator packages on the other two nodes after installing SLES 12 SP2.

During the SLES 12 SP2 installation:

  1. Create a separate partition and specify the partition path as the iSCSI shared storage partition.

  2. Install the iSCSI target packages.

To configure the iSCSI server:

  1. Create a block device on the target server.

  2. Run the yast2 disk command in the terminal.

  3. Create a new Linux partition, and select Do not format.

  4. Select Do not mount the partition.

  5. Specify the partition size.

  6. Run the yast2 iscsi-server or yast2 iscsi-lio-server command in the terminal.

  7. Click the Service tab, then select When Booting in the Service Start option.

  8. In the Targets tab, click Add to enter the partition path (as created during the SLES installation).

  9. In the Modify iSCSI Target Initiator Setup page, specify iSCSI client initiator host names for the target server and then click Next.

    For example, iqn.sles12sp2node1.com and iqn.sles12sp2node2.com.

  10. Click Finish.

  11. Run the cat /proc/net/iet/volume command in the terminal to verify that the iSCSI target is installed.
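
If you prefer the command line to YaST, the following is a minimal sketch of an equivalent LIO target configuration using targetcli. The partition path /dev/sdb1, the target IQN, and the backstore name are example values for illustration only; replace them with the values used in your environment.

    # Create a block backstore on the shared partition (example: /dev/sdb1)
    targetcli /backstores/block create name=cluster-storage dev=/dev/sdb1

    # Create the iSCSI target (example IQN)
    targetcli /iscsi create iqn.2017-01.com.example:target1

    # Export the backstore as a LUN under the target portal group
    targetcli /iscsi/iqn.2017-01.com.example:target1/tpg1/luns create /backstores/block/cluster-storage

    # Allow the initiator names configured for the two nodes
    targetcli /iscsi/iqn.2017-01.com.example:target1/tpg1/acls create iqn.sles12sp2node1.com
    targetcli /iscsi/iqn.2017-01.com.example:target1/tpg1/acls create iqn.sles12sp2node2.com

    # Persist the configuration across reboots
    targetcli saveconfig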

Configuring the iSCSI Initiator on All Nodes

You must configure the iSCSI initiator on all cluster nodes to connect to the iSCSI target.

To configure the iSCSI initiator:

  1. Install the iSCSI initiator packages.

  2. Run the yast2 iscsi-client command in the terminal.

  3. Click the Service tab and select When Booting in the Service Start option.

  4. Click the Connected Targets tab, and click Add to enter the IP address of the iSCSI target server.

  5. Select No Authentication.

  6. Click Next, then click Connect.

  7. Click Toggle Start-up to change the start-up option from manual to automatic, then click Next.

  8. Click Next, then click OK.

  9. To check the status of the connected initiators, run the cat /proc/net/iet/session command on the target server. The initiators that are connected to the iSCSI server are listed.
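
The same connection can also be made from each node with the open-iscsi command-line tools. This is only a sketch; the target server IP address 192.168.1.4 and the IQN are assumed example values.

    # Discover targets exported by the iSCSI server (example address)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.4

    # Log in to the discovered target (example IQN)
    iscsiadm -m node -T iqn.2017-01.com.example:target1 -p 192.168.1.4 --login

    # Configure the session to start automatically at boot
    iscsiadm -m node -T iqn.2017-01.com.example:target1 -p 192.168.1.4 --op update -n node.startup -v automatic

    # Verify the active session
    iscsiadm -m session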

Partitioning the Shared Storage

Create two partitions on the shared storage: one for SBD and the other for the cluster file system.

To partition the shared storage:

  1. Run the yast2 disk command in the terminal.

  2. In the Expert Partitioner dialog box, select the shared volume. In our example, select sdb from the Expert Partitioner dialog box.

  3. Click Add, select Primary partition option, and click Next.

  4. Select Custom size, and click Next. In our example, the custom size is 10 MB.

  5. Under Formatting options, select Do not format partition. In our example, the File system ID is 0x83 Linux.

  6. Under Mounting options, select Do not mount partition, then click Finish.

  7. Click Add, then select Primary partition.

  8. Click Next, then select Maximum Size, and click Next.

  9. In Formatting options, select Do not format partition. In our example, specify the File system ID as 0x83 Linux.

  10. In Mounting options, select Do not mount partition, then click Finish.
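
The same layout can also be created from the command line with parted. The sketch below assumes the shared iSCSI disk appears as /dev/sdb on the node and that it is safe to create a new partition table on it; adjust the device name and sizes to match your environment.

    # Create a new partition table on the shared disk (destroys existing data)
    parted -s /dev/sdb mklabel msdos

    # Create a small partition (about 10 MB) for SBD
    parted -s /dev/sdb mkpart primary 1MiB 11MiB

    # Use the remaining space for the cluster file system partition
    parted -s /dev/sdb mkpart primary 11MiB 100%

    # Verify the resulting layout
    parted -s /dev/sdb print
    lsblk /dev/sdb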

Installing the HA Extension

To install the HA extension:

  1. Go to the SUSE Downloads website.

    SUSE Linux Enterprise High Availability Extension (SLE HA) is available for download for each available platform as two ISO images. Media 1 contains the binary packages and Media 2 contains the source code.

    NOTE: Select and install the appropriate HA extension ISO file based on your system architecture.

  2. Download the Media 1 ISO file on each server.

  3. Open the YaST Control Center dialog box, then click Add-on products > Add.

  4. Click Browse and select the DVD or local ISO image, then click Next.

  5. In the Patterns tab, select High Availability under Primary Functions.

    Ensure that all the components under High Availability are installed.

  6. Click Accept.
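
Alternatively, after downloading the Media 1 ISO file you can add it as a repository and install the High Availability pattern from the command line. The ISO path and repository alias below are example values; ha_sles is the standard SLE HA pattern name.

    # Add the downloaded ISO as an add-on repository (example path and alias)
    zypper addrepo "iso:/?iso=/root/SLE-HA-12-SP2-x86_64-Media1.iso" SLE-HA-12-SP2

    # Refresh repositories and install the High Availability pattern
    zypper refresh
    zypper install -t pattern ha_sles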

Setting up Softdog Watchdog

In the SLES HA Extension, watchdog support in the kernel is enabled by default. It ships with a number of different kernel modules that provide hardware-specific watchdog drivers. The appropriate watchdog driver for your hardware is loaded automatically during system boot.

  1. Enable the softdog watchdog:

    echo softdog > /etc/modules-load.d/watchdog.conf

    systemctl restart systemd-modules-load

  2. Test if the softdog module is loaded correctly:

    lsmod | grep dog

Configuring the HA Cluster

This example assumes that you are configuring two nodes in a cluster.

Setting up the first node:

  1. Log in as root to the physical or virtual machine that you want to use as a cluster node.

  2. Run the following command:

    ha-cluster-init

    The command checks for NTP configuration and a hardware watchdog service. It generates the public and private SSH keys used for SSH access and Csync2 synchronization and starts the respective services.

  3. Configure the cluster communication layer:

    1. Enter a network address to bind to.

    2. Enter a multicast address. The script proposes a random address that you can use as default.

    3. Enter a multicast port. By default, the port is 5405.

  4. Set up SBD as the node fencing mechanism:

    1. Press y to use SBD.

    2. Enter a persistent path to the partition of your block device that you want to use for SBD. The path must be the same on both nodes in the cluster.

  5. Configure a virtual IP address for cluster administration:

    1. Press y to configure a virtual IP address.

    2. Enter an unused IP address that you want to use as administration IP for SUSE Hawk GUI. For example, 192.168.1.3.

      Instead of logging in to an individual cluster node, you can connect to the virtual IP address.

Once the first node is up and running, add the second cluster node using the ha-cluster-join command.

Setting up the second node:

  1. Log in as root to the physical or virtual machine that you want to add to the cluster.

  2. Run the following command:

    ha-cluster-join

    If NTP is not configured, a message appears. The command checks for a hardware watchdog device and notifies if it is not present.

  3. Enter the IP address of the first node.

  4. Enter the root password of the first node.

  5. Log in to the SUSE Hawk GUI (for example, https://192.168.1.3:7630/cib/live) and then click Status > Nodes to verify that both nodes have joined the cluster.
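
You can also verify the cluster from the command line on either node using the standard Pacemaker, Corosync, and SBD tools. The SBD device path below is an example value.

    # Show cluster membership and resource status
    crm status

    # One-shot detailed monitor output
    crm_mon -1

    # Check the Corosync ring status
    corosync-cfgtool -s

    # List the SBD slots on the shared device (example path)
    sbd -d /dev/sdb1 list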

Installing and Configuring eDirectory and Identity Manager on Cluster Nodes

  1. Install eDirectory on cluster nodes:

    Install a supported version of eDirectory. For step-by-step instructions to configure eDirectory on a HA cluster, see “Deploying eDirectory on High Availability Clusters” in the eDirectory Installation Guide.

    IMPORTANT: Ensure that the virtual IP is configured on Node 1 before you install eDirectory on Node 1.

  2. Install Identity Manager on Node 1 using the Metadirectory Server option.

  3. Install the Identity Manager engine on the Node 2 server using the DCLUSTER_INSTALL option.

    Run the ./install.bin -DCLUSTER_INSTALL="true" command in the terminal.

    The installer installs the Identity Manager files without any interaction with eDirectory.

Configuring the eDirectory Resource

  1. Log in to SUSE Hawk GUI.

  2. Click Add Resource and create a new group.

    1. Click the icon next to the Group.

    2. Specify a group ID. For example, Group-1.

      Ensure that the following child resources are selected when you create a group:

      • stonith-sbd

      • admin_addr (Cluster IP address)

  3. In the Meta Attributes tab, set the target-role field to Started and the is-managed field to Yes.

  4. Click Edit Configuration and then click the icon next to the group you created in Step 2.

  5. In the Children field, add the following child resources:

    • shared-storage

    • eDirectory-resource

    For example, the resources should be added in the following order within the group:

    • stonith-sbd

    • admin_addr (Cluster IP address)

    • shared-storage

    • eDirectory-resource

    You can change the resource names if necessary. Every resource has a set of parameters that you need to define. For information about examples for shared-storage and eDirectory resources, see Primitives for eDirectory and Shared Storage Child Resources.

Primitives for eDirectory and Shared Storage Child Resources

The stonith-sbd and admin_addr resources are configured by default by the ha-cluster-init command when you initialize the cluster node.

Table A-1 Example for shared-storage

  Parameter              Value
  Resource ID            Name of the shared storage resource
  Class                  ocf
  Provider               heartbeat
  Type                   Filesystem
  device                 /dev/sdc1
  directory              /shared
  fstype                 xfs
  operations             start (60, 0), stop (60, 0), monitor (40, 20)
  is-managed             Yes
  resource-stickiness    100
  target-role            Started
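
A roughly equivalent primitive can also be created from the crm shell instead of the Hawk GUI. This is a sketch only; it assumes the operation values in the table are (timeout, interval) pairs and uses the example device and mount point shown above.

    crm configure primitive shared-storage ocf:heartbeat:Filesystem \
        params device="/dev/sdc1" directory="/shared" fstype="xfs" \
        op start timeout="60" interval="0" \
        op stop timeout="60" interval="0" \
        op monitor timeout="40" interval="20" \
        meta is-managed="true" resource-stickiness="100" target-role="Started"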

Table A-2 Example for eDirectory-resource

  Parameter              Value
  Resource ID            Name of the eDirectory resource
  Class                  systemd
  Type                   ndsdtmpl-shared-conf-nds.conf@-shared-conf-env
  operations             start (100, 0), stop (100, 0), monitor (100, 60)
  target-role            Started
  is-managed             Yes
  resource-stickiness    100
  failure-timeout        125
  migration-threshold    0
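
A corresponding crm shell sketch for the eDirectory primitive and the resource group is shown below, again assuming the operation values are (timeout, interval) pairs. The group members follow the order listed in Configuring the eDirectory Resource.

    crm configure primitive eDirectory-resource \
        systemd:ndsdtmpl-shared-conf-nds.conf@-shared-conf-env \
        op start timeout="100" interval="0" \
        op stop timeout="100" interval="0" \
        op monitor timeout="100" interval="60" \
        meta target-role="Started" is-managed="true" resource-stickiness="100" \
            failure-timeout="125" migration-threshold="0"

    crm configure group Group-1 stonith-sbd admin_addr shared-storage eDirectory-resource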

Change the location constraint score to 0.

  1. Log in to SUSE Hawk GUI.

  2. Click Edit Configuration.

  3. In the Constraints tab, click the icon next to node 1 of your cluster.

  4. In the Simple tab, set the score to 0.

  5. Click Apply.

Ensure that you set the score to 0 for all the nodes in your cluster.
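
From the crm shell, zero-score location constraints for the group would look like the following sketch. The constraint IDs and node names (node1, node2) are example values; if constraints for the group already exist, adjust their scores with crm configure edit instead of creating new ones.

    # Give neither node a preference for the resource group (score 0 on each)
    crm configure location loc-group1-node1 Group-1 0: node1
    crm configure location loc-group1-node2 Group-1 0: node2

    # Review the resulting configuration, including location constraints
    crm configure show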

NOTE: When you migrate resources from one node to another in the SUSE Hawk GUI using the Status > Resources > Migrate option, the location constraint score changes to Infinity or -Infinity. This gives preference to only one of the nodes in the cluster and results in delays in eDirectory operations.