14.2 Installation Procedure

14.2.1 Configuring the iSCSI Server

An iSCSI target is a device that is configured as common storage for all nodes in a cluster. It is a virtual disk created on the Linux server that allows remote access over an Ethernet connection by an iSCSI initiator. An iSCSI initiator is any node in the cluster that is configured to contact the target (iSCSI) for services. The iSCSI target must always be up and running so that any host acting as an initiator can contact it. Before installing the iSCSI target on the iSCSI server, ensure that the target has sufficient space for the common storage. Install the iSCSI initiator packages on the other two nodes after installing SLES 12 SP2.

During the SLES 12 SP2 installation:

  1. Create a separate partition and specify the partition path as the iSCSI shared storage partition.

  2. Install the iSCSI target packages.
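
The YaST modules used in this procedure are delivered as packages. The exact package names depend on your SLES 12 SP2 repositories and on whether you use the IET or LIO target implementation; the following zypper commands are only an illustration, and the package names are assumptions that you should verify against your channels:

  # On the iSCSI target server (assumes the LIO-based target stack)
  zypper install yast2-iscsi-lio-server targetcli

  # On the initiator (cluster) nodes
  zypper install open-iscsi yast2-iscsi-client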

To configure the iSCSI server:

  1. Create a block device on the target server.

  2. Type the yast2 disk command in a terminal.

  3. Create a new Linux partition, and select Do not format.

  4. Select Do not mount the partition.

  5. Specify the partition size.

  6. Type the yast2 iscsi-server or yast2 iscsi-lio-server command in a terminal.

  7. Click the Service tab, then select When Booting in the Service Start option.

  8. In the Targets tab, click Add to enter the partition path (as created during the SLES installation).

  9. In the Modify iSCSI Target Initiator Setup page, specify iSCSI client initiator host names for the target server and then click Next.

    For example, iqn.sles12sp2node2.com and iqn.sles12sp2node3.com.

  10. Click Finish.

  11. Run the cat /proc/net/iet/volume command in a terminal to verify that the iSCSI target is installed.
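
NOTE: The /proc/net/iet interface is available only with the IET (iscsitarget) implementation. If you configured the target with yast2 iscsi-lio-server (the LIO stack), you can instead list the configured target and its backing device with the following command:

  targetcli ls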

14.2.2 Configuring the iSCSI Initiator on All Nodes

You must configure the iSCSI initiator on all cluster nodes to connect to the iSCSI target.

To configure the iSCSI initiator:

  1. Install the iSCSI initiator packages.

  2. Run the yast2 iscsi-client command in a terminal.

  3. Click the Service tab and select When Booting in the Service Start option.

  4. Click the Connected Targets tab, and click Add to enter the IP address of the iSCSI target server.

  5. Select No Authentication.

  6. Click Next, then click Connect.

  7. Click Toggle Start-up to change the start-up option from manual to automatic, then click Next.

  8. Click Next, then click OK.

  9. To check the status of the connected initiator on the target server, run the cat /proc/net/iet/session command on the target server. The list of initiators that are connected to the iSCSI server is displayed.
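
You can also verify the connection from each initiator node. The following minimal check assumes the open-iscsi tools are installed; the shared storage typically appears as an additional block device (for example, sdb):

  # List the active iSCSI sessions on the initiator node
  iscsiadm -m session

  # Confirm that the shared disk is visible as a new block device
  lsblk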

14.2.3 Partitioning the Shared Storage

Create two shared storage partitions: one for SBD and the other for Cluster File System.

To partition the shared storage:

  1. Run the yast2 disk command in terminal.

  2. In the Expert Partitioner dialog box, select the shared volume. In our example, this is sdb.

  3. Click Add, select the Primary Partition option, and click Next.

  4. Select Custom size, and click Next. In our example, the custom size is 100 MB.

  5. Under Formatting options, select Do not format partition. In our example, the File system ID is 0x83 Linux.

  6. Under Mounting options, select Do not mount partition, then click Finish.

  7. Click Add, then select Primary partition.

  8. Click Next, then select Maximum Size, and click Next.

  9. In Formatting options, select Do not format partition. In our example, specify the File system ID as 0x83 Linux.

  10. In Mounting options, select Do not mount partition, then click Finish.
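
As an alternative to yast2 disk, the same layout can be created from the command line. The following parted sketch assumes that the shared disk appears as /dev/sdb and uses the example sizes from this procedure (approximately 100 MB for SBD and the remaining space for the cluster file system):

  # Create an MS-DOS partition table on the shared disk (destroys existing data)
  parted -s /dev/sdb mklabel msdos

  # Small partition for SBD (approximately 100 MB)
  parted -s /dev/sdb mkpart primary 1MiB 101MiB

  # Remaining space for the cluster file system
  parted -s /dev/sdb mkpart primary 101MiB 100%

  # Verify the resulting partition table
  parted -s /dev/sdb print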

14.2.4 Installing the HA Extension

To install the HA extension:

  1. Go to the SUSE Downloads website.

    SUSE Linux Enterprise High Availability Extension (SLE HA) is available for download for each available platform as two ISO images. Media 1 contains the binary packages and Media 2 contains the source code.

    NOTE: Select and install the appropriate HA extension ISO file based on your system architecture.

  2. Download the Media 1 ISO file on each server.

  3. Open the YaST Control Center dialog box, then click Add-On Products > Add.

  4. Click Browse and select the DVD or local ISO image, then click Next.

  5. In the Patterns tab, select High Availability under Primary Functions.

    Ensure that all the components under High Availability are installed.

  6. Click Accept.
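
To confirm that the core cluster packages from the High Availability pattern are present on each node, you can query the RPM database. The package list below is partial and illustrative:

  rpm -q corosync pacemaker crmsh sbd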

14.2.5 Setting up Softdog Watchdog

In the SLES HA Extension, watchdog support in the kernel is enabled by default. The extension ships with a number of kernel modules that provide hardware-specific watchdog drivers, and the appropriate driver for your hardware is loaded automatically during system boot. The softdog module provides a software watchdog that you can use when no hardware watchdog is available.

  1. Enable the softdog watchdog:

    echo softdog > /etc/modules-load.d/watchdog.conf

    systemctl restart systemd-modules-load

  2. Test if the softdog module is loaded correctly:

    lsmod | grep dog
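
If the module is loaded, the kernel also creates a watchdog device node. As an additional quick check (assuming the default device path):

  ls -l /dev/watchdog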

14.2.6 Configuring the HA Cluster

This example assumes that you are configuring two nodes in a cluster.

Setting up the first node:

  1. Log in as root to the physical or virtual machine that you want to use as a cluster node.

  2. Run the following command:

    ha-cluster-init

    The command checks for NTP configuration and a hardware watchdog service. It generates the public and private SSH keys used for SSH access and Csync2 synchronization and starts the respective services.

  3. Configure the cluster communication layer:

    1. Enter a network address to bind to.

    2. Enter a multicast address. The script proposes a random address that you can use as the default.

    3. Enter a multicast port. By default, the port is 5405.

  4. Set up SBD as the node fencing mechanism:

    1. Press y to use SBD.

    2. Enter a persistent path to the partition of your block device that you want to use for SBD (for example, a /dev/disk/by-id path; see the sketch after this procedure). The path must be the same on both nodes in the cluster.

  5. Configure a virtual IP address for cluster administration:

    1. Press y to configure a virtual IP address.

    2. Enter an unused IP address that you want to use as the administration IP address for the SUSE Hawk GUI. For example, 192.168.1.3.

      Instead of logging in to an individual cluster node, you can connect to the virtual IP address.
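
To find a persistent path for the SBD partition (as referenced in the fencing step above), list the by-id links for the shared disk on each node. The device name below matches the earlier partitioning example, and the path shown in the comment is purely illustrative:

  # List persistent names for the shared disk and its partitions
  ls -l /dev/disk/by-id/ | grep sdb

  # Example of a persistent SBD path (illustrative only):
  # /dev/disk/by-id/<disk-id>-part1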

Once the first node is up and running, add the second cluster node using the ha-cluster-join command.

Setting up the second node:

  1. Log in as root to the physical or virtual machine that you want to join to the cluster.

  2. Run the following command:

    ha-cluster-join

    If NTP is not configured, a message appears. The command checks for a hardware watchdog device and notifies you if it is not present.

  3. Enter the IP address of the first node.

  4. Enter the root password of the first node.

  5. Log in to the SUSE Hawk GUI (for example, https://192.168.1.3:7630/cib/live) and then click Status > Nodes.
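
You can also verify the cluster state from the command line on either node; both nodes should be listed as online:

  crm status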

14.2.7 Installing and Configuring Identity Vault and Identity Manager Engine on Cluster Nodes

  1. Install Identity Manager Engine on cluster nodes:

    1. Download the Identity_Manager_4.7_Linux.iso from the NetIQ Downloads website.

    2. Mount the downloaded .iso.

    3. From the ISO mounted location, run the following command:

      ./install.sh

    4. Read through the license agreement.

    5. Enter y to accept the license agreement.

    6. Decide the Identity Manager server edition you want to install. Enter y for Advanced Edition and n for Standard Edition.

    7. Select Identity Manager Engine from the list and proceed with the installation.

      This step installs the supported Identity Vault version.

  2. Configure Identity Manager Engine on all nodes.

    1. Navigate to the location where you have mounted the Identity_Manager_4.7_Linux.iso.

    2. From the ISO mounted location, run the following command:

      ./configure.sh

    3. Decide whether you want to perform a typical configuration or a custom configuration. The configuration options will vary based on the components that you select for configuration.

    4. Select the Identity Manager Engine component from the list.

    5. If you are configuring the Identity Vault for the first time, select the Create a new Identity Vault option. If you have installed Identity Vault previously and want to connect to that Identity Vault instance, select the Add to an Identity Vault existing on local machine option.

  3. Navigate to the /etc/opt/novell/eDirectory/conf directory.

  4. Edit the nds.conf file and specify the virtual IP address of the cluster in the n4u.nds.preferred-server field.

  5. Stop the Identity Vault service.

    ndsmanage stopall

  6. Back up all the folders and files from the /var/opt/novell/nici, /etc/opt/novell/eDirectory/conf, and /var/opt/novell/eDirectory/ directories.

  7. Navigate to the /opt/novell/eDirectory/bin directory.

  8. Run the following command:

    nds-cluster-config -s /<shared cluster path>

    where <shared cluster path> is the location that you want to use for the Identity Vault shared cluster data.

  9. Start the Identity Vault service.

    ndsmanage startall
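
The command-line portion of this procedure (Steps 4 through 9) might look like the following consolidated sketch. The virtual IP address, backup archive path, and shared cluster path are assumptions based on the examples used in this chapter:

  # Step 4: point eDirectory at the cluster virtual IP (example address)
  # In /etc/opt/novell/eDirectory/conf/nds.conf, set:
  #   n4u.nds.preferred-server=192.168.1.3

  # Step 5: stop the Identity Vault service
  ndsmanage stopall

  # Step 6: back up NICI and eDirectory data (archive path is an example)
  tar -czf /root/edir-cluster-backup.tar.gz \
      /var/opt/novell/nici \
      /etc/opt/novell/eDirectory/conf \
      /var/opt/novell/eDirectory/

  # Steps 7 and 8: move the eDirectory data to the shared cluster path (example path)
  cd /opt/novell/eDirectory/bin
  ./nds-cluster-config -s /shared

  # Step 9: start the Identity Vault service
  ndsmanage startall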

For more information on configuring Identity Vault in a clustered setup, see “Deploying eDirectory on High Availability Clusters” in the eDirectory Installation Guide.

14.2.8 Configuring the eDirectory Resource

  1. Log in to SUSE Hawk GUI.

  2. Click Add Resource and create a new group.

    1. Click the add (+) icon next to Group.

    2. Specify a group ID. For example, Group-1.

      Ensure that the following child resources are selected when you create a group:

      • stonith-sbd

      • admin_addr (Cluster IP address)

  3. In the Meta Attributes tab, set the target-role field to Started and is-managed field to Yes.

  4. Click Edit Configuration, and then click the edit icon next to the group you created in Step 2.

  5. In the Children field, add the following child resources:

    • shared-storage

    • eDirectory-resource

    For example, the resources should be added in the following order within the group:

    • stonith-sbd

    • admin_addr (Cluster IP address)

    • shared-storage

    • eDirectory-resource

    You can change the resource names if necessary. Every resource has a set of parameters that you need to define. For example configurations of the shared-storage and eDirectory resources, see Primitives for eDirectory and Shared Storage Child Resources.

14.2.9 Primitives for eDirectory and Shared Storage Child Resources

The stonith-sbd and admin_addr resources are configured by default by the cluster bootstrap script (ha-cluster-init) when the first cluster node is initialized.

Table 14-1 Example for shared-storage

  Resource ID: Name of the shared storage resource
  Class: ocf
  Provider: heartbeat
  Type: Filesystem
  device: /dev/sdc1
  directory: /shared
  fstype: xfs
  operations:
    • start (60, 0)
    • stop (60, 0)
    • monitor (40, 20)
  is-managed: Yes
  resource-stickiness: 100
  target-role: Started

Table 14-2 Example for eDirectory-resource

  Resource ID: Name of the eDirectory resource
  Class: systemd
  Type: ndsdtmpl-shared-conf-nds.conf@-shared-conf-env
  operations:
    • start (100, 0)
    • stop (100, 0)
    • monitor (100, 60)
  target-role: Started
  is-managed: Yes
  resource-stickiness: 100
  failure-timeout: 125
  migration-threshold: 0
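
For reference, the two tables above correspond roughly to the following crm shell (crmsh) definitions. This is a sketch only: the resource and group names and the device path follow the examples in this chapter, and the operation value pairs are interpreted as (timeout, interval); adjust all of these for your environment.

  # Shared-storage file system resource (Table 14-1)
  crm configure primitive shared-storage ocf:heartbeat:Filesystem \
      params device="/dev/sdc1" directory="/shared" fstype="xfs" \
      op start timeout=60 interval=0 \
      op stop timeout=60 interval=0 \
      op monitor timeout=40 interval=20 \
      meta is-managed=true resource-stickiness=100 target-role=Started

  # eDirectory systemd resource (Table 14-2)
  crm configure primitive eDirectory-resource \
      systemd:ndsdtmpl-shared-conf-nds.conf@-shared-conf-env \
      op start timeout=100 interval=0 \
      op stop timeout=100 interval=0 \
      op monitor timeout=100 interval=60 \
      meta is-managed=true resource-stickiness=100 target-role=Started \
          failure-timeout=125 migration-threshold=0

  # Group in the order described in Configuring the eDirectory Resource
  crm configure group Group-1 stonith-sbd admin_addr shared-storage eDirectory-resource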

14.2.10 Changing the Location Constraint Score

Change the location constraint score to 0.

  1. Log in to SUSE Hawk GUI.

  2. Click Edit Configuration.

  3. In the Constraints tab, click the edit icon next to the location constraint for node 1 of your cluster.

  4. In the Simple tab, set the score to 0.

  5. Click Apply.

Ensure that you set the score to 0 for all the nodes in your cluster.

NOTE: When you migrate the resources from one node to another from the SUSE Hawk GUI using the Status > Resources > Migrate option, the location constraint score changes to Infinity or -Infinity. This gives preference to only one of the nodes in the cluster and results in delays in eDirectory operations.
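
You can review the resulting location constraints from the command line as well. Migrating a resource from the Hawk GUI typically creates a constraint with a cli-prefer- prefix and an INFINITY score, which is the value you need to change back to 0. The constraint and node names below are illustrative:

  # Show the currently configured location constraints
  crm configure show | grep ^location

  # Desired form after the change (score 0 for each node; names are illustrative)
  # location loc-Group-1-node1 Group-1 0: node1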