33.2 Upgrading a Traditional Sentinel HA Installation

This section provides information about upgrading a traditional Sentinel installation, and also about upgrading the operating system in a traditional Sentinel installation.

33.2.1 Upgrading Sentinel HA

  1. Enable the maintenance mode on the cluster:

    crm configure property maintenance-mode=true

    Maintenance mode helps you to avoid any disturbance to the running cluster resources while you update Sentinel. You can run this command from any cluster node.
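
    If you want to confirm the property value directly, you can also list it from the cluster configuration, for example:

    crm configure show | grep maintenance-mode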

  2. Verify whether the maintenance mode is active:

    crm status

    The cluster resources should appear in the unmanaged state.

  3. Upgrade the passive cluster node:

    1. Stop the cluster stack:

      rcopenais stop

      Stopping the cluster stack ensures that the cluster resources remain accessible and avoids fencing of nodes.
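
      If you want to confirm that the cluster stack is down on this node before you continue, you can check its status, for example:

      rcopenais status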

    2. Log in as root to the server where you want to upgrade Sentinel.

    3. Extract the install files from the tar file:

      tar xfz <install_filename>

    4. Run the following command in the directory where you extracted the install files:

      ./install-sentinel --cluster-node

    5. After the upgrade is complete, restart the cluster stack:

      rcopenais start

      Repeat Step 3 for all passive cluster nodes.

    6. Remove the autostart scripts so that the cluster can manage the product.

      cd /

      insserv -r sentinel
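
      To confirm that the Sentinel init script no longer starts automatically, you can list its runlevel configuration, for example (assuming the chkconfig utility is installed):

      chkconfig --list sentinel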

  4. Upgrade the active cluster node:

    1. Back up your configuration, then create an ESM export.

      For more information about backing up data, see Backing Up and Restoring Data in the NetIQ Sentinel Administration Guide.

    2. Stop the cluster stack:

      rcopenais stop

      Stopping the cluster stack ensures that the cluster resources remain accessible and avoids fencing of nodes.

    3. Log in as root to the server where you want to upgrade Sentinel.

    4. Run the following command to extract the install files from the tar file:

      tar xfz <install_filename>

    5. Run the following command in the directory where you extracted the install files:

      ./install-sentinel

    6. After the upgrade is complete, start the cluster stack:

      rcopenais start

    7. Remove the autostart scripts so that the cluster can manage the product.

      cd /

      insserv -r sentinel

    8. Run the following command to synchronize any changes in the configuration files:

      csync2 -x -v
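
      If you want to preview which files differ between the nodes before synchronizing, you can run a comparison first, for example (assuming your csync2 version supports the -T test option):

      csync2 -T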

  5. Disable the maintenance mode on the cluster:

    crm configure property maintenance-mode=false

    You can run this command from any cluster node.

  6. Verify whether the maintenance mode is inactive:

    crm status

    The cluster resources should appear in the Started state.

  7. (Optional) Verify whether the Sentinel upgrade is successful:

    rcsentinel version

33.2.2 Upgrading the Operating System

This section provides information about how to upgrade the operating system to a major version, such as upgrading from SLES 11 to SLES 12, in a Sentinel HA cluster. When you upgrade the operating system, you must perform a few configuration tasks to ensure that Sentinel HA works correctly after the upgrade.

Perform the steps described in the following sections:

  - Upgrading the Operating System
  - Configuring iSCSI Targets
  - Configuring iSCSI Initiators
  - Configuring the HA Cluster

Upgrading the Operating System

To upgrade the operating system:

  1. Log in as root user to any node in the Sentinel HA cluster.

  2. Run the following command to enable the maintenance mode on the cluster:

    crm configure property maintenance-mode=true

    The maintenance mode helps you to avoid any disturbance to the running cluster resources while you upgrade the operating system.

  3. Run the following command to verify whether the maintenance mode is active:

    crm status

    The cluster resources should appear in the unmanaged state.

  4. Ensure that you have upgraded Sentinel to version 8.0 or later on all the cluster nodes.

  5. Ensure that all the nodes in the cluster are registered with SLES and SLES HA.
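
    One way to check this from the command line is to list the installed products on each node, for example:

    zypper products -i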

  6. Perform the following steps to upgrade the operating system on the passive cluster node:

    1. Run the following command to stop the cluster stack:

      rcopenais stop

      Stopping the cluster stack ensures that the cluster resources remain accessible and avoids fencing of nodes.

    2. Upgrade the operating system. Perform the steps in Section 25.4, Upgrading the Operating System.

  7. Repeat step 6 on all the passive nodes to upgrade the operating system.

  8. Repeat step 6 on the active node to upgrade the operating system on it.

  9. Repeat step 6b to upgrade the operating system on shared storage.

  10. Ensure that the operating system on all the nodes in the cluster is upgraded to SLES 12 SP1.
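
    To verify the operating system version on a node, you can check the release information, for example:

    cat /etc/os-release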

Configuring iSCSI Targets

To configure iSCSI targets:

  1. On the shared storage, check whether the iSCSI LIO package is installed. If it is not installed yet, go to YaST2 Software Management and install the iSCSI LIO package (iscsiliotarget RPM).

  2. Perform the following steps on all the nodes in the cluster:

    1. Run the following command to display the file that contains the iSCSI initiator name:

      cat /etc/iscsi/initiatorname.iscsi

    2. Note the initiator name which will be used for configuring iSCSI initiators:

      For example:

      InitiatorName=iqn.1996-04.de.suse:01:441d6988994

    These initiator names will be used when you configure the iSCSI Target Client Setup.

  3. On the shared storage, open the iSCSI LIO Target configuration in YaST. Click Service and select the When Booting option to ensure that the service starts when the operating system boots.

  4. Select the Global tab, deselect No Authentication to enable authentication, and then specify the user name and the password for incoming and outgoing authentication.

    The No Authentication option is enabled by default. However, NetIQ recommends that you enable authentication to ensure that the configuration is secure.

  5. Click Targets, and click Add to add a new target.

  6. Click Add to add a new LUN.

  7. Leave the LUN number as 0, browse in the Path dialog (under Type=fileio) and select the /localdata file that you created. If you have a dedicated disk for storage, specify a block device, such as /dev/sdc.

  8. Repeat steps 6 and 7, and add LUN 1 and select /networkdata this time.

  9. Repeat steps 6 and 7, and add LUN 2 and select /sbd this time.
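
    If you want to confirm that the backing files for the three LUNs exist on the shared storage (assuming the file-backed setup described above rather than dedicated disks), you can list them, for example:

    ls -lh /localdata /networkdata /sbd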

  10. Leave the other options at their default values. Click Next.

    The Modify iSCSI Target Client Setup screen is displayed.

  11. Click Add. When prompted for Client Name, specify the initiator name that you noted in Step 2. Repeat this step to add all the client names by specifying their initiator names.

    The list of client names will be displayed in the Client List.

  12. (Conditional) If you have enabled authentication in Step 4, provide the authentication credentials you specified in Step 4.

    Select a client, select Edit Auth > Incoming Authentication, and specify the user name and password. Repeat this for all the clients.

  13. Click Next to select the default authentication options, and then click Finish to exit the configuration. Restart iSCSI if prompted.

  14. Exit YaST.

Configuring iSCSI Initiators

To configure iSCSI initiators:

  1. Connect to one of the cluster nodes (node01) and start YaST.

  2. Click Network Services > iSCSI Initiator.

  3. If prompted, install the required software (iscsiclient RPM).

  4. Click Service, and select When Booting to ensure that the iSCSI service is started on boot.

  5. Click Discovered Targets.

    NOTE: If any previously existing iSCSI targets are displayed, delete those targets.

    Select Discovery to add a new iSCSI target.

  6. Specify the iSCSI Target IP address (10.0.0.3).

    (Conditional) If you have enabled authentication in Step 4 in Configuring iSCSI Targets, deselect No Authentication. In the Outgoing Authentication section, enter the authentication credentials you specified while configuring iSCSI targets.

    Click Next.

  7. Select the discovered iSCSI Target with the IP address 10.0.0.3 and select Log In.

  8. Perform the following steps:

    1. Switch to Automatic in the Startup drop-down menu.

    2. (Conditional) If you have enabled authentication, deselect No Authentication.

      The user name and the password you have specified should be displayed in the Outgoing Authentication section. If these credentials are not displayed, enter the credentials in this section.

    3. Click Next.

  9. Switch to the Connected Targets tab to ensure that you are connected to the target.

  10. Exit the configuration. This makes the iSCSI targets available as block devices on the cluster node.
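
    To confirm from the command line that the iSCSI LUNs are visible as block devices on the node, you can list the block devices, for example:

    lsblk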

  11. In the YaST main menu, select System > Partitioner.

  12. In the System View, you should see new hard disks of the LIO-ORG-FILEIO type (such as /dev/sdb and /dev/sdc) in the list, along with already formatted disks (such as /dev/sdb1 or /dev/<SHARED1>).

  13. Repeat steps 1 through 12 on all the nodes.

Configuring the HA Cluster

To configure the HA cluster:

  1. Start YaST2 and go to High Availability > Cluster.

  2. If prompted, install the HA package and resolve the dependencies.

    After the HA package installation, the Cluster - Communication Channels screen is displayed.

  3. Ensure that Unicast is selected as the Transport option.

  4. Select Add a Member Address and specify the node IP address, and then repeat this action to add all the other cluster node IP addresses.

  5. Ensure that Auto Generate Node ID is selected.

  6. Ensure that the HAWK service is running on all the nodes. If it is not running, run the following command to start it:

    service hawk start
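
    On a systemd-based system such as SLES 12, you can also check and start the service with systemctl, for example (assuming the service unit is named hawk):

    systemctl status hawk

    systemctl start hawk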

  7. Run the following command:

    ls -l /dev/disk/by-id/

    The command lists the IDs of the attached devices. Note the ID of the SBD partition, for example, scsi-1LIO-ORG_FILEIO:33caaa5a-a0bc-4d90-b21b-2ef33030cc53.

    Copy the ID.

  8. Open the sbd file (/etc/sysconfig/sbd), and replace the value of SBD_DEVICE with the ID that you copied in Step 7.
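
    After the change, the entry in /etc/sysconfig/sbd should look similar to the following (using the example ID shown in Step 7):

    SBD_DEVICE="/dev/disk/by-id/scsi-1LIO-ORG_FILEIO:33caaa5a-a0bc-4d90-b21b-2ef33030cc53"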

  9. Run the following command to restart the pacemaker service:

    rcpacemaker restart

  10. Run the following commands to remove the autostart scripts so that the cluster can manage the product:

    cd /

    insserv -r sentinel

  11. Repeat steps 1 through 10 on all the cluster nodes.

  12. Run the following command to synchronize any changes in the configuration files:

    csync2 -x -v

  13. Run the following command to disable the maintenance mode on the cluster:

    crm configure property maintenance-mode=false

    You can run this command from any cluster node.

  14. Run the following command to verify whether the maintenance mode is inactive:

    crm status

    The cluster resources should appear in the Started state.