3.5 Configuring Clustering

You can cluster the Access Gateway for Cloud appliance. By default, the appliance is a single-node cluster. You add a node to the cluster by selecting Join Cluster during the initialization process. Access Gateway for Cloud supports clusters of up to five nodes.

3.5.1 Advantages of Clustering

Clustering provides several advantages for Access Gateway for Cloud. Most of these advantages are available only if you configure an L4 switch or round-robin DNS in front of the cluster. Of the two, an L4 switch is the better solution.

Disaster Recovery: Adding nodes to the cluster provides disaster recovery for your appliance. If one node goes down or becomes corrupted, you can promote another node to master.

High Availability for Authentications: When you use an L4 switch in conjunction with clustering, Access Gateway for Cloud provides high availability for authentications and the single sign-on service. Users can still authenticate even if some nodes in the cluster have problems, because the L4 switch sends authentication requests only to the nodes with which it can communicate.

Load Balancing: You can configure the L4 switch to distribute authentication requests across the nodes, so that no single node receives all the requests while the other nodes sit idle.

Scalability: Configuring an L4 switch with clustering increases the scalability of Access Gateway for Cloud. Each node in the cluster increases the number of possible simultaneous logins.

3.5.2 Managing Nodes in the Cluster

Access Gateway for Cloud supports up to five nodes in a cluster. You add nodes to the cluster through the initialization process, and perform all other cluster management tasks in the Admin page.

Adding a Node to the Cluster

Follow these steps to add a node to the cluster:

  1. Verify the cluster is healthy.

    • All nodes must be up and communicating.

    • All components must be in a green state.

    • All failed nodes must be removed from the cluster.

    For more information on verifying your cluster is healthy, see Section 10.3, Troubleshooting Different States.

  2. Download and deploy a new virtual machine for the new node.

    For more information, see Section 2.4, Deploying the Appliance.

  3. Select Join Cluster as the first step to initialize the new node, then follow the on-screen prompts.

    For more information, see Section 2.6, Initializing the Appliance.

  4. Wait for the login screen of the Admin page to be displayed. A progress bar indicates how much time this takes; the process is complete when the login screen appears.

  5. Log in to the Admin page, then wait until all spinner icons stop processing and all components are green before performing any other tasks.

    The cluster is adding the node, and many background processes are running. This final step can take up to an hour to complete.

Promoting a Node to Master

The first node installed is, by default, the master node of the cluster. The master node runs the provisioning, reporting, approvals, and policy mapping services. You can promote any node to become the master node.

  1. Take a snapshot of the cluster.

  2. Verify the cluster is healthy.

    For more information, see Section 10.3, Troubleshooting Different States.

  3. In the Admin page, click the node that you want to become the master node, then click Promote to Master.

    An M appears on the front of the node icon, indicating that it is now the master node.

The services move from the old master to the new master. The old master is now just a node in the cluster.

When you switch the master node, logging and reporting start over on the new master, and the historical logs are lost. The reporting data is also lost, unless you are using Sentinel Log Manager. For more information, see Section 7.2, Integrating with Sentinel Log Manager.

WARNING: If the old master node is down when you promote another node to master, remove the old master from the cluster, then delete it from the VMware server. Otherwise, the appliance sees two master nodes and becomes corrupted.

Removing a Node from the Cluster

You can remove a node from the cluster if something is wrong with the node. However, after you remove a node, you cannot add it back into the cluster. To add a node back, you must delete that instance of the appliance from your VMware server, then deploy a new instance to the VMware server.

To remove a node from the cluster:

  1. (Conditional) If the node you are removing is the master node, promote another node to be master.

  2. (Conditional) If you are using an L4 switch, delete the node from the L4 switch.

  3. In the Admin page, click the node you want to remove from the cluster.

  4. Click Remove from Cluster.

    The interface immediately reflects that the node is gone, but it takes some time for the background processes to finish.

  5. Delete the image of the node from the VMware server.

3.5.3 Configuring an L4 Switch for Clustering

If you want high availability or load balancing, you must configure an L4 switch for the Access Gateway for Cloud appliance. An L4 switch can be configured in many different ways. Use the following recommendations to configure the L4 switch to work with the appliance.

  • Heartbeat: Use the following URL to define the heartbeat for the L4 switch:

    https://dns_ag4c_appliance/osp/h/heartbeat

    or

    https://ip_address_ag4c_appliance/osp/h/heartbeat

    The L4 switch uses the heartbeat to determine whether each node in the cluster is up and working. When a node is healthy, the heartbeat URL returns a text message of Success and a 200 response code, as illustrated in the sketch after this list.

  • Persistence: Also known as sticky sessions, persistence ensures that all subsequent requests from a client are sent to the same node. To enable this behavior, select SSL session ID persistence when configuring the L4 switch.

    Persistence improves performance for end users by removing the delay that can occur when a client request is sent to a new node instead of using the existing session on the same node.
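
The following is a minimal sketch of an external heartbeat probe, assuming only the heartbeat URL shown above; the node addresses are hypothetical, and certificate verification is disabled only because a test appliance might use a self-signed certificate:

    import ssl
    import urllib.request

    NODES = ["192.168.1.14", "192.168.1.15"]   # hypothetical node addresses

    def node_is_healthy(host, timeout=5.0):
        """Return True if the node answers the heartbeat with 200 and Success."""
        # The appliance might use a self-signed certificate, so this
        # illustrative probe skips verification; do not do this in production.
        context = ssl.create_default_context()
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
        url = "https://" + host + "/osp/h/heartbeat"
        try:
            with urllib.request.urlopen(url, timeout=timeout, context=context) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                return resp.status == 200 and "Success" in body
        except OSError:
            return False

    for node in NODES:
        print(node, "up" if node_is_healthy(node) else "down")

An L4 switch performs an equivalent check internally; a script like this is useful only for confirming from outside the switch that each node reports healthy.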

3.5.4 Configuring an L4 Switch for Email Proxy

Access Gateway for Cloud contains an email proxy that supports three protocols: SMTP, POP3S, and IMAPS. You must configure your L4 switch to handle these protocols. Use the following high-level steps to configure the protocols for your L4 switch, and refer to the documentation for your specific L4 switch for further information.

Configuring the SMTP Protocol Handler

Use the following steps to configure an SMTP protocol handler for your L4 switch:

  1. On your L4 switch, configure a new IP group (traffic group) or use an existing group for the virtual servers in the L4 switch.

    You can use this group for all of the protocols.

  2. (Optional) Create a health monitor.

    1. Set the health checking for the pool to TCP transaction monitor.

    2. Set the timeout to 30 seconds.

    3. Set the health monitor to separately monitor each node.

  3. Create a traffic pool for the SMTP virtual server to use.

    1. Add each appliance node to the pool using the IP address with the port.

      For example: 192.168.1.14:25. The SMTP port is 25.

    2. (Optional) Add the health monitor created in Step 2.

    3. Select your load balancing settings.

      For example: round robin or random.

    4. Set the session persistence to SSL Session ID.

  4. Create a new virtual server.

    1. Specify the protocol as SMTP and the port as 25.

    2. Use the traffic group defined in Step 1 and the pool defined in Step 3 for the virtual server.

  5. Start the virtual server.
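
After the virtual server is started, you can verify it from a client machine. The following is a minimal sketch in Python, assuming a hypothetical virtual IP address of 192.168.1.10 for the L4 switch; it checks only that the SMTP virtual server answers and forwards the connection to a node:

    import smtplib

    VIP = "192.168.1.10"   # hypothetical virtual IP of the L4 switch

    # Connect through the SMTP virtual server on port 25 and send EHLO.
    with smtplib.SMTP(VIP, 25, timeout=10) as smtp:
        code, banner = smtp.ehlo()
        print("EHLO response:", code, banner.decode("utf-8", errors="replace"))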

Configuring the POP Protocol Handler

Use the following steps to configure a POP protocol handler for your L4 switch:

  1. On your L4 switch, configure a new IP group (traffic group) or use an existing group for the virtual servers in the L4 switch.

    You can use this group for all of the protocols.

  2. (Optional) Create a health monitor.

    1. Set the health checking for the pool to TCP transaction monitor.

    2. Set the timeout to 30 seconds.

    3. Set the health monitor to separately monitor each node.

  3. Create a traffic pool for the POP virtual server to use.

    1. Add each appliance node to the pool using the IP address with the port.

      For example: 192.168.1.14:995. The POP3S port is 995.

    2. (Optional) Add the health monitor created in Step 2.

    3. Select your load balancing settings.

      For example: round robin or random.

    4. Set the session persistence to SSL Session ID.

  4. Create a new virtual server.

    1. Specify the protocol as SSL (POP3S) and the port as 995.

    2. Use the traffic group defined in Step 1 and the pool defined in Step 3 for the virtual server.

  5. Start the virtual server.
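
As with SMTP, you can verify the POP3S virtual server from a client machine. The following is a minimal sketch, assuming the same hypothetical virtual IP address of 192.168.1.10; certificate verification is disabled only because a test appliance might use a self-signed certificate:

    import poplib
    import ssl

    VIP = "192.168.1.10"   # hypothetical virtual IP of the L4 switch

    # The appliance might use a self-signed certificate; skip verification
    # for this illustrative check only.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    # Connect over SSL through the POP3S virtual server on port 995.
    pop = poplib.POP3_SSL(VIP, 995, timeout=10, context=context)
    print("Server greeting:", pop.getwelcome().decode("utf-8", errors="replace"))
    pop.quit()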

Configuring the IMAP Protocol Handler

Use the following steps to configure an IMAP protocol handler for your L4 switch:

  1. On your L4 switch, configure a new IP group (traffic group) or use an existing group for the virtual servers in the L4 switch.

    You can use this group for all of the protocols.

  2. (Optional) Create a health monitor.

    1. Set the health checking for the pool to Connect.

    2. Set the health monitor to separately monitor each node.

  3. Create a traffic pool for the IMAP virtual server to use.

    1. Add each appliance node to the pool using the IP address with the port.

      For example: 192.168.1.14:993. The IMAPS port is 993.

    2. (Optional) Add the health monitor created in Step 2.

    3. Select your load balancing settings.

      For example: round robin or random.

    4. Set the session persistence to SSL Session ID.

  4. Create a new virtual server.

    1. Specify the protocol as SSL (IMAPS) and the port as 993.

    2. Use the traffic group defined in Step 1 and the pool defined in Step 3 for the virtual server.

  5. Start the virtual server.
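
You can verify the IMAPS virtual server in the same way. The following is a minimal sketch, assuming the same hypothetical virtual IP address of 192.168.1.10; certificate verification is disabled only because a test appliance might use a self-signed certificate:

    import imaplib
    import ssl

    VIP = "192.168.1.10"   # hypothetical virtual IP of the L4 switch

    # The appliance might use a self-signed certificate; skip verification
    # for this illustrative check only.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    # Connect over SSL through the IMAPS virtual server on port 993.
    imap = imaplib.IMAP4_SSL(VIP, 993, ssl_context=context)
    print("Server greeting:", imap.welcome.decode("utf-8", errors="replace"))
    imap.logout()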