6.3 Completing the Cluster Configuration for Identity Governance

The Tomcat cluster needs to know the unique runtime identifier for each node. Also, to use ActiveMQ in a Tomcat cluster, Identity Governance needs the host name or IP address and port for each ActiveMQ server.

6.3.1 Configuring the Nodes in the Tomcat Cluster

To run Identity Governance in a Tomcat cluster, each node in the cluster must have a unique runtime identifier. Also, each Tomcat instance normally runs on the same port that the load balancer exposes; however, a particular instance might need to use a different port.

NOTE: Two clustered nodes might attempt to claim the same data processing task at the same time. When this occurs, one of the nodes reports a “stale object” exception, which you can safely ignore because the other node still completes the work.

For more information, see Section 1.7.5, Ensuring High Availability for Identity Governance.

  1. Stop Tomcat, if the application server is running. For examples, see Stopping, Starting, and Restarting Tomcat.

  2. To specify a unique runtime identifier, complete the following steps:

    1. Log in to the primary node in the cluster.

    2. In a text editor, open the ism-configuration.properties file.

      • Linux: Default location in /opt/netiq/idm/apps/tomcat/conf

      • Windows: Default location in c:\netiq\idm\apps\tomcat\conf

    3. Ensure that com.netiq.iac.runtime.id is set to a unique value that represents the node.

      For example, node1 or ProdNode1.

    4. Save and close the file.

    5. Repeat this procedure for each node in the cluster.

  3. To specify a different port for a node than the port exposed by the load balancer, complete the following steps:

    1. Log in to the node where you want to change the port.

    2. In a text editor, open the ism-configuration.properties file.

      • Linux: Default location in /opt/netiq/idm/apps/tomcat/conf

      • Windows: Default location in c:\netiq\idm\apps\tomcat\conf

    3. For com.netiq.iac.url.local.port, specify the Tomcat port for the local node. A sample excerpt showing both properties appears after this procedure.

    4. Save and close the file.
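
For reference, the two properties might look like the following in ism-configuration.properties for one node. The property names come from the steps above; the node name node1 and port 8443 are placeholder values that you replace with your own runtime identifier and Tomcat port, and the surrounding entries and exact spacing in your file might differ:

  com.netiq.iac.runtime.id = node1
  com.netiq.iac.url.local.port = 8443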

6.3.2 Configuring ActiveMQ Failover in the Tomcat Cluster

To represent the host name and port for the ActiveMQ server, the installation process creates the JMS broker URI parameter in the Identity Governance Configuration Utility. This parameter has a tcp:// prefix by default. However, in a clustered environment, the parameter needs a failover prefix and a comma-separated list of the ActiveMQ hosts.

For more information, see the ActiveMQ documentation, such as The Failover Transport and Introduction to Master/Slave.

  1. For each instance of Identity Governance, run the Identity Governance Configuration Utility from the default installation location:

    • Linux: Default location in /opt/netiq/idm/apps/idgov/bin/

      • Console mode: ./configutil.sh -password db_password -console

      • GUI mode: ./configutil.sh -password db_password

    • Windows: Default location in c:\netiq\idm\apps\idgov\bin\

      • Console mode: configutil.bat -password db_password -console

      • GUI mode: configutil.bat -password db_password

    For more information, see Section A.0, Running the Identity Governance Configuration Utility.

  2. Select Workflow Settings.

  3. (Conditional) Select Enable persistent notification message queue to ensure guaranteed message delivery.

    If you specified ActiveMQ during installation, this setting should already be enabled.

  4. For JMS broker URI, add failover: before the tcp:// prefix, then add the host name or IP address and port for each ActiveMQ server.

    Use commas to separate the server values. For example:

    failover:tcp://amq1.mycompany.com:61616,tcp://amq2.mycompany.com:61616

    For an example that also sets failover transport options, see the sketch after this procedure.

  5. Save the changes, then close the utility.
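
Because the JMS broker URI is typically passed directly to the ActiveMQ client, the bracketed failover form with transport options described in the ActiveMQ Failover Transport documentation might also be usable here. The following line is only a sketch: the host names are placeholders, and the options shown (randomize, maxReconnectAttempts) are standard ActiveMQ failover options whose suitability for your environment you should confirm against the ActiveMQ documentation referenced above.

  failover:(tcp://amq1.mycompany.com:61616,tcp://amq2.mycompany.com:61616)?randomize=false&maxReconnectAttempts=10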

6.3.3 Cleaning Up Unfinished Data Production Jobs

When running Identity Governance in a clustered environment, a node could go down while a data production job is running on it. In some configurations, these jobs can become orphaned processes that never complete. When this happens, you might need to clean up these processes to maintain the health and performance of your system.

Data production jobs are tied to specific runtime instances, identified by their runtime_identifier. To keep jobs from becoming orphaned, do not use a host name or any other identifier that might change when a runtime instance restarts. If you start a new instance and can control the identifier it uses, reuse a previously used identifier so that Identity Governance can clean up the jobs correctly. If you cannot start a new node with the same identifier, you can reassign data production jobs through the following manual process.

  1. Find the node identifier in the local configuration properties file (ism-configuration.properties) on a node. Look for the com.netiq.iac.runtime.id property key to locate the identifier.

  2. Run a SQL statement against the arops database to retrieve the production records you want to clean up. For example:

    select * from data_production where runtime_identifier = '<node runtime identifier>' and status != 'COMPLETED' and status != 'ERROR'

  3. For each production record in the SQL statement results, do the following (an example command sequence for these calls appears after this procedure):

    1. Execute a REST API call GET /dataprod/mgt/id using the production ID.

    2. Modify the payload by setting the runtime identifier to the identifier of the node that should take over the production process.

    3. Execute a REST API call PUT /dataprod/mgt/id using the production ID and the modified payload from the previous step.
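
The following command sequence is a minimal sketch of the three calls above, shown with curl. Only the GET /dataprod/mgt/id and PUT /dataprod/mgt/id endpoints come from this section; the server URL, port, bearer token, the runtime_identifier field name in the JSON payload (assumed to match the database column), and the use of jq to edit the payload are assumptions that you must adapt to your environment.

  # Retrieve the production record and save the payload
  curl -k -H "Authorization: Bearer <token>" -o payload.json \
    https://igserver.mycompany.com:8443/dataprod/mgt/<production ID>

  # Set the runtime identifier to the node that should take over the job
  jq '.runtime_identifier = "node2"' payload.json > payload-updated.json

  # Send the modified payload back to reassign the production process
  curl -k -X PUT -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" -d @payload-updated.json \
    https://igserver.mycompany.com:8443/dataprod/mgt/<production ID>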