IDM Clustering – no SAN needed


  • Two or more SLES 10 SP x servers with the Heartbeat / High Availability packages installed
  • eDirectory and IDM (3.5.1 or later) installed on each server
  • The eDirectory replicas must be the same on each node
  • All servers are assigned to a common Driver Set
  • All drivers that will be clustered have been run and tested independently on each server for full functionality (double check the driver and driver set GCVs on each server)
  • An IP address for each eDirectory driver (or any other driver that requires a driver instance on each side of the connected systems)
  • A machine with a working X server. This could be one of the cluster nodes, a Linux workstation with X (using ‘ssh -X’ to redirect X output), or a Windows workstation with a local X server, again connecting via ssh with redirected X output.
  • A user with sufficient rights to monitor, start, stop, enable and disable your IDM driver. Details on limiting access to certain IDM functions can be found here:
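A quick way to sanity-check these prerequisites on each node is a short command sequence like the one below (a sketch; it assumes the standard eDirectory install path /opt/novell/eDirectory, so adjust for your version):

```shell
# Run on each prospective cluster node (as root).
ndsstat                          # eDirectory up? shows tree name and server name
rpm -qa | grep -i heartbeat      # Heartbeat / HA packages installed?
ls /opt/novell/eDirectory/bin    # confirm the eDirectory tools are where expected
```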


edir1 – Node 1 in CLUSTER-TREE –
edir2 – Node 2 in CLUSTER-TREE –
edir3 – Server in EDIR-TREE –
eDirDriver resource IP Address –
IDM OCF Script – the monitoring script used by linux-ha / heartbeat to monitor IDM resources
SLES 10 Heartbeat documentation:

Configuring the HA cluster:

Start YaST | System | High Availability

Add the nodes that will be a part of the cluster. In this case, edir2. Select Add after entering each node, then select Next.

Select the Authentication method. Select SHA1 and enter an Authentication Key unique to this cluster. Select Next.
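Behind the scenes, YaST writes this key to /etc/ha.d/authkeys on the node; the file format looks like the fragment below (the key shown is a placeholder, not a value from this article):

```
auth 1
1 sha1 SomeUniqueClusterKey
```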

Media Configuration – Select either Broadcast or Multicast, then choose the network interface and any additional parameters. For this example we are using Broadcast. For multiple clusters on the same network, select a different UDP port for each cluster. To isolate traffic further when running multiple clusters, the Multicast option may be more efficient. Select Edit, then Next when done.
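These media settings end up in /etc/ha.d/ha.cf. A minimal broadcast configuration for this example might look like the following (the interface name is illustrative):

```
# /etc/ha.d/ha.cf (fragment)
udpport 694         # pick a different port per cluster on a shared network
bcast eth0          # send heartbeats via broadcast on eth0
node edir1 edir2    # the cluster members
```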

Start-up configuration – Select the “On” option, then select Finish.

To configure Heartbeat on the other nodes in the cluster, run /usr/lib/heartbeat/ha_propagate on the node you just configured. On 64-bit systems, ha_propagate is located under /usr/lib64/heartbeat/.
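Because the path differs between 32- and 64-bit installs, a small wrapper avoids guessing (a sketch; ha_propagate copies the Heartbeat configuration files to the other nodes for you):

```shell
# Pick the correct ha_propagate location for this system, then run it
HA_PROP=/usr/lib/heartbeat/ha_propagate
[ -x /usr/lib64/heartbeat/ha_propagate ] && HA_PROP=/usr/lib64/heartbeat/ha_propagate
"$HA_PROP"
```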

Configuration of the IDM resource (this step will need to be performed for each driver being clustered):

  • Copy the IDM script to /usr/lib/ocf/resource.d/heartbeat on each node
  • Set the password for the hacluster user on each node (as root: passwd hacluster)
  • Add the hacluster user to the haclient group on each node (as root: groupmod -A hacluster haclient)
  • Set your driver to Manual start mode on the server you are currently running the driver on. The IDM script will control the state and start/stop action for the driver.
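The per-node preparation steps above, expressed as commands (run as root on each node; “IDM” is the OCF script file name used in this article):

```shell
# Install the IDM OCF script and prepare the hacluster account
cp IDM /usr/lib/ocf/resource.d/heartbeat/
chmod 755 /usr/lib/ocf/resource.d/heartbeat/IDM
passwd hacluster                  # password used to log in from hb_gui
groupmod -A hacluster haclient    # -A is the SUSE syntax for adding a user to a group
```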

Run hb_gui from one of the cluster nodes (this can be done remotely over ssh with the -X option if you have a local X server).
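For example, from a workstation with a local X server:

```shell
ssh -X root@edir1    # edir1 is one of the cluster nodes in this example
hb_gui &
```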

Select Connection | Login or click the icon furthest to the left.

Enter in the password you set for the hacluster user.

If you do not want resources to automatically fail back to the node they were running on (auto-failback is the default), change the “Default Resource Stickiness” value on the Cluster Configurations tab from 0 to INFINITY and click Apply. If you would like to apply this setting at the resource or resource-group level, see:
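The same cluster-wide change can also be made from the command line with crm_attribute (Heartbeat 2.x syntax sketched below; verify the exact options with crm_attribute --help on your build):

```shell
# Set the cluster-wide default resource stickiness to INFINITY
crm_attribute -t crm_config -n default-resource-stickiness -v INFINITY
```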

The next step is to create a resource group. A resource group isn’t necessary if you are only creating an IDM resource with no dependent resources (such as IPaddr2 or MailTo), but using one makes future additions easier and adds no extra configuration complexity. On the hb_gui interface select Resources | Add New Item | group. Click OK.

Give the Resource group a descriptive name. For this example, we will give it the name of the IDM driver we are going to cluster. Click OK.

Once the resource group is created, hb_gui launches the create-native-resource dialog. Enter a descriptive resource name, select the IDM resource, and enter values for all of the IDM resource parameters (all of them are mandatory).

The IDM_driverdn parameter identifies the driver you are going to monitor. The value is the period-delimited, non-typeful distinguished name (for example, eDirDriver.driverset1.services).

The IDM_user_dn parameter specifies the user that will be used to monitor, start, stop, disable, and enable the driver. The value is the period-delimited, non-typeful distinguished name.

The IDM_user_pwd parameter is the password for the IDM_user_dn value.

The IDM_maint_file parameter is used to place the IDM resource in a “maintenance mode” on a server. While the file exists on that server, the IDM resource script reports the driver as running, which allows for driver testing and other driver-related operations. The file should not exist during normal operation. If you plan to apply patches or operating-system updates, it is better to place the node in standby so that one of the other nodes runs the resource.
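For example, assuming /tmp/idm_maintenance was entered as the IDM_maint_file value (a hypothetical path; use whatever you configured):

```shell
MAINT_FILE=/tmp/idm_maintenance    # must match the IDM_maint_file parameter
touch "$MAINT_FILE"                # enter maintenance mode: monitor reports "running"
# ... test or modify the driver here without the cluster interfering ...
rm -f "$MAINT_FILE"                # remove the file to resume normal monitoring
```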

Click Add when done.

In hb_gui select the IDM resource you just created. On the right window pane select the Operations tab. Create three operations: start, stop, and monitor. You can use the following values as a baseline, but these can be adjusted depending on your environment.
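As an illustration only, the three operations end up in the cluster information base (CIB) looking something like the fragment below; the ids and timeout/interval values here are hypothetical and should be tuned to your environment:

```xml
<operations>
  <op id="eDirDriver_start"   name="start"   timeout="120s"/>
  <op id="eDirDriver_stop"    name="stop"    timeout="120s"/>
  <op id="eDirDriver_monitor" name="monitor" interval="60s" timeout="60s"/>
</operations>
```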

For an eDirectory driver an IPaddr2 resource must be added to the resource group to allow the remote tree to automatically reconnect to the driver if the resource moves to another node. On the hb_gui interface select Resources | Add New Item | native.

Enter a descriptive name for the resource (this one is for an IP address). Make sure the resource belongs to the resource group you created earlier. Select the IPaddr2 resource from the list and enter a value for the ip parameter; in this case we are using the eDirDriver resource IP address. There are additional optional parameters for the IPaddr2 resource; see the IPaddr2 documentation for more information. Click Add when done.

(Optional) If you would like to get an email on status changes of this resource group, add another native resource, MailTo, to the resource group. The only mandatory parameter for this resource is the email address to send alerts to. This resource uses the local MTA on the cluster nodes, so if you use it, make sure Postfix is configured correctly and that you can receive mail from these servers.

You can now start the resource group: in hb_gui, right-click the resource group and select Start. The resources should start and your drivers will be running. Verify that everything is working correctly by watching /var/log/messages on all nodes and with DSTrace (with the +DXML and +DVRS flags enabled). You can also place the active node in standby to fail the resources over and verify that they run correctly on every node.
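One way to exercise the failover from the command line (hb_standby ships with Heartbeat; its location varies by package, so check both /usr/lib/heartbeat and /usr/share/heartbeat):

```shell
# On the node currently running the resource group:
/usr/lib/heartbeat/hb_standby    # move the resources off this node
tail -f /var/log/messages        # watch the peer node start the resources
```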


Disclaimer: As with everything else at NetIQ Cool Solutions, this content is definitely not supported by NetIQ, so Customer Support will not be able to help you if it has any adverse effect on your environment.  It just worked for at least one person, and perhaps it will be useful for you too.  Be sure to test in a non-production environment.
