Identity Manager supports SSPR configuration in a Tomcat cluster environment.
To update the SSPR information on the first node of the cluster, launch the Configuration utility by running /opt/netiq/idm/apps/configupdate/configupdate.sh.
In the window that opens, click SSO Clients > Self Service Password Reset and enter values for the Client ID, Password, and OSP Auth redirect URL parameters.
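For example, assuming SSPR sits behind the load balancer DNS name mydnsname on port 8443 (the name, port, and client ID shown here are illustrative), the entries might look like this:

Client ID: sspr
Password: <client secret>
OSP Auth redirect URL: https://mydnsname:8443/sspr/public/oauth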
Perform the following configuration tasks on the cluster nodes:
To update the Forgotten Password link with the SSPR IP address, log in to the User Application on the first node and click Administration > Forgot Password.
For more information on SSPR configuration, see Configuring Forgotten Password Management.
To change the Change my password link, see Updating SSPR Links in the Dashboard for a Distributed or Clustered Environment.
Verify that the Forgot Password and Change my password links are updated with the SSPR IP address on the other nodes in the cluster.
NOTE: If the Change Password and Forgot Password links are already updated with the SSPR IP address, no changes are required.
On the first node, stop Tomcat and generate a new osp.jks file, specifying the DNS name of the load balancer server, by using the following command:
/opt/netiq/common/jre/bin/keytool -genkey -keyalg RSA -keysize 2048 -keystore osp.jks -storepass <password> -keypass <password> -alias osp -validity 1800 -dname "cn=<loadbalancer IP/DNS>"
For example: /opt/netiq/common/jre/bin/keytool -genkey -keyalg RSA -keysize 2048 -keystore osp.jks -storepass changeit -keypass changeit -alias osp -validity 1800 -dname "cn=mydnsname"
NOTE: Ensure that the key password is the same as the one provided during the OSP installation. Alternatively, you can change the key password and the keystore password by using the Configuration Update utility.
(Conditional) To verify that the osp.jks file is updated with the changes, run the following command:
/opt/netiq/common/jre/bin/keytool -list -v -keystore osp.jks -storepass changeit
Take a backup of the original osp.jks file located in /opt/netiq/idm/apps/osp, and then copy the new osp.jks file that you created in Step 4 to this location.
Copy the new osp.jks file located at /opt/netiq/idm/apps/osp from the first node to all other User Application nodes in the cluster.
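For example, assuming the newly generated osp.jks is in the current working directory and node2.example.com is another User Application node (the hostname is illustrative), the backup and copy steps might look like this:

# Back up the original keystore, then copy in the newly generated one
cp /opt/netiq/idm/apps/osp/osp.jks /opt/netiq/idm/apps/osp/osp.jks.bak
cp osp.jks /opt/netiq/idm/apps/osp/
# Distribute the new keystore to the other User Application nodes
scp /opt/netiq/idm/apps/osp/osp.jks root@node2.example.com:/opt/netiq/idm/apps/osp/

Repeat the scp command for each remaining node, backing up the existing osp.jks on each target node first.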
Launch the Configuration utility on the first node and, under the SSO Clients tab, change all of the URL settings, such as the URL link to landing page and the OAuth redirect URL, to the load balancer DNS name.
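For example, assuming the first node's own DNS name is node1.example.com and the load balancer DNS name is mydnsname (names and port are illustrative), each URL changes only in its host portion:

URL link to landing page: https://node1.example.com:8443/landing -> https://mydnsname:8443/landing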
Save the changes in the Configuration utility.
To reflect this change in all other nodes of the cluster, copy the ism-configuration.properties file located in /TOMCAT_INSTALLED_HOME/conf from the first node to all other User Application nodes.
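For example, assuming Tomcat is installed in /opt/netiq/idm/apps/tomcat and node2.example.com is another node (both values are illustrative), the copy might look like this:

scp /opt/netiq/idm/apps/tomcat/conf/ism-configuration.properties root@node2.example.com:/opt/netiq/idm/apps/tomcat/conf/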
NOTE: You copied the ism-configuration.properties file from the first node to the other nodes in the cluster. If you specified custom installation paths during the User Application installation, ensure that the referential paths are corrected by using the Configuration Update utility on the cluster nodes.
In this scenario, both OSP and the User Application are installed on the same server; therefore, the same DNS name is used for the redirect URLs.
If OSP and the User Application are installed on separate servers, change the OSP URLs to a different DNS name that points to the load balancer. Do this for all the servers where OSP is installed. This ensures that all OSP requests are dispatched through the load balancer to the OSP cluster DNS name, and it requires a separate cluster for the OSP nodes.
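For example, assuming the illustrative names ua-cluster.example.com for the User Application cluster and osp-cluster.example.com for the OSP cluster (ports are also illustrative), the URLs split along these lines:

User Application landing page: https://ua-cluster.example.com:8443/landing
OSP authentication endpoint: https://osp-cluster.example.com:8443/osp

Each DNS name resolves to the load balancer that fronts the corresponding cluster.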
Perform the following actions in the setenv.sh file in the /TOMCAT_INSTALLED_HOME/bin/ directory (a combined example follows these steps):
To ensure that the mcast_addr binding is successful, JGroups requires that the preferIPv4Stack property be set to true. To do so, add the JVM property "-Djava.net.preferIPv4Stack=true" to the setenv.sh file on all nodes.
Add "-Dcom.novell.afw.wf.engine-id=Engine" to the setenv.sh file on the first node.
The engine name must be unique. Provide the name that was given during the installation of the first node. The default name is "Engine" if no name was specified.
Similarly, add a unique engine name for the other nodes in the cluster. For example, for the second node, the engine name can be Engine2.
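As a minimal sketch, assuming your setenv.sh builds the JVM options in a CATALINA_OPTS variable (the variable name may differ in your installation), the additions for the first node would look like this:

# Force IPv4 so that the JGroups mcast_addr binding succeeds
CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
# Unique workflow engine ID for this node (the default is Engine)
CATALINA_OPTS="$CATALINA_OPTS -Dcom.novell.afw.wf.engine-id=Engine"
export CATALINA_OPTS

On the second node, replace Engine with Engine2, and so on for each additional node.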