6.4 Installing Identity Intelligence

This section provides information about installing Identity Intelligence.

6.4.1 Configuring the Cluster

  1. Launch the CDF Management Portal using the link (https://master_FQDN:3000) that is displayed in Step 7 of the CDF installation.

    Ensure that the browser does not use a proxy to access CDF, because a proxy might make the web pages inaccessible.

    NOTE: Use port 3000 when you are setting up CDF for the first time. After the initial setup, use port 5443 to access the CDF Management Portal.
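
    To confirm that the portal port is reachable before opening the browser, you can run a quick check from any host with network access to the master node; this is an optional sketch, not a required step:

      # -k skips certificate verification; --noproxy '*' bypasses any configured proxy
      curl -k --noproxy '*' https://master_FQDN:3000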

  2. Log in with the following credentials:

    User name: admin

    Password: Use the password that you provided during CDF installation.

  3. Select the metadata file version in the Version field and click Next.

  4. Read the license agreement and select I agree and I authorize.

  5. Click Next.

  6. In the Capabilities page, select the following and click Next:

    • Transformation Hub

    • Identity Intelligence

    • Analytics (a prerequisite for ArcSight Investigate and Identity Intelligence)

  7. In the Database page, retain the default values, select Out-of-the-box PostgreSQL, and click Next.

  8. In the Deployment Size page, select the required cluster and click Next.

    1. (Conditional) For worker node configuration, select Medium Cluster.

    The installation will not proceed if the minimal hardware requirements are not met. For information about the hardware requirements, see Hardware Requirements and Tuning Guidelines.

  9. In the Connection page, an external host name is automatically populated. This is resolved from the virtual IP (VIP) specified during the CDF installation (--ha-virtual-ip parameter). Confirm that the VIP is correct and then click Next.
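
    (Optional) To verify the resolution yourself before continuing, you can check it from the master node; <external_hostname> and <virtual_IP> are placeholders for your values:

      # Confirm that the external host name resolves to the VIP
      getent hosts <external_hostname>
      ping -c 3 <virtual_IP>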

  10. (Conditional) If you want to set up high availability, select Make master highly available and add at least two additional master nodes in the Master High Availability page.

    NOTE: If you do not configure high availability in this step, you cannot add master nodes or configure high availability after installation.

    In the Add Master Node page, specify the following details:

    • Host: Fully qualified domain name (FQDN) of the node you are adding.

    • Ignore Warnings: If selected, the installer ignores any warnings that occur during the pre-checks on the server. If deselected, the add node process stops and a window displays any warning messages. We recommend that you start with Ignore Warnings deselected so that you can view any warnings. You can then evaluate whether to rectify or ignore the warnings, clear the warning dialog, select the check box, and click Save again to avoid stopping.

    • User Name: The user name used to log in to the node.

    • Verify Mode: Choose Password or Key-based as the verification mode. For Password, enter the password of the user you specified. For Key-based, enter the user name and then upload a private key file. (A quick connectivity check is sketched after this list.)

    • Thinpool Device: (Optional) Enter the Thinpool Device path that you configured for the master node, if any. For example: /dev/mapper/docker-thinpool. You must have already set up the Docker thin pool for all cluster nodes that use thinpools, as described in the CDF Planning Guide.

    • flannel IFace: (Optional) If the master node has more than one network adapter, enter the flannel IFace value. It must be a single IPv4 address or the name of an existing interface, and it is used for Docker inter-host communication.

    Click Save. Repeat these steps for each additional master node.
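
    (Optional) Before adding a node, you can confirm that the credentials you plan to enter actually work. A minimal check for key-based verification, where <node_FQDN>, <user>, and <private_key_file> are placeholders for the values you specify above:

      # Placeholders assumed; a successful run prints the node's host name
      ssh -i <private_key_file> <user>@<node_FQDN> hostname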

  11. Click Next.

  12. (Conditional) For a multi-node deployment, add additional worker nodes in the Add Worker Node page. To add a worker node, click + (Add), enter the required configuration information, and click Save. Repeat this process for each worker node.

  13. Click Next.

  14. (Conditional) If you want to allow suite workload to run on the master node, select Allow suite workload to be deployed on the master node, and then click Next.

    NOTE: Before selecting this option, ensure that the master node meets the system requirements specified for the worker node.

  15. To configure each NFS volume, complete the following steps:

    1. Navigate to the File Storage page.

    2. For File System Type, select Self-Hosted NFS.

      Self-hosted NFS refers to the external NFS that you created while preparing the environment for CDF installation.

    3. For File Server, specify the IP address or FQDN of the NFS server.

    4. For Exported Path, specify the following paths for the NFS volumes:

      NFS Volume          File Path
      ------------------  ----------------------------------
      arcsight-vol        <NFS_ROOT_FOLDER>/arcsight-vol
      db-single-vol       <NFS_ROOT_FOLDER>/db-single-vol
      itom-logging-vol    <NFS_ROOT_FOLDER>/itom-logging-vol
      db-backup-vol       <NFS_ROOT_FOLDER>/db-backup-vol

    5. Click Validate.

    Ensure that you have validated all NFS volumes successfully before continuing with the next step.
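
    (Optional) If a volume fails validation, you can inspect the exports directly from the master node. A quick sketch, assuming the showmount utility (nfs-utils package) is installed and <NFS_server> is the server you specified:

      # List the exports published by the NFS server
      showmount -e <NFS_server>
      # Test-mount one exported volume, list it, and unmount it
      mount -t nfs <NFS_server>:<NFS_ROOT_FOLDER>/arcsight-vol /mnt
      ls /mnt
      umount /mnt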

  16. Click Next.

  17. To start deploying master and worker nodes, click Yes in the Confirmation dialog box.

  18. Continue with Uploading Images to Local Registry.

6.4.2 Uploading Images to Local Registry

To deploy Identity Intelligence, the local Docker registry needs the following images associated with the deployment:

  • transformationhub-x.x.x.x

  • idi-x.x.x.x

  • analytics-x.x.x.x

You must upload those images to the local registry.

  1. Launch a terminal session, then log in to the master node as root or a sudo user.

  2. Change to the following directory:

    cd /<cdf_installer_directory>/kubernetes/scripts/

    For example:

    cd /opt/arcsight/kubernetes/scripts

  3. Upload each of the software images to the local registry:

    ./uploadimages.sh -d <downloaded_image_file_path> -u registry-admin -p <password>

    Example:

    ./uploadimages.sh -d <download_directory>/identityintelligence-x.x.x.x/suite_images/analytics-x.x.x.x -u registry-admin -p <password>
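
    (Optional) If all three image directories sit under the same suite_images directory, you can upload them in one loop instead of running the script three times. A sketch, assuming the directory layout shown in the example above:

      # Assumes all three image directories share the same parent directory
      for image in transformationhub-x.x.x.x idi-x.x.x.x analytics-x.x.x.x; do
        ./uploadimages.sh -d <download_directory>/identityintelligence-x.x.x.x/suite_images/${image} -u registry-admin -p <password>
      done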

  4. Continue with Deploying Transformation Hub and Identity Intelligence.

6.4.3 Deploying Transformation Hub and Identity Intelligence

After you upload the images to the local registry, CDF uses these images to deploy the respective software in the cluster.

  1. Switch back to the CDF Management Portal.

  2. Click Next in the Download Images page because all the required packages are already downloaded and uncompressed.

  3. After the Check Image Availability page displays All images are available in the registry, click Next.

    If the page displays any missing image error, upload the missing image.

  4. After the Deployment of Infrastructure Nodes page displays the status of the node in green, click Next.

    The deployment process can take up to 15 minutes to complete.

  5. (Conditional) If any of the nodes show a red icon in the Deployment of Infrastructure Nodes page, click the retry icon.

    CDF might display the red icon if the process times out for a node. Because the retry operation executes the script again on that node, ensure that you click retry only once.

  6. After the Deployment of Infrastructure Services page indicates that all the services are deployed and the status indicates green, click Next.

    The deployment process can take up to 15 minutes; the Preparation Complete message appears when it is complete.

    (Optional) To monitor the progress of service deployment, complete the following steps:

    1. Launch a terminal session.

    2. Log in to the master node as root.

    3. Execute the command:

      watch 'kubectl get pods --all-namespaces'
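
      To list only the pods that are not yet running, you can filter on the pod phase; note that completed (Succeeded) pods also match this filter:

        kubectl get pods --all-namespaces --field-selector=status.phase!=Running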

  7. (Conditional) If you want to use mutual SSL authentication between Transformation Hub and its clients by enabling client authentication, you must change the default CA that is generated during the installation. For steps to change the CDF CA, see Changing the Certificate Authority of CDF.

  8. Click Next.

  9. To configure pre-deploy settings for all the following software, complete the following steps:

    1. In the Transformation Hub tab:

      • Set the values based on the workload or high availability configuration. For information about this value for your deployment, see the Transformation Hub Tuning section in the Hardware Requirements and Tuning Guidelines.

      • Set Allow plain text (non-TLS) connection to Kafka to False to disable plain text communication between Transformation Hub (Kafka) and all the components outside the Kubernetes cluster.

        When you set this option to False, ensure that you configure SSL between Transformation Hub (Kafka) and all the components outside the Kubernetes cluster, such as Identity Governance, Identity Manager Driver for Entity Data Model, Vertica, and so on.

      • Enable Connection to Kafka uses TLS Client Authentication: This option enables client authentication between Transformation Hub and all the components outside the Kubernetes cluster. When you enable it, ensure that you configure mutual SSL authentication between Transformation Hub (Kafka) and those components, such as Identity Governance, Identity Manager Driver for Entity Data Model, and Vertica. (A generic client configuration sketch follows.)
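
        The following is a generic Kafka client SSL properties sketch, not the exact configuration for any of the listed components; each component documents its own SSL settings, and all paths and passwords here are placeholders:

          # Generic Kafka client SSL sketch; all values are placeholders
          security.protocol=SSL
          ssl.truststore.location=<path_to_truststore.jks>
          ssl.truststore.password=<truststore_password>
          # Keystore entries apply only when TLS Client Authentication is enabled
          ssl.keystore.location=<path_to_keystore.jks>
          ssl.keystore.password=<keystore_password>
          ssl.key.password=<key_password>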

    2. In the Analytics tab:

      • Specify Vertica connection details.

      • (Optional) Specify SMTP server details to enable users of Identity Intelligence to receive email notifications.

      • Specify the values for Client ID and Client Secret for Single Sign-On.

  10. To finish the deployment, click Next.

  11. Copy the Management portal link displayed in the Configuration Complete page.

    Some of the pods in the Configuration Complete page might remain in a pending status until the product labels are applied on worker nodes. To label the nodes, see Labeling Nodes.
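
    (Optional) Labels are applied with kubectl from the master node. A generic sketch, where <worker_FQDN> and <label_key> are placeholders; the actual label keys required for each product are listed in Labeling Nodes:

      # Apply a label and then confirm that it is set
      kubectl label node <worker_FQDN> <label_key>=yes
      kubectl get nodes --show-labels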

  12. (Conditional) For high availability and multi-master deployment, after the deployment has been completed, manually restart the keepalive process.

    1. Log in to the master node.

    2. Change to the directory:

      cd <K8S_HOME>/bin/

      For example:

      cd /opt/arcsight/kubernetes/bin/

    3. Run the script:

      ./start_lb.sh
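
      (Optional) To confirm that the load-balancer process restarted, you can check for it on each master node. A sketch, assuming the process name contains keepalived:

        # The [k] prevents grep from matching its own process entry
        ps -ef | grep '[k]eepalived'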

  13. Continue with the following activities: