6.2 Preparing Your Environment for CDF

6.2.1 Configuring the Nodes

For multi-node deployment, consider the following when configuring master and worker nodes:

  • You can deploy master and worker nodes on virtual machines. However, because most of the processing occurs on worker nodes, we recommend that you deploy worker nodes on physical servers.

  • You must keep the host system configuration identical across master and worker nodes.

  • When using virtual machines, ensure:

    • Resources are reserved and not shared.

    • UUIDs and MAC addresses are static; dynamic addresses cause the Kubernetes cluster to fail.

  • Install all master and worker nodes in the same subnet.

  • Adding more worker nodes is typically more effective than installing bigger and faster hardware. Using more worker nodes enables you to perform maintenance on your cluster nodes with minimal impact to uptime. Adding more nodes also makes it easier to predict the cost of new hardware.

For high availability, consider the following when configuring master and worker nodes:

  • Create a virtual IP (VIP) that is shared by all master nodes, and ensure that the VIP is in the same subnet as the master nodes. The VIP must not respond to ping before you install Identity Intelligence.

  • All master and worker nodes must be installed in the same subnet.

6.2.2 Set System Parameters (Network Bridging)

Ensure that the br_netfilter module is installed on all master and worker nodes before changing system settings.

You can either run the following scripts that set system parameters automatically or you can set the system parameters manually:

  • /opt/<Identity_Intelligence_Installer>/scripts/prereq_sysctl_conf.sh

  • /opt/<Identity_Intelligence_Installer>/scripts/prereq_rc_local.sh

Perform the following steps on all the master and worker nodes to set the system parameters manually:

  1. Log in to the node.

  2. Check whether the br_netfilter module is enabled:

    lsmod | grep br_netfilter

  3. If the command returns no output, the br_netfilter module is not loaded. Install it and configure it to load at boot:

    modprobe br_netfilter

    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

  4. Open the /etc/sysctl.conf file.

  5. Ensure the following system parameters are set:

    net.bridge.bridge-nf-call-iptables=1

    net.bridge.bridge-nf-call-ip6tables=1

    net.ipv4.ip_forward = 1

    net.ipv4.tcp_tw_recycle = 0

    kernel.sem=50100 128256000 50100 2560

  6. Save the /etc/sysctl.conf file.

  7. Apply the updates to the node:

    /sbin/sysctl -p
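
The following is a minimal sketch that combines the checks and settings above into a single pass. It assumes you run it as root on each node and that appending directly to /etc/sysctl.conf (rather than using a drop-in file under /etc/sysctl.d) is acceptable in your environment; it does not check for entries that already exist.

# Load br_netfilter now and ensure it loads at boot.
lsmod | grep -q br_netfilter || modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

# Append the required kernel parameters and apply them.
cat >> /etc/sysctl.conf <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
kernel.sem=50100 128256000 50100 2560
EOF
/sbin/sysctl -p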

6.2.3 Check MAC and Cipher Algorithms

To configure MAC and Cipher algorithms manually, ensure that the /etc/ssh/sshd_config file on every master and worker node is configured with at least one of the following values. The lists below include all supported algorithms; add only the algorithms that meet the security policy of your organization.

  • For MAC algorithms: hmac-sha1,hmac-sha2-256,hmac-sha2-512,hmac-sha1-96

  • For Cipher algorithms: 3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,blowfish-cbc

For example, you could add the following lines to the /etc/ssh/sshd_config file on all master and worker nodes:

MACs hmac-sha2-256,hmac-sha2-512

Ciphers aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
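
After editing /etc/ssh/sshd_config, restart sshd so the changes take effect, and optionally confirm the effective algorithm lists. This is a minimal check, assuming the node runs the OpenSSH server:

systemctl restart sshd.service

# Print the effective cipher and MAC lists from the running configuration.
sshd -T | grep -Ei '^(ciphers|macs)'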

6.2.4 Check Password Authentication Settings

If you will use user name and password authentication for adding cluster nodes during the installation, make sure that the PasswordAuthentication parameter in the /etc/ssh/sshd_config file is set to yes. You do not need to check the password authentication setting if you add cluster nodes using user name and key authentication.

To ensure that password authentication is enabled, perform the following steps on every master and worker node:

  1. Log in to the cluster node.

  2. Open the /etc/ssh/sshd_config file.

  3. Check whether the PasswordAuthentication parameter is set to yes. If not, set the parameter to yes as shown below:

    PasswordAuthentication yes

  4. Restart the sshd service:

    systemctl restart sshd.service
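
A quick way to confirm the effective setting after the restart, assuming the OpenSSH server is in use:

# Should print: passwordauthentication yes
sshd -T | grep -i passwordauthentication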

6.2.5 Installing the Required Operating System Packages

Ensure that the packages listed in the following table are installed on appropriate nodes. These packages are available in the standard yum repository.

Package                          Nodes
-------------------------------  -----------------------------------------
bind-utils                       Master and worker
device-mapper-libs               Master and worker
java-1.8.0-openjdk               Master
libgcrypt                        Master and worker
libseccomp                       Master and worker
libtool-ltdl                     Master and worker
net-tools                        Master and worker
nfs-utils                        Master and worker
rpcbind                          Master node, worker node, and NFS server
systemd-libs (version >= 219)    Master and worker
unzip                            Master and worker
httpd-tools                      Master and worker
conntrack-tools                  Master and worker
lvm2                             Master and worker
curl                             Master and worker
libtool-libs                     Master and worker
openssl                          Master and worker
socat                            Master and worker
container-selinux                Master and worker

You can either run the /opt/<Identity_Intelligence_Installer>/scripts/prereq_1_required_packages.sh script that installs the required OS packages automatically or install the required OS packages manually.

To install the packages manually:

  1. Log in to the master or worker node.

  2. Verify whether the package exists:

    yum list installed <package name>

  3. (Conditional) If the package is not installed, install the required package:

    yum -y install <package name>
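
As a convenience, a hedged sketch that checks and installs all of the required packages in one pass. The package list mirrors the table above and includes the master-only java-1.8.0-openjdk package, so remove it for worker-only nodes; the sketch assumes yum access to the standard repositories.

# Required OS packages (see the table above).
packages="bind-utils device-mapper-libs java-1.8.0-openjdk libgcrypt libseccomp \
libtool-ltdl net-tools nfs-utils rpcbind systemd-libs unzip httpd-tools \
conntrack-tools lvm2 curl libtool-libs openssl socat container-selinux"

for p in $packages; do
  # Install the package only if it is not already present.
  yum list installed "$p" >/dev/null 2>&1 || yum -y install "$p"
done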

6.2.6 Remove Libraries

Remove the libraries that prevent Ingress from starting, and confirm the removal when prompted:

yum remove rsh rsh-server vsftpd

6.2.7 Configuring Time Synchronization

You must implement Network Time Protocol (NTP) to synchronize the time on all nodes in the cluster. To implement this protocol, use chrony. Ensure that chrony is running on all nodes of the cluster. By default, chrony is installed on some versions of RHEL.

You can either run the /opt/<Identity_Intelligence_Installer>/scripts/prereq_synchronize_time.sh script that synchronizes time automatically or configure the time synchronization manually.

To configure the time synchronization manually:

  1. Verify chrony configuration:

    chronyc tracking

  2. (Conditional) If chrony is not installed, install chrony:

    yum install chrony

  3. Start and enable chrony:

    systemctl start chronyd

    systemctl enable chronyd

  4. Synchronize the operating system time with the NTP server:

    chronyc makestep

  5. Restart the chronyd daemon:

    systemctl restart chronyd

  6. Check the server time synchronization:

    timedatectl

  7. Synchronize the hardware time:

    hwclock -w

6.2.8 Configuring Firewall

Ensure that the firewalld.service is enabled and running on all nodes. Execute the systemctl status firewalld command to check the firewall status.

To enable the firewall:

systemctl unmask firewalld
systemctl start firewalld
systemctl enable firewalld

You can either run the /opt/<Identity_Intelligence_Installer>/scripts/prereq_firewall.sh script that configures the firewall automatically or configure the firewall manually.

When the firewall is enabled, you must also enable the masquerade settings. To enable masquerade settings:

  1. Verify whether the masquerade setting is enabled:

    firewall-cmd --query-masquerade

    If the command returns yes, then masquerade is enabled.

    If the command returns no, then masquerade is disabled.

  2. (Conditional) If masquerade setting is not enabled, enable masquerade:

    firewall-cmd --add-masquerade --permanent

    firewall-cmd --reload

6.2.9 Configuring Proxy

Ideally, the cluster has no access to the Internet, and the proxy settings (http_proxy, https_proxy, and no_proxy) are not set. However, if Internet access is needed and you have already specified a proxy server for http and https connections, you must configure no_proxy correctly.

If you have the http_proxy or https_proxy set, then no_proxy definitions must contain at least the following values:

no_proxy=localhost,127.0.0.1,<all master and worker cluster node IP addresses>,<all cluster node FQDNs>,<HA virtual IP address>,<FQDN for the HA virtual IP address>

For example:

  • export http_proxy="http://web-proxy.example.net:8080"

    export https_proxy="http://web-proxy.example.net:8080"

    export no_proxy="localhost,127.0.0.1,node1.swinfra.net,10.94.235.231,node2.swinfra.net,10.94.235.232,node3.swinfra.net,10.94.235.233,node4.swinfra.net,10.94.235.234,node5.swinfra.net,10.94.235.235,node6.swinfra.net,10.94.235.236,ha.swinfra.net,10.94.235.200"

  • export http_proxy="http://web-proxy.eu.example.net:8080"

    export no_proxy="localhost,127.0.0.1,swinfra.net,10.94.235.231,10.94.235.232,10.94.235.233,10.94.235.234,10.94.235.235,10.94.235.236,10.94.235.200"

NOTE: Incorrect configuration of proxy settings has proven to be a frequent installation troubleshooting problem. To verify that proxy settings are configured properly on all master and worker nodes, run the following command and ensure the output corresponds to the recommendations:

echo $http_proxy, $https_proxy, $no_proxy
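
If the proxy settings must persist across sessions, one common approach (an assumption about your environment, not a CDF requirement) is a profile script such as /etc/profile.d/proxy.sh on each node:

# Hypothetical /etc/profile.d/proxy.sh; adjust the proxy URL and the no_proxy list to your network.
export http_proxy="http://web-proxy.example.net:8080"
export https_proxy="http://web-proxy.example.net:8080"
export no_proxy="localhost,127.0.0.1,node1.swinfra.net,10.94.235.231,ha.swinfra.net,10.94.235.200"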

If the firewall is turned off, the install process generates a warning. To prevent the warning, set the CDF install parameter --auto-configure-firewall to true.

6.2.10 Configuring DNS

Ensure that the host name resolution through Domain Name Services (DNS) is working across all nodes in the cluster, including correct forward and reverse DNS lookups. Host name resolution must not be performed through /etc/hosts file settings.

You can either run the <download_directory>/scripts/prereq_disable_ipv6.sh script that configures DNS automatically or configure DNS manually.

Ensure that all nodes are configured with a Fully Qualified Domain Name (FQDN) and are in the same subnet. Transformation Hub uses the host system FQDN as its Kafka advertised.host.name. If the FQDN resolves successfully in the Network Address Translation (NAT) environment, then Producers and consumers will function correctly. If there are network-specific issues resolving FQDN through NAT, then DNS will need to be updated to resolve these issues.

  • Transformation Hub supports ingestion of event data that contains both IPv4 and IPv6 addresses. However, its infrastructure cannot be installed in an IPv6-only network.

  • localhost must not resolve to an IPv6 address.

    For example, open the /etc/hosts file. By default, it contains both an IPv4 and an IPv6 (::1) entry for localhost. The install process expects localhost to resolve only to the IPv4 address 127.0.0.1.

    Keep the IPv4 entry and comment out the IPv6 entry:

    • Keep: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

    • Comment out: ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

  • The initial master node host name must not resolve to multiple IPv4 addresses and this includes lookup in /etc/hosts.

Test Forward and Reverse DNS Lookup

Test that the forward and reverse lookup records for all servers were properly configured.

To test the forward lookup, run the following commands on every master and worker node in the cluster and on every producer and consumer host system, including:

  • All master nodes: master1.yourcompany.com, …, mastern.yourcompany.com

  • All worker nodes: worker1.yourcompany.com, …, workern.yourcompany.com

  • Your ArcMC nodes: arcmc1.yourcompany.com, ..., arcmcn.yourcompany.com

Use the nslookup or host commands to verify your DNS configuration.

NOTE: Do not use the ping command.

You must run the nslookup commands against every DNS server specified in your /etc/resolv.conf file. Every server must resolve forward and reverse lookups properly and return exactly the same results.

If you have a public DNS server specified in your /etc/resolv.conf file, such as the Google public DNS servers 8.8.8.8 or 8.8.4.4, you must remove this from your DNS configuration.

Run the commands as follows. Expected sample output is shown below each command.

  • hostname

    master1
  • hostname -s

    master1
  • hostname -f

    master1.yourcompany.com
  • hostname -d

    yourcompany.com
  • nslookup master1.yourcompany.com

    Server: 192.168.0.53
    Address: 192.168.0.53#53
    Name: master1.yourcompany.com
    Address: 192.168.0.1
  • nslookup master1

    Server: 192.168.0.53
    Address: 192.168.0.53#53
    Name: master1.yourcompany.com
    Address: 192.168.0.1
  • nslookup 192.168.0.1

    Server: 192.168.0.53
    Address: 192.168.0.53#53
    1.0.168.192.in-addr.arpa name = master1.yourcompany.com.
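
As a convenience, a hedged sketch that runs the forward and reverse checks for a list of hosts in one pass; the host names are placeholders for your own nodes:

# Replace these placeholders with your actual node FQDNs.
for h in master1.yourcompany.com worker1.yourcompany.com arcmc1.yourcompany.com; do
  echo "== $h =="
  nslookup "$h"                                          # forward lookup
  ip=$(host "$h" | awk '/has address/ {print $4; exit}')
  [ -n "$ip" ] && nslookup "$ip"                         # reverse lookup
done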

Kubernetes Network Subnet Settings

The Kubernetes network subnet is controlled by the --POD_CIDR and --SERVICE_CIDR parameters to the CDF Installer.

The --POD_CIDR parameter specifies the network address range for Kubernetes pods. The address range specified in the --POD_CIDR parameter must not overlap with the IP range assigned for Kubernetes services, which is specified in the --SERVICE_CIDR parameter. The expected value is a Classless Inter-Domain Routing (CIDR) format IP address. CIDR notation comprises an IP address, a slash ('/') character, and a network prefix (a decimal number). The minimum useful network prefix is /24 and the maximum useful network prefix is /8. The default value is 172.16.0.0/16.

For example:

POD_CIDR=172.16.0.0/16

The CIDR_SUBNETLEN parameter specifies the size of the subnet allocated to each host for Kubernetes pod network addresses. The default value is dependent on the value of the POD_CIDR parameter, as described in the following table.

POD_CIDR prefix    POD_CIDR_SUBNETLEN default    POD_CIDR_SUBNETLEN allowed values
/8 to /21          /24                           /(POD_CIDR prefix + 3) to /27
/22 to /24         /(POD_CIDR prefix + 3)        /(POD_CIDR prefix + 3) to /27

The --SERVICE_CIDR parameter specifies the network address range for Kubernetes services. Smaller prefix values indicate a larger number of available addresses. The minimum useful network prefix is /27 and the maximum useful network prefix is /12. The default value is 172.17.17.0/24.

Change the default POD_CIDR or CIDR_SUBNETLEN values only when your network configuration requires you to do so. You must also ensure that you have sufficient understanding of the flannel network fabric configuration requirements before you make any changes.
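
For illustration only, a hypothetical installer invocation that overrides the defaults. The flag names are taken from the parameters described above, but the actual CDF install command and flag syntax may differ, so check the installer's help output for the authoritative form:

# Hypothetical example; run from the directory that contains the CDF install script.
./install --POD_CIDR 172.16.0.0/16 --SERVICE_CIDR 172.17.17.0/24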

6.2.11 Configuring the NFS Server

The CDF Installer platform requires an NFS server to maintain state information about the infrastructure and to store other pertinent data.

For high availability, NFS must run on a highly available external server when you use a dedicated master deployment with a minimum of three master nodes. For optimal security, secure all NFS settings so that only the required hosts can connect to the NFS server.

For more information on the external server, see External NFS Server.

Prerequisites:

  • Ensure that the ports 111, 2049, and 20048 are open on the NFS server for communication.

  • Enable and start the rpcbind and nfs-server services by executing the following commands on your NFS server:

    systemctl enable rpcbind

    systemctl start rpcbind

    systemctl enable nfs-server

    systemctl start nfs-server

  • The following are the shared directories that you must create and configure. For information about the minimum memory requirement for each directory, see Identity Intelligence 1.1 System Requirements.

    <NFS_ROOT_DIRECTORY>/itom-vol
      The CDF NFS root folder, which contains the CDF database and files. The disk usage grows gradually.

    <NFS_ROOT_DIRECTORY>/db-single-vol
      Used for the CDF database when you do not choose PostgreSQL High Availability (HA) for the CDF database setting. During the install, you will not choose the PostgreSQL HA option.

    <NFS_ROOT_DIRECTORY>/db-backup-vol
      Used for backup and restore of the CDF PostgreSQL database. Its size depends on the implementation's processing requirements and data volumes.

    <NFS_ROOT_DIRECTORY>/itom-logging-vol
      Stores the log output files of CDF components. The required size depends on how long the logs are kept.

    <NFS_ROOT_DIRECTORY>/arcsight-vol
      Stores the component installation packages.

Creating NFS Shared Directories

  1. Log in to the NFS server as root.

  2. Create the following:

    • Group: arcsight with a GID 1999

      Example: groupadd -g 1999 arcsight

    • User: arcsight with a UID 1999

      Example: useradd -u 1999 -g 1999 arcsight

    • NFS root directory: the root directory under which you create all NFS shared directories.

      Example (NFS_Root_Directory): /opt/NFS_Volume

  3. (Conditional) If you have previously installed any version of CDF, you must remove all NFS directories by using the following command for each directory:

    rm -rf <path to NFS directory>

    Example:

    rm -rf /opt/NFS_Volume/itom-vol

  4. Create each NFS shared directory using the command:

    mkdir -p <path to NFS directory>

    Example:

    mkdir -p /opt/NFS_Volume/itom-vol

  5. For each NFS directory, set the permission to 755 by using the command:

    chmod -R 755 <path to NFS directory>

    Example:

    chmod -R 755 /opt/NFS_Volume/itom-vol

  6. For each NFS directory, set the ownership to UID 1999 and GID 1999 using the command:

    chown -R 1999:1999 <path to NFS directory>

    Example:

    chown -R 1999:1999 /opt/NFS_Volume/itom-vol

    If you use a UID/GID different than 1999/1999, then provide it during the CDF installation in the install script arguments--system-group-id and --system-user-id.
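
A minimal sketch that performs steps 4 through 6 for all five shared directories in one pass, assuming the example /opt/NFS_Volume root and the default 1999/1999 UID/GID:

NFS_ROOT=/opt/NFS_Volume
for vol in itom-vol db-single-vol db-backup-vol itom-logging-vol arcsight-vol; do
  mkdir -p "$NFS_ROOT/$vol"             # create the shared directory
  chmod -R 755 "$NFS_ROOT/$vol"         # set permissions
  chown -R 1999:1999 "$NFS_ROOT/$vol"   # set ownership to the arcsight user and group
done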

Exporting the NFS Configuration

For every NFS volume, run the following set of commands on the external NFS server. You must export the NFS configuration with the appropriate IP addresses so that the NFS mounts work properly.

  1. Navigate to /etc/ and open the exports file.

  2. For every node in the cluster, update the configuration to grant the node access to the NFS volume shares (a complete /etc/exports example covering all volumes follows these steps).

    For example:

    /opt/NFS_Volume/arcsight-vol 192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)

  3. Save the /etc/exports file and run the following command:

    exportfs -ra

    If you add more NFS shared directories later, you must restart the NFS service.
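
Expanding on the example in step 2, a hedged sketch of a complete /etc/exports file that shares all five volumes with a 192.168.1.0/24 cluster subnet; adjust the paths and the subnet to your environment:

/opt/NFS_Volume/itom-vol          192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/db-single-vol     192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/db-backup-vol     192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/itom-logging-vol  192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)
/opt/NFS_Volume/arcsight-vol      192.168.1.0/24(rw,sync,anonuid=1999,anongid=1999,all_squash)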

Verifying NFS Configuration

  1. Create the NFS directory under /mnt.

    For example,

    cd /mnt

    mkdir nfs

  2. Mount the NFS directory on your local system.

    Example:

    • NFS v3: mount -t nfs 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs

    • NFS v4: mount -t nfs4 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs

  3. After creating all the directories, run the following commands on the NFS server:

    exportfs -ra

    systemctl restart rpcbind

    systemctl enable rpcbind

    systemctl restart nfs-server

    systemctl enable nfs-server
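
To confirm that the exports are visible and the mount works, a quick check from any cluster node; 192.168.1.25 is the example NFS server address used above:

showmount -e 192.168.1.25    # list the exported volumes
mount -t nfs 192.168.1.25:/opt/NFS_Volume/arcsight-vol /mnt/nfs
df -h /mnt/nfs               # verify that the share is mounted
umount /mnt/nfs              # clean up the test mount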

Setting Up NFS By Using the Script

Applicable only for non-high-availability and single-node deployments.

You can either run the /opt/<Identity_Intelligence_Installer>/scripts/preinstall_create_nfs_share.sh script that sets up the NFS automatically or set up the NFS manually.

To set up NFS manually:

  1. Copy setupNFS.sh to the NFS server.

    The setupNFS.sh is located on the master node in the <download_directory>/identityintelligence-x.x.x/installers/cdf-x.x.x/cdf/scripts folder.

  2. (Conditional) If you are using the default UID/GID, then use the command:

    sh setupNFS.sh <path_to_nfs_directory>/volumes/volume_name

    Example: sh setupNFS.sh /opt/NFS_Volume/itom-vol

  3. (Conditional) If you are using a non-default UID/GID, then use the command:

    sh setupNFS.sh <path_to_nfs_directory>/volumes/volume_name true <uid> <gid>

  4. Restart the NFS service:

    systemctl restart nfs

6.2.12 Disabling Swap Space

You must disable swap space on all master and worker nodes, except the node that hosts the database.

  1. Log in to the node where you want to disable swap space.

  2. Run the following command:

    swapoff -a

  3. In the /etc/fstab file, comment out the lines that contain swap as the disk type and save the file.

    For example:

    #/dev/mapper/centos_shcentos72x64-swap swap
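
To confirm that swap is disabled, a quick check on each node (assuming a util-linux swapon that supports --show):

swapon --show    # should print nothing
free -h          # the Swap line should show 0B totals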

6.2.13 (Optional) Create Docker Thinpools

Optionally, to improve the performance of Docker processing, set up a thinpool on each master and worker node. Before setting up the thinpool on a node, create the required disk partition on that node, as explained below.

The thinpool device for Docker (for example, /dev/sdb1) requires a minimum physical volume size of 30 GB.

Creating a New Partition

  1. Log in to the node.

  2. Run the command:

    fdisk <name of the new disk device that was added>

    Example:

    # fdisk /dev/sdb

  3. Enter n to create a new partition.

  4. When prompted, enter partition number, sector, type (Linux LVM), and size for the first partition. To select Linux LVM partition type:

    • Enter t to change the default partition type to Linux LVM

    • Type L to list the supported partition types

    • Type 8e to select Linux LVM type

  5. When prompted, enter partition number, sector, type (Linux LVM), and size for the second partition.

  6. Type p to view the partition table.

  7. Type w to save the partition table to disk.

  8. Run partprobe so that the kernel re-reads the new partition table.
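
Before creating the thinpool, you can confirm that the new partition is visible; this assumes /dev/sdb as in the example above:

lsblk /dev/sdb    # the new partition (for example, sdb1) should be listed with type "part"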

Setting Up a Thinpool for Docker

  1. Create a physical volume with the following command:

    # pvcreate [physical device name]

    Example:

    # pvcreate /dev/sdb1

  2. Create a volume group with the following command:

    # vgcreate [volume group name] [physical volume name]

    Example:

    # vgcreate docker /dev/sdb1

  3. Create a logical volume (LV) for the thinpool and bootstrap with the following command:

    # lvcreate -n [logical volume name] -l [size] [volume group name]

    For example, the data LV is 95% of the 'docker' volume group size (leaving free space allows for automatic expansion of either the data or metadata if space runs low, as a temporary stopgap):

    # lvcreate --wipesignatures y -n thinpool docker -l 95%VG

    # lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG

  4. Convert the pool to a thinpool with the following command:

    # lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

    Optionally, you can configure automatic extension of the thinpool by using an lvm profile.

    1. Create or open the lvm profile, for example /etc/lvm/profile/docker-thinpool.profile (the profile name must match the one used in the lvchange command in step 3).

    2. Specify a value for the parameters thin_pool_autoextend_threshold and thin_pool_autoextend_percent, each of which represents a percentage of the space used.

      For example:

      activation { thin_pool_autoextend_threshold=80 thin_pool_autoextend_percent=20 }

    3. Apply the lvm profile with the following command:

      # lvchange --metadataprofile docker-thinpool docker/thinpool

    4. Verify that the lvm profile is monitored with the following command:

      # lvs -o+seg_monitor

    5. Clear the graph driver directory with the following command, if Docker was previously started:

      # rm -rf /var/lib/docker/*

    6. Monitor the thinpool and volume group free space with the following commands:

      # lvs

      # lvs -a

      # vgs

    7. Check logs to see the auto-extension of the thinpool when it hits the threshold:

      # journalctl -fu dm-event.service

6.2.14 Enabling Installation Permissions for a sudo User

If you choose to run the CDF Installer as a sudo user, the root user must grant non-root (sudo) users installation permission before they can perform the installation. Make sure the provided user has permission to execute scripts under the temporary directory /tmp on all master and worker nodes.

There are two distinct file edits that need to be performed: first on the initial master node only, and then on all remaining master and worker nodes.

Edit the sudoers File on the Initial Master Node

Make the following modifications on the initial master node only.

WARNING: In the following commands, ensure there is, at most, a single space character after each comma that delimits parameters. Otherwise, you may get an error similar to the following when you attempt to save the file:

>>> /etc/sudoers: syntax error near line nn <<<

  1. Log in to the initial master node as the root user.

  2. Open the /etc/sudoers file by using visudo.

  3. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.

    Cmnd_Alias CDFINSTALL = <CDF_installation_package_directory>/scripts/precheck.sh, <CDF_installation_package_directory>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, /bin/chown

    1. Replace <CDF_installation_package_directory> with the directory where you unzipped the installation package.

      For example: /tmp/cdf-2019.05.0xxx.

    2. Replace <K8S_HOME> with the value defined from a command line. By default, <K8S_HOME> is /opt/arcsight/kubernetes.

  4. Add the following lines to the wheel users group, replacing <username> with your sudo user name (cdfuser in this example):

    %wheel ALL=(ALL) ALL

    cdfuser ALL=NOPASSWD: CDFINSTALL

    Defaults: <username> !requiretty

    Defaults: root !requiretty

  5. Locate the secure_path line in the sudoers file and ensure the following paths are present:

    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

    By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when installing the CDF Installer.

  6. Save the file.
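
For reference, a hedged sketch of how the added lines might look with the example values substituted: the installation package unzipped to /tmp/cdf-2019.05.0xxx, the default <K8S_HOME> of /opt/arcsight/kubernetes, and cdfuser as the sudo user.

Cmnd_Alias CDFINSTALL = /tmp/cdf-2019.05.0xxx/scripts/precheck.sh, /tmp/cdf-2019.05.0xxx/install, /opt/arcsight/kubernetes/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, /opt/arcsight/kubernetes/scripts/uploadimages.sh, /bin/chown

%wheel ALL=(ALL) ALL
cdfuser ALL=NOPASSWD: CDFINSTALL
Defaults: cdfuser !requiretty
Defaults: root !requiretty
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin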

Installing Components Using the sudo User

After completing the modifications to the sudoers files as described above, perform the following steps.

  1. Log in to the initial master node as the non-root sudo user that will perform the installation.

  2. Download the installer files to a directory where the non-root sudo user has write permissions.

  3. Run the CDF Installer using the sudo command (for more details, refer to your product's Deployment Guide).

Edit the sudoers File on the Remaining Master and Worker Nodes

Make the following modifications only on the remaining master and worker nodes.

WARNING: In the following commands, ensure there is, at most, a single space character after each comma that delimits parameters. Otherwise, you may get an error similar to the following when you attempt to save the file:

>>> /etc/sudoers: syntax error near line nn <<<

  1. Log in to each master and worker node.

  2. Open the /etc/sudoers file.

  3. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.

    Cmnd_Alias CDFINSTALL = /tmp/scripts/pre-check.sh, <ITOM_Suite_Foundation_Node>/install, <K8S_HOME>/uninstall.sh, /usr/bin/kubectl, /usr/bin/docker, /usr/bin/mkdir, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <K8S_HOME>/scripts/uploadimages.sh, /bin/chown

    1. Replace <ITOM_Suite_Foundation_Node> with the directory where you unzipped the installation package.

      For example: /tmp/ITOM_Suite_Foundation_2019.05.0xxx

    2. Replace <K8S_HOME> with the value defined from a command line. By default, <K8S_HOME> is /opt/arcsight/kubernetes.

  4. Add the following lines to the wheel users group, replacing <username> with your sudo user name (cdfuser in this example):

    %wheel ALL=(ALL) ALL

    cdfuser ALL=NOPASSWD: CDFINSTALL

    Defaults: <username> !requiretty

    Defaults: root !requiretty

  5. Locate the secure_path line in the sudoers file and ensure the following paths are present:

    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

    By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when installing the CDF Installer.

  6. Save the file.

Repeat the process for each remaining master and worker node.