2.4 Configuring Access and Communication Settings across your Protection Network

2.4.1 Open Port Requirements for the PlateSpin Server Host (Forge VM)

Table 2-2 describes the ports that must be open on the Forge VM to allow access to the PlateSpin Forge Web Interface.

Table 2-2 Open Port Requirements for the PlateSpin Server Host (Forge VM)

Port (Default)    Remarks

TCP 80            For HTTP communication

TCP 443           For HTTPS communication (if SSL is enabled)
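
As a quick sanity check, you can confirm from an administrative workstation that these ports are reachable on the Forge VM before you attempt to open the PlateSpin Forge Web Interface. The following Python sketch is a minimal example; the host name is a placeholder for your Forge VM address.

# Minimal reachability check for the PlateSpin Forge Web Interface ports.
# The host name below is a placeholder; substitute your Forge VM address.
import socket

FORGE_HOST = "forge.example.com"   # placeholder address of the Forge VM
PORTS = {80: "HTTP", 443: "HTTPS (if SSL is enabled)"}

for port, label in PORTS.items():
    try:
        with socket.create_connection((FORGE_HOST, port), timeout=5):
            print(f"TCP {port} ({label}): open")
    except OSError as err:
        print(f"TCP {port} ({label}): blocked or closed ({err})")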

2.4.2 Access and Communication Requirements for Workloads

Table 2-3 describes the software, network, and firewall requirements for workloads that you intend to protect by using PlateSpin Forge.

Table 2-3 Access and Communication Requirements for Workloads

Workload Type: All workloads

Prerequisites: Ping (ICMP echo request and response) support

Required Ports (Defaults): None

Workload Type: All Windows workloads. See Supported Windows Workloads.

Prerequisites:

  • Microsoft .NET Framework 3.5 Service Pack 1

  • Microsoft .NET Framework 4.0

Required Ports (Defaults): None

Workload Type: All Windows workloads. See Supported Windows Workloads.

Prerequisites:

  • Built-in Administrator or domain administrator account credentials (membership only in the local Administrators group is insufficient).

  • The Windows Firewall configured to allow File and Printer Sharing (a scripted example follows this row). Use one of these options:

    • Option 1, using Windows Firewall: Use the basic Windows Firewall Control Panel item (firewall.cpl) and select File and Printer Sharing in the list of exceptions.

      - OR -

    • Option 2, using Windows Firewall with Advanced Security: Use the Windows Firewall with Advanced Security utility (wf.msc) with the following Inbound Rules enabled and set to Allow:

      • File and Printer Sharing (Echo Request - ICMPv4-In)

      • File and Printer Sharing (Echo Request - ICMPv6-In)

      • File and Printer Sharing (NB-Datagram-In)

      • File and Printer Sharing (NB-Name-In)

      • File and Printer Sharing (NB-Session-In)

      • File and Printer Sharing (SMB-In)

      • File and Printer Sharing (Spooler Service - RPC)

      • File and Printer Sharing (Spooler Service - RPC-EPMAP)

Required Ports (Defaults): TCP 3725; NetBIOS (TCP 137-139); SMB (TCP 139, 445 and UDP 137, 138); RPC (TCP 135, 445)
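
If you manage many Windows workloads, you may prefer to script the Option 2 configuration rather than enable each inbound rule by hand. The sketch below simply wraps the built-in netsh advfirewall commands that enable the File and Printer Sharing rule group and allow inbound TCP 3725; it is an illustration only (not a PlateSpin-supplied tool), the rule name for TCP 3725 is an arbitrary example, and it must be run from an elevated prompt on the workload.

# Illustrative sketch: enable the Windows Firewall rules required for
# PlateSpin protection on a Windows workload (run from an elevated prompt).
import subprocess

COMMANDS = [
    # Enables the built-in "File and Printer Sharing" inbound rule group
    # (the ICMP echo, NetBIOS, SMB, and Spooler Service rules listed above).
    'netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes',
    # Allows inbound replication traffic on TCP 3725; the rule name is an
    # arbitrary example, not a predefined Windows or PlateSpin rule.
    'netsh advfirewall firewall add rule name="PlateSpin TCP 3725" '
    'dir=in action=allow protocol=TCP localport=3725',
]

for cmd in COMMANDS:
    print(f"Running: {cmd}")
    subprocess.run(cmd, shell=True, check=True)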

Workload Type: Windows Server 2003 (including SP1 Standard, SP2 Enterprise, and R2 SP2 Enterprise).

Prerequisites:

  NOTE: After enabling the required ports, run the following command at the server prompt to enable PlateSpin remote administration:

  netsh firewall set service RemoteAdmin enable

  For more information about netsh, see the Microsoft TechNet article, The Netsh Command Line Utility.

Required Ports (Defaults): TCP 3725, 135, 139, 445; UDP 137, 138, 139

Workload Type: All Linux workloads. See Supported Linux Workloads.

Prerequisites: Secure Shell (SSH) server

Required Ports (Defaults): TCP 22, 3725
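
A quick way to confirm that a Linux workload meets the SSH requirement is to read the SSH identification banner rather than merely testing that TCP 22 is open. The following sketch assumes a placeholder host name for one of your Linux workloads.

# Confirms that an SSH server is answering on a Linux workload by reading the
# SSH identification banner (for example, "SSH-2.0-OpenSSH_8.0").
import socket

WORKLOAD = "linux-workload.example.com"   # placeholder host name

with socket.create_connection((WORKLOAD, 22), timeout=5) as sock:
    banner = sock.recv(256).decode(errors="replace").strip()
    print(f"TCP 22 banner: {banner}")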

2.4.3 Access and Communication Requirements for Containers

Table 2-4 describes the software, network, and firewall requirements for the supported workload containers.

Table 2-4 Access and Communication Requirements for Containers

System: All containers

Prerequisites: Ping (ICMP echo request and response) capability

Required Ports (Defaults): None

System: All VMware containers. See Supported VM Containers.

Prerequisites:

  • VMware account with an Administrator role

  • VMware Web services API and file management API

Required Ports (Defaults): HTTPS (TCP 443)

System: vCenter Server

Prerequisites: The user with access must be assigned the appropriate roles and permissions. Refer to the pertinent release of VMware documentation for more information.

Required Ports (Defaults): HTTPS (TCP 443)
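
Before inventorying a VMware container or vCenter Server, you can confirm that the VMware Web services API is reachable over HTTPS (TCP 443). The following sketch requests the API's version document from a placeholder host and skips certificate verification, which is common with self-signed vCenter or ESXi certificates.

# Confirms that the VMware Web services API answers over HTTPS (TCP 443).
# The host name is a placeholder; certificate checks are skipped because many
# vCenter and ESXi installations use self-signed certificates.
import ssl
import urllib.request

VMWARE_HOST = "vcenter.example.com"   # placeholder vCenter or ESXi address
URL = f"https://{VMWARE_HOST}/sdk/vimServiceVersions.xml"

context = ssl._create_unverified_context()
with urllib.request.urlopen(URL, timeout=10, context=context) as response:
    print(f"HTTP {response.status} from {URL}")
    print(response.read(200).decode(errors="replace"))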

2.4.4 Protection Across Public and Private Networks Through NAT

In some cases, a source, a target, or PlateSpin Forge itself might be located in an internal (private) network behind a network address translator (NAT) device and be unable to communicate with its counterpart during protection.

PlateSpin Forge enables you to address this issue, depending on which of the following hosts is located behind the NAT device:

  • PlateSpin Server: In your server’s PlateSpin Server Configuration tool, record the additional IP addresses assigned to that host. See Configuring the Application to Function through NAT.

  • Workload: When you are attempting to add a workload, specify the public (external) IP address of that workload in the discovery parameters.

  • Failed-over VM: During failback, you can specify an alternative IP address for the failed-over workload in Failback Details (Workload to VM).

  • Failback Target: During an attempt to register a failback target, when prompted to provide the IP address of the PlateSpin Server, provide either the local address of the Forge VM or one of its public (external) addresses recorded in the server’s PlateSpin Server Configuration tool (see PlateSpin Server above).

Configuring the Application to Function through NAT

To enable the PlateSpin Server to function across NAT-enabled environments, you must record the additional IP addresses of your PlateSpin Server in the PlateSpin Server Configuration tool. The server reads these addresses from its configuration database at startup.

For information on the update procedure, see Configuring PlateSpin Server Behavior through XML Configuration Parameters.

2.4.5 Overriding the Default bash Shell for Executing Commands on Linux Workloads

By default, the PlateSpin Server uses the /bin/bash shell when executing commands on a Linux source workload.

If required, you can override the default shell by modifying the corresponding registry key on the PlateSpin Server.

See Knowledgebase Article 7010676.

2.4.6 Requirements for VMware DRS Clusters as Containers

To be a valid protection target, your VMware DRS cluster must be added to the set of containers (inventoried) as a VMware Cluster. You should not attempt to add a DRS Cluster as a set of individual ESX servers. See Adding Containers (Protection Targets).

In addition, your VMware DRS cluster must meet the following configuration requirements:

  • DRS is enabled and set to either Partially Automated or Fully Automated.

  • At least one datastore is shared among all the ESX servers in the VMware Cluster.

  • At least one vSwitch and virtual port-group, or vNetwork Distributed Switch, is common to all the ESX servers in the VMware Cluster.

  • The failover workloads (VMs) for each protection contract are placed exclusively on datastores, vSwitches, and virtual port-groups that are shared among all the ESX servers in the VMware Cluster.
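
One way to confirm the shared datastore and network requirements before adding the cluster is to compare what every ESX server in the cluster can see. The sketch below uses the third-party pyVmomi library (not part of PlateSpin Forge) with placeholder connection details, and reports the datastores and networks common to all hosts in a named DRS cluster; distributed virtual port groups appear in the same network list.

# Reports the datastores and networks visible to every host in a DRS cluster,
# using the third-party pyVmomi library. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"          # placeholder vCenter address
USER = "administrator@vsphere.local"     # placeholder account
PASSWORD = "password"                    # placeholder password
CLUSTER_NAME = "ProtectionCluster"       # placeholder DRS cluster name

context = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER_NAME)

    shared_datastores = None
    shared_networks = None
    for host in cluster.host:
        datastores = {ds.name for ds in host.datastore}
        networks = {net.name for net in host.network}
        shared_datastores = datastores if shared_datastores is None else shared_datastores & datastores
        shared_networks = networks if shared_networks is None else shared_networks & networks

    print("Datastores shared by all hosts:", sorted(shared_datastores or []))
    print("Networks shared by all hosts:", sorted(shared_networks or []))
finally:
    Disconnect(si)

If a datastore or port group you intend to use is missing from the shared lists, adjust the cluster configuration before inventorying the cluster as a container.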