2.3 Access and Communication Requirements across your Protection Network

2.3.1 Access and Communication Requirements for Workloads

The following software, network, and firewall requirements are for workloads that you intend to protect by using PlateSpin Protect.

Table 2-2 Access and Communication Requirements for Workloads

All workloads

  • Prerequisites: Ping (ICMP echo request and response) support

All Windows workloads

  • Prerequisites: Microsoft .NET Framework version 2.0 or 3.5 SP1

Windows Server 2008

  • Prerequisites:

    • Built-in Administrator or domain administrator account credentials (membership in the local Administrators group alone is insufficient).

    • The Windows Firewall configured to allow File and Printer Sharing, using one of these options (a command-line equivalent of Option 2 follows this entry):

      • Option 1, using Windows Firewall: Use the basic Windows Firewall Control Panel item (firewall.cpl) and select File and Printer Sharing in the list of exceptions.

        - OR -

      • Option 2, using Windows Firewall with Advanced Security: Use the Windows Firewall with Advanced Security utility (wf.msc) with the following Inbound Rules enabled and set to Allow:

        • File and Printer Sharing (Echo Request - ICMPv4-In)

        • File and Printer Sharing (Echo Request - ICMPv6-In)

        • File and Printer Sharing (NB-Datagram-In)

        • File and Printer Sharing (NB-Name-In)

        • File and Printer Sharing (NB-Session-In)

        • File and Printer Sharing (SMB-In)

        • File and Printer Sharing (Spooler Service - RPC)

        • File and Printer Sharing (Spooler Service - RPC-EPMAP)

  • Required Ports (Defaults):

    • TCP 3725

    • NetBIOS 137 - 139

    • SMB (TCP 139, 445 and UDP 137, 138)

    • TCP 135/445
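On English-language systems, you can enable the entire File and Printer Sharing rule group from an elevated command prompt instead of selecting each rule individually in wf.msc (the rule group name is localized on non-English systems):

netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes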

Windows Server 2003 (including SP1 Standard, SP2 Enterprise, and R2 SP2 Enterprise)

  • Prerequisites:

    NOTE: After enabling the required ports, run the following command at the server prompt to enable PlateSpin remote administration:

    netsh firewall set service RemoteAdmin enable

    For more information about netsh, see the Microsoft TechNet article at http://technet.microsoft.com/en-us/library/cc785383%28v=ws.10%29.aspx.

  • Required Ports (Defaults):

    • TCP: 3725, 135, 139, 445

    • UDP: 137, 138, 139
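If you prefer to open the required ports from the command line on Windows Server 2003, the legacy netsh firewall context supports per-port exceptions; for example (the exception name shown is arbitrary):

netsh firewall set portopening protocol=TCP port=3725 name=PlateSpinReplication mode=ENABLE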

All Linux workloads

  • Prerequisites: Secure Shell (SSH) server

  • Required Ports (Defaults): TCP 22, 3725
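To confirm the SSH prerequisite on a Linux workload, verify that the SSH daemon is running and listening on its default port (commands vary by distribution; these are typical for init-based distributions of this era):

service sshd status
netstat -tln | grep ':22'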

2.3.2 Access and Communication Requirements for Containers

The following software, network, and firewall requirements are for the supported workload containers.

Table 2-3 Access and Communication Requirements for Containers

All containers

  • Prerequisites: Ping (ICMP echo request and response) capability

VMware ESX/ESXi 4.1; VMware ESXi 5.0

  • Prerequisites:

    • VMware account with an Administrator role

    • VMware Web services API and file management API

  • Required Ports (Defaults): HTTPS (TCP 443)

vCenter Server

  • Prerequisites: The user with access must be assigned the appropriate roles and permissions. Refer to the VMware documentation for your release for more information.

  • Required Ports (Defaults): HTTPS (TCP 443)
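A quick way to confirm that the VMware Web services API answers on TCP 443 is to request the API's WSDL over HTTPS from any machine with curl (the host name below is a placeholder for your ESX/ESXi or vCenter Server address; -k skips validation of a self-signed certificate):

curl -k https://your-vmware-host/sdk/vimService.wsdl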

2.3.3 Open Port Requirements for PlateSpin Server Hosts

The following open port requirements are for PlateSpin Server hosts.

Table 2-4 Open Port Requirements for PlateSpin Server Hosts

  • TCP 80 (default): For HTTP communication

  • TCP 443 (default): For HTTPS communication (if SSL is enabled)
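From a workload or container network, you can spot-check that these ports answer on the PlateSpin Server host (the host name is a placeholder; -k skips certificate validation if the server uses a self-signed SSL certificate):

curl -I http://your-platespin-server/
curl -kI https://your-platespin-server/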

2.3.4 Protection Across Public and Private Networks Through NAT

In some cases, a source, a target, or PlateSpin Protect itself might be located in an internal (private) network behind a network address translation (NAT) device and is therefore unable to communicate with its counterpart during protection.

PlateSpin Protect enables you to address this issue, depending on which of the following hosts is located behind the NAT device:

  • PlateSpin Server: In your server’s PlateSpin Server Configuration tool, record the additional IP addresses assigned to that host. See Configuring the Application to Function through NAT.

  • Target Container: When you are attempting to discover a container (such as VMware ESX), specify the public (or external) IP address of that host in the discovery parameters.

  • Workload: When you are attempting to add a workload, specify the public (external) IP address of that workload in the discovery parameters.

  • Failed-over VM: During failback, you can specify an alternative IP address for the failed-over workload in Failback Details (Workload to VM).

  • Failback Target: When you register a failback target and are prompted to provide the IP address of the PlateSpin Server, provide either the local address of the Protect Server host or one of its public (external) addresses recorded in the server’s PlateSpin Server Configuration tool (see PlateSpin Server, above).

Configuring the Application to Function through NAT

To enable the PlateSpin Server to function across NAT-enabled environments, you must record the additional IP addresses of your PlateSpin Server in the PlateSpin Server Configuration tool’s database; the server reads these values at startup.

For information on the update procedure, see Configuring PlateSpin Server Behavior through XML Configuration Parameters.
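As a shape reference only, such a setting is typically stored as a key/value pair in the server’s XML configuration. The key name and addresses below are illustrative assumptions, not confirmed values; use the parameter name given in the procedure referenced above:

<appSettings>
  <!-- Hypothetical key name; semicolon-separated public addresses of the PlateSpin Server -->
  <add key="AlternateServerAddresses" value="203.0.113.10;10.0.0.5" />
</appSettings>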

2.3.5 Overriding the Default bash Shell for Executing Commands on Linux Workloads

By default, the PlateSpin Server uses the /bin/bash shell when executing commands on a Linux source workload.

If required, you can override the default shell by modifying the corresponding registry key on the PlateSpin Server.

See KB Article 7010676.
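The KB article identifies the exact registry location. As a purely illustrative sketch of the kind of change involved (the key path and value name below are hypothetical, not the documented ones), the override is a string value naming the replacement shell:

reg add "HKLM\SOFTWARE\PlateSpin" /v LinuxDefaultShell /t REG_SZ /d /bin/sh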

2.3.6 Requirements for VMware DRS Clusters as Containers

To be a valid protection target, your VMware DRS cluster must be added to the set of containers (inventoried) as a VMware Cluster. You should not attempt to add a DRS Cluster as a set of individual ESX servers. See Adding Containers (Protection Targets).

In addition, your VMware DRS cluster must meet the following configuration requirements (a PowerCLI spot-check is sketched after this list):

  • DRS is enabled and set to either Partially Automated or Fully Automated.

  • At least one datastore is shared among all the ESX servers in the VMware Cluster.

  • At least one vSwitch and virtual port-group, or vNetwork Distributed Switch, is common to all the ESX servers in the VMware Cluster.

  • The failover workloads (VMs) for each protection contract are placed exclusively on datastores, vSwitches, and virtual port-groups that are shared among all the ESX servers in the VMware Cluster.
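One way to spot-check the first two requirements with VMware PowerCLI, assuming you have already connected with Connect-VIServer (the cluster name is a placeholder):

$cluster = Get-Cluster -Name "ProtectCluster"

# DRS state and automation level (expect DrsEnabled = True and
# DrsAutomationLevel = PartiallyAutomated or FullyAutomated)
$cluster | Select-Object Name, DrsEnabled, DrsAutomationLevel

# Datastores visible to every host in the cluster (that is, shared by all hosts)
$hosts = @($cluster | Get-VMHost)
$hosts | ForEach-Object { Get-Datastore -VMHost $_ } |
    Group-Object Name |
    Where-Object { $_.Count -eq $hosts.Count } |
    Select-Object -ExpandProperty Name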