2.2 Hardware Recommendations

Sentinel has a highly scalable architecture, and if high event rates are expected, components can be distributed across several machines to achieve the best performance for the system. As you plan your system, make sure you take into account the following considerations:

2.2.1 Architecture Considerations

There are many factors to consider when designing a Sentinel system:

  • Event rate (events per second, or EPS)

  • Geographic/network location of event sources, and bandwidth between networks

  • Available hardware

  • Preferred operating systems

  • Plans for future scalability

  • Amount of event filtering expected

  • Local data retention policies

  • Desired number and complexity of correlation rules

  • Expected number of incidents per day

  • Expected number of workflows to be managed per day

  • Number of users logging in to the system

  • Vulnerability and asset infrastructure

The most significant factor in the Sentinel system design is the event rate; almost every component of the Sentinel architecture is affected by increasing event rates. In a high-event-rate environment, the greatest demand is placed on the database, which is I/O-dependent and might be simultaneously handling inserts of hundreds or thousands of events per second, object creation by multiple users, workflow process updates, simple historical queries from the Sentinel Control Center, and long-term reports from the Crystal Reports Server. Therefore, Novell makes the following recommendations:

  • The database should be installed without any other Sentinel components.

  • The database server should be dedicated to Sentinel operations. Additional applications or Extract Transform Load (ETL) processes might impact database performance.

  • The database server should have a high-speed storage array that meets the I/O requirements based on the event insertion rates.

  • A dedicated database administrator should regularly evaluate and maintain the following aspects of the database:

    • Size

    • I/O operations

    • Disk space

    • Memory

    • Indexing

    • Transaction logs

In low-event-rate environments (for example, EPS < 25), these recommendations can be relaxed, because the database and other components use fewer resources.

This section provides general hardware recommendations as guidance for Sentinel system design. The recommendations are organized by event rate range and are based on the following assumptions:

  • The event rate is at the high end of the EPS range.

  • The average event size is 600 bytes.

  • All events are stored in the database (that is, there are no filters to drop events).

  • Thirty days' worth of data is stored online in the database.

  • Storage space for Advisor data is not included in the specifications mentioned in the tables later in this section.

  • The Sentinel Server has a default 5 GB of disk space for temporarily caching event data that fails to insert into the database.

  • The Sentinel Server also has a default 5 GB of disk space for events that fail to be written to aggregation event files.

  • The optional Advisor subscription requires an additional 50 GB of disk space on the database server.
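Under the assumptions above (600-byte average event, no filtering, 30 days online), the raw event volume implied by a given event rate can be estimated with simple arithmetic. The following sketch is illustrative only; the helper name and the decimal-gigabyte convention are choices of this example, and the actual database footprint also includes indexes and summary data, so treat the result as a lower bound rather than a substitute for the tested disk figures in the tables later in this section.

```python
# Back-of-envelope estimate of raw 30-day online event volume.
# Assumptions (from the list above): 600-byte average event, no filtering.
# Indexes and summary tables add to this, so this is a floor, not a quota.

SECONDS_PER_DAY = 86400

def raw_event_volume_gb(eps, avg_event_bytes=600, retention_days=30):
    """Raw event bytes stored online, in decimal gigabytes (1 GB = 10^9 bytes)."""
    return eps * avg_event_bytes * SECONDS_PER_DAY * retention_days / 1e9

# At the 1350 EPS proof-of-concept rate, 30 days of raw events is roughly 2.1 TB.
print(round(raw_event_volume_gb(1350)))  # ~2100 GB
```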

The hardware recommendations for a Sentinel implementation can vary based on the individual implementation, so you should consult Novell Consulting Services prior to finalizing the Sentinel architecture. The recommendations in this section can be used as guidelines.

NOTE: The Sentinel Server machine with Data Access Server (DAS) must have a local or shared striped disk array (RAID) with a minimum of four disk spindles because of high event loads and local caching.

The distributed hosts must be connected to the other Sentinel Server hosts through a single high-speed switch (GigE) in order to prevent network traffic bottlenecks.
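As a rough sanity check on this requirement, the sustained event traffic between hosts can be estimated from the event rate and average event size. This sketch is an assumption-laden illustration: the helper name and the 2x factor for framing, metadata, and retransmission overhead are not documented figures.

```python
# Rough estimate of sustained inter-host event traffic.
# The 2x overhead_factor (framing, metadata, retransmission) is an assumption
# of this sketch, not a figure from the Sentinel documentation.

def event_traffic_mbps(eps, avg_event_bytes=600, overhead_factor=2.0):
    """Approximate sustained throughput in megabits per second."""
    return eps * avg_event_bytes * 8 * overhead_factor / 1e6

# Even at 5000 EPS the estimate stays below 50 Mbit/s.
print(event_traffic_mbps(5000))  # 48.0
```

Even the highest supported event rate uses only a small fraction of a GigE link, which suggests the single-switch recommendation is mainly about avoiding contention on shared or congested network paths rather than raw link capacity.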

Novell recommends that the Crystal Reports Server be installed on its own dedicated machine, particularly if the database is large or reporting usage is heavy. Crystal can be installed on the same machine as the database if the database is small, the reporting usage is light, and the database is installed on either Windows or Linux and not Solaris.

2.2.2 Supported Hardware

When you install Sentinel on Linux or Windows, the Sentinel server and database components can run on x86 (32-bit) or x86-64 (64-bit) hardware, with some exceptions based on the operating system, as described in Section 2.2.1, Architecture Considerations. Sentinel is certified on AMD Opteron and Intel Xeon hardware. Itanium servers are not supported.

For Solaris, the SPARC architecture is supported.

2.2.3 Proof of Concept Configuration

The proof of concept configuration supports up to 1350 events per second (EPS). This configuration is suitable for demonstrations or limited proofs of concept and can be installed by using the Simple option in the Sentinel installer. This configuration is not recommended for use in a production system and has been tested only with the configuration described below.

Table 2-4 Hardware for Proof of Concept

Function | Sizing | Model
Sentinel Server + Database (Oracle) | 5 GB RAM; software RAID 5 with 5 SATA hard drives | SLES 10 SP1, two 64-bit dual-core processors (tested with two Intel Xeon 5160s, 3.00 GHz)
Collector Manager, Correlation Engine, and Sentinel Control Center | 4 GB RAM | Windows 2003 SP2, two 32-bit single-core processors (tested with Intel Xeon, 2.4 GHz)
Crystal Reports Server | 4 GB RAM; 40 GB disk space | One 32-bit dual-core processor (tested with Intel Xeon 5150, 2.66 GHz)

Table 2-5 System Setup for Proof of Concept

Attribute | Rating | Comments
Collectors deployed per Collector Manager | 3 |
Rules deployed per Correlation Engine | 10 |
Active Views running | 10 |
Number of simultaneous users | 3 |
Number of maps deployed | 5 | The largest map is 40 KB with over 800 rows.

2.2.4 Production Configuration

This production configuration supports up to 3200 EPS. The Sentinel components are distributed to enable a higher event rate than the proof of concept configuration.

  • To achieve optimal performance, the Oracle database uses a StorCase disk array (16 disks) to store data files, and a separate local drive to hold the Oracle Redo log.

  • To achieve optimal performance on the Sentinel server, the file directory that holds DAS aggregation data and the insertErrorBuffer is pointed to a separate local hard drive.

Table 2-6 Hardware for Production Configuration

Function | Sizing | Model
Sentinel Server and Correlation Engine | 4 GB RAM; 90 GB disk space | SLES 10 SP1, two 64-bit dual-core processors (tested with two Intel Xeon 5160s, 3.00 GHz)
Database (Oracle) | 4 GB RAM; 3 TB+ disk space | SLES 10 SP1, two 64-bit dual-core processors (tested with two Opteron 275s, 2.2 GHz), StorCase disk array, and software RAID 5
Collector Manager 1 | 4 GB RAM; 20 GB disk space | SLES 10 SP1, two 64-bit dual-core processors (tested with two Opteron 275s, 2.2 GHz)
Collector Manager 2 | 4 GB RAM; 20 GB disk space | Windows 2003, one dual-core processor (tested with dual-core Intel Xeon, 2.50 GHz)
Crystal Reports Server | 4 GB RAM; 40 GB disk space | One 32-bit dual-core processor (tested with Intel Xeon 5150, 2.66 GHz)

Table 2-7 System Setup for Production Configuration

Attribute | Rating | Comments
Collectors deployed per Collector Manager | 10 | The Collector Manager 1 configuration handles up to 1750 EPS; the Collector Manager 2 configuration handles up to 850 EPS. A typical collector running alone can output up to 600 EPS, but adding more collectors to a Collector Manager or using collectors with more complex parsing will reduce the per-collector output.
Rules deployed per Correlation Engine | 20 |
Active Views running | 20 |
Number of simultaneous users | 5 |
Number of maps deployed | 5 | The largest map is 40 KB with over 800 rows.
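The tested Collector Manager throughput figures can be used for rough capacity planning. The helper below is hypothetical and not part of the product; the 1750 EPS default matches the tested Collector Manager 1 configuration, and real throughput drops as more collectors or more complex parsing are added, so the result should be treated as a minimum host count.

```python
# Hypothetical planning helper: minimum number of Collector Manager hosts
# for a target event rate, given a tested per-host capacity.
# 1750 EPS is the tested Collector Manager 1 figure; complex parsing or
# additional collectors per manager lower the real capacity.
import math

def collector_managers_needed(target_eps, per_manager_eps=1750):
    return math.ceil(target_eps / per_manager_eps)

# The 3200 EPS production rate needs at least two hosts at the tested rating,
# matching the two Collector Managers in this configuration.
print(collector_managers_needed(3200))  # 2
```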

2.2.5 High-Performance Production Configuration

The high-performance production configuration supports up to 5000 EPS.

  • To achieve optimal performance, the Oracle database uses a StorCase disk array (16 disks) to store data files and a separate local hard drive to hold the Oracle Redo log.

  • A secondary DAS_Binary process (which is responsible for event inserts into the database) is installed on a dedicated machine to reduce the CPU utilization on the primary server.

  • To achieve optimal performance on both DAS machines, the file directory that holds DAS aggregation data and the insertErrorBuffer is pointed to a separate local hard drive.

Table 2-8 Hardware for High-Performance Production Configuration

Function | Sizing | Model
Sentinel Server (including primary DAS_Binary process) and Correlation Engine | 4 GB RAM; 90 GB disk space | SLES 10 SP1, two 64-bit dual-core processors (tested with two Intel Xeon 5160s, 3.00 GHz)
Database (Oracle) | 4 GB RAM; 4 TB+ disk space | SLES 10 SP1, two 64-bit dual-core processors (tested with two Opteron 275s, 2.2 GHz), StorCase disk array, and software RAID 5
Collector Manager 1 and secondary DAS_Binary process | 4 GB RAM; 40 GB disk space | SLES 10 SP1, two 64-bit dual-core processors (tested with two Opteron 275s, 2.2 GHz)
Collector Manager 2 | 4 GB RAM; 20 GB disk space | Windows 2003, one dual-core processor (tested with dual-core Intel Xeon, 2.50 GHz)
Crystal Reports Server | 4 GB RAM; 40 GB disk space | One 32-bit dual-core processor (tested with Intel Xeon 5150, 2.66 GHz)

Table 2-9 System Setup for High-Performance Production Configuration

Attribute | Rating | Comments
Collectors deployed per Collector Manager | 10 | The Collector Manager 1 configuration handles up to 1750 EPS; the Collector Manager 2 configuration handles up to 850 EPS. A typical collector running alone can output up to 600 EPS, but adding more collectors to a Collector Manager or using collectors with more complex parsing will reduce the per-collector output.
Rules deployed per Correlation Engine | 20 |
Active Views running | 20 |
Number of simultaneous users | 4 |
Number of maps deployed | 5 | The largest map is 40 KB with over 800 rows.

2.2.6 Virtual Environments

Sentinel 6.1 has been tested extensively on VMware ESX Server, and Novell fully supports Sentinel running in this environment. Performance in a virtual environment can be comparable to the results achieved on a physical machine, provided that the virtual environment is given the same memory, CPU, disk space, and I/O as the physical machine recommendations.