The disk subsystem is the most common bottleneck. I/O operations take relatively long to complete, queues grow, and the result is high disk utilization combined with idle CPU cycles. Use the iostat tool during expected peak loads to measure average response times.
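A minimal sketch of interpreting iostat output: the snippet below parses a sample of `iostat -x` style output and flags devices with a high average wait time or utilization. The sample data and the thresholds (20 ms, 80%) are illustrative assumptions, and column positions vary across sysstat versions, so adjust the field numbers for your system.

```shell
# Illustrative sample of `iostat -x` extended statistics (not real data).
iostat_sample='Device  r/s   w/s   await  %util
sda     12.0  340.5  28.40  96.10
sdb      0.5    1.2   1.10   3.20'

# Flag devices whose average I/O wait exceeds 20 ms or utilization exceeds 80%.
echo "$iostat_sample" | awk 'NR > 1 && ($4 > 20 || $5 > 80) {
    printf "%s: await=%sms util=%s%%\n", $1, $4, $5
}'
```

On a live system, run `iostat -x 5` during peak load and watch the same columns over several intervals rather than a single sample.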
Disk read, write, and update operations can be sequential or random. Random reads and updates are the most common access pattern in eDirectory deployments.
Some solutions for random workloads:
Increase the RAM. More memory allows the filesystem layer to cache frequently used and read-ahead data, and allows the FLAIM subsystem to cache the DIB.
Use dedicated volumes for the DIB, and separate dedicated volumes for the RFL and other logs. Filesystem performance improves for volumes created closer to the spindle.
Because disks develop increasing latency over time as a result of fragmentation, defragment them periodically.
Add separate disk drives for the FLAIM RFL. This type of logging performs well on high-speed disks.
Use a RAID 10 (1+0) environment with more disk drives.
Files created by eDirectory can grow to 4 GB. Filesystems that are optimized to handle large files work efficiently with eDirectory.
For Solaris™, the Veritas* VxFS filesystem is an extent-based file system whose metadata is optimized for large files. The UFS filesystem is indirect block-based: its metadata is stored in a larger number of blocks and can even be scattered for large files, which makes UFS slower with large files.
For Linux™, the Reiser filesystem is a fast journaling file system and performs better than the ext3 filesystem on large DIB sets. However, the write-back journaling mode of ext3 is known to match the performance of the Reiser filesystem, although the default ordered mode provides better data consistency. XFS is a high-performance journaling file system, capable of handling large files and offering smooth data transfers. eDirectory 9.1 is supported on SLES 11 32-bit and 64-bit platforms with the XFS file system.
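As a sketch of the mount options involved, an /etc/fstab entry for a dedicated eDirectory volume might look like the fragments below. The device name and mount point are placeholders; `data=writeback` is the ext3 write-back journaling mode mentioned above.

```
# ext3 in write-back journaling mode (faster, weaker ordering guarantees)
/dev/sdb1  /var/opt/novell/eDirectory  ext3  defaults,data=writeback  0 2

# XFS, suited to large files such as a large DIB
/dev/sdb1  /var/opt/novell/eDirectory  xfs   defaults,noatime         0 2
```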
FLAIM supports a block size of 4 KB or 8 KB. By default, it is 4 KB. This is the same as the default block size on Linux (tune2fs -l device). However, on Solaris, the UFS filesystem is created with a default block size of 8 KB (df -g mountpoint). If the FLAIM block size is smaller than the filesystem block size, partial block writes can happen. If the database block size is larger than the filesystem block size, individual block reads and writes are split into a series of distinct physical I/O operations. Therefore, you should always keep the FLAIM block size the same as the filesystem block size.
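As a quick check, the sketch below compares the block size of the filesystem holding the DIB against the FLAIM block size. The DIB path is an assumption for a typical Linux install (adjust for your deployment), and `stat -f` is the GNU coreutils form; on Solaris use `df -g` as noted above.

```shell
# DIB directory is an assumed example path; adjust for your installation.
DIB_DIR="${DIB_DIR:-/var/opt/novell/eDirectory/data/dib}"
CHECK_DIR="$DIB_DIR"
[ -d "$CHECK_DIR" ] || CHECK_DIR="."      # fall back so the sketch still runs

fs_block=$(stat -f -c %s "$CHECK_DIR")    # filesystem block size in bytes
flaim_block=4096                          # FLAIM default; 8192 if configured

if [ "$fs_block" -eq "$flaim_block" ]; then
    echo "OK: filesystem and FLAIM block sizes match ($fs_block bytes)"
else
    echo "Mismatch: filesystem=$fs_block bytes, FLAIM=$flaim_block bytes"
fi
```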
Block size can be controlled only at DIB creation time. To create the DIB with an 8 KB block size, add the line "blocksize=8192" to _ndsdb.ini before the database is created.
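A minimal sketch of that step: for illustration this writes _ndsdb.ini to a temporary directory; on a real server the file belongs in the DIB directory (for example /var/opt/novell/eDirectory/data/dib, an assumed path) and must exist before eDirectory creates the database.

```shell
# Write to a temp dir for illustration; use your actual DIB directory in production.
DIB_DIR=$(mktemp -d)
echo "blocksize=8192" >> "$DIB_DIR/_ndsdb.ini"
cat "$DIB_DIR/_ndsdb.ini"
```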
Choosing the right block size depends on the average FLAIM record size in your deployment. Empirical testing against a representative set of test data is required to determine which block size performs better.