34.0 Migrating Data

You can use the data_uploader.sh script to migrate data to one of the following data storage components:

  • Kafka: You can migrate both event and raw data to Kafka. Run the script individually for event data and raw data. The script migrates the data to the Kafka topics.

    You can specify customizations such as compressing data during the migration, sending data in batches, and so on. To specify these customizations, create a properties file and add the required properties in key-value format. For example, you can add properties as follows:

    compression.type=lz4

    batch.size=20000

    For information about Kafka properties, see the Kafka documentation. Set these properties and their values carefully, because the script does not validate them.

    NOTE: Ensure that the Sentinel server can resolve all Kafka broker hostnames to valid IP addresses for the entire Kafka cluster. If DNS is not set up to enable this, add the Kafka broker hostnames to the /etc/hosts file of the Sentinel server.

  • Elasticsearch: You can migrate only event data to Elasticsearch. Before you migrate the data, ensure that you have enabled event visualization. For more information, see Enabling Event Visualization.
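When DNS cannot resolve the Kafka broker hostnames, entries like the following can be added to the /etc/hosts file of the Sentinel server. The addresses and hostnames below are placeholders; substitute the actual values for your Kafka cluster:

```
# Placeholder broker entries -- replace with your cluster's
# actual IP addresses and hostnames.
192.0.2.11  kafka-broker1.example.com
192.0.2.12  kafka-broker2.example.com
192.0.2.13  kafka-broker3.example.com
```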

The script transfers data for the date range (from and to) that you specify. When you run the script, it displays the mandatory and optional parameters required to initiate the data migration, along with information about the relevant properties for the desired data storage component.

The script must be run as the novell user. Therefore, ensure that the data directories and any files you specify have the appropriate permissions for the novell user. By default, the script migrates data from primary storage. To migrate data from secondary storage, specify the appropriate path for secondary storage when running the script.
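As a pre-check, you can flag files in the data directory that lack the user-read bit before running the script as the novell user. The sketch below builds a throwaway sample tree in a temporary directory purely for illustration; in practice you would point `find` at your actual primary or secondary storage path and fix any files it reports (for example, with `chown -R novell:novell` or `chmod u+r`):

```shell
# Sketch: flag files that the owning user cannot read. The sample tree
# below is illustrative; point DATA_DIR at your real storage path.
DATA_DIR=$(mktemp -d)                       # stand-in for the data directory
mkdir -p "$DATA_DIR/events_P1"
touch "$DATA_DIR/events_P1/part.dat"
chmod 000 "$DATA_DIR/events_P1/part.dat"    # simulate a bad permission

# Files listed here need a chmod/chown before data_uploader.sh can
# read them as the novell user.
unreadable=$(find "$DATA_DIR" -type f ! -perm -u+r)
echo "files needing attention: $unreadable"

chmod -R u+rwX "$DATA_DIR"                  # clean up the sample tree
rm -rf "$DATA_DIR"
```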

To migrate data:

  1. Log in to the Sentinel server as the novell user.

  2. Run the following script:

    /opt/novell/sentinel/bin/data_uploader.sh

  3. Follow the on-screen instructions and run the script again with the required parameters.

The migrated data uses the retention period configured on the target server.

After the data migration completes, the script records the status, such as the partitions that migrated successfully, the partitions that failed to migrate, the number of events migrated, and so on. For partitions dated the previous day or the current day, the data transfer status shows IN_PROGRESS to account for events that may arrive late.

Run the script again if the data migration did not complete successfully or if the data migration status for some partitions still indicates IN_PROGRESS. When you rerun the script, it first checks the status file to determine which partitions were already migrated, and then migrates only the remaining ones. The script maintains its logs in the /var/opt/novell/sentinel/log/data_uploader.log file for troubleshooting purposes.
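To decide whether a rerun is needed, the log can be scanned for partitions that failed or are still in progress. The log path comes from this documentation, but the FAILED/IN_PROGRESS keywords below are an assumption about the log's wording, so adjust the pattern to match the entries your log actually contains:

```shell
# Scan the data_uploader log for partitions that may need a rerun.
# The status keywords are assumptions; check your log's actual wording.
LOG=${LOG:-/var/opt/novell/sentinel/log/data_uploader.log}
pending=$(grep -E 'FAILED|IN_PROGRESS' "$LOG" 2>/dev/null)
if [ -n "$pending" ]; then
  echo "partitions that may need a rerun:"
  echo "$pending"
else
  echo "no failed or in-progress partitions found in $LOG"
fi
```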