The NetIQ Import Conversion Export utility lets you:
Import data from LDIF files to an LDAP directory.
Export data from the LDAP directory to an LDIF file.
Migrate data between LDAP servers.
Perform a schema compare and update.
Load information into eDirectory using a template.
Import schema from SCH files to an LDAP directory.
The NetIQ Import Conversion Export utility manages a collection of handlers that read or write data in a variety of formats. Source handlers read data, and destination handlers write data. A single executable module can be both a source and a destination handler. The engine receives data from a source handler, processes the data, then passes the data to a destination handler.
For example, if you want to import LDIF data into an LDAP directory, the NetIQ Import Conversion Export engine uses an LDIF source handler to read an LDIF file and an LDAP destination handler to send the data to the LDAP directory server. See Troubleshooting LDIF Files for more information on LDIF file syntax, structure, and debugging.
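For reference, a minimal LDIF fragment of the kind the LDIF handlers process might look like the following. The entry names and attribute values are illustrative only:

```
version: 1

# A content record (no changetype); with the -a option it is
# treated as an add operation.
dn: cn=jsmith,ou=users,o=example
objectClass: inetOrgPerson
cn: jsmith
sn: Smith
mail: jsmith@example.com

# A change record with an explicit changetype.
dn: cn=jsmith,ou=users,o=example
changetype: modify
replace: mail
mail: john.smith@example.com
```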
You can run the NetIQ Import Conversion Export client utility from the command line or from the Import Convert Export Wizard in NetIQ iManager. The comma-delimited data handler, however, is available only in the command line utility and NetIQ iManager.
Both the wizard and the command line interface give you access to the NetIQ Import Conversion Export engine, but the command line interface offers more options for combining source and destination handlers.
The NetIQ Import Conversion Export utility replaces both the BULKLOAD and ZONEIMPORT utilities included with previous versions of NDS and eDirectory.
The Import Convert Export Wizard lets you perform these same operations from NetIQ iManager. For information on using and accessing NetIQ iManager, see the NetIQ iManager 2.7 Administration Guide.
In eDirectory 8.8, iManager provides options for adding missing schema to a server's schema. This process compares a source and a destination: any schema present in the source but missing from the destination is added to the destination. The source can be either a file or an LDAP server, and the destination must be an LDAP server.
Through the ICE wizard in iManager, you can add the missing schema using the following options:
ICE can compare the schema in the source and destination. The source is a file or an LDAP server, and the destination is an LDAP server. The source schema file can be in either LDIF or SCH format.
Figure 7-1 Compare and Add the Schema from a File
If you want to only compare the schema and not add the additional schema to the destination server, select the Do Not Add but Compare option. In this case, the additional schema is not added to the destination server but the differences between the schema are available to you as a link at the end of the operation.
Figure 7-2 Compare Schema and Add the Results to an Output File
The source and destination are LDAP servers.
If you want to only compare the schema and not add the additional schema to the destination server, select the Do Not Add but Compare option. In this case, the additional schema is not added to the destination server, but the differences between the schema are available to you as a link at the end of the operation.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Import Data from File on Disk, then click Next.
Select the type of file you want to import.
Specify the name of the file containing the data you want to import, specify the appropriate options, then click Next.
The options on this page depend on the type of file you selected. Click Help for more information on the available options.
Specify the LDAP server where the data will be imported.
Add the appropriate options, as described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the destination LDAP server
Port | Integer port number of the destination LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Click Next, then click Finish.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Export Data to a File on Disk, then click Next.
Specify the LDAP server holding the entries you want to export.
Use the Advanced Settings to configure additional options for the LDAP source handler. Click Help for more information on the available options.
Add the appropriate options, as described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the source LDAP server
Port | Integer port number of the source LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Click Next.
Specify the search criteria (described below) for the entries you want to export.
Option | Description
---|---
Base DN | Base distinguished name for the search request. If this field is left empty, the base DN defaults to "" (empty string).
Scope | Scope of the search request
Filter | RFC 2254-compliant search filter. The default is objectclass=*.
Attributes | Attributes you want returned for each search entry
Click Next.
Select the export file type.
The exported file is saved in a temporary location. You can download this file at the conclusion of the Wizard.
Click Next, then click Finish.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Migrate Data Between Servers, then click Next.
Specify the LDAP server holding the entries you want to migrate.
Use the Advanced Settings to configure additional options for the LDAP source handler. Click Help for more information on the available options.
Add the appropriate options, as described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the source LDAP server
Port | Integer port number of the source LDAP server
DER file | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Click Next.
Specify the search criteria (described below) for the entries you want to migrate:
Option | Description
---|---
Base DN | Base distinguished name for the search request. If this field is left empty, the base DN defaults to "" (empty string).
Scope | Scope of the search request
Filter | RFC 2254-compliant search filter. The default is objectclass=*.
Attributes | Attributes you want returned for each search entry
Click Next.
Specify the LDAP server where the data will be migrated.
Click Next, then click Finish.
NOTE: Ensure that the schema is consistent across LDAP services.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Add Schema from a File > Next.
Select the type of file you want to add.
You can choose between LDIF and schema file types.
Specify the name of the file containing the schema you want to add, specify the appropriate options, then click Next.
Select Do Not Add but Compare Schema if you want only to compare the schema, without adding the additional schema to the destination server. The additional schema is not added to the destination server, but the differences between the schemas are available through a link at the end of the operation.
The options on this page depend on the type of file you selected. Click Help for more information on the available options.
Specify the LDAP server where the schema is to be imported.
Add the appropriate options, described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the destination LDAP server
Port | Integer port number of the destination LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Click Next > Finish.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Add Schema from a Server > Next.
Specify the LDAP server from which the schema is to be added.
Add the appropriate options, described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the source LDAP server
Port | Integer port number of the source LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Select Do Not Add but Compare Schema if you want only to compare the schema, without adding the additional schema to the destination server. The additional schema is not added to the destination server, but the differences between the schemas are available through a link at the end of the operation.
Specify the LDAP server where the schema is to be added.
Add the appropriate options, described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the destination LDAP server
Port | Integer port number of the destination LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Click Next > Finish.
The Compare Schema Files option compares the schema between a source file and a destination file and then places the result in an output file. To add the missing schema to the destination file, apply the records of the output file to the destination file.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Compare Schema Files > Next.
Select the type of file you want to compare.
You can choose between LDIF and schema file formats.
Specify the name of the file containing the schema you want to compare, specify the appropriate options, then click Next.
The options on this page depend on the type of file you selected. Click Help for more information on the available options.
Specify the schema file you want to compare against.
You can select only an LDIF file.
Click Next > Finish.
The differences between the two schema files are available to you in a link at the end of the operation.
The Compare Schema between a Server and a File option compares the schema between a source server and a destination file and then places the result in an output file. To add the missing schema to the destination file, apply the records of the output file to the destination file.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Compare Schema between Server and File > Next.
Specify the LDAP server whose schema is to be compared.
Add the appropriate options, described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the source LDAP server
Port | Integer port number of the source LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Select the type of file you want to compare with.
Specify the name of the file containing the data you want to compare, specify the appropriate options, then click Next.
The options on this page depend on the type of file you selected. Click Help for more information on the available options.
Click Next > Finish.
The differences between the server's schema and the schema file are available to you in a link at the end of the operation.
This option creates an order file for use with the DELIM handler when importing data from a delimited data file. The wizard helps you create an order file containing a list of attributes for a specific object class.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Generate Order File, then click Next.
Select the class for which you want to generate the order file and click View.
Select the attributes you want and add them to the Sequenced Attributes list.
Select the auxiliary class and add it to the Select Auxiliary Classes list.
For more information on the Sequenced Attributes and Auxiliary Classes lists, refer to the iManager online help.
Click Next.
Add the appropriate options, as described in the following table:
Option | Description
---|---
Context | Context under which the created objects will be placed
Select the Data File | Location of the data file
Select the Delimiter in the Data File | Delimiter used within the data file. The default delimiter is a comma (,).
Select the Naming Attribute | Naming attribute, chosen from the list of attributes available for the selected class
Use the Advanced Settings to configure additional options for the LDAP source handler. Click Help for more information on the available options.
Use the Records to Process option to select the records to be processed in the data file. Click Help for more information on the available options.
Add the appropriate options, described in the following table:
Option | Description
---|---
Server DNS name/IP address | DNS name or IP address of the destination LDAP server
Port | Integer port number of the destination LDAP server
DER File | Name of the DER file containing a server key used for SSL authentication
Login method | Authenticated Login or Anonymous Login (for the entry specified in the User DN field)
User DN | Distinguished name of the entry to use when binding to the server (the bind operation)
Password | Password of the entry specified in the User DN field
Use the Advanced Settings to configure additional options for the LDAP destination handler. Click Help for more information on the available options.
Click Next, then click Finish.
You can use the command line version of the NetIQ Import Conversion Export utility to perform the following:
LDIF imports
LDIF exports
Comma-delimited data imports
Comma-delimited data exports
Data migration between LDAP servers
Schema compare and update
Load information into eDirectory using a template
Schema imports
The NetIQ Import Convert Export Wizard is installed as part of NetIQ iManager. A Windows version (ice.exe) is included in the installation. On Linux computers, the Import/Export utility is included in the NOVLice package.
The NetIQ Import Conversion Export utility is launched with the following syntax:
ice general_options -S[LDIF | LDAP | DELIM | LOAD | SCH] source_options -D[LDIF | LDAP | DELIM] destination_options
or when using the schema cache:
ice -C schema_options -S[LDIF | LDAP] source_options -D[LDIF | LDAP] destination_options
When performing an update using the schema cache, an LDIF file is not a valid destination.
General options are optional and must come before any source or destination options. The -S (source) and -D (destination) handler sections can be placed in any order.
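Following this syntax, a typical import and a typical export might look like the sketches below. The server address, credentials, and filenames are placeholders:

```
# Import entries from an LDIF file into an LDAP directory,
# treating content records as add operations (-a).
ice -SLDIF -f entries.ldif -a -DLDAP -s ldap.example.com -p 389 \
    -d cn=admin,o=example -w password

# Export entries under o=example to an LDIF file.
ice -SLDAP -s ldap.example.com -p 389 -d cn=admin,o=example -w password \
    -b o=example -F "objectclass=*" -DLDIF -f export.ldif
```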
The following is a list of the available source and destination handlers:
General options affect the overall processing of the NetIQ Import Conversion Export engine.
Option | Description
---|---
-C | Specifies that you are using the schema cache to perform schema compare and update.
-l log_file | Specifies a filename where output messages (including error messages) are logged. If this option is not used, error messages are sent to ice.log; on Linux computers, omitting this option means error messages are not logged to a file.
-o | Overwrites an existing log file. If this flag is not set, messages are appended to the log file instead.
-e LDIF_error_log_file | Specifies a filename where entries that fail are output in LDIF format. This file can be examined, modified to correct the errors, then reapplied to the directory.
-p URL | Specifies the location of an XML placement rule to be used by the engine. Placement rules let you change the placement of an entry. See Conversion Rules for more information.
-c URL | Specifies the location of an XML creation rule to be used by the engine. Creation rules let you supply missing information that might be needed to allow an entry to be created successfully on import. For more information, see Conversion Rules.
-s URL | Specifies the location of an XML schema mapping rule to be used by the engine. Schema mapping rules let you map a schema element on a source server to a different but equivalent schema element on a destination server. For more information, see Conversion Rules.
-h or -? | Displays command line help.
The schema options let you use the schema cache to perform schema compare and update operations.
Option | Description
---|---
-C -a | Updates the destination schema (adds missing schema).
-C -c filename | Outputs the destination schema to the specified file.
-C -n | Disables schema pre-checking.
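For example, a schema compare-and-update using the cache could be invoked as follows. The server name, credentials, and filename are placeholders:

```
# Compare the schema in update.ldif against the server's schema and
# add any missing elements (-a). An LDIF file is not a valid
# destination when updating through the schema cache.
ice -C -a -SLDIF -f update.ldif -DLDAP -s ldap.example.com \
    -d cn=admin,o=example -w password
```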
The source handler option (-S) determines the source of the import data. Only one of the following can be specified on the command line.
Option | Description
---|---
-SLDIF | Specifies that the source is an LDIF file. For a list of supported LDIF options, see LDIF Source Handler Options.
-SLDAP | Specifies that the source is an LDAP server. For a list of supported LDAP options, see LDAP Source Handler Options.
-SDELIM | Specifies that the source is a comma-delimited data file. For a list of supported DELIM options, see DELIM Source Handler Options. NOTE: For better performance, import data using an LDIF file rather than DELIM; you can use a custom Perl script to convert your data into the desired format.
-SSCH | Specifies that the source is a schema file. For a list of supported SCH options, see SCH Source Handler Options.
-SLOAD | Specifies that the source is a DirLoad template. For a list of supported LOAD options, see LOAD Source Handler Options.
The destination handler option (-D) specifies the destination of the export data. Only one of the following can be specified on the command line.
Option | Description
---|---
-DLDIF | Specifies that the destination is an LDIF file. For a list of supported options, see LDIF Destination Handler Options.
-DLDAP | Specifies that the destination is an LDAP server. For a list of supported options, see LDAP Destination Handler Options.
-DDELIM | Specifies that the destination is a comma-delimited file. For a list of supported options, see DELIM Destination Handler Options.
The LDIF source handler reads data from an LDIF file, then sends it to the NetIQ Import Conversion Export engine.
Option | Description
---|---
-f LDIF_file | Specifies a filename containing LDIF records read by the LDIF source handler and sent to the engine. If you omit this option on Linux computers, the input is taken from stdin.
-a | If the records in the LDIF file are content records (that is, they contain no changetypes), they are treated as records with a changetype of add.
-c | Prevents the LDIF source handler from stopping on errors. This includes errors in parsing LDIF and errors sent back from the destination handler. When this option is set and an error occurs, the LDIF source handler reports the error, finds the next record in the LDIF file, then continues.
-n | Does not perform update operations, but prints what would be done. When this option is set, the LDIF source handler parses the LDIF file but does not send any records to the NetIQ Import Conversion Export engine (or to the destination handler).
-m | If the records in the LDIF file are content records (that is, they contain no changetypes), they are treated as records with a changetype of modify.
-x | If the records in the LDIF file are content records (that is, they contain no changetypes), they are treated as records with a changetype of delete.
-R value | Specifies the range of records to be processed.
-v | Enables the verbose mode of the handler.
-e value | Specifies the scheme (des or 3des) used for decrypting attribute values present in the LDIF file.
-E value | Specifies the password for decryption of attributes.
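Combining these handler options with the general options, a fault-tolerant import might look like the following sketch. All server names, credentials, and filenames are placeholders:

```
# Treat content records as adds (-a), continue past errors (-c),
# and use the general options to overwrite the log (-o), log to
# import.log (-l), and write failed records to failed.ldif (-e)
# so they can be corrected and reapplied.
ice -o -l import.log -e failed.ldif \
    -SLDIF -f entries.ldif -a -c \
    -DLDAP -s ldap.example.com -d cn=admin,o=example -w password
```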
The LDIF destination handler receives data from the NetIQ Import Conversion Export engine and writes it to an LDIF file.
Option | Description
---|---
-f LDIF_file | Specifies the filename where LDIF records are written. If you omit this option on Linux computers, the output goes to stdout.
-B | Does not suppress printing of binary values.
-b | Does not base64-encode LDIF data.
-e value | Specifies the scheme (des or 3des) used for encrypting attribute values received from the LDAP server.
-E value | Specifies the password for encryption of attributes.
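As an illustration of the encryption options (all names and the password are placeholders), an export that protects attribute values in the output file might be invoked as:

```
# Export entries under o=example to export.ldif, encrypting the
# attribute values with 3DES using the password given with -E.
ice -SLDAP -s ldap.example.com -d cn=admin,o=example -w password \
    -b o=example -DLDIF -f export.ldif -e 3des -E secret
```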
The LDAP source handler reads data from an LDAP server by sending a search request to the server. It then sends the search entries it receives from the search operation to the NetIQ Import Conversion Export engine.
Option | Description
---|---
-s server_name | Specifies the DNS name or IP address of the LDAP server that the handler sends the search request to. The default is the local host.
-p port | Specifies the integer port number of the LDAP server specified by server_name. The default is 389; for secure operations, the default port is 636. When ICE communicates with an LDAP server on the SSL port (default 636) without a certificate, it accepts any server certificate and assumes it to be trusted. This should be done only in controlled environments where encrypted communication between servers and clients is desired but server verification is not necessary.
-d DN | Specifies the distinguished name of the entry to use when binding to the server (the bind operation).
-w password | Specifies the password of the entry specified by DN.
-W | Prompts for the password of the entry specified by DN. This option is applicable only on Linux.
-F filter | Specifies an RFC 2254-compliant search filter. If you omit this option, the search filter defaults to objectclass=*.
-n | Does not actually perform a search, but shows what search would be performed.
-a attribute_list | Specifies a comma-separated list of attributes to retrieve as part of the search. In addition to attribute names, three special values can be used: 1.1 (no attributes), * (all user attributes), and + (all operational attributes). If you omit this option, the attribute list defaults to the empty list.
-o attribute_list | Specifies a comma-separated list of attributes to be omitted from the search results received from the LDAP server before they are sent to the engine. This option is useful when you use a wildcard with the -a option to get all attributes of a class and then want to remove a few of them from the search results before passing the data on to the engine. For example, -a* -o telephoneNumber searches for all user-level attributes and filters the telephone number from the results.
-R | Specifies that referrals should not be followed automatically. The default is to follow referrals with the name and password given with the -d and -w options.
-e value | Specifies which debugging flags should be enabled in the LDAP client SDK.
-b base_DN | Specifies the base distinguished name for the search request. If this option is omitted, the base DN defaults to "" (empty string).
-c search_scope | Specifies the scope of the search request. Valid values are Base, One, and Sub. If you omit this option, the search scope defaults to Sub.
-r deref_aliases | Specifies how aliases should be dereferenced during the search operation. Valid values are Never, Always, Search, and Find. If you omit this option, alias dereferencing defaults to Never.
-l time_limit | Specifies a time limit (in seconds) for the search.
-z size_limit | Specifies the maximum number of entries to be returned by the search.
-V version | Specifies the LDAP protocol version to be used for the connection. It must be 2 or 3. If this option is omitted, the default is 3.
-v | Enables verbose mode of the handler.
-L filename | Specifies a file in DER format containing a server key used for SSL authentication. The filename is optional on Linux, with the default value /etc/opt/novell/certs/SSCert.der.
-A | Retrieves attribute names only. Attribute values are not returned by the search operation.
-t | Prevents the LDAP handler from stopping on errors.
-m | Treats LDAP operations as modify operations.
-x | Treats LDAP operations as delete operations.
-k | This option is no longer supported. To use SSL, specify a valid certificate using the -L option.
-M | Enables the Manage DSA IT control.
-MM | Enables the Manage DSA IT control and makes it critical.
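Putting several of these options together, a scoped, filtered export over SSL might look like the following sketch. The server name, credentials, and container names are placeholders:

```
# Search one level (-c One) below ou=users for inetOrgPerson
# entries, returning only cn and mail, over SSL (port 636) using
# a DER certificate specified with -L.
ice -SLDAP -s ldap.example.com -p 636 -L /etc/opt/novell/certs/SSCert.der \
    -d cn=admin,o=example -w password \
    -b ou=users,o=example -c One -F "objectclass=inetOrgPerson" -a cn,mail \
    -DLDIF -f users.ldif
```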
The LDAP destination handler receives data from the NetIQ Import Conversion Export engine and sends it to an LDAP server in the form of update operations to be performed by the server.
For information about hashed passwords in an LDIF file, see Hashed Password Representation in LDIF Files.
Option | Description
---|---
-s server_name | Specifies the DNS name or IP address of the LDAP server that the handler sends update operations to. The default is the local host.
-p port | Specifies the integer port number of the LDAP server specified by server_name. The default is 389; for secure operations, the default port is 636.
-d DN | Specifies the distinguished name of the entry to use when binding to the server (the bind operation).
-w password | Specifies the password of the entry specified by DN.
-W | Prompts for the password of the entry specified by DN. This option is applicable only on Linux.
-B | Use this option if you do not want to use asynchronous LDAP Bulk Update/Replication Protocol (LBURP) requests for transferring update operations to the server, and instead want to use standard synchronous LDAP update operation requests. For more information, see LDAP Bulk Update/Replication Protocol.
-F | Allows the creation of forward references. When an entry is to be created before its parent exists, a placeholder called a forward reference is created for the entry's parent so that the entry can be created successfully. If a later operation creates the parent, the forward reference is changed into a normal entry.
-l | Stores password values using the simple password method of the NetIQ Modular Authentication Service (NMAS). Passwords are kept in a secure location in the directory, but key pairs are not generated until they are actually needed for authentication between servers.
-e value | Specifies which debugging flags should be enabled in the LDAP client SDK.
-V version | Specifies the LDAP protocol version to be used for the connection. It must be 2 or 3. If this option is omitted, the default is 3.
-L filename | Specifies a file in DER format containing a server key used for SSL authentication. The filename is optional on Linux, with the default value /etc/opt/novell/certs/SSCert.der.
-k | This option is no longer supported. To use SSL, specify a valid certificate using the -L option.
-M | Enables the Manage DSA IT control.
-MM | Enables the Manage DSA IT control and makes it critical.
-P | Enables concurrent LBURP processing. This option is enabled only if all the operations in the LDIF file are adds. When you use the -F option, -P is enabled by default.
-Z | Specifies the number of asynchronous requests, that is, the number of entries the ICE client can send to the LDAP server asynchronously before waiting for any result back from the server.
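A server-to-server migration using these options might look like the following sketch. Both server names, the credentials, and the request count are placeholders:

```
# Migrate entries under o=example from src.example.com to
# dst.example.com, allowing forward references (-F) and sending up
# to 100 asynchronous requests (-Z) before waiting for results.
ice -SLDAP -s src.example.com -d cn=admin,o=example -w srcpass -b o=example \
    -DLDAP -s dst.example.com -d cn=admin,o=example -w dstpass -F -Z 100
```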
The DELIM source handler reads data from a comma-delimited data file, then sends it to the destination handler.
Option |
Description |
---|---|
-f filename |
Specifies a filename containing comma-delimited records read by the DELIM source handler and sent to the destination handler. |
-F value |
Specifies a file containing the attribute data order for the file specified by -f. By default, the number of columns for an attribute in the delimited file equals maximum number of values for the attribute. If an attribute is repeated, the number of columns equals the number of times the attribute repeats in the template. If this option is not specified, enter this information directly using -t. See Performing a Comma-Delimited Import for more information. |
-t value |
The comma-delimited list of attributes specifying the attribute data order for the file specified by -f. By default, the number of columns for an attribute in the delimited file equals maximum number of values for the attribute. If an attribute is repeated, the number of columns equals the number of times the attribute repeats in the template. Either this option or -F must be specified. See Performing a Comma-Delimited Import for more information. |
-c |
Prevents the DELIM source handler from stopping on errors. This includes errors on parsing comma-delimited data files and errors sent back from the destination handler. When this option is set and an error occurs, the DELIM source handler reports the error, finds the next record in the comma-delimited data file, then continues. |
-n value |
Specifies the LDAP naming attribute for the new object. This attribute must be contained in the attribute data you specify using -F or -t. |
-l value |
Specifies the path to append the RDN to (such as o=myCompany). If you are passing the DN, this value is not necessary. |
-o value |
Comma-delimited list of object classes (if none is contained in your input file) or additional object classes such as auxiliary classes. The default value is inetorgperson. |
-i value |
Comma-delimited list of columns to skip. This value is an integer specifying the number of the column to skip. For example, to skip the third and fifth columns, specify i3,5. |
-d value |
Specifies the delimiter. The default delimiter is a comma ( , ). The following values are special case delimiters:
For example, to specify a tab as a delimiter, you would pass -d[t]. |
-q value |
Specifies the secondary delimiter. The default secondary delimiter is the single quote (' '). Bracketed values are treated as special-case delimiters; for example, to specify a tab as the secondary delimiter, pass -q[t]. |
-v |
Runs in verbose mode. |
-k value |
Specifies that the first line in the delimited file is the template. If this option is used with -t or -F, the specified template is checked for consistency with the one in the delimited file. |
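The mapping between delimited columns and attributes described above can be sketched as follows. This is a minimal illustration, not the actual DELIM handler: the [t] special case is taken from the -d example above, and the record and template values are hypothetical.

```python
# Sketch (not the actual DELIM handler): map one delimited record to
# attribute/value pairs using a template such as "dn,cn,title,sn".
SPECIAL_DELIMS = {"[t]": "\t"}  # [t] is the documented tab special case

def parse_record(line, template, delim=","):
    delim = SPECIAL_DELIMS.get(delim, delim)
    attrs = template.split(",")
    values = line.split(delim)
    # Pair each column with its attribute; a repeated attribute in the
    # template collects multiple values, and empty columns are skipped.
    record = {}
    for attr, value in zip(attrs, values):
        if value:
            record.setdefault(attr, []).append(value)
    return record

rec = parse_record("pat,pat,engineer,john", "dn,cn,title,sn")
# rec == {"dn": ["pat"], "cn": ["pat"], "title": ["engineer"], "sn": ["john"]}
```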
The DELIM destination handler receives data from the source handler and writes it to a comma-delimited data file.
Option |
Description |
---|---|
-f filename |
Specifies the filename where comma-delimited records can be written. |
-F value |
Specifies a file containing the attribute data order for the file specified by -f. By default, the number of columns for an attribute in the delimited file equals the maximum number of values for the attribute. If an attribute is repeated, the number of columns equals the number of times the attribute repeats in the template. If this option is not specified, enter this information directly using -t. |
-t value |
The comma-delimited list of attributes specifying the attribute data order for the file specified by -f. By default, the number of columns for an attribute in the delimited file equals the maximum number of values for the attribute. If an attribute is repeated, the number of columns equals the number of times the attribute repeats in the template. Either this option or -F must be specified. |
-l value |
Can be either RDN or DN. Specifies whether the driver should place the entire DN or just the RDN in the data. RDN is the default value. |
-d value |
Specifies the delimiter. The default delimiter is a comma ( , ). Bracketed values are treated as special-case delimiters; for example, to specify a tab as the delimiter, pass -d[t]. |
-q value |
Specifies the secondary delimiter. The default secondary delimiter is the single quote (' '). Bracketed values are treated as special-case delimiters; for example, to specify a tab as the secondary delimiter, pass -q[t]. |
-n value |
Specifies a naming attribute to be appended during import, for example, cn. |
The SCH handler reads data from a legacy NDS or eDirectory schema file (files with a *.sch extension), then sends it to the NetIQ Import Conversion Export engine. You can use this handler to implement schema-related operations on an LDAP Server, such as extensions using a *.sch file as input.
The SCH handler is a source handler only. You can use it to import *.sch files into an LDAP server, but you cannot export *.sch files.
The options supported by the SCH handler are shown in the following table.
Option |
Description |
---|---|
-f filename |
Specifies the full path name of the *.sch file. |
-v |
(Optional) Run in verbose mode. |
The DirLoad handler generates eDirectory information from commands in a template. This template file is specified with the -f argument and contains the attribute specification information and the program control information.
Option |
Description |
---|---|
-f filename |
Specifies the template file containing all attribute specification and all control information for running the program. |
-c |
Continues to the next record if an error is reported. |
-v |
Runs in verbose mode. |
-r |
Changes the request to a delete request so the data is deleted instead of added. This allows you to remove records that were added using a DirLoad template. |
-m |
Indicates that modify requests will be in the template file. |
Attribute specifications determine the context of new objects.
See the following sample attribute specification file:
givenname: $R(first)
initial: $R(initial)
sn: $R(last)
dn: cn=$A(givenname,%.1s)$A(initial,%.1s)$A(sn),ou=dev,ou=ds,o=novell
objectclass: inetorgperson
telephonenumber: 1-800-$N(1-999,%03d)-$C(%04d)
title: $R(titles)
locality: Our location
The format of the attribute specification file resembles an LDIF file, but allows some powerful constructs to be used to specify additional details and relationships between the attributes.
Unique Numeric Value inserts a numeric value that is unique for each object into an attribute value.
Syntax: $C[(<format>)]
The optional <format> specifies a print format that is applied to the value. If no format is specified, the parentheses must be omitted as well:
$C $C(%d) $C(%04d)
The plain $C inserts the current numeric value into an attribute value. This is the same as $C(%d) because “%d” is the default format that the program uses if none was specified. The numeric value is incremented after each object, so if you use $C multiple times in the attribute specification, the value is the same within a single object. The starting value can be specified in the settings file by using the !COUNTER=value syntax.
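The counter semantics described above can be modeled as follows. This is an illustrative sketch, not DirLoad's implementation: the value is fixed within one object and incremented between objects.

```python
# Sketch of the $C counter behavior (assumed semantics): every $C within
# a single object sees the same value; the value advances per object.
class Counter:
    def __init__(self, start=300):      # corresponds to !COUNTER=300
        self.value = start

    def insert(self, fmt="%d"):
        # "%d" is the default format when none is specified.
        return fmt % self.value

    def next_object(self):
        self.value += 1

c = Counter(start=300)
first = c.insert("%04d")     # "0300"
also_first = c.insert()      # "300" - same object, same value
c.next_object()
second = c.insert("%04d")    # "0301"
```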
Random Numeric Value inserts a random numeric value into an attribute value using the following syntax:
$N(<low>-<high>[,<format>])
<low> and <high> specify the lower and upper bounds used when the random number is generated. The optional <format> specifies a print format that is applied to the value.
$N(1-999) $N(1-999,%d) $N(1-999,%03d)
Random String Value From a List inserts a randomly selected string from a specified list into an attribute value using the following syntax:
$R(<filename>[,<format>])
<filename> specifies a file that contains a list of values, given as an absolute or relative path. Several files containing lists are included with this package. The values are expected to be separated by newline characters.
The optional <format> specifies a print format that is applied to the selected value.
Attribute Value inserts the value of a previously specified attribute into the current attribute value using the syntax $A(<attrname>[,<format>]), for example:
$A(givenname) $A(givenname,%s) $A(givenname,%.1s)
It is important to note that no forward references are allowed. Any attribute whose value you are going to use must precede the current attribute in the attribute specification file. In the example below, the cn as part of the DN is constructed from givenname, initial, and sn. Therefore, these attributes must precede the DN in the settings file.
givenname: $R(first)
initial: $R(initial)
sn: $R(last)
dn: cn=$A(givenname,%.1s)$A(initial,%.1s)$A(sn),ou=dev,ou=ds,o=novell
The DN receives special handling in the LDIF file: no matter where the DN appears in the settings file, it is written first (per LDIF syntax) to the LDIF file. All other attributes are written in the order they appear.
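The no-forward-reference rule can be checked mechanically. The following sketch (not part of the utility) scans a template and rejects any $A reference to an attribute that has not yet appeared:

```python
# Sketch: enforce the no-forward-reference rule for $A substitutions.
import re

def check_forward_refs(template_lines):
    seen = set()
    for line in template_lines:
        attr, _, value = line.partition(":")
        # Every $A(<attrname>...) must reference an earlier attribute.
        for ref in re.findall(r"\$A\((\w+)", value):
            if ref not in seen:
                raise ValueError(f"forward reference to '{ref}' in {attr}")
        seen.add(attr.strip())

ok = ["givenname: $R(first)", "initial: $R(initial)", "sn: $R(last)",
      "dn: cn=$A(givenname,%.1s)$A(initial,%.1s)$A(sn),ou=dev,o=novell"]
check_forward_refs(ok)   # passes: all $A references appear earlier

bad = ["dn: cn=$A(givenname)", "givenname: $R(first)"]
# check_forward_refs(bad) would raise ValueError
```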
Control Settings provide some additional controls for the object creation. All controls have an exclamation point (!) as the first character on the line to separate them from attribute settings. The controls can be placed anywhere in the file.
!COUNTER=300
!OBJECTCOUNT=2
!CYCLE=title
!UNICYCLE=first,last
!CYCLE=ou,BLOCK=10
Counter
Provides the starting value for the unique counter value. The counter value is inserted to any attribute with the $C syntax.
Object Count
OBJECTCOUNT determines how many objects are created from the template.
Cycle
CYCLE can be used to modify the behavior of pulling random values from the files ($R-syntax). This setting has three different values.
!CYCLE=title
Anytime the list named “title” is used, the next value from the list is pulled rather than randomly selecting a value. After all values have been consumed in order, the list starts from the beginning again.
!CYCLE=ou,BLOCK=10
Each value from list “ou” is to be used 10 times before moving to the next value.
The most interesting variant of the CYCLE control setting is UNICYCLE. It specifies a list of sources that are cycled through in left-to-right order, allowing you to create guaranteed unique values if desired. If this control is used, the OBJECTCOUNT control is used only to limit the number of objects to the maximum number of unique objects that can be created from the lists. In other words, if the lists that are part of UNICYCLE can produce 15000 objects, then OBJECTCOUNT can be used to reduce that number, but not to increase it.
For example, assume that the givenname file contains two values (Doug and Karl) and the sn file contains three values (Hoffman, Schultz, and Grieger). With the control setting !UNICYCLE=givenname,sn and the attribute definition cn: $R(givenname) $R(sn), the following cn values are created:
cn: Doug Hoffman
cn: Karl Hoffman
cn: Doug Schultz
cn: Karl Schultz
cn: Doug Grieger
cn: Karl Grieger
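The UNICYCLE cross-product above can be reproduced with a short sketch; with itertools.product, placing the sn list in the outer position gives the same left-to-right cycling order as the output shown:

```python
# Sketch of UNICYCLE's left-to-right cycling: the leftmost list in
# !UNICYCLE=givenname,sn advances fastest.
from itertools import product

givenname = ["Doug", "Karl"]
sn = ["Hoffman", "Schultz", "Grieger"]

cns = [f"{g} {s}" for s, g in product(sn, givenname)]
# ['Doug Hoffman', 'Karl Hoffman', 'Doug Schultz',
#  'Karl Schultz', 'Doug Grieger', 'Karl Grieger']
```

The total of six combinations is also the ceiling that OBJECTCOUNT can impose but not exceed when UNICYCLE is used.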
Listed below are sample commands that can be used with the NetIQ Import Conversion Export command line utility for the following functions:
To perform an LDIF import, combine the LDIF source and LDAP destination handlers, for example:
ice -S LDIF -f entries.ldif -D LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret
This command reads LDIF data from entries.ldif and sends it to the LDAP server server1.acme.com at port 389 using the identity cn=admin,c=us, and the password “secret.”
To perform an LDIF export, combine the LDAP source and LDIF destination handlers. For example:
ice -S LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w password -F objectClass=* -c sub -D LDIF -f server1.ldif
This command performs a subtree search for all objects in the server server1.acme.com at port 389 using the identity cn=admin,c=us and the password “password” and outputs the data in LDIF format to server1.ldif.
To perform a comma-delimited import, use a command similar to the following:
ice -S DELIM -f/tmp/in.csv -F /tmp/order.csv -ncn -lo=acme -D LDAP -s server1.acme.com -p389 -d cn=admin,c=us -w secret
This command reads comma-delimited values from the /tmp/in.csv file and reads the attribute order from the /tmp/order.csv file. For each attribute entry in in.csv, the attribute type is specified in order.csv. For example, if in.csv contains
pat,pat,engineer,john
then order.csv would contain
dn,cn,title,sn
The information in order.csv could be input directly using the -t option.
The data is then sent to the LDAP server server1.acme.com at port 389 using the identity cn=admin,c=us, and password “secret”.
This example uses the -n option to specify cn as the naming attribute for the new objects and the -l option to place them under the organization container acme.
Comma-delimited files generated by the NetIQ Import Conversion Export utility contain the template used to generate them in the first line. To specify that the first line of the delimited file is the template, use the -k option. If -F or -t is used with -k, the specified template must be consistent with the one in the delimited file: both must contain exactly the same attributes, although the number of occurrences and the order of appearance of each attribute can differ. Suppose that in the above example, in.csv contains
dn,cn,title,title,title,sn in the first line. The following templates are consistent and can be used with -t or -F when -k is used:
dn,cn,title,sn (number of repetitions of attribute title differs)
dn,sn,title,cn (order of attributes differs)
However, the following are not consistent with the template in in.csv and hence cannot be specified with -t or -F when -k is used:
dn,cn,title,sn,objectclass (new attribute objectclass)
dn,cn,title (missing attribute sn)
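The consistency rule above amounts to set equality over attribute names: repeats and order are ignored. The following sketch (not the utility's actual check) captures it:

```python
# Sketch of the -k consistency rule: two templates are consistent when
# they contain exactly the same attributes, regardless of order or of
# how many times each attribute repeats.
def consistent(template_a, template_b):
    return set(template_a.split(",")) == set(template_b.split(","))

file_template = "dn,cn,title,title,title,sn"
assert consistent(file_template, "dn,cn,title,sn")      # fewer repeats: OK
assert consistent(file_template, "dn,sn,title,cn")      # different order: OK
assert not consistent(file_template, "dn,cn,title,sn,objectclass")  # new attr
assert not consistent(file_template, "dn,cn,title")     # missing sn
```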
To perform a comma-delimited export, use a command similar to the following:
ice -S LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w password -F objectClass=* -c sub -D DELIM -f /tmp/server1.csv -F order.csv
This command performs a subtree search for all objects in the server server1.acme.com at port 389 using the identity cn=admin,c=us and the password “password” and outputs the data in comma-delimited format to the /tmp/server1.csv file.
If any attribute in order.csv has multiple values, the number of columns for that attribute in /tmp/server1.csv equals the maximum number of values for the attribute. If an attribute repeats in order.csv, the number of columns for that attribute equals the number of times the attribute repeats.
For example, if order.csv contains dn,sn,objectclass, and objectclass has 4 values, whereas dn and sn have only 1 value for all the entries exported, dn and sn would have 1 column each, whereas objectclass would have 4 columns. If you want only 2 values for objectclass to be output to the delimited file, order.csv should contain dn,sn,objectclass,objectclass.
In both cases the attributes are written to the first line of /tmp/server1.csv. In the first case, the first line would contain dn,sn,objectclass,objectclass,objectclass,objectclass; in the second case, it would contain dn,sn,objectclass,objectclass.
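The column-count rule for an export can be sketched as follows. This is illustrative only; max_values stands for the per-attribute maximum value counts observed in the exported data:

```python
# Sketch of the export column layout: an attribute listed once gets as
# many columns as its maximum value count in the exported data; an
# attribute listed N times gets exactly N columns.
from collections import Counter

def column_counts(order, max_values):
    repeats = Counter(order)
    return {attr: (repeats[attr] if repeats[attr] > 1 else max_values[attr])
            for attr in repeats}

max_vals = {"dn": 1, "sn": 1, "objectclass": 4}
column_counts(["dn", "sn", "objectclass"], max_vals)
# {'dn': 1, 'sn': 1, 'objectclass': 4}
column_counts(["dn", "sn", "objectclass", "objectclass"], max_vals)
# {'dn': 1, 'sn': 1, 'objectclass': 2}
```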
To prevent the first line from being treated as a sequence of attributes during a subsequent import, use the -k option. See Performing a Comma-Delimited Import for more information.
To perform a data migration between LDAP servers, combine the LDAP source and LDAP destination handlers. For example:
ice -S LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w password -F objectClass=* -c sub -D LDAP -s server2.acme.com -p 389 -d cn=admin,c=us -w secret
This command performs a subtree search for all objects in the server server1.acme.com at port 389 using the identity cn=admin,c=us and the password “password” and sends it to the LDAP server server2.acme.com at port 389 using the identity cn=admin,c=us and the password “secret.”
To perform a schema file import, use a command similar to the following:
ice -S SCH -f $HOME/myfile.sch -D LDAP -s myserver -d cn=admin,o=novell -w passwd
This command reads schema data from myfile.sch and sends it to the LDAP server myserver using the identity cn=admin,o=novell and the password “passwd.”
To perform a LOAD file import, use a command similar to the following:
ice -S LOAD -f attrs -D LDIF -f new.ldf
In this example, the contents of the attribute file attrs is as follows:
#=====================================================================
# DirLoad 1.00
#=====================================================================
!COUNTER=300
!OBJECTCOUNT=2
#---------------------------------------------------------------------
# ATTRIBUTE TEMPLATE
#---------------------------------------------------------------------
objectclass: inetorgperson
givenname: $R(first)
initials: $R(initial)
sn: $R(last)
dn: cn=$A(givenname,%.1s)$A(initial,%.1s)$A(sn),ou=$R(ou),ou=dev,o=novell
telephonenumber: 1-800-$N(1-999,%03d)-$C(%04d)
title: $R(titles)
Running the previous command from a command prompt produces the following LDIF file:
version: 1
dn: cn=JohnBBill,ou=ds,ou=dev,o=novell
changetype: add
objectclass: inetorgperson
givenname: John
initials: B
sn: Bill
telephonenumber: 1-800-290-0300
title: Amigo
dn: cn=BobJAmy,ou=ds,ou=dev,o=novell
changetype: add
objectclass: inetorgperson
givenname: Bob
initials: J
sn: Amy
telephonenumber: 1-800-486-0301
title: Pomo
Running the following command from a command prompt sends the data to an LDAP server via the LDAP Handler:
ice -S LOAD -f attrs -D LDAP -s www.novell.com -d cn=admin,o=novell -w admin
If the same template file is used with the following command, all of the records added by the previous command are deleted.
ice -S LOAD -f attrs -r -D LDAP -s www.novell.com -d cn=admin,o=novell -w admin
The following template shows how to use -m to modify records:
# ======================================================================
# DirLoad 1.00
# ======================================================================
!COUNTER=300
!OBJECTCOUNT=2
#----------------------------------------------------------------------
# ATTRIBUTE TEMPLATE
# ----------------------------------------------------------------------
dn: cn=$R(first)$R(initial)$R(last),ou=$R(ou),ou=dev,o=novell
delete: givenname
add: givenname
givenname: test1
replace: givenname
givenname: test2
givenname: test3
If the following command is used where the attrs file contains the data above:
ice -S LOAD -f attrs -m -D LDIF -f new.ldf
then the results would be the following LDIF data:
version: 1
dn: cn=BillTSmith,ou=ds,ou=dev,o=novell
changetype: modify
delete: givenname
-
add: givenname
givenname: test1
-
replace: givenname
givenname: test2
givenname: test3
-
dn: cn=JohnAWilliams,ou=ldap,ou=dev,o=novell
changetype: modify
delete: givenname
-
add: givenname
givenname: test1
-
replace: givenname
givenname: test2
givenname: test3
-
To perform an LDIF export from an LDAP server that has encrypted attributes, combine the LDAP source and LDIF destination handlers along with the scheme and password for encryption, for example:
ice -S LDAP -s server1.acme.com -p 636 -L cert-server1.der -d cn=admin,c=us -w password -F objectClass=* -c sub -D LDIF -f server1.ldif -e des -E secret
To perform an LDIF import of a file whose attributes were previously encrypted by ICE, combine the LDIF source handler (with the scheme and password used to export the file) and the LDAP destination handler, for example:
ice -S LDIF -f server1.ldif -e des -E secret -D LDAP -s server2.acme.com -p 636 -L cert-server2.der -d cn=admin,c=us -w password
The NetIQ Import Conversion Export engine lets you specify a set of rules that describe processing actions to be taken on each record received from the source handler and before the record is sent on to the destination handler. These rules are specified in XML (either in the form of an XML file or XML data stored in the directory) and solve the following problems when importing entries from one LDAP directory to another:
Missing information
Hierarchical differences
Schema differences
There are three types of conversion rules:
Rule |
Description |
---|---|
Placement |
Changes the placement of an entry. For example, if you are importing a group of users in the l=San Francisco, c=US container but you want them to be in the l=Los Angeles, c=US container when the import is complete, you could use a placement rule to do this. For information on the format of these rules, see Placement Rules. |
Creation |
Supplies missing information that might be needed to allow an entry to be created successfully on import. For example, assume that you have exported LDIF data from a server whose schema requires only the cn (commonName) attribute for user entries, but the server that you are importing the LDIF data to requires both the cn and sn (surname) attributes. You could use a creation rule to supply a default sn value (such as " ") for each entry as it is processed by the engine. When the entry is sent to the destination server, it will have the required sn attribute and can be added successfully. For information on the format of these rules, see Create Rules. |
Schema Mapping |
If, when you are transferring data between servers (either directly or using LDIF), there are schema differences between the servers, you can use schema mapping rules to resolve them.
For information on the format of these rules, see Schema Mapping Rules. |
You can enable conversion rules in both the NetIQ eDirectory Import/Export Wizard and the command line interface. For more information on XML rules, see Using XML Rules.
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Select the task you want to perform.
Under Advanced Settings, choose from the following options:
Option |
Description |
---|---|
Schema Rules |
Specifies the location of an XML schema mapping rule to be used by the engine. |
Placement Rules |
Specifies the location of an XML placement rule to be used by the engine. |
Creation Rules |
Specifies the location of an XML creation rule to be used by the engine. |
Click Next.
Follow the online instructions to finish your selected task.
You can enable conversion rules with the -p, -c, and -s general options on the NetIQ Import Conversion Export executable. For more information, see General Options.
Option |
Description |
---|---|
-p URL |
Location of an XML placement rule to be used by the engine. |
-c URL |
Location of an XML creation rule to be used by the engine. |
-s URL |
Location of an XML schema mapping rule to be used by the engine. |
For all three options, URL must be one of the following:
A URL of the following format:
file://[path/]filename
The file must be on the local file system.
An RFC 2255-compliant LDAP URL that specifies a base-level search and an attribute list consisting of a single attribute description for a single-valued attribute type.
The NetIQ Import Conversion Export conversion rules use the same XML format as NetIQ Identity Manager. For more information on NetIQ Identity Manager, see the NetIQ Identity Manager 4.0.2 Administration Guide.
The <attr-name-map> element is the top-level element for the schema mapping rules. Mapping rules determine how the import schema interacts with the export schema. They associate specified import class definitions and attributes with corresponding definitions in the export schema.
Mapping rules can be set up for attribute names or class names.
For an attribute mapping, the rule must specify that it is an attribute mapping, a name space (nds-name is the tag for the source name), the name in the eDirectory name space, then the other name space (app-name is the tag for the destination name) and the name in that name space. It can specify that the mapping applies to a specific class or it can be applied to all classes with the attribute.
For a class mapping, the rule must specify that it is a class mapping rule, a name space (eDirectory or the application), the name in that name space, then the other name space and the name in that name space.
The following is the formal DTD definition of schema mapping rules:
<!ELEMENT attr-name-map (attr-name | class-name)*>
<!ELEMENT attr-name (nds-name, app-name)>
<!ATTLIST attr-name
          class-name CDATA #IMPLIED>
<!ELEMENT class-name (nds-name, app-name)>
<!ELEMENT nds-name (#PCDATA)>
<!ELEMENT app-name (#PCDATA)>
You can have multiple mapping elements in the file. Each element is processed in the order that it appears in the file. If you map the same class or attribute more than once, the first mapping takes precedence.
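The first-mapping-takes-precedence behavior can be sketched as follows (a simplified model of the engine's rule processing, not its implementation):

```python
# Sketch: when the same source name is mapped more than once, the first
# mapping wins and later mappings are ignored.
def build_map(pairs):
    mapping = {}
    for nds_name, app_name in pairs:
        mapping.setdefault(nds_name, app_name)  # first mapping wins
    return mapping

rules = [("surname", "sn"), ("surname", "lastName")]
m = build_map(rules)
# m["surname"] == "sn" - the second mapping is ignored
```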
The following examples illustrate how to create a schema mapping rule.
Schema Rule 1: The following rule maps the source's surname attribute to the destination's sn attribute for the inetOrgPerson class.
<attr-name-map>
   <attr-name class-name="inetOrgPerson">
      <nds-name>surname</nds-name>
      <app-name>sn</app-name>
   </attr-name>
</attr-name-map>
Schema Rule 2: The following rule maps the source's inetOrgPerson class definition to the destination's User class definition.
<attr-name-map>
   <class-name>
      <nds-name>inetOrgPerson</nds-name>
      <app-name>User</app-name>
   </class-name>
</attr-name-map>
Schema Rule 3: The following example contains two rules. The first rule maps the source's Surname attribute to the destination's sn attribute for all classes that use these attributes. The second rule maps the source's inetOrgPerson class definition to the destination's User class definition.
<attr-name-map>
   <attr-name>
      <nds-name>surname</nds-name>
      <app-name>sn</app-name>
   </attr-name>
   <class-name>
      <nds-name>inetOrgPerson</nds-name>
      <app-name>User</app-name>
   </class-name>
</attr-name-map>
Example Command: If the schema rules are saved to an sr1.xml file, the following command instructs the utility to use the rules while processing the 1entry.ldf file and to send the results to a destination file, outt1.ldf.
ice -o -sfile://sr1.xml -SLDIF -f1entry.ldf -c -DLDIF -foutt1.ldf
Create rules specify the conditions for creating a new entry in the destination directory. They support the following elements:
Required Attributes specifies that an add record must have values for all of the required attributes, or else the add fails. The rule can supply a default value for a required attribute. If a record does not have a value for the attribute, the entry is given the default value. If the record has a value, the record value is used.
Matching Attributes specifies that an add record must have the specific attributes and match the specified values, or else the add fails.
Templates specifies the distinguished name of a Template object in eDirectory. The NetIQ Import Conversion Export utility does not currently support specifying templates in create rules.
The following is the formal DTD definition for create rules:
<!ELEMENT create-rules (create-rule)*>
<!ELEMENT create-rule (match-attr*, required-attr*, template?)>
<!ATTLIST create-rule
          class-name CDATA #IMPLIED
          description CDATA #IMPLIED>
<!ELEMENT match-attr (value)+>
<!ATTLIST match-attr
          attr-name CDATA #REQUIRED>
<!ELEMENT required-attr (value)*>
<!ATTLIST required-attr
          attr-name CDATA #REQUIRED>
<!ELEMENT template EMPTY>
<!ATTLIST template
          template-dn CDATA #REQUIRED>
You can have multiple create rule elements in the file. Each rule is processed in the order that it appears in the file. If a record does not match any of the rules, that record is skipped and the skipping does not generate an error.
The following examples illustrate how to format create rules.
Create Rule 1: The following rule places three conditions on add records that belong to the inetOrgPerson class. These records must have givenName and Surname attributes. They should have an L attribute, but if they don't, the create rule supplies a default value of Provo for them.
<create-rules>
   <create-rule class-name="inetOrgPerson">
      <required-attr attr-name="givenName"/>
      <required-attr attr-name="surname"/>
      <required-attr attr-name="L">
         <value>Provo</value>
      </required-attr>
   </create-rule>
</create-rules>
Create Rule 2: The following create rule places three conditions on all add records, regardless of their base class:
The record must contain a givenName attribute. If it doesn’t, the add fails.
The record must contain a Surname attribute. If it doesn’t, the add fails.
The record must contain an L attribute. If it doesn't, the attribute is set to a value of Provo.
<create-rules>
   <create-rule>
      <required-attr attr-name="givenName"/>
      <required-attr attr-name="Surname"/>
      <required-attr attr-name="L">
         <value>Provo</value>
      </required-attr>
   </create-rule>
</create-rules>
Create Rule 3: The following create rule places two conditions on all records, regardless of base class:
The rule checks to see if the record has a uid attribute with a value of ratuid. If it doesn't, the add fails.
The rule checks to see if the record has an L attribute. If it does not have this attribute, the L attribute is set to a value of Provo.
<create-rules>
   <create-rule>
      <match-attr attr-name="uid">
         <value>cn=ratuid</value>
      </match-attr>
      <required-attr attr-name="L">
         <value>Provo</value>
      </required-attr>
   </create-rule>
</create-rules>
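The evaluation of Create Rule 3 can be sketched as follows. This is a simplified model, not the engine's implementation: match-attr conditions are checked first, then required-attr defaults are supplied.

```python
# Sketch of create-rule evaluation: a failed match-attr skips the add;
# a required-attr with a default fills in the value only when missing.
def apply_create_rule(record, match_attrs, required_defaults):
    for attr, value in match_attrs.items():
        if record.get(attr) != value:
            return None                   # match fails: the add fails
    out = dict(record)
    for attr, default in required_defaults.items():
        out.setdefault(attr, default)     # supply the default value
    return out

rule_match = {"uid": "cn=ratuid"}
rule_defaults = {"L": "Provo"}

apply_create_rule({"uid": "cn=ratuid", "cn": "rat"}, rule_match, rule_defaults)
# -> {'uid': 'cn=ratuid', 'cn': 'rat', 'L': 'Provo'}
apply_create_rule({"uid": "other"}, rule_match, rule_defaults)
# -> None (the add fails)
```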
Example Command: If the create rules are saved to a cr1.xml file, the following command instructs the utility to use the rules while processing the 1entry.ldf file and to send the results to a destination file, outt1.ldf.
ice -o -cfile://cr1.xml -SLDIF -f1entry.ldf -c -DLDIF -foutt1.ldf
Placement rules determine where an entry is created in the destination directory. They support the following conditions for determining whether the rule should be used to place an entry:
Match Class: If the rule contains any match class elements, an objectClass specified in the record must match the class-name attribute in the rule. If the match fails, the placement rule is not used for that record.
Match Attribute: If the rule contains any match attribute elements, the record must contain an attribute value for each of the attributes specified in the match attribute element. If the match fails, the placement rule is not used for that record.
Match Path: If the rule contains any match path elements, a portion of the record's DN must match the prefix specified in the match path element. If the match fails, the placement rule is not used for that record.
The last element in the rule specifies where to place the entry. The placement rule can use zero or more of the following:
PCDATA uses parsed character data to specify the DN of a container for the entries.
Copy the Name specifies that the naming attribute of the old DN is used in the entry's new DN.
Copy the Attribute specifies the naming attribute to use in the entry's new DN. The specified naming attribute must be a valid naming attribute for the entry's base class.
Copy the Path specifies that the source DN should be used as the destination DN.
Copy the Path Suffix specifies that the source DN, or a portion of its path, should be used as the destination DN. If a match-path element is specified, only the part of the old DN that does not match the prefix attribute of the match-path element is used as part of the entry's DN.
The following is the formal DTD definition for the placement rule:
<!ELEMENT placement-rules (placement-rule*)>
<!ATTLIST placement-rules
          src-dn-format (%dn-format;) "slash"
          dest-dn-format (%dn-format;) "slash"
          src-dn-delims CDATA #IMPLIED
          dest-dn-delims CDATA #IMPLIED>
<!ELEMENT placement-rule (match-class*, match-path*, match-attr*, placement)>
<!ATTLIST placement-rule
          description CDATA #IMPLIED>
<!ELEMENT match-class EMPTY>
<!ATTLIST match-class
          class-name CDATA #REQUIRED>
<!ELEMENT match-path EMPTY>
<!ATTLIST match-path
          prefix CDATA #REQUIRED>
<!ELEMENT match-attr (value)+>
<!ATTLIST match-attr
          attr-name CDATA #REQUIRED>
<!ELEMENT placement (#PCDATA | copy-name | copy-attr | copy-path | copy-path-suffix)*>
You can have multiple placement-rule elements in the file. Each rule is processed in the order that it appears in the file. If a record does not match any of the rules, that record is skipped and the skipping does not generate an error.
The following examples illustrate how to format placement rules. The src-dn-format="ldap" and dest-dn-format="ldap" attributes set the rule so that the name space for the DN in the source and destination is LDAP format.
The NetIQ Import Conversion Export utility supports source and destination names only in LDAP format.
Placement Example 1: The following placement rule requires that the record have a base class of inetOrgPerson. If the record matches this condition, the entry is placed immediately subordinate to the test container and the left-most component of its source DN is used as part of its DN.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
   <placement-rule>
      <match-class class-name="inetOrgPerson"></match-class>
      <placement>cn=<copy-name/>,o=test</placement>
   </placement-rule>
</placement-rules>
With this rule, a record with a base class of inetOrgPerson and with the following DN:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
would have the following DN in the destination directory:
dn: cn=Kim Jones, o=test
Placement Example 2: The following placement rule requires that the record have an sn attribute. If the record matches this condition, the entry is placed immediately subordinate to the test container and the left-most component of its source DN is used as part of its DN.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
   <placement-rule>
      <match-attr attr-name="sn"></match-attr>
      <placement>cn=<copy-name/>,o=test</placement>
   </placement-rule>
</placement-rules>
With this rule, a record with the following dn and sn attribute:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
sn: Jones
would have the following DN in the destination directory:
dn: cn=Kim Jones, o=test
Placement Example 3: The following placement rule requires the record to have an sn attribute. If the record matches this condition, the entry is placed immediately subordinate to the test container and its sn attribute is used as part of its DN. The specified attribute in the copy-attr element must be a naming attribute of the entry's base class.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
   <placement-rule>
      <match-attr attr-name="sn"></match-attr>
      <placement>cn=<copy-attr attr-name="sn"/>,o=test</placement>
   </placement-rule>
</placement-rules>
With this rule, a record with the following dn and sn attribute:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
sn: Jones
would have the following DN in the destination directory:
dn: cn=Jones, o=test
Placement Example 4: The following placement rule requires the record to have an sn attribute. If the record matches this condition, the source DN is used as the destination DN.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
  <placement-rule>
    <match-attr attr-name="sn"></match-attr>
    <placement><copy-path/></placement>
  </placement-rule>
</placement-rules>
Placement Example 5: The following placement rule requires the record to have an sn attribute. If the record matches this condition, the entry's entire DN is copied to the test container.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
  <placement-rule>
    <match-attr attr-name="sn"></match-attr>
    <placement><copy-path-suffix/>,o=test</placement>
  </placement-rule>
</placement-rules>
With this rule, a record with the following dn and sn attribute:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
sn: Jones
would have the following DN in the destination directory:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ, o=test
Placement Example 6: The following placement rule requires the record's source DN to match the prefix o=engineering. If the record matches this condition, the portion of the DN below the matched prefix is placed in the neworg container.
<placement-rules>
  <placement-rule>
    <match-path prefix="o=engineering"/>
    <placement><copy-path-suffix/>,o=neworg</placement>
  </placement-rule>
</placement-rules>
For example:
dn: cn=bob,o=engineering
becomes
dn: cn=bob,o=neworg
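The DN rewrites performed by these placement elements can be sketched in Python. This is illustrative only: ICE implements this logic internally, and the helper functions below are hypothetical, not part of the utility.

```python
# Hypothetical helpers mirroring the placement elements above.
# ICE performs these rewrites internally; this sketch only
# illustrates the transformations shown in Examples 1-6.

def copy_name(source_dn, container):
    """<copy-name/>: keep the leftmost RDN and place it under container."""
    leftmost = source_dn.split(",")[0].strip()
    return f"{leftmost},{container}"

def copy_attr(attrs, attr_name, naming_attr, container):
    """<copy-attr attr-name="..."/>: name the entry by one of its attributes."""
    return f"{naming_attr}={attrs[attr_name]},{container}"

def copy_path_suffix(source_dn, container, matched_prefix=None):
    """<copy-path-suffix/>: append container; drop a matched prefix if any."""
    dn = source_dn
    if matched_prefix and dn.endswith(matched_prefix):
        dn = dn[: -len(matched_prefix)].rstrip(", ")
    return f"{dn},{container}"

src = "cn=Kim Jones, ou=English, ou=Humanities, o=UofZ"
print(copy_name(src, "o=test"))                          # cn=Kim Jones,o=test
print(copy_attr({"sn": "Jones"}, "sn", "cn", "o=test"))  # cn=Jones,o=test
print(copy_path_suffix("cn=bob,o=engineering", "o=neworg",
                       matched_prefix="o=engineering"))  # cn=bob,o=neworg
```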
Example Command: If the placement rules are saved to a pr1.xml file, the following command instructs the utility to use the rules while processing the 1entry.ldf file and to send the results to a destination file, foutt1.ldf.
ice -o -pfile://pr1.xml -SLDIF -f1entry.ldf -c -DLDIF -foutt1.ldf
The NetIQ Import Conversion Export utility uses the LDAP Bulk Update/Replication Protocol (LBURP) to send asynchronous requests to an LDAP server. This guarantees that the requests are processed in the order specified by the protocol and not in an arbitrary order influenced by multiprocessor interactions or the operating system’s scheduler.
LBURP also lets the NetIQ Import Conversion Export utility send several update operations in a single request and receive the response for all of those update operations in a single response. This adds to the network efficiency of the protocol.
LBURP works as follows:
The NetIQ Import Conversion Export utility binds to an LDAP server.
The server sends a bind response to the client.
The client sends a start LBURP extended request to the server.
The server sends a start LBURP extended response to the client.
The client sends zero or more LBURP operation extended requests to the server.
These requests can be sent asynchronously. Each request contains a sequence number identifying the order of this request relative to other requests sent by the client over the same connection. Each request also contains at least one LDAP update operation.
The server processes each of the LBURP operation extended requests in the order specified by the sequence number and sends an LBURP operation extended response for each request.
After all of the updates have been sent to the server, the client sends an end LBURP extended request to the server.
The server sends an end LBURP extended response to the client.
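The ordering guarantee in steps 5 and 6 can be sketched as follows. This is a hypothetical simulation, not a real LDAP client or server: each operation extended request carries a sequence number and a batch of updates, and the server applies batches in sequence order even if they arrive out of order over the asynchronous connection.

```python
# Hypothetical sketch of LBURP's ordering guarantee.
# Each tuple models an LBURP operation extended request:
# (sequence number, list of LDAP update operations in the batch).

def lburp_server(requests):
    """Apply batched updates in sequence-number order, not arrival order."""
    applied = []
    for seq, batch in sorted(requests, key=lambda r: r[0]):
        applied.extend(batch)  # one extended response covers the whole batch
    return applied

# Requests as they might arrive over an asynchronous connection:
arrived = [
    (2, ["modify cn=b", "add cn=c"]),
    (1, ["add cn=a", "add cn=b"]),
    (3, ["delete cn=c"]),
]
print(lburp_server(arrived))
# ['add cn=a', 'add cn=b', 'modify cn=b', 'add cn=c', 'delete cn=c']
```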
The LBURP protocol lets NetIQ Import Conversion Export present data to the server as fast as the network connection between the two will allow. If the network connection is fast enough, this lets the server stay busy processing update operations 100% of the time because it never has to wait for NetIQ Import Conversion Export to give it more work to do.
The LBURP processor in eDirectory also commits update operations to the database in groups to gain further efficiency in processing the update operations. LBURP can greatly improve the efficiency of your LDIF imports over a traditional synchronous approach.
LBURP is enabled by default, but you can choose to disable it during an LDIF import.
To enable or disable LBURP during an LDIF import:
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Import Data From File on Disk, then click Next.
Select LDIF from the File Type drop-down list, then specify the name of the LDIF file containing the data you want to import.
Click Next.
Specify the LDAP server where the data will be imported and the type of login (anonymous or authenticated).
Under Advanced Settings, select Use LBURP.
Click Next, then follow the online instructions to complete the remainder of the LDIF Import Wizard.
IMPORTANT: Because LBURP is a relatively new protocol, eDirectory servers earlier than version 8.5 (and most non-eDirectory servers) do not support it. If you are using the NetIQ eDirectory Import/Export Wizard to import an LDIF file to one of these servers, you must disable the LBURP option for the LDIF import to work.
You can use the command line option to enable or disable LBURP during an LDIF import. For more information, see -B.
In cases where you have thousands or even millions of records in a single LDIF file you are importing, consider the following:
If possible, select a destination server for your LDIF import that has read/write replicas containing all the entries represented in the LDIF file. This maximizes network efficiency.
Avoid having the destination server chain to other eDirectory servers for updates. This can severely reduce performance. However, if some of the entries to be updated are only on eDirectory servers that are not running LDAP, you might need to allow chaining to import the LDIF file.
For more information on replicas and partition management, see Section 6.0, Managing Partitions and Replicas.
NetIQ Import Conversion Export maximizes network and eDirectory server processing efficiency by using LBURP to transfer data between the wizard and the server. Using LBURP during an LDIF import greatly improves the speed of your LDIF import.
For more information on LBURP, see LDAP Bulk Update/Replication Protocol.
The amount of database cache available for use by eDirectory has a direct bearing on the speed of LDIF imports, especially as the total number of entries on the server increases. When doing an LDIF import, you might want to allocate the maximum memory possible to eDirectory during the import. After the import is complete and the server is handling an average load, you can restore your previous memory settings. This is particularly important if the import is the only activity taking place on the eDirectory server.
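As an illustration, on many eDirectory versions the database cache limit can be raised by editing the _ndsdb.ini file in the DIB directory. The value below is an example only; the appropriate figure, file location, and syntax vary by platform and version, so verify them against the documentation for your release before making changes.

```
cache=500000000
```

This sets a hard cache limit of roughly 500 MB (the value is in bytes). The server must be restarted for the change to take effect, and the setting should be reduced again once the import is complete if the server cannot spare that memory under normal load.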
For more information on configuring the eDirectory database cache, see Section 19.0, Maintaining NetIQ eDirectory.
NetIQ eDirectory uses public and private key pairs for authentication. Generating these keys is a very CPU-intensive process. Beginning with eDirectory 8.7.3, you can choose to store passwords using the simple password feature of NetIQ Modular Authentication Service (NMAS). When you do this, passwords are kept in a secure location in the directory, but key pairs are not generated until they are actually needed for authentication between servers. This greatly improves the speed of loading an object that has password information.
To enable simple passwords during an LDIF import:
In NetIQ iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Import Data From File on Disk, then click Next.
Select LDIF from the File Type drop-down list, then enter the name of the LDIF file containing the data you want to import.
Click Next.
Specify the LDAP server where the data will be imported and the type of login (anonymous or authenticated).
Under Advanced Settings, select Store NMAS Simple Passwords/Hashed Passwords.
Click Next, then follow the online instructions to complete the remainder of the LDIF import wizard.
If you choose to store passwords using simple passwords, you must use an NMAS-aware Novell Client to log in to the eDirectory tree and access traditional file and print services. NMAS must also be installed on the server. LDAP applications binding with name and password will work seamlessly with the simple password feature.
For more information on NMAS, see the NetIQ Modular Authentication Services Administration Guide.
Having unnecessary indexes can slow down your LDIF import, because each defined index requires additional processing for every entry that has attribute values stored in that index. Make sure you don't have unnecessary indexes before you do an LDIF import, and consider creating some of your indexes after you have finished loading the data and reviewed predicate statistics to see where indexes are really needed.
For more information on tuning indexes, see Index Manager.