In the introduction to this series (SAP HR CMP Integration Driver) I discussed the plan of working through the SAP HR driver that ships with the Compliance Management Platform (CMP) to give a better understanding of its inner workings.
You can read more about the new family of SAP drivers that were released with Identity Manager 3.6 and the Compliance Management Platform in this article on the SAP family of drivers:
As this series develops, I will update the links to the rest of the series here:
The plan is to try to get more articles of this nature, which walk through a default driver configuration, explaining WHAT is going on and, when possible, WHY it is done that way, in order to make troubleshooting and modifications easier and safer. If you do not know WHY something is being done, it is often hard to work with it.
There are all sorts of interesting little tidbits scattered throughout the various driver configurations that are of interest, and it would be great to have them all in one location as a reference.
Thus I started this Wiki page: http://wiki.novell.com/index.php/Detailed_driver_walk_through_collection to try and pull it all together.
If you have the time, consider looking at a driver configuration you are very familiar with and try writing up a channel (Publisher or Subscriber), a policy set (say the Publisher Event Transform, the Subscriber Command Transform, or whatever tickles your fancy), or, if you can, the entire driver.
The more we get written, the better it is for everyone. This is also of interest for the older and newer driver configurations, as they change from version to version, and it is important to be able to notice the differences between the two, if we are to ever have a hope of doing a meaningful upgrade.
The hope is to gather as much content as possible, even duplicate content, since different perspectives are of interest, to make it better for everyone.
A quick recap of the SAP HR driver then. The current shipping driver handles the relationships between Organizations, Positions, Jobs, and Persons in SAP's HR module in a somewhat simple fashion, and if your SAP OM (Organizational Management) module is used in a somewhat complex fashion, then the driver may have problems with it.
This was recognized, and for backwards compatibility reasons, the Novell Identity Manager 3.6 product comes with two different versions of the SAP HR driver. There is the previous version, updated for Identity Manager 3.6, and there is the version called the CMP SAP HR driver.
This second driver is the one under discussion, and it requires a second driver to work hand in hand with, the SAP Business Logic driver.
Let's work through the new CMP SAP HR driver first, then move on to the SAP Business Logic driver.
In the first article (SAP HR CMP Integration Driver Walkthrough – Part 1) I discussed the driver configuration settings.
In the second article (SAP HR CMP Integration Driver Walkthrough – Part 2) I discussed the Global Configuration Values.
In the third article (SAP HR CMP Integration Driver Walkthrough – Part 3) I started on the Input Transform and got barely into the Publisher Event Transform rule set.
In the fourth article (SAP HR CMP Integration Driver Walkthrough – Part 4) I got part way through the Publisher Event Transform.
In the fifth article (SAP HR CMP Integration Driver Walkthrough – Part 5) I finally finished the Publisher Event Transform.
In the sixth article (SAP HR CMP Integration Driver Walkthrough – Part 6) I made it through the Match and Create rules.
Let's continue from where I left off and work through the Publisher Placement rules.
As usual, a little logging.
Break if not a user
This makes sure the next few rules only apply to a User object. This could also have been handled by adding a condition, if class name equals User, to all the following rules, but this works too. I wonder which is the more efficient approach.
Employee-Name-Based (Variable Length)
Now we face a tough choice: how are we going to name users? Back in an earlier part (SAP HR CMP Integration Driver Walkthrough – Part 2) of this series we looked at the GCVs, and one of them was drv.new.user.naming.option. Back there I said that there are two options that are employee-name based, fixed length and variable length. Then there is an Attribute-based value, where you get an option to select a User Naming Attribute. The default suggestion is based on workforceID, which would give you an all-numeric user name.
The GCV displays an enumeration list, which shows you three pretty (human-readable) names but really stores integer values (1, 2, or 3) for brevity.
The three values are:
Employee-Name-Based (Variable Length) which is 1
Employee-Name-Based (Fixed Length) which is 2
Attribute-Value-Based which is 3
Then the next GCV on display varies depending on the value selected. For variable or fixed length, it wants a maximum length specified, and for Attribute based it wants the name of an attribute.
In this rule, we come to the implementation of these choices. This specific rule, running first, tests for condition 1, which is Employee name based with a variable length.
Here the comments have useful information, telling us the patterns it is going to try using the Unique Name token. It will try them in order until the first one succeeds.
First character of Given Name + Surname
First character of Given Name + first character of Initials + Surname
First two characters of Given Name + Surname
First three characters of Given Name + Surname
First character of Given Name + Surname + digit starting with 1 incremented until name is unique within eDirectory
The last one is a fallback, so that we always get a name, even if it is pretty ugly.
The Unique Name token is pretty cool and works really well for just this use case. In one token call you define five different patterns, and you are done! Coding this by hand the old way would take a lot more work. Thanks, Novell, for this great token! It is still one of my favorites.
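To make the pattern ordering concrete, here is a minimal Python sketch of what the Unique Name token is effectively doing in this rule. This is my own illustration, not the driver's code; the function name and the exists() callback are made up for the example.

```python
def variable_length_name(given, initials, surname, exists):
    """Try each naming pattern in order; return the first not already in use.

    exists() should return True if a name is already taken in the tree.
    """
    patterns = [
        given[:1] + surname,                 # first initial + surname
        given[:1] + initials[:1] + surname,  # first initial + middle initial + surname
        given[:2] + surname,                 # first two chars of given name + surname
        given[:3] + surname,                 # first three chars of given name + surname
    ]
    for candidate in patterns:
        candidate = candidate.lower()
        if not exists(candidate):
            return candidate
    # Fallback: first initial + surname + counter, incremented until unique
    counter = 1
    while exists((given[:1] + surname).lower() + str(counter)):
        counter += 1
    return (given[:1] + surname).lower() + str(counter)

taken = {"jsmith", "jqsmith"}
print(variable_length_name("John", "Q", "Smith", taken.__contains__))  # josmith
```

The key property, as in the real token, is that the fallback counter pattern can never fail, so you always get some name out the other end.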
Employee-Name-Based (Fixed Length)
This is basically the same approach as the previous rule, except here the GCV is set to the value 2. What differs is that, to keep the length fixed, they store the Surname value in a variable and, in each Unique Name token pattern, substring it to the correct length so the total length stays fixed.
Here the patterns are slightly different due to the fixed length, and are:
First character of Given Name + up to (length-1) characters of Surname
First character of Given Name + first character of Initials + up to (length-2) characters of Surname
First two characters of Given Name + up to (length-2) characters of Surname
First three characters of Given Name + up to (length-3) characters of Surname
First character of Given Name + up to (length-4) characters of Surname + three digits padded with zeros if necessary starting with 001 and incremented until name is unique within eDirectory
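The fixed-length variant can be sketched the same way. Again, this is my own Python illustration of the logic described above, not the driver's actual code; the interesting part is that every pattern truncates the surname so the total never exceeds the configured maximum.

```python
def fixed_length_name(given, initials, surname, max_len, exists):
    """Build a name of at most max_len characters, truncating the surname."""
    g, i, s = given.lower(), initials.lower(), surname.lower()
    patterns = [
        g[:1] + s[:max_len - 1],             # initial + truncated surname
        g[:1] + i[:1] + s[:max_len - 2],     # two initials + truncated surname
        g[:2] + s[:max_len - 2],             # two chars of given name
        g[:3] + s[:max_len - 3],             # three chars of given name
    ]
    for candidate in patterns:
        if not exists(candidate):
            return candidate
    # Fallback: initial + truncated surname + zero-padded three-digit counter
    counter = 1
    while True:
        candidate = g[:1] + s[:max_len - 4] + f"{counter:03d}"
        if not exists(candidate):
            return candidate
        counter += 1

print(fixed_length_name("John", "Q", "Smithers", 8, lambda n: False))  # jsmither
```

Note how the fallback reserves four characters (one for the initial, three for the counter), which is exactly why the XPATH in the real rule subtracts 4 from the length GCV.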
For fun, let's pick apart the last pattern.
<arg-string>
  <token-lower-case>
    <token-substring length="1">
      <token-attr name="Given Name"/>
    </token-substring>
    <token-xpath expression="substring($surname,1,number(~drv.new.user.naming.2.length~)-4)"/>
  </token-lower-case>
</arg-string>
This uses a neat trick: a GCV, referenced via the ~GCV-Name~ notation, supplies the maximum allowed name length to an XPATH expression that does the substringing. In this case, the regular Substring token on the Given Name attribute gets the first letter of the first name, and then the XPATH takes the allowed portion of the last name. Then it lower-cases all of it, just to keep naming easy.
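One detail worth remembering when reading that expression: XPath 1.0's substring() is 1-indexed, unlike most programming languages. A quick Python sketch (my own, for illustration; the GCV value of 8 is an assumed example) of what it computes:

```python
def xpath_substring(value, start, length):
    """Mimic XPath 1.0 substring(): positions are 1-indexed."""
    return value[start - 1 : start - 1 + length]

# substring($surname, 1, ~drv.new.user.naming.2.length~ - 4) leaves room for
# the one-character first initial plus the three-digit uniqueness counter.
max_len = 8  # assumed value of the drv.new.user.naming.2.length GCV
print(xpath_substring("Smithers", 1, max_len - 4))  # Smit
```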
Attribute-Value-Based
This rule tests for the GCV set to 3, and then quite simply sets the name to the value of the specified attribute.
Note that there is no Unique Name token used here, which means you need to take care in your choice of attribute. workforceID, the default, makes sense in some ways, since it should be mostly globally unique (it would be embarrassing to pay two people under the same ID, or some other such mixup). But in the case of SAP HR, there are a number of cases where workforceID might change, and I have not yet seen any code to handle renaming such users in that eventuality. Nor am I sure that would even be a good idea.
Name and Place Object
Finally, since each prior rule has been setting a value into obj-name, we test for the availability of that variable. Personally, I would have used a match condition (like equals, but using a regular expression instead of a case-insensitive compare) against .+ which would tell us that obj-name is not just available but actually has a value of some kind.
I imagine that one of the rules should always provide a value, but it seems odd not to test that way here, as is done in other places.
If there is such a value, then the destination DN is set to the string of the GCV for user placement (idv.dit.data.users), a backslash, and then the generated obj-name value.
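In slash-form eDirectory DNs as the engine builds them here, the separator is a backslash, so the rule is really just doing string concatenation. A trivial sketch with made-up container values (not the actual GCV contents from any real tree):

```python
def placement_dn(users_container, obj_name):
    # GCV value + backslash + generated object name
    return users_container + "\\" + obj_name

print(placement_dn("data\\users", "jsmith"))  # data\users\jsmith
```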
There is a Break token here that makes no sense to me, since it ends this policy object, not the whole Placement rule set of policies, so it looks like a leftover hanging on to me.
In this rule the other supporting objects get placed.
I happen to dislike the approach taken by this driver: way back in the Publisher Event Transform it created a structure for these objects (if it did not already exist), buried under the Driver object, and now it uses that structure here for placement.
In a typical SAP shop, since the overhead for SAP is so high, you would have thousands of employees, and often thousands of positions, organizations, and jobs. One client I have worked with has 3000 users but 8000 supporting objects in SAP HR!
Thus, from a partitioning perspective, I would prefer to place my SAP objects in a location that makes sense for my tree, rather than in the driver set partition.
Mostly this is neither here nor there, and comes down to personal taste, since eDirectory is so good about partitioning that I could partition off the sap subtree from under the driver object and place it wherever I want. But I find this approach distasteful, as only a DirXML Script snob can! Yes I admit, I am a snob about this stuff.
For each type of object, under the driver (referenced via the dirxml.auto.driverdn GCV) there is an sap container, then an om container, and finally a set of o, s, and c containers for each object type.
The three object types are thus placed accordingly, using their object ID (OBJID) as the naming attribute (DirXML-SAPXID where X is O, S, or C).
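Putting those last two paragraphs together, the placement of the supporting objects can be sketched like this. The container names follow the description above; the function itself is my own illustration, not the driver's code, and the sample driver DN is invented.

```python
def sap_object_dn(driver_dn, sap_type, objid):
    """Place an SAP O/S/C object under the driver's sap/om/<type> container,
    named by its OBJID (the value stored in DirXML-SAPXID)."""
    containers = {"O": "o", "S": "s", "C": "c"}  # Organization, Position (S), Job (C)
    return "\\".join([driver_dn, "sap", "om", containers[sap_type], objid])

print(sap_object_dn("system\\driverset\\SAPHR", "O", "50001234"))
# system\driverset\SAPHR\sap\om\o\50001234
```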
The last rule is for Work Order objects, and they go into the container of the SAP Business Logic driver (specified by the dvr.sapbul.driverdn GCV).
These two rules are actually rules from the Event Transform being run again, as you can see from their names: Pub-etX, which breaks down as Publisher channel (Pub), Event Transform (et), and then S or P for Stylesheet or Policy object.
We need to run these again because, in the case of a modify for a nonexistent object, the modify would have come through the Event Transform, hit the association test, been seen as not associated, and been converted to a synthetic add. In that case, the work done to process the Relationships value would have been lost and thus needs to be redone here.
Often this is why you see stuff like this done in the Command transform, since that handles it both on the modify of an associated object case, and the add new object case. But this way works as well.
Reusing the same rule in both locations mostly keeps things simple. Of course, it means you need to test the same rule in two different locations, which is most of the complexity.
As usual, there is a log entry when done. At BrainShare 2010 this year I got to meet the author of this configuration, Holger Dopp, and he showed me a more complete output of the logging, and I have to admit, it is growing on me. When you consider how hard it is to read DSTrace, to figure out what happened, where it happened, and when it happened, and the skill it takes to read it, the logging in this driver starts to make more sense. You get a nice logged output that shows you, policy set by policy set, where you got up to in any given transaction.
Additionally, leaving DSTrace running is probably a bad plan if you have a busy system, as it REALLY hurts engine performance! Especially if you are tracing large local variables or node sets to the trace. I noticed this in some toolkit rules I worked on, in this series:
These rules are a different use case for Identity Manager: instead of working on one event at a time, after a trigger event these rules grab all the Groups in your tree and try to make sure that the Member attribute on Groups, and the Group Membership attribute on Users, are correctly set.
In that case, you would be writing a large set of values to the trace, which really hurts performance. What I found was that turning trace for the driver running the rule from level 3 down to 0 could shave hours off processing time! Usually there is a factor of 4-10 times improvement! If you are only running low event counts, it is not a big deal at all. Novell officially recommends that you run with DSTrace off for reasons such as this.
However, you still have errors, and odd events occurring in your system, so you need to have some mechanism to track down what happened and what went wrong.
After seeing what the entire log output for an event might look like, I began to see the wisdom of having this log. While I initially thought it would be quite verbose and wasteful of disk space, in reality it is very readable, useful, and much less space-intensive than the comparable DSTrace.
I asked Holger for a sample of the log trace, for, say, a user create, but he is a busy guy, and I do not have a test system with SAP HR available to try generating it. If I get a sample, I will post an article showing it, since I think it is quite interesting to see the whole shebang at once.
Disclaimer: As with everything else at NetIQ Cool Solutions, this content is definitely not supported by NetIQ, so Customer Support will not be able to help you if it has any adverse effect on your environment. It just worked for at least one person, and perhaps it will be useful for you too. Be sure to test in a non-production environment.