Identity Manager new feature: Jobs.

By: geoffc

June 4, 2008 1:26 pm


With the release of Novell Identity Manager 3.5, a number of new features were added. Big things, like the Work Order driver (http://www.novell.com/communities/node/2295/using-work-order-driver-idm, http://www.novell.com/communities/node/1853/workorder-details), the Null driver, and engine cache statistics (http://www.novell.com/communities/node/4777/driver-cache-stats-idm-36-imanager-plugins). Some more low-level features were added as well, like the Unique Name token (http://www.novell.com/communities/node/2209/unique-name-token-functionality-idm-35, http://www.novell.com/communities/node/3002/things-i-think-idm-needs-support), the Time and Convert Time tokens (http://www.novell.com/communities/node/2572/using-time-tokens-idm-35), and many more.

Work Orders as a concept, and the driver to process them, were added to let you time-delay events. That is, when some event happens, you somehow create a Work Order with a Due Date on which it should be processed. That helps address the problem of "I need to react to an event now, but process it at some point in the future."

There is another aspect to time-delaying events and reactions to events: what if you want something to occur on a schedule? In the past, one easy way of doing this would have been to set up a cron job that called a script that used a tool, say ldapmodify, to flip a bit on some specific object. Then you would have a rule watching for that specific event to kick off the work you need done.
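A minimal sketch of that older approach might look like the following, assuming OpenLDAP's ldapmodify and a hypothetical trigger object cn=NightlyTrigger,ou=config,o=acme whose description attribute a driver rule watches (the server, DN, and object names here are invented for illustration):

```
# crontab entry: run the flip script at 02:00 every night
0 2 * * * /usr/local/bin/flip-trigger.sh

# /usr/local/bin/flip-trigger.sh
#!/bin/sh
# Rewrite an attribute on the trigger object; a rule in the driver
# watches for this modify event and kicks off the real work.
ldapmodify -H ldaps://idv.example.com -D "cn=admin,o=acme" -w "$LDAP_PASSWORD" <<EOF
dn: cn=NightlyTrigger,ou=config,o=acme
changetype: modify
replace: description
description: fired at $(date +%Y%m%d%H%M%S)
EOF
```

It works, but it needs an external box, credentials stored somewhere safe, and a rule to clean up after itself.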

Another approach would be the one taken in the Password Notifier driver by Lothar Haeger (http://www.novell.com/communities/node/1104/idm-2x3-password-notification-service-driver), which basically heartbeats every couple of minutes, checks the time, and does things at the intervals you choose.

Ultimately, this is what the Job facility that is new in Novell Identity Manager 3.5 does. There is a new process, bound to a driver or a driver set, that can run a job on a specific schedule.

In fact, when you set a schedule and look at the definition screens, you will see the crontab equivalent of the schedule you defined.
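If you are not used to reading crontab lines, the format is five time fields: minute, hour, day of month, month, and day of week. A purely illustrative example:

```
# min  hour  day-of-month  month  day-of-week
  30   1    *             *      0             # 01:30 every Sunday
```

The schedule string the plugin shows should follow the same time-field pattern.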

There are several types of Jobs included at this time:

  • Random Password Generator
  • Schedule Driver
  • Subscriber channel Trigger
  • Custom definition

The first one generates a random password for each of the objects in its scope; I am not sure what the point of this Job is, beyond demonstrating what can be done in a Job.

The second type of Job, Schedule Driver, is designed to start or stop a driver at a particular time. Stopping a driver at a given time could easily be done by sending an event that you watch for in a rule, and then throwing a fatal error, which stops the driver. Starting it from policy is not really possible. (Well, I guess you could have another driver call out to the DXCMD toolkit as a Java class to start the driver, but that is more than a bit contrived!)

The final class, Custom definition, is probably where we will see Jobs shine. Just because there are only three definitions right now does not mean more are not coming, and our horizons are now opened up to whatever possibilities our imaginations can deliver. I expect to see some truly silly uses for this concept in the near future. As I understand it, some of the cool stuff coming in Identity Manager 3.6 in the driver health statistics area will use Jobs to do things, so I expect we will see much more diversity in Jobs in the coming release.

One of the neat features is that you can make a Job that deletes itself after being run (this message will self-destruct in 5 seconds, I love it!), and it could potentially be a job that starts a driver. So imagine a monitoring tool that watches an Identity Manager driver to see if it is running and, if it goes down, creates a job that restarts the driver and then deletes itself. At BrainShare they were showing the new iManager plugins, which had the ability to do something like this as an Action when a driver hit a certain set of conditions.

The Job type I would like to focus on is the Subscriber channel trigger type.

This is a pretty powerful and flexible tool. It goes beyond the simple model I suggested before of using an ldapmodify command via cron, since it can be targeted at a single object and return a single event for that one object, or targeted at a subtree (with the option to go only one level deep, or all the way down) and return an event for every object it finds in scope.

The event is of type trigger; just as we have add, modify, and delete events, we now have a trigger event.

You can test for this in DirXML Script conditions with an "if operation equals trigger" test, to kick off your processing based on the event.

The trigger also has a name, stored in the operation data area as an operation property, that you can detect with an "if operation property equals" test, so that your process flow only starts when your specific trigger fires.
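In DirXML Script, those two conditions might look like this (a sketch; the property name, source, and the trigger name, SampleTrigger, come from the sample trigger document shown below):

```xml
<conditions>
  <and>
    <!-- fire only for trigger events -->
    <if-operation op="equal">trigger</if-operation>
    <!-- and only for the trigger generated by our specific job -->
    <if-op-property name="source" op="equal">SampleTrigger</if-op-property>
  </and>
</conditions>
```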

A sample of what a trigger for a single object might look like is below:

<nds dtdversion="3.5" ndsversion="8.x">
  <source>
    <product version="3.5.10.20070918 ">DirXML</product>
    <contact>Novell, Inc.</contact>
  </source>
  <input>
    <trigger class-name="User" event-id="trigger-job:SampleTrigger#20080516200324#0#0" qualified-src-dn="O=acme\OU=Users" source="SampleTrigger" src-dn="\IDMTEST\acme\users" src-entry-id="33675">
      <association>{0052DA6C-4995-db01-80D2-A90003000000}</association>
      <operation-data source="SampleTrigger"/>
    </trigger>
  </input>
</nds>

Like the Work Order driver, the Job, and its generated Trigger document do not really do much of anything by themselves. Now the task is up to you, to do something with the event. All this Job does is get you an event at a scheduled time.

With two simple tests, you can kick off what you need to do, based on:

  • the operation type: trigger
  • the operation property source: SampleTrigger

What you do next is up to you. You could use this to conduct an audit of last-login times in your tree, perhaps using the source DN specified in the document to decide where to look for users. This way you could split the load over time, checking the first container one night, the second container another night, and so on, just by defining different Jobs to handle it.

One thing to be careful about: the current operation is not something you should add your changes to, as a trigger is not going to do much of anything if it makes it through the rules. So remember to make all your writes either back to the source, or direct to the destination.

You can get at the Job definition either in iManager with the newer plugins (at least the Identity Manager 3.5 plugins, of which there is a version for iManager 2.6 and one for iManager 2.7; make sure to use the correct plugin, since the underlying Tomcat engine differs between iManager 2.6 and 2.7 and the plugins are not interchangeable) or in Designer for Identity Manager. (I know it is in Designer 3.0; I assume it is supported in Designer 2.1, but I switched to Designer 3.0 a long time ago.)

Alas, as is often the case with the first release of a new feature, while it is supported by both iManager and Designer, Designer is usually a bit slower to get access to 'live' functionality. Thus, while you can create, modify, and configure a Job in either Designer or iManager, as far as I know you can only start a job live via iManager.

Here is the iManager schedule page, where you can see the control you have over scheduling; note that my example is set to run manually.

Then in this next screen shot, you can see that from the iManager plugin, you can select a job from the list, and there is an option to Run Now. This can be very powerful during development, as you might imagine.

I usually build a trigger of some kind when testing a process: for example, a specific object with a specific attribute and a specific value will trigger my action on demand. The ability to select Run Now takes away the need to build that trigger mechanism into your rule. Then, when you are ready for production, you would just set a schedule for the Job to run on.

Interestingly enough, the Job event has its own trace level and log file (see the Misc tab on the Job definition page), and, if you have ever noticed, trace events carry a two-character code that identifies the channel they came across on. Subscriber channel events look like:

16:52:58 4CA39180 Drvrs: eDirectory Driver ST: Pumping XDS to eDirectory.
16:52:58 4CA39180 Drvrs: eDirectory Driver ST: Performing operation query for

A time stamp (16:52:58), an ID (4CA39180), the name of the flag showing in trace (Drvrs), the name of the driver, and then ST:, where ST stands for Subscriber channel Trace.

Events on the Publisher channel would instead have a PT: in the trace. It turns out Job events come across with a JT: value, for Job Trace, I guess. See http://www.novell.com/communities/node/4428/guide-reading-dstrace-output-identity-manager for more notes on reading DSTrace.

Once you have your trigger configured and running on the schedule you want (or running manually when you need to kick off your process) you can detect the event in the Event Transformation policy set, and do whatever it is you need to do.

Do remember to veto the trigger operation early, once you are done with it, as the operation will just generate unnecessary processing as it traverses the remaining rule sets of the driver.
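Putting it together, an Event Transformation rule along these lines would react to the trigger and then discard it. This is a sketch, assuming a job named SampleTrigger as in the sample document earlier; the trace message merely stands in for whatever real work you would do:

```xml
<rule>
  <description>Handle the SampleTrigger job event, then discard it</description>
  <conditions>
    <and>
      <if-operation op="equal">trigger</if-operation>
      <if-op-property name="source" op="equal">SampleTrigger</if-op-property>
    </and>
  </conditions>
  <actions>
    <!-- do the scheduled work here; a trace message is a placeholder -->
    <do-trace-message level="1">
      <arg-string>
        <token-text xml:space="preserve">SampleTrigger fired, starting scheduled work</token-text>
      </arg-string>
    </do-trace-message>
    <!-- veto so the trigger does not traverse the rest of the rule sets -->
    <do-veto/>
  </actions>
</rule>
```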

Like the Work Order driver, what you do next is up to you. Between the two approaches to time-delayed or time-scheduled event processing, you should be able to do most of what you need.

As always, if you come up with a scenario for which neither approach is sufficient, consider discussing it in the Support Forums, as many on the Identity Manager team read them and will often respond.



Disclaimer: As with everything else at NetIQ Cool Solutions, this content is definitely not supported by NetIQ, so Customer Support will not be able to help you if it has any adverse effect on your environment.  It just worked for at least one person, and perhaps it will be useful for you too.  Be sure to test in a non-production environment.
