Creating a Rules Engine
Posted on November 15, 2009 by Derek Dieter
A rules engine is a schedule-based data validation application that typically runs as a meta-layer on top of an OLTP application. It fires a set of queries (defined as rules) that determine whether the underlying data elements comply with a specific rule’s definition. Compliance with each rule is then recorded as a pass or a fail.
Determining the object to report on
The first step is to understand the main object of data you are monitoring. This could be customers, loans, employees, or basically any object that has data surrounding it. This object will then be the focus of the rules engine report. Sometimes the data being monitored is composed of more than one object: you may want to monitor the data surrounding customers at certain store locations, in which case the granularity of your application will be customer-location combinations.
In our example we will monitor customers for our retail store AlphaBeta. The rule data we want to collect falls within 3 categories:
- Failure to Pay
- Significant Purchases
- Returns
These three categories are parent categories for a series of “rules,” which are implemented as stored procedures that check for specific conditions surrounding our customers. Here are the rules in each category, followed by a sketch of how they might be cataloged:
- Failure to pay:
- Rule 1 – Has failed credit card authorizations within 30 days
- Rule 2 – Insufficient funds outstanding
- Significant purchases:
- Rule 1 – Order amount over $500 within 30 days
- Rule 2 – Alcohol purchase over $200 within 30 days
- Rule 3 – Drug purchases over $400 within 30 days
- Returns:
- Rule 1 – Returns over 15% of items
- Rule 2 – Returns over $200
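A minimal sketch of how these rules might be cataloged; the table and column names here are assumptions rather than part of the original design:
[cc lang="sql"]
-- Rule catalog sketch; table and column names are assumptions.
CREATE TABLE RuleCategories
(
	CategoryID   int PRIMARY KEY,
	CategoryName varchar(50) NOT NULL
)

CREATE TABLE Rules
(
	RuleID        int PRIMARY KEY,  -- e.g. 20002
	CategoryID    int NOT NULL REFERENCES RuleCategories(CategoryID),
	RuleName      varchar(100) NOT NULL,
	ProcedureName sysname NOT NULL  -- e.g. 'spRule_20002'
)

INSERT INTO RuleCategories VALUES (1, 'Failure to Pay')
INSERT INTO RuleCategories VALUES (2, 'Significant Purchases')
INSERT INTO RuleCategories VALUES (3, 'Returns')
[/cc]
A scheduled job can then iterate this catalog and execute each ProcedureName with a fresh @JobID for every run.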
Now that we have the rules defined, the stored procedures can be created. Each stored procedure queries the underlying data for all customers to determine compliance for that customer. Rule 1 in Significant Purchases would look like the following:
[cc lang="sql"]
CREATE PROCEDURE spRule_20002
	@JobID int
AS
BEGIN
	-- Flag every customer with an order of $500 or more
	-- within the last 30 days.
	INSERT INTO RuleStaging
	(
		CustomerID,
		RuleID,
		RuleStatus,
		JobID
	)
	SELECT DISTINCT  -- one row per customer, even with multiple orders
		c.CustomerID,
		RuleID = 20002,
		RuleStatus = 1,  -- 1 = failed the rule
		JobID = @JobID
	FROM Customers c
	JOIN Orders o
		ON o.CustomerID = c.CustomerID
	WHERE o.OrderDate >= DATEADD(DAY, -30, GETDATE())
		AND o.OrderAmount >= 500
END
[/cc]
You’ll notice the rule only captures customers that fail the condition; we do not need to record the customers that do not trip the rule. Once we capture the failing customers, we place them in a RuleStaging table. This is an intermediary step prior to placing the results into our RuleFlags table, which contains a separate row for each rule/customer combination. Once all our rules have finished running, we update the RuleFlags table with the new rule statuses. When that is complete, the last step is to aggregate the number of failed rules in each category for each customer and update the RuleReport table, which summarizes the data for each customer, providing the number of rules failed in each category. Our table structure looks like the following:
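A minimal DDL sketch of these tables; the column names and types are assumptions based on the description above:
[cc lang="sql"]
-- Staging: raw failures captured by each rule run.
CREATE TABLE RuleStaging
(
	CustomerID int NOT NULL,
	RuleID     int NOT NULL,
	RuleStatus tinyint NOT NULL,  -- 1 = failed
	JobID      int NOT NULL
)

-- History: one row per rule/customer combination, closed off on change.
CREATE TABLE RuleFlags
(
	RuleFlagID int IDENTITY(1,1) PRIMARY KEY,
	CustomerID int NOT NULL,
	RuleID     int NOT NULL,
	RuleStatus tinyint NOT NULL,
	BeginDate  datetime NOT NULL,
	EndDate    datetime NULL  -- NULL = current row
)

-- Summary: failed-rule counts per customer and category.
CREATE TABLE RuleReport
(
	CustomerID  int NOT NULL,
	CategoryID  int NOT NULL,
	FailedRules int NOT NULL,
	JobID       int NOT NULL,
	PRIMARY KEY (CustomerID, CategoryID)
)
[/cc]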
In this model, the first step is to run all the rules against the customers. Once complete, the RuleStaging table is populated with all the customers that failed each rule. The historical RuleFlags table is then updated with the new statuses: if a status changes for a customer, the old record is closed off and a new record is inserted. Lastly, the RuleReport table is updated with the latest summary information indicating the number of failed rules in each category for each customer.
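A rough sketch of that close-off and summary pass, assuming the table shapes above plus the rule catalog sketched earlier:
[cc lang="sql"]
-- Close-off and summary sketch; assumes the RuleStaging, RuleFlags,
-- RuleReport, and Rules tables sketched above.
DECLARE @JobID int
SET @JobID = 1  -- the run that just finished (illustrative)

-- Close off current flags for customers that no longer trip the rule.
UPDATE rf
SET rf.EndDate = GETDATE()
FROM RuleFlags rf
WHERE rf.EndDate IS NULL
	AND NOT EXISTS
	(
		SELECT 1
		FROM RuleStaging rs
		WHERE rs.CustomerID = rf.CustomerID
			AND rs.RuleID = rf.RuleID
			AND rs.JobID = @JobID
	)

-- Open a new flag for each staged failure with no current row.
INSERT INTO RuleFlags (CustomerID, RuleID, RuleStatus, BeginDate)
SELECT rs.CustomerID, rs.RuleID, rs.RuleStatus, GETDATE()
FROM RuleStaging rs
WHERE rs.JobID = @JobID
	AND NOT EXISTS
	(
		SELECT 1
		FROM RuleFlags rf
		WHERE rf.CustomerID = rs.CustomerID
			AND rf.RuleID = rs.RuleID
			AND rf.EndDate IS NULL
	)

-- Rebuild the per-category summary of failed rules.
DELETE FROM RuleReport

INSERT INTO RuleReport (CustomerID, CategoryID, FailedRules, JobID)
SELECT rf.CustomerID, r.CategoryID, COUNT(*), @JobID
FROM RuleFlags rf
JOIN Rules r
	ON r.RuleID = rf.RuleID
WHERE rf.EndDate IS NULL
GROUP BY rf.CustomerID, r.CategoryID
[/cc]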
Additional Techniques
This is a basic rundown of the structure of a rules engine. While there are many variations in its design, the basic components tend not to change much. There are, however, several other techniques that can be implemented.
One such technique is to offer near-real-time updating. Using SQL 2008, this can be implemented via Change Data Capture. The implementation requires a cross-reference of the table columns each rule depends on. When a column is updated for a particular customer, that customer is flagged, along with the rules that depend on the changed column. Change Data Capture can monitor the underlying data in order to provide this type of setup. To do so, however, it is recommended to employ a Meta Data Driven Change Data Capture (CDC) Solution.
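For illustration, here is a hedged sketch of enabling CDC on the Orders table and pulling back the customers whose rows changed; the captured column list and capture instance name are assumptions:
[cc lang="sql"]
-- Enable CDC for the database and the Orders table (SQL 2008).
-- The captured column list is an assumption for this example.
EXEC sys.sp_cdc_enable_db

EXEC sys.sp_cdc_enable_table
	@source_schema        = N'dbo',
	@source_name          = N'Orders',
	@role_name            = NULL,
	@captured_column_list = N'OrderID, CustomerID, OrderDate, OrderAmount'

-- A refresh job could then flag only the customers whose data changed,
-- re-running just the rules that depend on the changed columns.
DECLARE @from binary(10), @to binary(10)
SELECT @from = sys.fn_cdc_get_min_lsn('dbo_Orders'),
       @to   = sys.fn_cdc_get_max_lsn()

SELECT DISTINCT CustomerID
FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from, @to, N'all')
[/cc]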
Another technique is to run the rules in parallel. Doing so requires either Service Broker or the CLR in order to call the procedures all at once. This helps reduce the time between refreshes.
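As a rough sketch of the Service Broker approach (all object names here are assumptions, not part of the original design):
[cc lang="sql"]
-- Service Broker sketch for firing rule procedures in parallel.
-- The database must have Service Broker enabled
-- (ALTER DATABASE ... SET ENABLE_BROKER).
CREATE MESSAGE TYPE RuleRequestMsg VALIDATION = WELL_FORMED_XML
GO
CREATE CONTRACT RuleContract (RuleRequestMsg SENT BY INITIATOR)
GO
CREATE QUEUE RuleQueue
GO
CREATE SERVICE RuleService ON QUEUE RuleQueue (RuleContract)
GO

-- Activation procedure: dequeues one request and runs that rule proc.
CREATE PROCEDURE spRuleActivation
AS
BEGIN
	DECLARE @handle uniqueidentifier, @body xml, @proc nvarchar(128);

	WAITFOR (
		RECEIVE TOP (1)
			@handle = conversation_handle,
			@body   = CAST(message_body AS xml)
		FROM RuleQueue
	), TIMEOUT 1000;

	IF @handle IS NOT NULL
	BEGIN
		SET @proc = @body.value('(/Rule/@proc)[1]', 'nvarchar(128)');
		EXEC @proc;  -- e.g. spRule_20002
		END CONVERSATION @handle;
	END
END
GO

-- MAX_QUEUE_READERS controls how many rules run concurrently.
ALTER QUEUE RuleQueue WITH ACTIVATION
(
	STATUS = ON,
	PROCEDURE_NAME = spRuleActivation,
	MAX_QUEUE_READERS = 4,
	EXECUTE AS SELF
)
GO

-- The job then queues one message per rule instead of running them serially.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
	FROM SERVICE RuleService
	TO SERVICE 'RuleService'
	ON CONTRACT RuleContract
	WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
	MESSAGE TYPE RuleRequestMsg (N'<Rule proc="spRule_20002"/>')
[/cc]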