Ryan Brush edited this page Aug 29, 2013 · 2 revisions

Clara's rule engine is divided into distinct components:

  • The Rete network, an immutable directed acyclic graph that defines how data flows between units of logic.
  • The working memory, used to hold facts and indexed state that can be joined to other elements.
  • The transport, which transports facts and other state between nodes in the Rete network.

These items are loosely coupled and can be replaced with independent implementations. For instance, the working memory is a simple persistent data structure by default, but it could be a distributed hash table or some other model. Similarly, the transport uses local function calls by default, but it could ship computation to remote machines.

Such decoupling wouldn't provide much value in most production rule systems, because they offer functionality that can't easily be mapped to a parallel or distributed processing model. Clara, however, is designed to allow fully independent computation on incoming data, even within a single rule.

Consider the following simple Clara rule:

(defrule cold-and-windy
  "Find places where it's cold and windy."
  [Temperature (< temperature 20) (== ?loc location)]
  [WindSpeed (> windspeed 30) (== ?loc location)]
  =>
  (insert! (->ColdAndWindy ?loc)))

The semantics should be fairly clear: find cold temperatures and high wind speeds at a common location, and assert that it's cold and windy there. However, when this rule is run in Clara, the Temperature constraint and the WindSpeed constraint can be evaluated fully independently of one another. Temperatures and wind speeds that meet the given criteria are then routed to a common place defined by the hash value of the location that they bind to.
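The routing described above can be illustrated with a small sketch in plain Clojure. This is not Clara's actual internals, just a hypothetical model of the idea: each fact is assigned a partition determined solely by the value it binds to the join variable, so Temperature and WindSpeed facts with the same location always land in the same place and can be joined locally.

```clojure
;; Hypothetical sketch of hash-based routing -- not Clara's real implementation.
;; A fact's partition depends only on its join binding, so facts that share a
;; location always hash to the same partition, with no global coordination.

(defn route-partition
  "Pick a partition for a fact based on its join binding."
  [n-partitions binding-value]
  (mod (hash binding-value) n-partitions))

(def facts
  [{:type :temperature :location "MCI" :temperature 15}
   {:type :wind-speed  :location "MCI" :windspeed 45}
   {:type :temperature :location "ORD" :temperature 10}])

;; Group facts by their target partition. The MCI temperature and wind speed
;; end up in the same partition, ready to be joined there.
(def by-partition
  (group-by #(route-partition 8 (:location %)) facts))
```

Because `route-partition` is a pure function of the binding value, the partitions can be evaluated on separate threads or separate machines without sharing state.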

This has some important implications:

  • A system could be given millions of Temperature or WindSpeed readings, and these can be evaluated on separate threads or even on separate processes.
  • Items that have a common binding are routed to a common location by the transport, essentially doing a hash-based join. In fact, hash-based joins are the only joins Clara supports, which guarantees that rules have efficient joins and can be parallelized in this way.

In fact, this is exactly what the implementation of clara-storm does. Using the Storm processing system, it deals with large volumes of incoming data, finds facts that may be of interest, and routes them by hashing the joined fields, merging streams of related facts. Other processing infrastructures are also possible.

This processing model has advantages even on a single machine. Rule evaluation spreads across multiple cores with minimal thread contention, an important feature as adding cores becomes the primary means of increasing processing power.

Tradeoffs

Of course, achieving these advantages requires some tradeoffs in Clara's design. These are listed below.

First, facts in Clara must be immutable. To change a value, the user must remove the previous fact and insert a new one.
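In practice, "changing" a fact is a retraction followed by an insertion. A minimal sketch using Clara's session API (`retract`, `insert`, and `fire-rules` from `clara.rules`), assuming a session and a `Temperature` record are already defined:

```clojure
;; Sketch, assuming `session`, `old-temperature`, and the Temperature record
;; exist. The fact itself is never mutated; the session is updated by
;; retracting the stale fact and inserting its replacement.
(-> session
    (retract old-temperature)         ; remove the stale reading
    (insert (->Temperature 25 "MCI")) ; insert the new reading
    (fire-rules))                     ; re-evaluate rules against the new state
```

Since sessions are themselves immutable, each step returns a new session value, leaving the original untouched.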

Second, the requirement of hash-based joins eliminates a number of options when writing rules, such as doing arbitrary comparisons in-line between constraints. Fortunately, for most problems there are other ways of modeling rules so such comparisons aren't necessary. These include:

  • Using accumulators to reason over collections of related facts. Rather than comparing a fact to each of its siblings to find, say, the one with the newest timestamp, an accumulator can locate it directly and more efficiently.
  • Clara still offers the ability to run arbitrary tests between joined facts using the test facility. This is similar to the test expression in Jess or the eval expression in Drools, where arbitrary logic can be run to test previously bound variables.
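Both alternatives can be sketched in the style of the rule above. The record names and fields here (`NewestTemperature`, `UnusualSpread`, `:timestamp`) are illustrative assumptions, not part of Clara; `acc/max` refers to Clara's accumulators namespace:

```clojure
;; Sketches only; record definitions and requires are assumed, e.g.
;; (:require [clara.rules.accumulators :as acc])

;; 1. An accumulator finds the newest reading without pairwise comparisons.
(defrule newest-temperature
  [?newest <- (acc/max :timestamp :returns-fact true) :from [Temperature]]
  =>
  (insert! (->NewestTemperature ?newest)))

;; 2. A :test condition runs arbitrary logic over previously bound variables,
;;    here comparing values joined from two different fact types.
(defrule unusual-spread
  [Temperature (== ?temp temperature) (== ?loc location)]
  [WindSpeed (== ?speed windspeed) (== ?loc location)]
  [:test (> ?speed (* 2 ?temp))]
  =>
  (insert! (->UnusualSpread ?loc)))
```

Note that in the second rule the join on `?loc` is still hash-based; only the final comparison between already-joined facts runs as an arbitrary test.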

It should be noted that an additional advantage of the "hash join only" restriction is predictable performance. An arbitrary comparison in other rule engines can degrade to n-squared performance -- comparing one instance of a fact type to every other instance -- and it's not always clear to the user when such a degradation might occur.

Conclusion

Clara's architecture is really a combination of ideas borrowed from other domains: known algorithms for forward-chaining rules, immutable data, and parallel processing. However, this combination seems to be powerful, allowing us to express complicated business logic as simple rules, and apply it to our data at scale.
