
Trillions of evaluations, under a second

Most rule engines break down quietly. They don’t crash. They just get slow enough to be useless.

I recently had to validate 1 billion rows × 1,000 columns. Each field had its own rules and anti-rules, and every violation had to be flagged deterministically.

A traditional rule engine approach was obvious… and obviously wrong.

Why? Because rule engines evaluate cell by cell, rule by rule. At this scale, that means:

trillions of evaluations

branch-heavy execution

cache misses everywhere

hours of runtime (best case)
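The back-of-the-envelope math makes the problem obvious. Assuming (hypothetically) an average of just two rules per field:

```python
rows = 1_000_000_000
cols = 1_000
rules_per_field = 2  # hypothetical average; the real mix varies by column

# A cell-by-cell engine pays this cost as individual, branchy evaluations.
evaluations = rows * cols * rules_per_field
print(f"{evaluations:.0e}")  # → 2e+12: two trillion rule checks
```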

So I threw the model away.

Instead of evaluating rows, I flipped the problem and evaluated predicates.

Here’s the core idea that changed everything:

Compile every atomic rule into a column-level predicate

Evaluate predicates once over the entire column (vectorized)

Represent results as bitsets

Combine rules and anti-rules using bitwise algebra (AND / OR / NOT)

Produce per-field, per-row violations without branching

No row-by-row loops. No rule interpretation. No agenda, no firing, no guessing.
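Here's a minimal sketch of the idea in NumPy. The column name, thresholds, and sentinel value are all hypothetical, but the shape is the real one: each atomic rule compiles to a single vectorized pass over the column, and the resulting masks combine with bitwise algebra.

```python
import numpy as np

# A small stand-in column; the real pipeline runs the same passes over
# millions of values at a time.
age = np.array([25, -1, 130, 0, 47, -3])

# Compile atomic rules into column-level predicates: each is one
# vectorized pass producing a boolean mask over every row at once.
non_negative = age >= 0           # rule
plausible = age <= 120            # rule
test_sentinel = age == -1         # anti-rule: sentinel rows are exempt

# Combine rules and anti-rules with bitwise algebra; no per-row branching.
valid = (non_negative & plausible) | test_sentinel
violations = ~valid

print(np.flatnonzero(violations))  # → [2 5]: rows flagged for this field
```

Each predicate touches the column exactly once, so a predicate shared by several composite rules is evaluated once and reused as a mask.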

Just:

columnar execution

predicate reuse

bit-parallelism

memory-bandwidth–bound performance
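The bit-parallelism comes from packing boolean masks into machine words, so a single bitwise instruction combines many rows at once. A sketch using NumPy's `packbits` (8 rows per byte, 64 per word once the buffer is reinterpreted as `uint64`):

```python
import numpy as np

# Two predicate masks over 16 rows (illustrative; real masks span millions).
a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)
b = np.array([1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0], dtype=bool)

# Pack each mask into a bitset: 8 rows per byte.
pa = np.packbits(a)
pb = np.packbits(b)

# One bitwise AND per word combines a whole batch of rows at once.
both = pa & pb
rows_passing_both = np.unpackbits(both).astype(bool)[:a.size]

# The packed result matches the elementwise combination exactly.
assert np.array_equal(rows_passing_both, a & b)
```

Packed masks are also 8× smaller than byte-per-row booleans, which is part of what pushes the workload toward memory-bandwidth-bound rather than compute-bound.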

Result:

What would take hours with a rule engine finished in seconds

Fully deterministic

Fully explainable

Linear scaling with data size

Horizontal scaling becomes trivial (bitsets combine across chunks)

This is not “AI”. This is not “ML”. This is respecting how CPUs, memory, and data actually work.

Most validation systems are slow not because the problem is hard, but because the abstraction is wrong.

When you stop asking “Which rule applies to this row?” and instead ask “Which rows satisfy this predicate?”, the entire performance curve changes.

That shift is the difference between application-level thinking and database-engine-level thinking.

Scale doesn’t require magic. It requires the right mental model.