Differentiable Defeasible Deontic Logic
DiffDDL bridges formal legal reasoning and modern machine learning. It lets you encode rules about what must be done, what may be done, and what must not be done — and then train those rules from data, so your compliance or policy engine improves with experience while staying interpretable.
If you work in legal tech, compliance engineering, or policy automation, DiffDDL gives you a way to turn written regulations (such as GDPR or the EU AI Act) into living, learnable decision systems that can explain themselves.
Keywords: legal tech, regulatory compliance, explainable AI, neural-symbolic AI, deontic logic, differentiable logic, GDPR, AI Act, policy automation, computational law
In everyday legal language, we talk about three kinds of norms:
- Obligations — what someone must do
- Permissions — what someone may do
- Prohibitions — what someone must not do
Traditional logic systems can express these, but they are usually hand-coded and brittle: if a new court ruling shifts how a rule is interpreted, or if a regulation is amended, someone has to rewrite the code manually.
DiffDDL changes that. Built on PyTorch, it makes these legal-style operators differentiable, meaning you can:
- Learn rule weights from real compliance data — instead of guessing importance, let the model learn it.
- Adapt to new regulatory frameworks — fine-tune for GDPR, the EU AI Act, sector-specific rules, or internal policies.
- Combine neural pattern recognition with symbolic reasoning — use natural language models to extract facts, then reason over them with transparent rules.
- Explain every decision — the system can tell you which rules fired, why they fired, and how strongly they conflicted.
Clone the repository and install it locally:
```bash
git clone https://github.com/OsamaMoftah/diffddl.git
cd diffddl
pip install -e ".[dev]"
```

This installs the core library along with development tools (tests, formatting, and notebook support).
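A quick smoke test from a Python shell confirms the editable install; this uses nothing beyond the public `DiffDDL` import shown throughout this README:

```python
# Smoke test: the import should succeed after `pip install -e ".[dev]"`.
from diffddl import DiffDDL

ddl = DiffDDL(fuzzy=True)
print(type(ddl).__name__)  # -> "DiffDDL"
```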
```python
import torch
from diffddl import DiffDDL, DeonticOp

# Create DDL operations
ddl = DiffDDL(fuzzy=True, temperature=2.0)

# Soft logical operations
x = torch.tensor([0.8])
y = torch.tensor([0.6])
and_result = ddl.soft_and(x, y)   # 0.48
or_result = ddl.soft_or(x, y)     # 0.92
not_result = ddl.soft_not(x)      # 0.2

# Deontic operators: obligation, permission, prohibition
obligation = ddl.obligation(x)    # O(x)
permission = ddl.permission(x)    # P(x)
prohibition = ddl.prohibition(x)  # F(x)
```

Legal rules often involve combining conditions: "if A and B, then C." In DiffDDL, logical connectives are softened so they work with real-valued evidence (like confidence scores from a document classifier) rather than just true/false.
| Operation | Symbol | DiffDDL Method | Formula |
|---|---|---|---|
| AND | ∧ | `soft_and(x, y)` | x × y |
| OR | ∨ | `soft_or(x, y)` | x + y − xy |
| NOT | ¬ | `soft_not(x)` | 1 − x |
| IMPLIES | → | `soft_implies(x, y)` | ¬x ∨ y |
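As a worked example, here is the "if A and B, then C" pattern composed from the table's operators. The numeric comments assume the formulas above hold exactly:

```python
import torch
from diffddl import DiffDDL

ddl = DiffDDL(fuzzy=True)

a = torch.tensor([0.8])  # evidence for condition A
b = torch.tensor([0.6])  # evidence for condition B
c = torch.tensor([0.9])  # evidence for conclusion C

# "if A and B, then C" becomes soft_implies(soft_and(A, B), C)
antecedent = ddl.soft_and(a, b)                  # 0.8 * 0.6 = 0.48
rule_strength = ddl.soft_implies(antecedent, c)
# ¬x ∨ y with x = 0.48, y = 0.9: 0.52 + 0.9 - 0.52 * 0.9 = 0.952
print(rule_strength)
```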
The deontic operators are the heart of the system: they mirror how lawyers and regulators think about norms.
| Operator | Meaning | DiffDDL Method |
|---|---|---|
| O | Obligation (must do) | `obligation(x)` |
| P | Permission (may do) | `permission(x)` |
| F | Prohibition (must not do) | `prohibition(x)` |
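In classical deontic logic the three operators are interdefinable: F(x) ≡ O(¬x) and P(x) ≡ ¬F(x). A defeasible system need not preserve these identities exactly, but the soft operators let you express them; a conceptual sketch, not a statement about DiffDDL's internal definitions:

```python
import torch
from diffddl import DiffDDL

ddl = DiffDDL(fuzzy=True)
x = torch.tensor([0.8])

# F(x) ≡ O(¬x): forbidding x is obliging not-x (classical identity;
# DiffDDL's defeasible semantics may diverge from this).
f_like = ddl.obligation(ddl.soft_not(x))

# P(x) ≡ ¬F(x): x is permitted to the degree it is not forbidden.
p_like = ddl.soft_not(ddl.prohibition(x))
```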
```
┌─────────────────────────────────────────────────────────────┐
│ DiffDDL │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────────────────┐ │
│ │ Predicate │ │ Differentiable DDL Ops │ │
│ │ Tensor │────▶│ │ │
│ │ (0.0 - 1.0) │ │ soft_and, soft_or │ │
│ └─────────────────┘ │ soft_not, soft_implies │ │
│ │ │ │
│ │ obligation(x) → O(x) │ │
│ │ permission(x) → P(x) │ │
│ │ prohibition(x) → F(x) │ │
│ └─────────────┬───────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────┐ │
│ │ Conflict Resolution │ │
│ │ │ │
│ │ resolve_conflict( │ │
│ │ oblig, perm, prob │ │
│ │ ) → decision │ │
│ └─────────────┬───────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────┐ │
│ │ Decision Output │ │
│ │ │ │
│ │ 0 = ALLOW │ │
│ │ 1 = ALLOW_WITH_OBLIGATIONS│ │
│ │ 2 = BLOCK │ │
│ └─────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
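The conflict-resolution step can also be called directly. Here is a usage sketch built on the `conflict_score` and `resolve_conflict` methods from the API reference below; the predicate names are hypothetical, and the decision codes follow the diagram above:

```python
import torch
from diffddl import DiffDDL

ddl = DiffDDL(fuzzy=True)

# Hypothetical predicate scores for a data-processing request.
consent_given = torch.tensor([0.9])
data_is_sensitive = torch.tensor([0.7])

o = ddl.obligation(consent_given)        # O: e.g. "must log the processing"
p = ddl.permission(consent_given)        # P: "may process"
f = ddl.prohibition(data_is_sensitive)   # F: "must not process"

conflict = ddl.conflict_score(o, p, f)   # how strongly the norms disagree
decision = ddl.resolve_conflict(o, p, f) # 0 = ALLOW, 1 = ALLOW_WITH_OBLIGATIONS, 2 = BLOCK
```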
The DifferentiableReasoner is where DiffDDL becomes truly powerful. It contains two parallel pathways:
- A neural encoder that learns deep representations of your input predicates.
- A symbolic rule layer where each rule learns which predicates it cares about and what kind of norm it produces (obligation, permission, or prohibition).
These two pathways feed into a shared decision head. The result is a system that can both discover new patterns from data and express them in the language of rules.
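To make the rule-layer idea concrete, here is a stripped-down conceptual sketch (illustrative only, not DiffDDL's actual internals): each rule holds learnable weights over the predicates and learnable logits over the three norm types.

```python
import torch
import torch.nn as nn

class ConceptualRuleLayer(nn.Module):
    """Illustrative sketch of a symbolic rule layer; NOT DiffDDL's implementation."""

    def __init__(self, n_predicates: int, n_rules: int):
        super().__init__()
        # Which predicates each rule attends to (learned).
        self.predicate_weights = nn.Parameter(torch.randn(n_rules, n_predicates))
        # Which norm each rule produces: obligation / permission / prohibition.
        self.norm_logits = nn.Parameter(torch.randn(n_rules, 3))

    def forward(self, predicates: torch.Tensor) -> torch.Tensor:
        # How strongly each rule fires for this input: (batch, n_rules)
        firing = torch.sigmoid(predicates @ self.predicate_weights.t())
        # Soft assignment of each rule to a norm type: (n_rules, 3)
        norm_mix = torch.softmax(self.norm_logits, dim=-1)
        # Aggregate rule firings into O/P/F strengths: (batch, 3)
        return firing @ norm_mix
```

The real `DifferentiableReasoner` packages this kind of rule layer together with the neural encoder and the shared decision head: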
```python
import torch
from diffddl.reasoner import DifferentiableReasoner

# Create a trainable reasoner
reasoner = DifferentiableReasoner(
    n_predicates=10,
    hidden_dim=32,
    n_rules=5,
)

# Forward pass
predicates = torch.rand(1, 10)  # 10 predicates
result = reasoner.reason(predicates)

print(f"Decision: {result.decided_action}")
print(f"Obligation: {result.obligation_strength.item():.3f}")
print(f"Permission: {result.permission_strength.item():.3f}")
print(f"Prohibition: {result.prohibition_strength.item():.3f}")
```

The `TrainableDDLReasoner` variant can be trained end to end with a standard PyTorch loop:

```python
import torch
from diffddl.reasoner import TrainableDDLReasoner

reasoner = TrainableDDLReasoner(
    n_predicates=10,
    hidden_dim=32,
)
optimizer = torch.optim.Adam(reasoner.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    predicates = torch.rand(8, 10)
    target_decisions = torch.randint(0, 3, (8,))
    loss = reasoner.loss(predicates, target_decisions)
    loss.backward()
    optimizer.step()
    if epoch % 20 == 0:
        print(f"Epoch {epoch}, Loss: {loss.item():.4f}")
```

Every decision can be explained. Given a result, the reasoner can tell you:
- The overall decision and its confidence scores
- Which rules were active and how strongly they fired
- The top predicates driving each active rule
- Whether the rules were in conflict and, if so, how severely
This makes DiffDDL suitable for high-stakes domains where transparency is not optional — regulatory compliance, contract analysis, policy enforcement, and automated governance.
```python
explanation = reasoner.explain(
    result,
    predicates=predicates,
    predicate_names=["has_consent", "is_sensitive_data", ...],
)
```

DiffDDL is designed to sit at the boundary between perception and reasoning. You can use any neural model (BERT, a custom classifier, a vision model) to extract predicates from unstructured input, and then let DiffDDL handle the normative reasoning.
```python
import torch
import torch.nn as nn
from diffddl.reasoner import DifferentiableReasoner

class NeuralPredicateExtractor(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=64, n_predicates=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_predicates),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.encoder(x)

# Combined neuro-symbolic model
extractor = NeuralPredicateExtractor()
reasoner = DifferentiableReasoner(n_predicates=10)

# Document embedding → predicates → DDL reasoning
embedding = torch.rand(1, 768)  # From BERT, etc.
predicates = extractor(embedding)
decision = reasoner.reason(predicates)
```

See `examples/gdpr_compliance.py` for a full walkthrough. It demonstrates how to:
- Encode GDPR-style conditions as predicates (consent, sensitive data, profiling, child data, etc.)
- Run scenarios through the reasoner
- Train the system on synthetic compliance data
- Save and reload learned rules
This is the kind of workflow DiffDDL was built for: turning written regulations into executable, trainable, explainable logic.
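As a taste of that workflow, here is a minimal sketch of encoding a GDPR-style scenario as predicates and running it through a reasoner. The predicate names and ordering are illustrative; the actual example file may structure this differently:

```python
import torch
from diffddl.reasoner import DifferentiableReasoner

# Illustrative predicate ordering; the real example may differ.
PREDICATES = ["has_consent", "is_sensitive_data", "is_profiling", "is_child_data"]

scenario = {
    "has_consent": 1.0,        # explicit consent on file
    "is_sensitive_data": 0.8,  # classifier confidence that the data is sensitive
    "is_profiling": 0.0,
    "is_child_data": 0.0,
}

predicates = torch.tensor([[scenario[name] for name in PREDICATES]])
reasoner = DifferentiableReasoner(n_predicates=len(PREDICATES))
result = reasoner.reason(predicates)
print(result.decided_action)
```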
Constructor:

```python
DiffDDL(temperature=1.0, fuzzy=True)
```

Methods:
| Method | Description |
|---|---|
| `soft_not(x)` | Differentiable NOT |
| `soft_and(x, y)` | Differentiable AND (t-norm) |
| `soft_or(x, y)` | Differentiable OR (t-conorm) |
| `soft_implies(x, y)` | Differentiable implication |
| `obligation(x)` | O(x) operator |
| `permission(x)` | P(x) operator |
| `prohibition(x)` | F(x) operator |
| `conflict_score(o, p, f)` | Detect conflicts |
| `resolve_conflict(o, p, f)` | Priority-based resolution |
Constructor:

```python
DifferentiableReasoner(
    n_predicates=10,
    hidden_dim=32,
    n_rules=5,
    fuzzy=True,
    temperature=2.0,
)
```

Forward:

```python
result = reasoner.reason(predicates: torch.Tensor)
```

Returns `DDLResult`:
```
{
    "decision": [0, 1, or 2],
    "obligation_strength": float,
    "permission_strength": float,
    "prohibition_strength": float,
    "conflict_score": float,
    "decided_action": ["ALLOW", ...],
}
```

DiffDDL is built for:

- Legal engineers building compliance or contract analysis tools
}- Legal engineers building compliance or contract analysis tools
- Regulatory technologists automating policy checks against frameworks like GDPR, the EU AI Act, or SEC rules
- Researchers in AI & law, normative reasoning, or explainable AI
- Policy teams who need transparent, auditable decision systems rather than opaque black boxes
If you use DiffDDL in research, please cite:
```bibtex
@software{diffddl2024,
  title   = {DiffDDL: Differentiable Defeasible Deontic Logic},
  author  = {Moftah},
  version = {0.1.0},
  year    = {2024},
}
```

License: MIT