A cryptographic governance engine for AI safety and regulatory compliance.
Quick Demo:

```bash
./scripts/demo-setup.sh && ./scripts/demo-run.sh   # → http://localhost:8000
```

```bash
pip install -e ".[dev]"
uvicorn lexecon.api.server:app --reload --port 8000
python3 -m pytest tests/ -q   # 1,053 tests, 81% coverage
```

Status: v0.1.0 | 1,053 tests passing | 81% coverage | 17,882 LOC
Lexecon sits between AI agents and the systems they interact with, enforcing governance decisions in real time:
- Evaluates decisions in <10ms using a graph-based policy engine — no LLM in the loop
- Issues capability tokens — time-limited, Ed25519-signed authorization for approved actions
- Records everything in a tamper-evident SHA-256 hash-chained ledger (Ed25519/RSA-4096)
- Scores risk across 6 dimensions (security, privacy, compliance, operational, reputational, financial)
- Maps to compliance frameworks automatically — SOC 2, ISO 27001, GDPR, HIPAA, PCI-DSS, NIST CSF
- Automates EU AI Act compliance — Articles 11 (technical docs), 12 (10-year records), 14 (human oversight)
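The tamper-evident, SHA-256 hash-chained ledger in the feature list above can be illustrated with a short, stdlib-only sketch: each entry stores the hash of the previous entry, so altering any record invalidates every hash that follows it. The function and field names here are illustrative, not Lexecon's actual API.

```python
import hashlib
import json

def entry_hash(payload: dict, prev_hash: str) -> str:
    # Hash the canonical JSON of the payload together with the previous hash.
    data = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(data.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    # The genesis entry links to a well-known zero hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev_hash": prev,
                  "hash": entry_hash(payload, prev)})

def verify(chain: list) -> bool:
    # Recompute every hash; any edit to an earlier entry breaks the chain.
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev or entry["hash"] != entry_hash(entry["payload"], prev):
            return False
        prev = entry["hash"]
    return True

chain: list = []
append(chain, {"decision_id": "dec_1", "outcome": "approved"})
append(chain, {"decision_id": "dec_2", "outcome": "denied"})
assert verify(chain)

# Tampering with an earlier entry invalidates the whole chain.
chain[0]["payload"]["outcome"] = "denied"
assert not verify(chain)
```

Lexecon additionally signs entries (Ed25519/RSA-4096), which this sketch omits; the hash chain alone is what makes silent in-place edits detectable.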
```bash
git clone https://github.com/Lexicoding-systems/Lexecon.git
cd Lexecon
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
uvicorn lexecon.api.server:app --reload --port 8000
```

Interactive API docs: http://localhost:8000/docs
```python
import requests

response = requests.post("http://localhost:8000/decide", json={
    "actor": "ai_agent:customer_service",
    "proposed_action": "read customer transaction history",
    "tool": "database_query",
    "user_intent": "answer customer support question",
    "data_classes": ["pii", "financial"],
    "risk_level": 2,
    "policy_mode": "strict"
})
result = response.json()
# {
#   "decision_id": "dec_01HQXYZ...",
#   "outcome": "approved",
#   "reasoning": "Policy permits support access to customer data",
#   "risk_level": "medium",
#   "risk_score": 42,
#   "capability_token": "cap_...",
#   "ledger_entry_id": "entry_5"
# }

# Verify cryptographic chain integrity
response = requests.get("http://localhost:8000/ledger/verify")
print(response.json())
# {"valid": true, "entries_verified": 42, "chain_intact": true}
```

```bash
lexecon init --node-id my-node                  # generate keys, create config
lexecon server --port 8000                      # start API server
lexecon decide --actor "ai_agent:bot" \
  --action "read:data" \
  --tool "db_query" \
  --intent "answer user question"
lexecon verify-ledger --ledger-file lexecon_ledger.db
```

```text
            AI Agents / Applications
                      │
                      ▼
┌───────────────────────────────────────────────────────┐
│                  REST API (FastAPI)                   │
│  Rate Limiting │ Security Headers │ Auth Middleware   │
└───────────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────────┐
│                SERVICE REGISTRY (DI)                  │
│                                                       │
│  DecisionService ──► PolicyEngine (<10ms)             │
│        │                                              │
│        ├──► RiskService (6 dimensions, auto-escalate) │
│        ├──► LedgerChain (SHA-256 hash chain)          │
│        ├──► EvidenceService (immutable artifacts)     │
│        └──► CapabilityToken (Ed25519-signed)          │
│                                                       │
│  EscalationService │ OverrideService                  │
│  ComplianceMappingService │ ResponsibilityTracker     │
│  AuthService (MFA + RBAC + OIDC)                      │
└───────────────────────────────────────────────────────┘
                      │
                      ▼
SQLite (dev) / PostgreSQL (prod) │ Redis (optional cache)
```
| Component | Purpose | Notes |
|---|---|---|
| PolicyEngine | Deterministic graph-based evaluation | <10ms, 3 modes |
| DecisionService | Request orchestration + token issuance | 10k+ req/sec |
| LedgerChain | SHA-256 hash-chained audit trail | Tamper-evident |
| RiskService | 6-dimension scoring (0–100) | Auto-escalate ≥80 |
| EvidenceService | Immutable artifact storage | 8 artifact types |
| EscalationService | High-risk safety valve | SLA-tracked |
| OverrideService | Human intervention | Role-gated, justification required |
| ComplianceMappingService | Framework alignment | 6 frameworks + EU AI Act |
| ResponsibilityTracker | WHO/WHY accountability | 4 responsibility levels |
| AuthService | RBAC + MFA + OIDC | 4 roles, TOTP, Google/Azure/Okta |
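The RiskService row above (6-dimension scoring on a 0–100 scale, auto-escalation at ≥80) can be sketched as follows. The aggregation weights are hypothetical: the README specifies the dimensions, the scale, and the threshold, but not how dimension scores combine.

```python
DIMENSIONS = ("security", "privacy", "compliance",
              "operational", "reputational", "financial")
ESCALATION_THRESHOLD = 80

def assess(scores: dict) -> dict:
    # Each dimension is scored 0-100; missing dimensions default to 0.
    # Hypothetical blend: the worst dimension dominates, nudged by the average,
    # so one critical dimension cannot be diluted by five benign ones.
    values = [scores.get(d, 0) for d in DIMENSIONS]
    overall = round(0.8 * max(values) + 0.2 * (sum(values) / len(values)))
    return {
        "dimension_scores": dict(zip(DIMENSIONS, values)),
        "overall": overall,
        "auto_escalate": overall >= ESCALATION_THRESHOLD,
    }

low = assess({"security": 20, "privacy": 35})
high = assess({"security": 95, "privacy": 60, "compliance": 70})
assert not low["auto_escalate"]
assert high["auto_escalate"]
```

In Lexecon itself, an assessment that crosses the threshold hands off to the EscalationService rather than merely setting a flag.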
Graph-based, fully deterministic — no LLM dependency:
```python
from lexecon.policy.engine import PolicyEngine, PolicyMode
from lexecon.policy.terms import PolicyTerm, TermType
from lexecon.policy.relations import PolicyRelation, RelationType

engine = PolicyEngine(mode=PolicyMode.STRICT)

# Define terms
actor = PolicyTerm(term_id="t_ai", term_type=TermType.ACTOR, value="ai_agent:*")
action = PolicyTerm(term_id="t_read", term_type=TermType.ACTION, value="read:customer_data")
engine.add_term(actor)
engine.add_term(action)

# Define relation
engine.add_relation(PolicyRelation(
    relation_type=RelationType.PERMITS,
    source_term_id="t_ai",
    target_term_id="t_read",
))

result = engine.evaluate(actor="ai_agent:assistant", action="read:customer_data")
# result.outcome == "approved"
```

Modes:
- `strict`: deny by default; an explicit permit is required
- `permissive`: allow unless explicitly forbidden
- `paranoid`: deny high-risk actions without human confirmation
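The core of such a deterministic permit/forbid graph with the three modes can be sketched without Lexecon itself. This is a simplified stand-in, not the engine's internals: relations are plain tuples, and wildcard matching uses stdlib `fnmatch`.

```python
from enum import Enum
from fnmatch import fnmatch

class Mode(Enum):
    STRICT = "strict"          # deny unless explicitly permitted
    PERMISSIVE = "permissive"  # allow unless explicitly forbidden
    PARANOID = "paranoid"      # like strict, but high-risk needs a human

def evaluate(relations, actor: str, action: str, mode: Mode,
             high_risk: bool = False) -> str:
    # relations: list of (relation_type, actor_pattern, action_pattern)
    permitted = any(r == "permits" and fnmatch(actor, a) and fnmatch(action, act)
                    for r, a, act in relations)
    forbidden = any(r == "forbids" and fnmatch(actor, a) and fnmatch(action, act)
                    for r, a, act in relations)
    if forbidden:                              # explicit forbids win in every mode
        return "denied"
    if mode is Mode.PARANOID and high_risk:    # hold for human confirmation
        return "escalated"
    if mode is Mode.PERMISSIVE:
        return "approved"
    return "approved" if permitted else "denied"

rules = [("permits", "ai_agent:*", "read:customer_data")]
assert evaluate(rules, "ai_agent:assistant", "read:customer_data", Mode.STRICT) == "approved"
assert evaluate(rules, "ai_agent:assistant", "delete:customer_data", Mode.STRICT) == "denied"
assert evaluate(rules, "ai_agent:assistant", "delete:customer_data", Mode.PERMISSIVE) == "approved"
assert evaluate(rules, "ai_agent:assistant", "read:customer_data", Mode.PARANOID, high_risk=True) == "escalated"
```

Because the evaluation is a pure function of the graph and the request, the same inputs always yield the same decision, which is the property that keeps latency under 10ms and keeps LLMs out of the loop.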
```text
# Article 11 — Auto-generated technical documentation
GET  /compliance/eu-ai-act/article-11

# Article 12 — 10-year record-keeping with legal hold
GET  /compliance/eu-ai-act/article-12/status
POST /compliance/eu-ai-act/article-12/legal-hold

# Article 14 — Human oversight intervention logging
POST /compliance/eu-ai-act/article-14/intervention
```

```text
# Decisions & Ledger
POST /decide                    evaluate a governance decision
POST /decide/verify             verify decision signature
GET  /ledger/entries            query ledger
GET  /ledger/verify             verify chain integrity

# Policies
GET  /policies                  list loaded policies
POST /policies/load             load a policy

# Auth
POST /auth/login                login
GET  /auth/me                   current user
POST /auth/users                create user (admin)
GET  /auth/oidc/providers       list OIDC providers

# Risk, Escalation, Override
POST /api/governance/risk/assess    6-dimension risk score
POST /api/governance/escalation     create escalation
POST /api/governance/override       execute override

# Evidence & Compliance
POST /api/governance/evidence                register artifact
GET  /api/governance/compliance/{fw}/gaps    gap analysis
GET  /api/governance/audit-export/list       list exports

# Observability
GET  /health
GET  /metrics                   Prometheus format
```
Full reference: docs/API_REFERENCE.md or http://localhost:8000/docs
Minimum for development (`.env`):

```bash
LEXECON_ENV=development
LEXECON_NODE_ID=dev-node
LEXECON_POLICY_MODE=strict
PORT=8000
```

Production additions:

```bash
DATABASE_URL=postgresql+asyncpg://user:pass@host:5432/lexecon
LEXECON_MASTER_KEY=<64-char hex>
SESSION_SECRET_KEY=<64-char hex>
DB_ENCRYPTION_KEY=<base64-32-bytes>
LEXECON_CORS_ORIGINS=https://your-domain.com
```

Full variable reference: docs/SETUP.md
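A startup loader for these variables might look like the following stdlib-only sketch. The defaults mirror the development values above; the validation rules (allowed policy modes, required production secrets) are illustrative assumptions, not Lexecon's actual startup checks.

```python
REQUIRED_IN_PROD = ("DATABASE_URL", "LEXECON_MASTER_KEY", "SESSION_SECRET_KEY")
VALID_MODES = {"strict", "permissive", "paranoid"}

def load_config(env: dict) -> dict:
    # env is typically os.environ; a plain dict keeps this testable.
    config = {
        "env": env.get("LEXECON_ENV", "development"),
        "node_id": env.get("LEXECON_NODE_ID", "dev-node"),
        "policy_mode": env.get("LEXECON_POLICY_MODE", "strict"),
        "port": int(env.get("PORT", "8000")),
    }
    if config["policy_mode"] not in VALID_MODES:
        raise ValueError(f"unknown policy mode: {config['policy_mode']}")
    # Fail fast in production if secrets are missing, rather than at first use.
    if config["env"] == "production":
        missing = [k for k in REQUIRED_IN_PROD if not env.get(k)]
        if missing:
            raise RuntimeError(f"missing production settings: {missing}")
    return config

cfg = load_config({"LEXECON_ENV": "development", "PORT": "8000"})
assert cfg["policy_mode"] == "strict" and cfg["port"] == 8000
```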
```bash
# Run all tests
python3 -m pytest tests/ -q

# With coverage
python3 -m pytest tests/ --cov=src/lexecon --cov-report=term-missing

# Specific area
python3 -m pytest tests/test_decision_service.py tests/test_policy_engine.py -v
python3 -m pytest tests/test_security.py tests/test_compliance_mapping.py -v

# Security scan
bandit -r src/
```

Test coverage by area:
| Area | Tests | Coverage |
|---|---|---|
| Security / Auth | ~100 | 90%+ |
| Compliance mapping | ~60 | 100% |
| EU AI Act | ~50 | 95%+ |
| Policy engine | ~100 | 90%+ |
| Decision service | ~200 | 82%+ |
| API endpoints | ~150 | 85%+ |
| Total | 1,053 | 81% |
```bash
# Docker
docker-compose up

# Kubernetes
kubectl apply -f deployment/kubernetes/

# Helm
helm install lexecon deployment/helm/ \
  --set env.LEXECON_ENV=production \
  --set env.DATABASE_URL=<your-db-url>
```

See docs/SETUP.md for the full production guide.
| Doc | Contents |
|---|---|
| docs/SETUP.md | Installation, environment variables, Docker, production setup |
| docs/API_REFERENCE.md | All 40+ endpoints with request/response examples |
| docs/ARCHITECTURE.md | System diagrams, data flows, component architecture |
| docs/DOCUMENTATION.md | Developer guide and module reference |
- No GraphQL — REST API only
- Frontend (React dashboard) not fully integrated with API
- Multi-tenancy uses logical separation, not database sharding
- Synchronous API only — no event streaming
- Fork → feature branch
- `pip install -e ".[dev]"` → `pre-commit install`
- Write tests (coverage ≥80% required)
- `python3 -m pytest tests/ -q` (all tests must pass)
- `ruff check src/ && mypy src/`
- Submit PR
See CONTRIBUTING.md.
MIT — see LICENSE.
Contact: contact@lexicodinglabs.com | security@lexicodinglabs.com
Version: 0.1.0 | Updated: February 2026