# ArmorIQ × OpenClaw Hackathon Submission

*Intent-aware autonomous agent with runtime policy enforcement*
GovOps Guardian is a policy-enforced autonomous agent that can read, edit, and create code files — but is constitutionally incapable of touching protected resources or running dangerous commands.
Every action is intercepted, validated, and either executed or blocked with a full audit trail. The enforcement is deterministic: given the same intent and policy, the outcome is always identical.
```bash
# Clone and navigate
cd secure-intent-agent

# Run the full hackathon demo (no dependencies required)
python main.py demo

# Interactive REPL mode
python main.py repl

# Single instruction
python main.py run "Read the file 'project/module/utils.py'"
```

```
User (NL goal)
      │
      ▼
Reasoning Agent     ← decomposes goals, submits intents
      │
      ▼
Intent Parser       ← NL → structured Intent + risk level
      │
      ▼
ArmorClaw Validator ← structural gate (fields, agent registration)
      │
      ▼
Policy Engine       ← deterministic rule evaluation
      │
      ▼
Enforcement Layer   ← final ALLOW / BLOCK decision + audit
      │
      ▼
OpenClaw Executor   ← real file I/O and command execution
      │
      ▼
Audit Logger        ← hash-chained, tamper-evident JSONL log
```
See `architecture.md` for the full Mermaid diagram and component descriptions.
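The control flow above can be sketched end to end. This is a minimal illustration, not the real modules: every function name here is assumed, and each stage is stubbed just to show that the executor is only reachable after both gates pass.

```python
# Minimal end-to-end sketch of the pipeline (all names are assumed;
# each stage is a stub that shows control flow, not the real logic).
def parse_intent(goal: str) -> dict:
    # NL → structured Intent (stubbed: treats the goal as a file target)
    return {"action": "read_file", "target": goal, "risk_level": "low"}

def validate(intent: dict) -> bool:
    # ArmorClaw structural gate: required fields must be present
    return all(k in intent for k in ("action", "target", "risk_level"))

def policy_allows(intent: dict) -> bool:
    # Deterministic policy check (stubbed to a single directory rule)
    return intent["target"].startswith("project/")

def handle(goal: str) -> str:
    intent = parse_intent(goal)
    if not validate(intent) or not policy_allows(intent):
        return "BLOCKED"   # fail-closed: execution is never reached
    return "ALLOWED"       # only now would the OpenClaw executor run
```

The key property the sketch preserves is the "no bypass path" principle: the only route to execution is through both checks.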
Every natural-language instruction is converted into a structured Intent:

```json
{
  "intent_id": "int_a3f29c11b4",
  "agent_id": "demo_agent",
  "intent": "edit_file",
  "action": "edit_file",
  "target": "project/module/utils.py",
  "scope": "project/module",
  "risk_level": "medium",
  "rationale": "User requested: 'refactor utils.py for clarity'",
  "timestamp": "2024-03-15T10:23:41.000000+00:00"
}
```

Risk levels are assigned deterministically:
| Action | Risk |
|---|---|
| read_file | LOW |
| create_file, edit_file | MEDIUM |
| delete_file, run_command | HIGH |
| Dangerous command pattern | CRITICAL |
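The mapping above is a pure function of the action (and, for commands, the command text), which is what makes it deterministic. A sketch of that assignment, with function and constant names assumed rather than taken from `intent_parser.py`:

```python
# Hypothetical sketch of deterministic risk assignment; the names
# assess_risk and DANGEROUS_PATTERNS are assumptions, not the real API.
DANGEROUS_PATTERNS = ("rm -rf", "sudo", "chmod 777")

def assess_risk(action: str, target: str = "") -> str:
    """Map an action (and, for run_command, its text) to a fixed risk level."""
    if action == "run_command" and any(p in target for p in DANGEROUS_PATTERNS):
        return "critical"
    if action in ("delete_file", "run_command"):
        return "high"
    if action in ("create_file", "edit_file"):
        return "medium"
    if action == "read_file":
        return "low"
    raise ValueError(f"Unknown action: {action}")  # fail-closed on unknowns
```

Because there is no model inference involved, the same intent always yields the same risk level.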
Policies are defined in `policies/policy_model.json`:

```json
{
  "filesystem": {
    "allowed_directories": ["project/module", "project/tests"],
    "protected_files": ["config.yaml", ".env", "secrets.json"],
    "allowed_extensions": [".py", ".js", ".md"]
  },
  "commands": {
    "blocked_patterns": ["rm -rf", "sudo", "chmod 777"],
    "allowed_commands": ["pytest", "black", "mypy"]
  },
  "actions": {
    "allowed": ["read_file", "edit_file", "create_file", "run_command"],
    "blocked": ["delete_file"]
  },
  "risk_thresholds": {
    "auto_approve": "low",
    "require_confirmation": "medium",
    "always_block": "critical"
  }
}
```

The Enforcement Layer runs two sequential gates:
**Gate 1 — ArmorClaw structural validation:**

- Agent must be registered
- All required fields must be present
- Target must not be AMBIGUOUS
- Action must be a known type
- CRITICAL risk → immediate block

**Gate 2 — Policy Engine.** Seven independent rule families, evaluated in order:

1. actions.allowed / actions.blocked
2. filesystem.allowed_directories
3. filesystem.protected_files
4. filesystem.allowed_extensions
5. commands.blocked_patterns
6. content.blocked_patterns
7. risk_thresholds

First failing rule → BLOCK (with the exact rule name logged).
| # | Instruction | Expected | Rule |
|---|---|---|---|
| 1 | Read project/module/utils.py | ✅ ALLOWED | — |
| 2 | Edit project/module/utils.py | ✅ ALLOWED | — |
| 3 | Edit config.yaml | 🚫 BLOCKED | filesystem.protected_files |
| 4 | Run rm -rf /tmp/project | 🚫 BLOCKED | commands.blocked_patterns |
Every decision is appended to `logs/audit.log` in JSONL format:

```json
{
  "timestamp": "2024-03-15T10:23:41.123456+00:00",
  "intent_id": "int_a3f29c11b4",
  "agent_id": "demo_agent",
  "action": "edit_file",
  "target": "config.yaml",
  "decision": "BLOCKED",
  "reason": "'config.yaml' matches protected file pattern 'config.yaml'.",
  "rule": "filesystem.protected_files",
  "risk_level": "medium",
  "prev_hash": "a1b2c3d4e5",
  "hash": "f6e7d8c9b0"
}
```

The `prev_hash` / `hash` chain allows detection of any log tampering.
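The chaining idea is that each entry's hash covers both its own content and the previous entry's hash, so editing any record breaks every link after it. A minimal sketch, with helper names assumed (the real `audit_logger.py` may differ, e.g. in how it truncates the SHA-256 digest):

```python
# Sketch of a tamper-evident hash chain; entry_hash, append, and verify
# are assumed names, and the 10-char truncation mirrors the example above.
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()[:10]

def append(log: list, entry: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 10   # genesis sentinel
    log.append({**entry, "prev_hash": prev, "hash": entry_hash(entry, prev)})

def verify(log: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 10
    for rec in log:
        entry = {k: v for k, v in rec.items() if k not in ("prev_hash", "hash")}
        if rec["prev_hash"] != prev or rec["hash"] != entry_hash(entry, prev):
            return False
        prev = rec["hash"]
    return True
```

Mutating any field of any earlier entry makes `verify` fail, which is what makes the log tamper-evident rather than merely append-only.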
```
secure-intent-agent/
├── main.py                        # Entry point (demo / repl / run modes)
├── architecture.md                # Full architecture + Mermaid diagram
├── README.md
│
├── agents/
│   └── reasoning_agent.py         # Orchestrator: NL goal → IntentParser → Enforcement
│
├── intent/
│   └── intent_parser.py           # NL → structured Intent dataclass
│
├── validation/
│   └── armorclaw_validator.py     # ArmorClaw structural validation (Gate 1)
│
├── policies/
│   ├── policy_model.json          # Policy definition
│   └── policy_engine.py           # Deterministic rule evaluation (Gate 2)
│
├── enforcement/
│   └── enforcement_layer.py       # Orchestrates validation → policy → execution
│
├── execution/
│   └── openclaw_executor.py       # OpenClaw file I/O and command execution
│
├── logs/
│   └── audit_logger.py            # Hash-chained JSONL audit log
│
└── demo/
    └── demo_script.py             # Hackathon demo: 4 scenarios
```
| Principle | Implementation |
|---|---|
| Separation of concerns | Reasoning, enforcement, execution in distinct modules |
| No bypass path | Executor is only reachable through EnforcementLayer |
| Deterministic enforcement | Zero LLM inference in policy evaluation |
| Tamper-evident audit | SHA-256 hash chain on every log entry |
| Fail-closed | Ambiguous intents are blocked, not guessed |
| Least privilege | Delegation can only restrict permissions, never expand them |
MIT — built for the ArmorIQ × OpenClaw Hackathon.