## Motivation
When LAPP's agentic analyzer runs, it constructs a system prompt for the LLM to investigate logs. Currently, the only context the LLM receives is the log data and the user's question. There's no mechanism for users to inject persistent context about their environment, preferences, or domain knowledge.
Inspired by how OpenClaw handles this, the idea is to support a set of well-known markdown files in the workspace directory that get automatically injected into the LLM's system prompt.
## Proposed Bootstrap Files
| File | Purpose |
|------|---------|
| `AGENTS.md` | Behavioral guidelines for the AI agent (how to approach problems, what tools to use, coding conventions) |
| `SOUL.md` | Persona and tone (e.g., "be concise", "prefer direct answers") |
| `USER.md` | Information about the user (role, expertise level, domain context) |
| `TOOLS.md` | Environment-specific notes (hostnames, service names, known infrastructure details) |
| `IDENTITY.md` | Agent identity metadata (name, avatar, etc.) |
## How It Would Work
- When `lapp analyze` or `lapp debug run` starts, scan the workspace directory for these well-known files
- If found, read their contents and inject them into the system prompt (before the log analysis instructions)
- Apply a per-file size cap (e.g., 20KB) to prevent prompt bloat
- Files are optional — LAPP works exactly as before if none exist
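The steps above can be sketched roughly as follows. The function names (`collect_bootstrap_context`, `build_system_prompt`) are hypothetical, and the actual LAPP prompt assembly may differ; this only illustrates the scan, size cap, and injection-before-instructions behavior described above.

```python
import os

MAX_FILE_BYTES = 20 * 1024  # per-file size cap from the proposal (e.g., 20KB)

# Hypothetical constant; see the table of proposed bootstrap files.
BOOTSTRAP_FILES = ["AGENTS.md", "SOUL.md", "USER.md", "TOOLS.md", "IDENTITY.md"]

def collect_bootstrap_context(workspace: str) -> str:
    """Scan the workspace for well-known bootstrap files and return a
    text block to prepend to the system prompt. Missing files are
    skipped, so behavior is unchanged when none exist."""
    sections = []
    for name in BOOTSTRAP_FILES:
        path = os.path.join(workspace, name)
        if not os.path.isfile(path):
            continue
        with open(path, "r", encoding="utf-8", errors="replace") as f:
            content = f.read(MAX_FILE_BYTES)  # truncate to the per-file cap
        if content.strip():
            sections.append(f"## {name}\n\n{content.strip()}")
    return "\n\n".join(sections)

def build_system_prompt(workspace: str, analysis_instructions: str) -> str:
    """Inject bootstrap context before the log-analysis instructions."""
    context = collect_bootstrap_context(workspace)
    if context:
        return context + "\n\n" + analysis_instructions
    return analysis_instructions
```

Because an empty result falls through to the original instructions, the files remain strictly optional.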
## Use Cases
- SRE context: `TOOLS.md` could describe the infrastructure ("this is a Kubernetes cluster running Istio, logs come from envoy sidecars") so the LLM doesn't have to guess
- Domain expertise: `USER.md` could say "I'm an SRE, skip basic explanations" to get more targeted analysis
- Behavioral tuning: `SOUL.md` could set preferences like "always suggest runbooks" or "output structured JSON"
- Team conventions: `AGENTS.md` could encode team-specific log analysis patterns or known issues
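As an illustration of the SRE use case, a team's `TOOLS.md` might look like the following (all details here are invented for the example):

```markdown
# Environment notes

- Kubernetes cluster running Istio; application logs come from envoy sidecars
- Service names follow the pattern <team>-<service>-<env>
- Known issue: the payments service emits spurious TLS warnings on pod restart
```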