AI-assisted development is reshaping how software is written — but it carries a hidden cost. Coding assistants like Claude Code, Copilot, and ChatGPT are powerful accelerators, yet the code they produce is only as healthy as the intent behind each prompt. Partial mutations, incomplete refactors, and context-unaware edits accumulate silently, creating a growing debt of structural violations that no single prompt was designed to catch. The more you build with AI, the more this drift compounds.
Structa was built to confront this reality head-on. Rather than relying on human review or ad-hoc linting, Structa deterministically maps a Python repository into a graph — capturing every module, class, function, and their relationships — and evaluates code health against that structure. Violations surface as first-class Error nodes anchored to the exact entities they affect. An agentic workflow then traverses that graph, gathers precise context, and generates the exact correction prompt needed to fix each error. Not a general suggestion — a surgical instruction.
The insight driving this project: the quality of AI-generated code is bounded by the quality of the prompts used to fix it. Structa closes that loop. It makes code health observable, actionable, and prompt-ready — turning AI-induced drift back into a correctable signal.
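To make the "deterministic graph" idea concrete, here is a minimal sketch of how Python's `ast` module can map a source file to module/class/function nodes. This is an illustration only, not the actual `python/analyzer/scan.py` implementation; the node ID scheme shown is an assumption.

```python
import ast

def extract_nodes(module_name: str, source: str) -> list[dict]:
    """Map one module's source to graph nodes; same input -> same output."""
    tree = ast.parse(source)
    nodes = [{"id": f"module:{module_name}", "kind": "module"}]
    for item in ast.walk(tree):
        if isinstance(item, ast.ClassDef):
            nodes.append({"id": f"class:{module_name}.{item.name}", "kind": "class"})
        elif isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
            nodes.append({"id": f"function:{module_name}.{item.name}", "kind": "function"})
    return nodes
```

Because `ast.parse` and `ast.walk` are deterministic for a given source, two runs over an unchanged repository yield identical node sets — the property the health checks and stable error IDs depend on.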
Structa analyzes a Python repository into a deterministic graph, detects code-health violations as Error nodes, and provides an agent workflow that converts each error into a precise correction prompt for code fixing.
Core loop:
Detect -> Contextualize -> Generate Fix Prompt -> Verify
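The shape of that loop can be sketched in a few lines of Python. The function names and stub data below are illustrative, not the project's API; in Structa the real phases run over the SQLite-backed graph, and Verify happens by re-running detection after fixes are applied externally.

```python
# Stub phases; each stands in for a real graph-backed component.
def detect(repo_path):
    # deterministic health checks -> Error nodes
    return [{"id": "error:DEAD_FUNCTION:abc123", "entity": "function:app.unused"}]

def contextualize(error):
    # graph traversal around the Error node
    return {"neighbors": ["module:app"], "snippet": "def unused(): ..."}

def generate_fix_prompt(error, ctx):
    return f"Remove {error['entity']}; context: {ctx['neighbors']}"

def run_loop(repo_path):
    """Detect -> Contextualize -> Generate Fix Prompt; Verify = re-run detect()."""
    return [generate_fix_prompt(e, contextualize(e)) for e in detect(repo_path)]
```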
- Deterministic graph extraction from Python code (`python/analyzer/scan.py`)
- Deterministic health checks with stable `Error` node IDs (`python/graph/health.py`)
- Module-scope call tracking (including `__main__` entrypoint callsites) to reduce dead-code false positives
- Entrypoint-aware orphan-module detection (`main`/`__main__` excluded from `ORPHAN_MODULE`)
- Graph persistence and query in SQLite (`python/graph/store.py`, `python/graph/queries.py`)
- Error-to-entity links via `VIOLATES` edges (`python/graph/ingest.py`)
- Agent tooling to inspect errors, traverse graph context, read source, and output correction prompts (`python/agents/*`)
- Live, non-blocking agent-run indicators in the desktop UI (phase updates and progress bars while fix suggestions are generated, auto-hidden when the run ends)
- Three-column desktop workspace layout: errors (left), graph (center, primary), and a suggestion panel (right) that appears only after an agent workflow run starts
- Graph force-layout simulation is intentionally reheated only on initial analyze and `Reset view`; selecting nodes or left-panel errors preserves the current arrangement
- Graph readability tuning in the default view: stronger node separation and stricter smart-label thresholds to reduce cluster clutter
- `Reset view` also zooms to fit the currently visible graph, bringing all rendered nodes back into frame
- Error list cards emphasize severity, message, and entity label; internal rule codes are hidden from the left-panel card headers
- Suggestion lifecycle management across agent runs with a single active suggestion panel: closing hides the panel, and per-error `Open Suggestion` actions re-open the latest generated suggestion without re-running the agent
- Settings page with a persisted dark-mode toggle for the desktop UI
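Stable `Error` node IDs matter because re-running analysis must map the same violation back to the same node. One plausible scheme (illustrative only; `python/graph/health.py` holds the real one) fingerprints the rule code plus the affected entity:

```python
import hashlib

def error_node_id(rule_code: str, entity_id: str) -> str:
    """Same (rule, entity) pair always yields the same Error-node ID."""
    digest = hashlib.sha256(f"{rule_code}|{entity_id}".encode()).hexdigest()
    return f"error:{rule_code}:{digest[:12]}"
```

This matches the shape of IDs such as `error:DEAD_FUNCTION:abc123` accepted by the demo runner, though the exact hash input is an assumption.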
- Analyze the repository and ingest the graph and errors.
- Select one `Error` node.
- The agent fetches context using tools: `get_error_context`, `get_neighbors`, `get_node`, `read_file`, `search_in_file`.
- The agent outputs a correction prompt for Claude Code.
- Apply the fix in your coding workflow.
- Re-run the analysis and verify the error-closure signal.
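The tool-driven context fetch in the third step can be pictured as a dispatch table routing model-requested tool calls to local handlers. This is a simplified sketch under assumed data shapes, not the `python/agents/agent.py` implementation:

```python
def get_error_context(error_id):
    # would read the Error node and its VIOLATES target from the graph store
    return {"error_id": error_id, "rule": error_id.split(":")[1]}

def get_neighbors(node_id):
    # would query the SQLite graph store for adjacent nodes
    return ["module:app"]

TOOLS = {"get_error_context": get_error_context, "get_neighbors": get_neighbors}

def dispatch(tool_name, **kwargs):
    """Route a model-requested tool call to its local handler."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)
```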
Implementation references:
- Agent loop: `python/agents/agent.py`
- Tool definitions and DB/file access: `python/agents/tools.py`
- Agent prompt policy: `python/agents/prompts.py`
- Demo runner: `python/agents/demo.py`
- Demo guide: `python/agents/AGENT_DEMO.md`
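The SQLite persistence referenced above can be pictured as two tables, one for nodes and one for edges, with `VIOLATES` edges anchoring Error nodes to the entities they affect. The schema below is illustrative; `python/graph/store.py` defines the actual one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (id TEXT PRIMARY KEY, kind TEXT);
CREATE TABLE edges (src TEXT, dst TEXT, kind TEXT);
""")
conn.execute("INSERT INTO nodes VALUES ('function:app.unused', 'function')")
conn.execute("INSERT INTO nodes VALUES ('error:DEAD_FUNCTION:abc123', 'error')")
# A VIOLATES edge links the Error node to the entity it affects.
conn.execute(
    "INSERT INTO edges VALUES ('error:DEAD_FUNCTION:abc123', 'function:app.unused', 'VIOLATES')"
)

def neighbors(node_id):
    """Outgoing neighbors of a node; the basis of graph context traversal."""
    rows = conn.execute("SELECT dst FROM edges WHERE src = ?", (node_id,))
    return [r[0] for r in rows]
```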
```
npm install
uv sync
```
`uv sync` installs the Python dependencies declared in `pyproject.toml`, including `anthropic`.
Configure API key for agent runs:
macOS/Linux (bash/zsh, current terminal session):
```
export ANTHROPIC_API_KEY=your_key_here
```
macOS/Linux (fish, current terminal session):
```
set -x ANTHROPIC_API_KEY your_key_here
```
Windows PowerShell (current terminal session):
```
$env:ANTHROPIC_API_KEY = "your_key_here"
```
Persist for future sessions:
```
echo 'export ANTHROPIC_API_KEY=your_key_here' >> ~/.bashrc
# or ~/.zshrc for zsh, then restart shell or run: source ~/.bashrc
```
Analyze, summarize, and inspect the sample project:
```
./.venv/bin/python python/main.py analyze --project-path sample_data/small_python_project
./.venv/bin/python python/main.py summary --project-path sample_data/small_python_project
./.venv/bin/python python/main.py graph --project-path sample_data/small_python_project
```
Run the test suite:
```
./.venv/bin/python -m unittest discover -s python/tests -v
```
Offline/local check (no model call required):
```
./.venv/bin/python python/agents/demo.py --list-errors
```
LLM-backed run (requires `ANTHROPIC_API_KEY`):
```
./.venv/bin/python python/agents/demo.py --error-id error:DEAD_FUNCTION:abc123
```
- Build + deterministic checks:
```
npm run build
./.venv/bin/python -m unittest discover -s python/tests -v
```
Expected: build succeeds and all tests pass.
- Analyze + summarize:
```
./.venv/bin/python python/main.py analyze --project-path sample_data/small_python_project
./.venv/bin/python python/main.py summary --project-path sample_data/small_python_project
```
Expected: deterministic counts and an `errors_by_rule` breakdown.
- Agent readiness:
```
./.venv/bin/python python/agents/demo.py --list-errors
```
Expected: lists discovered Error nodes without launching a model call.
After applying one dead-function fix:
- Before: `errors_by_rule.DEAD_FUNCTION = N`
- After re-run (`analyze` + `summary`): `errors_by_rule.DEAD_FUNCTION = N - 1`
This is the expected measurable signal for the Detect -> Fix -> Verify loop.
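That closure signal can be checked mechanically. A small sketch comparing two summary runs — assuming the `errors_by_rule` counts are available as plain dicts, which is an assumption about the summary output shape:

```python
def closure_signal(before: dict, after: dict, rule: str = "DEAD_FUNCTION") -> bool:
    """True when re-analysis shows exactly one fewer violation of the rule."""
    return after.get(rule, 0) == before.get(rule, 0) - 1
```

For example, `closure_signal({"DEAD_FUNCTION": 3}, {"DEAD_FUNCTION": 2})` confirms one fix landed, while equal counts mean the fix did not take.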
```json
{
  "project_name": "Structa",
  "workflow": "Detect -> Contextualize -> Generate Fix Prompt -> Verify",
  "stage_coverage": {
    "idea": "implemented",
    "vision": "implemented",
    "requirements": "implemented",
    "architecture": "implemented",
    "backlog": "active",
    "tests": "implemented"
  },
  "agentic_ai_usage": {
    "error_control_plane": "Error nodes with VIOLATES edges",
    "agent_context_tools": [
      "get_error_context",
      "get_neighbors",
      "get_node",
      "read_file",
      "search_in_file"
    ],
    "agent_output": "Correction prompt for code-fixing agent workflow"
  },
  "run_commands": {
    "analyze": "./.venv/bin/python python/main.py analyze --project-path sample_data/small_python_project",
    "summary": "./.venv/bin/python python/main.py summary --project-path sample_data/small_python_project",
    "graph": "./.venv/bin/python python/main.py graph --project-path sample_data/small_python_project",
    "tests": "./.venv/bin/python -m unittest discover -s python/tests -v",
    "agent_list_errors": "./.venv/bin/python python/agents/demo.py --list-errors",
    "agent_run_error": "./.venv/bin/python python/agents/demo.py --error-id error:<RULE_CODE>:<fingerprint>"
  },
  "evidence_paths": {
    "deterministic_scan": "python/analyzer/scan.py",
    "health_rules": "python/graph/health.py",
    "graph_ingest": "python/graph/ingest.py",
    "graph_queries": "python/graph/queries.py",
    "agent_loop": "python/agents/agent.py",
    "agent_tools": "python/agents/tools.py",
    "tests_deterministic": "python/tests/test_deterministic_health_graph.py",
    "tests_demo_behavior": "python/tests/test_agent_demo_behavior.py"
  },
  "known_limitations": [
    "Agent currently outputs correction prompts; patch application is performed in external coding workflow",
    "LLM-backed agent commands require ANTHROPIC_API_KEY and an available anthropic dependency in the runtime environment",
    "Desktop runtime depends on Rust/Tauri system prerequisites"
  ],
  "artifacts": [
    "README.md",
    "Pre-submission.md",
    "docs/health-checks.md",
    "docs/graph-construction-and-storage.md",
    "python/agents/AGENT_DEMO.md"
  ]
}
```
- Deterministic graph and stable IDs
- Typed nodes/edges with persisted local graph store
- Rule-based health checker with structured evidence
- Automated deterministic/integrity tests
- Agent consumes graph-native error/context data
- Tool-driven context retrieval over graph + source files
- Produces actionable correction prompts for remediation
- Health findings modeled as first-class graph entities
- Context management grounded in graph relationships, not flat lint output
- CLI-first runbook
- Agent demo commands included
- Clear evidence paths to implementation files
```
npm run tauri dev
```
Every pushed commit triggers the GitHub Actions workflow `.github/workflows/bundle.yml` to build desktop bundles for:
- macOS
- Linux
- Windows
Bundle artifacts are uploaded per platform from `src-tauri/target/release/bundle/`.
Desktop bundle icons are sourced from `src-tauri/icons/` (including `icon.ico` and `icon.icns`).
- Primary source image: `src-tauri/icons/icon-source.png` (1024x1024, transparent background)
- Current icon styling: solid blue background (`#457BE8`) with white graph mark
- Regenerate all platform icon targets (macOS/Windows/Linux/iOS/Android):
```
npm run tauri -- icon src-tauri/icons/icon-source.png -o src-tauri/icons
```

- Health checks and error properties: `docs/health-checks.md`
- Graph construction and storage internals: `docs/graph-construction-and-storage.md`
- Agent usage details: `python/agents/AGENT_DEMO.md`
- Submission checklist: `Pre-submission.md`
