# StatsClaw Example Workspace

This repository demonstrates the workspace that StatsClaw automatically generates during workflow runs. Every StatsClaw session produces structured process records — comprehension artifacts, specifications, audit trails, run logs, and handoff documents — all synced here, separate from the target codebase.

This is what StatsClaw's process recording looks like in practice.

## What's Inside

```
example-workspace/
├── example-fect/           # fect R package (1→2 refactoring)
│   ├── CHANGELOG.md        # Version history across runs
│   ├── HANDOFF.md          # Cross-session continuity document
│   ├── docs.md             # Documentation index
│   └── runs/               # Per-run process records
│
├── example-R2PY/           # interflex Python package (0→1 greenfield)
│   ├── CHANGELOG.md
│   ├── HANDOFF.md
│   ├── context.md          # Repository metadata
│   ├── docs.md
│   └── runs/
│       ├── 2026-04-01-r2py-interflex-linear.md    # Run summary
│       └── R2PY-20260401-104010/                   # Full artifacts
│           ├── comprehension.md   # Deep comprehension record
│           ├── spec.md            # Implementation specification
│           ├── test-spec.md       # Test specification (independent)
│           ├── review.md          # Cross-pipeline convergence audit
│           ├── audit.md           # Detailed test results
│           ├── ARCHITECTURE.md    # System architecture diagram
│           └── ...
│
├── example-probit/         # Probit estimators (paper→package)
│   ├── CHANGELOG.md
│   ├── HANDOFF.md
│   ├── context.md
│   ├── docs.md
│   └── runs/
│       ├── 2026-04-01-exampleProbit-initial-build.md
│       └── probit-20260401-103705/
│           ├── comprehension.md
│           ├── spec.md
│           ├── test-spec.md
│           ├── sim-spec.md        # Simulation specification (3rd pipeline)
│           ├── simulation.md      # Monte Carlo design & results
│           ├── review.md
│           └── ...
│
└── example-panelView/      # panelView network visualization (paper→feature)
    ├── context.md
    ├── ref/
    │   └── correia2016-notes.md   # Reference notes from paper comprehension
    └── runs/
        └── REQ-20260401-network-viz/
            ├── comprehension.md
            ├── spec.md
            ├── test-spec.md
            ├── review.md
            └── ...
```

## Key Artifacts Explained

### Per-Project Files

| File | Purpose |
| --- | --- |
| `CHANGELOG.md` | Accumulated version history across all runs — what changed, when, and the verdict |
| `HANDOFF.md` | Cross-session continuity — current state, known issues, technical insights, next steps. Each new session's Leader reads this to resume with full context |
| `context.md` | Repository metadata — URL, language, key functions, current branch |
| `docs.md` | Documentation index — what was generated and where it lives |
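
As an illustration, a `CHANGELOG.md` entry might look like the fragment below. The run ID, date, and wording here are hypothetical, written only to show the general shape — check the actual files in this repository for the real format:

```markdown
## R2PY-20260401-104010 (2026-04-01)

- Ported the linear interflex estimator from R to Python
- Verdict: SHIP (all tolerance checks passed)
```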

### Per-Run Artifacts

Each run (e.g., `R2PY-20260401-104010/`) contains the complete process record:

| Artifact | Generated by | Purpose |
| --- | --- | --- |
| `comprehension.md` | Planner | Auditable evidence that the system understood the methodology before writing any code |
| `spec.md` | Planner | Implementation specification — sent to Builder only |
| `test-spec.md` | Planner | Test specification — sent to Tester only (Builder never sees this) |
| `sim-spec.md` | Planner | Simulation specification — sent to Simulator only (probit task) |
| `review.md` | Reviewer | Cross-pipeline convergence audit — tolerance integrity, isolation verification, ship/no-ship verdict |
| `audit.md` | Tester | Detailed per-test results with expected/actual/tolerance/verdict |
| `simulation.md` | Simulator | Monte Carlo design, seed strategy, results |
| `ARCHITECTURE.md` | Scriber | System architecture diagram and module map |
| `implementation.md` | Builder | What was built, files changed, design decisions |
| `log-entry.md` | Leader | Complete run log — timeline, problems, resolutions |
| `status.md` | Leader | Current workflow state machine position |
| `request.md` | Leader | Original user request that initiated the run |
| `credentials.md` | Leader | Credential verification record |
| `mailbox.md` | Leader | Inter-agent message log |
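
Because the artifact inventory above is fixed per agent, a run directory can be checked mechanically. The sketch below is not part of StatsClaw — it is a minimal, hypothetical audit script that scans a run directory and reports any missing records; the example path is one of the runs in this repository:

```python
from pathlib import Path

# A subset of the artifacts from the table above, keyed by generating agent.
# (sim-spec.md and simulation.md are omitted since they only appear in
# simulation-bearing runs.)
EXPECTED = {
    "Planner": ["comprehension.md", "spec.md", "test-spec.md"],
    "Reviewer": ["review.md"],
    "Tester": ["audit.md"],
    "Builder": ["implementation.md"],
    "Leader": ["log-entry.md", "status.md", "request.md"],
}

def check_run(run_dir: str) -> dict:
    """Return, per agent, any expected artifacts missing from run_dir."""
    run = Path(run_dir)
    missing = {}
    for agent, files in EXPECTED.items():
        absent = [f for f in files if not (run / f).exists()]
        if absent:
            missing[agent] = absent
    return missing

print(check_run("example-R2PY/runs/R2PY-20260401-104010"))
```

An empty dict means every expected artifact is present; otherwise the result names exactly which agent's records are absent.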

## Why a Separate Workspace?

StatsClaw stores all workflow artifacts in a dedicated workspace repository rather than in the target codebase. This keeps target repos clean (no `.statsclaw/` directories cluttering your package) while preserving full traceability. The workspace is your project's institutional memory — every decision, every bug, every insight is recorded and survives across sessions.

## How It Works

When you run StatsClaw, it automatically:

1. Creates the workspace repo if it doesn't exist
2. Syncs all artifacts after each workflow run
3. Updates `HANDOFF.md` with the current state for cross-session continuity
4. Appends the run's verdict to `CHANGELOG.md`

You never need to manage these files manually — they're generated as byproducts of the workflow.
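
Conceptually, those four steps reduce to a small amount of filesystem bookkeeping. The sketch below is illustrative only — the function name, arguments, and entry format are assumptions, not StatsClaw's actual implementation:

```python
from datetime import date
from pathlib import Path

def sync_workspace(workspace: Path, project: str, run_id: str,
                   verdict: str, handoff_text: str) -> None:
    """Mirror the four steps above for one run of one project."""
    proj = workspace / project
    # Steps 1-2: ensure the project and per-run directories exist,
    # then (in the real workflow) artifacts would be copied into them.
    (proj / "runs" / run_id).mkdir(parents=True, exist_ok=True)
    # Step 3: HANDOFF.md is overwritten so it always reflects current state.
    (proj / "HANDOFF.md").write_text(handoff_text)
    # Step 4: CHANGELOG.md is append-only, accumulating one entry per run.
    entry = f"\n## {run_id} ({date.today()}) - verdict: {verdict}\n"
    with open(proj / "CHANGELOG.md", "a") as f:
        f.write(entry)

sync_workspace(Path("workspace"), "example-probit",
               "probit-20260401-103705", "SHIP",
               "# Handoff\nCurrent state: complete.\n")
```

The asymmetry is the point: `HANDOFF.md` is replaced each run while `CHANGELOG.md` only grows, which is why one serves as a resume point and the other as history.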

## Example: Reading a Comprehension Record

Open any `comprehension.md` to see what StatsClaw understood before writing code. For example, `example-probit/runs/probit-20260401-103705/comprehension.md` shows:

- Every equation inventoried from the PDF
- Each method restated in the Planner's own notation
- Self-test questions answered
- Assumptions identified
- Verdict: FULLY UNDERSTOOD → proceed to specification

This artifact lets you verify that the system correctly internalized your methodology before any code is generated.

## Related Repositories

| Repository | What it is |
| --- | --- |
| `statsclaw/statsclaw` | The framework itself |
| `statsclaw/example-fect` | fect R package (target repo) |
| `statsclaw/example-R2PY` | interflex Python package (target repo) |
| `statsclaw/example-probit` | Probit estimators R/C++ package (target repo) |
| `statsclaw/example-panelView` | panelView R package (target repo) |
