From e70e68b58bfddbee8f6c66d69958db05ebcbe93a Mon Sep 17 00:00:00 2001 From: mvoutov Date: Wed, 8 Apr 2026 11:50:43 -0700 Subject: [PATCH 01/17] fix: monorepo fix for subdirectory projects + impact + readme --- README.md | 100 +++-- bin/cli.js | 26 +- package.json | 2 +- src/commands/doc-impact.js | 115 ++++++ src/commands/doc-init.js | 96 ++++- src/commands/doc-sync.js | 39 +- src/index.js | 1 + src/lib/git-helpers.js | 16 +- src/lib/git-hook.js | 88 +++-- src/lib/impact.js | 352 ++++++++++++++++++ src/templates/hooks/graph-context-prompt.mjs | 2 +- src/templates/hooks/graph-context-prompt.sh | 4 +- src/templates/hooks/post-tool-use-tracker.sh | 26 +- .../hooks/skill-activation-prompt.mjs | 2 +- .../hooks/skill-activation-prompt.sh | 4 +- tests/git-hook.test.js | 41 +- tests/impact.test.js | 128 +++++++ 17 files changed, 942 insertions(+), 100 deletions(-) create mode 100644 src/commands/doc-impact.js create mode 100644 src/lib/impact.js create mode 100644 tests/impact.test.js diff --git a/README.md b/README.md index 8728602..a07645a 100644 --- a/README.md +++ b/README.md @@ -4,64 +4,77 @@ # aspens -## Stop re-explaining your repo. Start shipping. +## Prevent stale agent context. [![npm version](https://img.shields.io/npm/v/aspens.svg)](https://www.npmjs.com/package/aspens) [![npm downloads](https://img.shields.io/npm/dm/aspens.svg)](https://www.npmjs.com/package/aspens) [![GitHub stars](https://img.shields.io/github/stars/aspenkit/aspens)](https://github.com/aspenkit/aspens) [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE) -Claude, Codex, and other coding agents write better code when they start with the right repo context. -Aspens scans your repo, discovers what matters, and generates context that stays updated on every commit — so each session starts on track. +Aspens keeps Claude Code and Codex aligned with your actual codebase. 
+It scans the repo, generates project-specific instructions and skills, and keeps them fresh as the code evolves.
 
 ---
 
-**Why aspens?**
+**One-line pitch**
+
+Repo context for coding agents that does not go stale.
+
+**Why it matters**
 
 | Without aspens | With aspens |
 |---|---|
-| Agents ignore your conventions | Claude and Codex start with repo-specific instructions |
-| Agents rebuild things that already exist | Skills and docs point them to the right abstractions |
-| You manually maintain AI context files | Aspens generates and updates them for you |
-| Agents spend half their tool calls searching for files | Import graph tells them which files actually matter |
-| Your codebase gets fragmented and inconsistent over time | Domain-specific skills with critical rules and anti-patterns |
-| Burns through tokens searching, reading, and rebuilding | Your AI tools already know what matters — dramatically fewer tool calls |
+| Agents miss conventions and architectural boundaries | Agents start from repo-specific instructions |
+| New sessions waste time rediscovering key files | Skills point to the right files, patterns, and rules |
+| Context files drift after code changes | Aspens syncs them from the codebase |
+| Teams keep correcting the same mistakes manually | Critical conventions and anti-patterns stay in generated context |
 
 ---
 
+**Start here**
+
 ```bash
-npx aspens doc init .
+npx aspens doc init --recommended .
+npx aspens doc impact .
 ```
 
+Generate context, then verify that it is fresh and covers the repo.
+
 ![aspens demo](demo/demo-full.gif)
 
-**What are skills?** Concise markdown files (~35 lines) that give coding agents the context they need to write correct code: key files, patterns, conventions, and critical rules. 
+**What aspens does** + +- `Scan` the repo to find domains, hub files, and structure +- `Generate` instructions and skills for Claude Code, Codex, or both +- `Sync` generated context as the codebase changes +- `Prove` coverage and freshness with `aspens doc impact` + +**What are skills?** Concise markdown files that give coding agents the context they need to write correct code: key files, patterns, conventions, and critical rules. ## Quick Start ```bash -npx aspens scan . # See what's in your repo -npx aspens doc init . # Generate repo docs for the active target -npx aspens doc init --target codex # Generate AGENTS.md + .agents/skills -npx aspens doc sync --install-hook # Auto-update generated docs on every commit +npx aspens scan . # Map the repo +npx aspens doc init --recommended . # Generate the recommended context setup +npx aspens doc impact . # Verify freshness and coverage +npx aspens doc sync --install-hook # Keep generated context synced on every commit ``` Requires [Node.js 20+](https://nodejs.org) and at least one supported backend CLI such as [Claude Code CLI](https://docs.anthropic.com/en/docs/claude-code) or Codex CLI. ## Target Support -Aspens supports different AI tools through different output targets: +Aspens supports multiple agent environments through output targets: - `claude`: `CLAUDE.md` + `.claude/skills` + Claude hooks - `codex`: `AGENTS.md` + `.agents/skills` + directory `AGENTS.md` - `all`: generate both sets together -Short version: - -- Claude support is hook-aware and document-aware -- Codex support is document-driven, not hook-driven +Use `claude` if you want hooks plus docs. +Use `codex` if you want instruction files plus skills. +Use `all` if your team works across both. Important distinction: @@ -86,7 +99,7 @@ See [docs/target-support.md](docs/target-support.md) for the full target model a ### `aspens scan [path]` -Detect tech stack, frameworks, structure, and domains. 
Builds an import graph to identify hub files, domain coupling, and hotspots. No LLM calls — pure file system + git inspection. +Map the repo before generating anything. `scan` is deterministic: it detects tech stack, domains, hub files, coupling, and hotspots without calling an LLM. ``` $ aspens scan . @@ -139,9 +152,21 @@ $ aspens scan . ### `aspens doc init [path]` -Generate repo docs for Claude, Codex, or both. Runs parallel discovery calls through the selected backend to understand your architecture, then generates skills/docs based on what it found. +Generate agent context from the repo itself. `doc init` scans the codebase, discovers the architecture and feature domains, then writes instructions and skills for Claude, Codex, or both. + +For the lowest-friction setup, use: + +```bash +aspens doc init --recommended . +``` + +`--recommended` is the fastest path: + +- reuses existing target config when present +- defaults to improving existing docs instead of prompting +- auto-picks the generation mode based on repo size -The flow: +What it does: 1. **Scan + Import Graph** — builds dependency map, finds hub files 2. **Parallel Discovery** — 2 backend-guided discovery passes explore simultaneously (domains + architecture) 3. **User picks domains** — from the discovered feature domains @@ -205,10 +230,35 @@ $ aspens doc init . | `--verbose` | Show backend reads/activity in real time | | `--target ` | Output target: `claude`, `codex`, or `all` | | `--backend ` | Generation backend: `claude` or `codex` | +| `--recommended` | Use the recommended target, strategy, and generation mode | + +### `aspens doc impact [path]` + +Show whether your generated context is still keeping up with the codebase. This is the proof surface: are the docs present, covering the right domains, surfacing the right hub files, and fresher than the repo changes? 
+ +Checks: +- instructions and skills present per target +- domain coverage vs detected repo domains +- top hub files surfaced in root guidance +- whether generated context is older than the newest source changes + +```bash +$ aspens doc impact . + + my-app + + Claude Code + Instructions: present (CLAUDE.md) + Skills: 9 + Domain coverage: 8/9, missing onboarding + Hub files surfaced: 4/5 + Hooks: installed + Last updated: Apr 7, 2026, 9:41 AM stale vs source +``` ### `aspens doc sync [path]` -Update generated docs based on recent git commits. Reads the diff, maps changes to affected docs, and updates only what changed. +Keep generated context from drifting. `doc sync` reads recent git changes, maps them to affected skills and instructions, and updates only what changed. If your repo is configured for multiple targets, `doc sync` updates all configured outputs from one run. Claude activation hooks remain Claude-only, but the git post-commit sync hook can keep both Claude and Codex docs current. 
diff --git a/bin/cli.js b/bin/cli.js index fbe7e66..b689274 100755 --- a/bin/cli.js +++ b/bin/cli.js @@ -9,6 +9,7 @@ import { scanCommand } from '../src/commands/scan.js'; import { docInitCommand } from '../src/commands/doc-init.js'; import { docSyncCommand } from '../src/commands/doc-sync.js'; import { docGraphCommand } from '../src/commands/doc-graph.js'; +import { docImpactCommand } from '../src/commands/doc-impact.js'; import { addCommand } from '../src/commands/add.js'; import { customizeCommand } from '../src/commands/customize.js'; import { CliError } from '../src/lib/errors.js'; @@ -39,22 +40,25 @@ function countTemplates(subdir) { function showWelcome() { console.log(` - ${pc.cyan(pc.bold('aspens'))} ${pc.dim(`v${VERSION}`)} — AI-ready documentation for your codebase + ${pc.cyan(pc.bold('aspens'))} ${pc.dim(`v${VERSION}`)} — keep agent context accurate as your repo changes ${pc.bold('Quick Start')} ${pc.green('aspens scan')} See your repo's tech stack and domains - ${pc.green('aspens doc init')} Generate target docs for Claude, Codex, or both + ${pc.green('aspens doc init --recommended')} Generate the recommended context setup + ${pc.green('aspens doc impact')} Show freshness and coverage of generated context ${pc.green('aspens doc init --target codex')} Generate AGENTS.md + .agents/skills ${pc.green('aspens doc sync --install-hook')} Auto-update Claude docs on every commit ${pc.bold('Generate & Sync')} ${pc.green('aspens doc init')} ${pc.dim('[path]')} Generate docs from your code + ${pc.green('aspens doc init --recommended')} Use smart defaults and fewer prompts ${pc.green('aspens doc init --dry-run')} Preview without writing ${pc.green('aspens doc init --mode chunked')} One domain at a time (large repos) ${pc.green('aspens doc init --target all')} Generate Claude + Codex docs together ${pc.green('aspens doc init --model haiku')} Use a specific backend model ${pc.green('aspens doc init --verbose')} See backend activity in real time ${pc.green('aspens doc 
sync')} ${pc.dim('[path]')} Update generated docs from recent commits + ${pc.green('aspens doc impact')} ${pc.dim('[path]')} Check freshness, coverage, and hooks ${pc.green('aspens doc sync --commits 5')} Sync from last 5 commits ${pc.green('aspens doc sync --refresh')} Refresh all skills from current code @@ -77,10 +81,10 @@ function showWelcome() { ${pc.bold('Typical Workflow')} ${pc.dim('$')} aspens scan ${pc.dim('1. See what\'s in your repo')} - ${pc.dim('$')} aspens doc init --target all ${pc.dim('2. Generate CLAUDE.md + AGENTS.md outputs')} - ${pc.dim('$')} aspens add agent all ${pc.dim('3. Add Claude-side AI agents')} - ${pc.dim('$')} aspens customize agents ${pc.dim('4. Tailor Claude agents to your project')} - ${pc.dim('$')} aspens doc sync --install-hook ${pc.dim('5. Auto-update Claude docs on every commit')} + ${pc.dim('$')} aspens doc init --recommended ${pc.dim('2. Generate the right default context')} + ${pc.dim('$')} aspens doc impact ${pc.dim('3. Verify freshness and domain coverage')} + ${pc.dim('$')} aspens add agent all ${pc.dim('4. Add Claude-side AI agents')} + ${pc.dim('$')} aspens doc sync --install-hook ${pc.dim('5. 
Auto-update docs on every commit')} ${pc.bold('Target Notes')} ${pc.dim('Claude:')} ${pc.cyan('CLAUDE.md + .claude/skills + hooks')} @@ -118,7 +122,7 @@ function checkMissingHooks(repoPath) { program .name('aspens') - .description('Generate and maintain AI-ready documentation for your codebase') + .description('Keep agent context accurate as your codebase changes') .version(VERSION) .action(() => { // No command given — show welcome @@ -145,6 +149,7 @@ doc .command('init') .description('Scan repo and generate skills + guidelines') .argument('[path]', 'Path to repo', '.') + .option('--recommended', 'Use the recommended target, strategy, and generation mode') .option('--dry-run', 'Preview without writing files') .option('--force', 'Overwrite existing skills') .option('--timeout ', 'Backend timeout in seconds', parseTimeout, 300) @@ -179,6 +184,13 @@ doc return docSyncCommand(path, options); }); +doc + .command('impact') + .description('Show generated context freshness and coverage') + .argument('[path]', 'Path to repo', '.') + .option('--no-graph', 'Skip import graph analysis') + .action(docImpactCommand); + doc .command('graph') .description('Rebuild the import graph cache') diff --git a/package.json b/package.json index 83af9ad..6ab5f3e 100644 --- a/package.json +++ b/package.json @@ -1,7 +1,7 @@ { "name": "aspens", "version": "0.6.0", - "description": "Generate and maintain AI-ready documentation for any codebase", + "description": "Keep coding-agent context accurate as your codebase changes", "type": "module", "bin": { "aspens": "bin/cli.js" diff --git a/src/commands/doc-impact.js b/src/commands/doc-impact.js new file mode 100644 index 0000000..3d94dc7 --- /dev/null +++ b/src/commands/doc-impact.js @@ -0,0 +1,115 @@ +import { resolve } from 'path'; +import pc from 'picocolors'; +import * as p from '@clack/prompts'; +import { analyzeImpact } from '../lib/impact.js'; + +export async function docImpactCommand(path, options) { + const repoPath = resolve(path); + + 
p.intro(pc.cyan('aspens doc impact')); + + const spinner = p.spinner(); + spinner.start('Inspecting repo context coverage...'); + const report = await analyzeImpact(repoPath, options); + spinner.stop(pc.green('Impact report ready')); + + console.log(); + console.log(pc.dim(' Repo: ') + pc.bold(report.scan.name)); + console.log(pc.dim(' Summary: ') + `${report.summary.repoStatus}, ${report.summary.changedFiles} changed file(s), ${report.summary.affectedTargets} target(s) affected`); + console.log(pc.dim(' Context health: ') + scoreLabel(report.summary.averageHealth)); + console.log(pc.dim(' Latest source change: ') + formatDate(report.summary.latestSourceMtime)); + + for (const target of report.targets) { + console.log(); + console.log(pc.bold(` ${target.label}`)); + console.log(pc.dim(' Context health: ') + scoreLabel(target.health)); + console.log(pc.dim(' Status: ') + [ + `instructions ${statusLabel(target.status.instructions)}`, + `domains ${statusLabel(target.status.domains)}`, + target.status.hooks !== 'n/a' ? `hooks ${statusLabel(target.status.hooks)}` : null, + ].filter(Boolean).join(' | ')); + console.log(pc.dim(' Instructions: ') + `${target.instructionExists ? pc.green('present') : pc.yellow('missing')} (${target.instructionsFile})`); + console.log(pc.dim(' Skills: ') + target.skillCount); + + if (target.domainCoverage.total > 0) { + console.log(pc.dim(' Domain coverage: ') + `${target.domainCoverage.covered}/${target.domainCoverage.total}`); + for (const detail of target.domainCoverage.details.slice(0, 5)) { + console.log(pc.dim(' ') + `${detail.domain} ${detail.status === 'covered' ? 
pc.green('covered') : pc.yellow('missing')} ${pc.dim(`(${detail.reason})`)}`); + } + if (target.domainCoverage.details.length > 5) { + console.log(pc.dim(' ...')); + } + } else { + console.log(pc.dim(' Domain coverage: ') + pc.dim('n/a')); + } + + if (target.hubCoverage.total > 0) { + const missingHubs = target.hubCoverage.total - target.hubCoverage.mentioned; + console.log(pc.dim(' Hub files surfaced: ') + `${target.hubCoverage.mentioned}/${target.hubCoverage.total}${missingHubs > 0 ? `, ${missingHubs} missing from root context` : ''}`); + } else { + console.log(pc.dim(' Hub files surfaced: ') + pc.dim('n/a')); + } + + console.log(pc.dim(' Last generated: ') + (target.lastUpdated ? formatDate(target.lastUpdated) : pc.dim('not generated'))); + if (target.drift.changedCount > 0) { + console.log(pc.dim(' Context drift: ') + `${target.drift.changedCount} source file(s) changed since last update`); + if (target.drift.affectedDomains.length > 0) { + console.log(pc.dim(' Affected domains: ') + target.drift.affectedDomains.join(', ')); + } + for (const file of target.drift.changedFiles.slice(0, 4)) { + console.log(pc.dim(' ') + file.path); + } + if (target.drift.changedFiles.length > 4) { + console.log(pc.dim(' ...')); + } + if (target.drift.driftMs > 0) { + console.log(pc.dim(' Drift window: ') + formatDuration(target.drift.driftMs)); + } + } else { + console.log(pc.dim(' Context drift: ') + pc.green('none detected')); + } + + if (target.actions.length > 0) { + console.log(pc.dim(' Recommended: ') + target.actions.map(action => `\`${action}\``).join(' • ')); + } + } + + console.log(); + if (report.summary.actions.length === 0) { + p.outro(pc.green('Context looks current')); + return; + } + + p.outro(pc.yellow(`Recommended next step: ${report.summary.actions.map(action => `\`${action}\``).join(' • ')}`)); +} + +function scoreLabel(score) { + const color = score >= 85 ? pc.green : score >= 65 ? 
pc.yellow : pc.red; + return color(`${score}/100`); +} + +function statusLabel(status) { + if (status === 'healthy') return pc.green(status); + if (status === 'partial') return pc.yellow(status); + if (status === 'n/a') return pc.dim(status); + return pc.yellow(status); +} + +function formatDate(timestamp) { + if (!timestamp) return 'n/a'; + return new Date(timestamp).toLocaleString('en-US', { + year: 'numeric', + month: 'short', + day: 'numeric', + hour: 'numeric', + minute: '2-digit', + }); +} + +function formatDuration(ms) { + const totalMinutes = Math.round(ms / 60000); + if (totalMinutes < 60) return `${totalMinutes}m`; + const hours = Math.floor(totalMinutes / 60); + const minutes = totalMinutes % 60; + return minutes === 0 ? `${hours}h` : `${hours}h ${minutes}m`; +} diff --git a/src/commands/doc-init.js b/src/commands/doc-init.js index e7eef22..35b4ef2 100644 --- a/src/commands/doc-init.js +++ b/src/commands/doc-init.js @@ -1,4 +1,4 @@ -import { resolve, join, dirname } from 'path'; +import { resolve, join, dirname, relative } from 'path'; import { existsSync, readFileSync, writeFileSync, copyFileSync, mkdirSync, chmodSync } from 'fs'; import { fileURLToPath } from 'url'; import pc from 'picocolors'; @@ -11,10 +11,11 @@ import { persistGraphArtifacts } from '../lib/graph-persistence.js'; import { installGitHook } from '../lib/git-hook.js'; import { CliError } from '../lib/errors.js'; import { resolveTimeout } from '../lib/timeout.js'; -import { TARGETS, resolveTarget, getAllowedPaths, writeConfig } from '../lib/target.js'; +import { TARGETS, resolveTarget, getAllowedPaths, writeConfig, loadConfig } from '../lib/target.js'; import { detectAvailableBackends, resolveBackend } from '../lib/backend.js'; import { transformForTarget, validateTransformedFiles } from '../lib/target-transform.js'; import { findSkillFiles } from '../lib/skill-reader.js'; +import { getGitRoot } from '../lib/git-helpers.js'; const __dirname = dirname(fileURLToPath(import.meta.url)); 
const TEMPLATES_DIR = join(__dirname, '..', 'templates'); @@ -120,6 +121,7 @@ export async function docInitCommand(path, options) { _repoPath = repoPath; const verbose = !!options.verbose; const model = options.model || null; + const recommended = !!options.recommended; // --hooks-only: skip skill generation, just install/update hooks if (options.hooksOnly) { @@ -161,9 +163,19 @@ export async function docInitCommand(path, options) { // --- Step 1: Backend selection (which AI generates) --- let backendResult; + let recommendedTargetIds = null; + if (recommended && !options.target) { + const { config } = loadConfig(repoPath, { persist: false }); + if (config?.targets?.length) { + recommendedTargetIds = config.targets; + } + } + if (options.backend) { backendResult = resolveBackend({ backendFlag: options.backend, available }); - } else if (available.claude && available.codex) { + } else if (recommended && recommendedTargetIds?.length === 1) { + backendResult = resolveBackend({ targetId: recommendedTargetIds[0], available }); + } else if (available.claude && available.codex && !recommended) { const backendChoice = await p.select({ message: 'Which AI should generate the docs?', options: [ @@ -185,6 +197,10 @@ export async function docInitCommand(path, options) { let targetIds; if (options.target) { targetIds = options.target === 'all' ? 
['claude', 'codex'] : [options.target]; + } else if (recommendedTargetIds?.length) { + targetIds = recommendedTargetIds; + } else if (recommended) { + targetIds = [backend.id]; } else if (available.claude && available.codex) { const selected = await p.multiselect({ message: 'Generate docs for which coding agents?', @@ -208,6 +224,9 @@ export async function docInitCommand(path, options) { console.log(pc.dim(` Target: ${targets.map(t => t.label).join(' + ')}`)); console.log(pc.dim(` Backend: ${backend.label}`)); + if (recommended) { + console.log(pc.dim(' Mode: ') + 'recommended defaults'); + } console.log(); // Step 1: Scan @@ -266,14 +285,18 @@ export async function docInitCommand(path, options) { _reuseSourceTarget = chooseReuseSourceTarget(targets, hasClaudeDocs, hasCodexDocs); let skipDiscovery = false; if (hasExistingDocs && !isBaseOnly && !isDomainsOnly && options.strategy !== 'rewrite') { - const existingSource = hasClaudeDocs && hasCodexDocs ? 'Claude + Codex' - : hasClaudeDocs ? 'Claude' : 'Codex'; - const reuse = await p.confirm({ - message: `Existing ${existingSource} docs found. Skip discovery and reuse existing domains?`, - initialValue: true, - }); - if (p.isCancel(reuse)) { p.cancel('Aborted'); return; } - skipDiscovery = reuse; + if (recommended) { + skipDiscovery = true; + } else { + const existingSource = hasClaudeDocs && hasCodexDocs ? 'Claude + Codex' + : hasClaudeDocs ? 'Claude' : 'Codex'; + const reuse = await p.confirm({ + message: `Existing ${existingSource} docs found. 
Skip discovery and reuse existing domains?`, + initialValue: true, + }); + if (p.isCancel(reuse)) { p.cancel('Aborted'); return; } + skipDiscovery = reuse; + } } if (repoGraph && repoGraph.stats.totalFiles > 0 && !isBaseOnly && !isDomainsOnly && !skipDiscovery) { console.log(pc.dim(' Running 2 discovery agents in parallel...')); @@ -394,6 +417,8 @@ export async function docInitCommand(path, options) { if (!['improve', 'rewrite', 'skip-existing', 'fresh'].includes(existingDocsStrategy)) { throw new CliError(`Unknown strategy: ${options.strategy}. Use: improve, rewrite, or skip`); } + } else if (recommended && hasExistingDocs) { + existingDocsStrategy = 'improve'; } else if ((scan.hasClaudeConfig || scan.hasClaudeMd || scan.hasAgentsMd) && !options.force && !isDomainsOnly) { // Detect what actually exists per-target const hasClaudeDocs = scan.hasClaudeConfig || scan.hasClaudeMd; @@ -463,6 +488,10 @@ export async function docInitCommand(path, options) { } else if (effectiveDomains.length === 0) { p.log.info(`No domains detected — generating ${baseArtifactLabel()} only.`); mode = 'base-only'; + } else if (recommended) { + const isLarge = scan.size && (scan.size.category === 'large' || scan.size.category === 'very-large'); + mode = isLarge || effectiveDomains.length > 6 ? 'chunked' : 'all-at-once'; + p.log.info(`Recommended mode: ${mode === 'chunked' ? 'one domain at a time' : 'all at once'}.`); } else { // Smart defaults based on repo size const isLarge = scan.size && (scan.size.category === 'large' || scan.size.category === 'very-large'); @@ -676,13 +705,20 @@ export async function docInitCommand(path, options) { // Step 10: Persist target config writeConfig(repoPath, { targets: targetIds, backend: backend.id }); + console.log(pc.dim(' Verification: ') + [ + `${targets.map(t => t.label).join(' + ')} configured`, + `${effectiveDomains.length} domain${effectiveDomains.length === 1 ? '' : 's'} analyzed`, + hasHookTarget && options.hooks !== false ? 
'hooks updated where supported' : 'no hook changes', + ].join(' | ')); + showTokenSummary(startTime); // Offer auto-sync git hook (works for all targets — runs `aspens doc sync` on commit) - if (options.hook !== false && !options.dryRun && existsSync(join(repoPath, '.git'))) { - const hookPath = join(repoPath, '.git', 'hooks', 'post-commit'); + const gitRoot = getGitRoot(repoPath); + if (options.hook !== false && !options.dryRun && gitRoot) { + const hookPath = join(gitRoot, '.git', 'hooks', 'post-commit'); const hookInstalled = existsSync(hookPath) && - readFileSync(hookPath, 'utf8').includes('aspens doc'); + readFileSync(hookPath, 'utf8').includes(`aspens doc-sync hook (${toPosixRelative(gitRoot, repoPath) || '.'})`); if (!hookInstalled) { console.log(); const wantHook = await p.confirm({ @@ -814,8 +850,9 @@ async function installHooks(repoPath, options) { // 9d: Merge settings.json let templateSettings; try { - templateSettings = JSON.parse( - readFileSync(join(TEMPLATES_DIR, 'settings', 'settings.json'), 'utf8') + templateSettings = createHookSettings( + repoPath, + JSON.parse(readFileSync(join(TEMPLATES_DIR, 'settings', 'settings.json'), 'utf8')) ); } catch (err) { hookSpinner.stop(pc.yellow('Hook installation incomplete')); @@ -869,6 +906,33 @@ async function installHooks(repoPath, options) { } } +function createHookSettings(repoPath, templateSettings) { + const gitRoot = getGitRoot(repoPath) || repoPath; + const projectRelative = toPosixRelative(gitRoot, repoPath); + const hookPrefix = projectRelative ? 
`$CLAUDE_PROJECT_DIR/${projectRelative}` : '$CLAUDE_PROJECT_DIR'; + const settings = JSON.parse(JSON.stringify(templateSettings)); + + for (const entries of Object.values(settings.hooks || {})) { + if (!Array.isArray(entries)) continue; + for (const entry of entries) { + if (!Array.isArray(entry.hooks)) continue; + for (const hook of entry.hooks) { + if (typeof hook.command === 'string' && hook.command.startsWith('$CLAUDE_PROJECT_DIR/')) { + hook.command = hook.command.replace('$CLAUDE_PROJECT_DIR', hookPrefix); + } + } + } + } + + return settings; +} + +function toPosixRelative(from, to) { + const rel = relative(from, to); + if (!rel || rel === '.') return ''; + return rel.split('\\').join('/'); +} + // --- Generation modes --- function buildScanSummary(scan) { diff --git a/src/commands/doc-sync.js b/src/commands/doc-sync.js index 707bdbe..f02a4c7 100644 --- a/src/commands/doc-sync.js +++ b/src/commands/doc-sync.js @@ -12,7 +12,7 @@ import { buildDomainContext, buildBaseContext } from '../lib/context-builder.js' import { CliError } from '../lib/errors.js'; import { resolveTimeout } from '../lib/timeout.js'; import { installGitHook, removeGitHook } from '../lib/git-hook.js'; -import { isGitRepo, getGitDiff, getGitLog, getChangedFiles } from '../lib/git-helpers.js'; +import { isGitRepo, getGitRoot, getGitDiff, getGitLog, getChangedFiles } from '../lib/git-helpers.js'; import { TARGETS, getAllowedPaths, loadConfig } from '../lib/target.js'; import { getSelectedFilesDiff, buildPrioritizedDiff, truncate } from '../lib/diff-helpers.js'; import { projectCodexDomainDocs, transformForTarget } from '../lib/target-transform.js'; @@ -83,6 +83,8 @@ function publishFilesForTargets(baseFiles, sourceTarget, publishTargets, scan, g export async function docSyncCommand(path, options) { const repoPath = resolve(path); + const gitRoot = getGitRoot(repoPath); + const projectPrefix = toGitRelative(gitRoot, repoPath); const verbose = !!options.verbose; const commits = typeof 
options.commits === 'number' ? options.commits : 1; @@ -114,7 +116,7 @@ export async function docSyncCommand(path, options) { p.intro(pc.cyan('aspens doc sync')); // Step 1: Check prerequisites - if (!isGitRepo(repoPath)) { + if (!gitRoot || !isGitRepo(repoPath)) { throw new CliError('Not a git repository. doc sync requires git history.'); } @@ -126,20 +128,20 @@ export async function docSyncCommand(path, options) { const diffSpinner = p.spinner(); diffSpinner.start(`Reading last ${commits} commit(s)...`); - const { diff, actualCommits } = getGitDiff(repoPath, commits); + const { actualCommits } = getGitDiff(gitRoot, commits); if (actualCommits < commits) { diffSpinner.message(`Only ${actualCommits} commit(s) available (requested ${commits})`); } - const commitLog = getGitLog(repoPath, actualCommits); + const commitLog = getGitLog(gitRoot, actualCommits); + const changedFiles = scopeProjectFiles(getChangedFiles(gitRoot, actualCommits), projectPrefix); + diffSpinner.stop(`${changedFiles.length} files changed`); - if (!diff.trim()) { - diffSpinner.stop('No changes found'); + if (changedFiles.length === 0) { p.outro('Nothing to sync'); return; } - const changedFiles = getChangedFiles(repoPath, actualCommits); - diffSpinner.stop(`${changedFiles.length} files changed`); + const diff = getSelectedFilesDiff(gitRoot, changedFiles.map(file => withProjectPrefix(file, projectPrefix)), actualCommits); // Show what changed console.log(); @@ -231,7 +233,7 @@ export async function docSyncCommand(path, options) { // Build diff from selected files only, or use full prioritized diff let activeDiff; if (selectedFiles.length < changedFiles.length) { - activeDiff = getSelectedFilesDiff(repoPath, selectedFiles, actualCommits); + activeDiff = getSelectedFilesDiff(gitRoot, selectedFiles.map(file => withProjectPrefix(file, projectPrefix)), actualCommits); if (activeDiff.includes('(diff truncated')) { p.log.warn('Selected files still exceed 80k — diff truncated. 
Claude will use Read tool for the rest.'); } @@ -367,6 +369,25 @@ ${truncate(instructionsContent, 5000)} p.outro(`${results.length} file(s) updated`); } +function toGitRelative(gitRoot, repoPath) { + if (!gitRoot) return ''; + const rel = relative(gitRoot, repoPath); + if (!rel || rel === '.') return ''; + return rel.split('\\').join('/'); +} + +function withProjectPrefix(file, projectPrefix) { + return projectPrefix ? `${projectPrefix}/${file}` : file; +} + +function scopeProjectFiles(files, projectPrefix) { + if (!projectPrefix) return files; + const prefix = `${projectPrefix}/`; + return files + .filter(file => file.startsWith(prefix)) + .map(file => file.slice(prefix.length)); +} + // --- Skill mapping --- function findExistingSkills(repoPath, target) { diff --git a/src/index.js b/src/index.js index 882a8bd..0efb6f9 100644 --- a/src/index.js +++ b/src/index.js @@ -2,3 +2,4 @@ export { scanRepo } from './lib/scanner.js'; export { runClaude, loadPrompt, parseFileOutput } from './lib/runner.js'; export { writeSkillFiles } from './lib/skill-writer.js'; export { buildContext, buildBaseContext, buildDomainContext } from './lib/context-builder.js'; +export { analyzeImpact } from './lib/impact.js'; diff --git a/src/lib/git-helpers.js b/src/lib/git-helpers.js index 8092d50..ced4fca 100644 --- a/src/lib/git-helpers.js +++ b/src/lib/git-helpers.js @@ -1,14 +1,22 @@ import { execFileSync } from 'child_process'; -export function isGitRepo(repoPath) { +export function getGitRoot(repoPath) { try { - execFileSync('git', ['rev-parse', '--git-dir'], { cwd: repoPath, stdio: 'pipe', timeout: 5000 }); - return true; + return execFileSync('git', ['rev-parse', '--show-toplevel'], { + cwd: repoPath, + encoding: 'utf8', + stdio: 'pipe', + timeout: 5000, + }).trim(); } catch { - return false; + return null; } } +export function isGitRepo(repoPath) { + return !!getGitRoot(repoPath); +} + export function getGitDiff(repoPath, commits) { // Try requested commit count, fall back to fewer for 
(let n = commits; n >= 1; n--) { diff --git a/src/lib/git-hook.js b/src/lib/git-hook.js index 59cbd92..77b43a2 100644 --- a/src/lib/git-hook.js +++ b/src/lib/git-hook.js @@ -1,8 +1,9 @@ -import { join } from 'path'; +import { join, relative } from 'path'; import { existsSync, readFileSync, writeFileSync, mkdirSync, unlinkSync, chmodSync } from 'fs'; import { execSync } from 'child_process'; import pc from 'picocolors'; import { CliError } from './errors.js'; +import { getGitRoot } from './git-helpers.js'; function resolveAspensPath() { const cmd = process.platform === 'win32' ? 'where aspens' : 'which aspens'; @@ -18,28 +19,38 @@ function resolveAspensPath() { } export function installGitHook(repoPath) { - const hookDir = join(repoPath, '.git', 'hooks'); - const hookPath = join(hookDir, 'post-commit'); - - if (!existsSync(join(repoPath, '.git'))) { + const gitRoot = getGitRoot(repoPath); + if (!gitRoot) { throw new CliError('Not a git repository.'); } + const projectRelative = toPosixRelative(gitRoot, repoPath); + const projectLabel = projectRelative || '.'; + const projectSlug = projectRelative + ? projectRelative.replace(/[^a-zA-Z0-9]+/g, '_').replace(/^_+|_+$/g, '') + : 'root'; + const generatedPrefix = projectRelative ? `${projectRelative}/` : ''; + const projectPathExpr = projectRelative ? `"\${REPO_ROOT}/${projectRelative}"` : '"${REPO_ROOT}"'; + const scopePrefix = projectRelative ? 
`grep '^${escapeForSingleQuotes(projectRelative)}/' | ` : '';
+
+  const hookDir = join(gitRoot, '.git', 'hooks');
+  const hookPath = join(hookDir, 'post-commit');
   mkdirSync(hookDir, { recursive: true });
 
   const aspensCmd = resolveAspensPath();
 
   const hookBlock = `
-# >>> aspens doc-sync hook (do not edit) >>>
-__aspens_doc_sync() {
+# >>> aspens doc-sync hook (${projectLabel}) (do not edit) >>>
+__aspens_doc_sync_${projectSlug}() {
   REPO_ROOT="\$(git rev-parse --show-toplevel 2>/dev/null)" || return 0
-  REPO_HASH="\$(echo "\$REPO_ROOT" | (shasum 2>/dev/null || sha1sum 2>/dev/null || md5sum 2>/dev/null) | cut -c1-8)"
+  PROJECT_PATH=${projectPathExpr}
+  REPO_HASH="\$(printf '%s' "\$PROJECT_PATH" | (shasum 2>/dev/null || sha1sum 2>/dev/null || md5sum 2>/dev/null) | cut -c1-8)"
   ASPENS_LOCK="/tmp/aspens-sync-\${REPO_HASH}.lock"
   ASPENS_LOG="/tmp/aspens-sync-\${REPO_HASH}.log"
 
   # Skip aspens-only commits (skills, CLAUDE.md, AGENTS.md, graph artifacts)
   CHANGED="\$(git diff-tree --no-commit-id --name-only -r HEAD 2>/dev/null)"
-  NON_ASPENS="\$(echo "\$CHANGED" | grep -v '^\.claude/' | grep -v '^\.codex/' | grep -v '^\.agents/' | grep -v '^CLAUDE\.md\$' | grep -v '^AGENTS\.md\$' | grep -v '/AGENTS\.md\$' | grep -v '^\.aspens\.json\$' || true)"
+  NON_ASPENS="\$(echo "\$CHANGED" | ${scopePrefix}grep -v '^${generatedPrefix}\\.claude/' | grep -v '^${generatedPrefix}\\.codex/' | grep -v '^${generatedPrefix}\\.agents/' | grep -v '^${generatedPrefix}CLAUDE\\.md\$' | grep -v '^${generatedPrefix}AGENTS\\.md\$' | grep -v '^${generatedPrefix}\\.aspens\\.json\$' || true)"
   if [ -z "\$NON_ASPENS" ]; then
     return 0
   fi
@@ -63,26 +74,24 @@ __aspens_doc_sync() {
   fi
 
   # Run fully detached so git returns immediately (POSIX-compatible — no disown needed)
-  (echo "[sync] \$(date '+%Y-%m-%d %H:%M:%S') started" >> "\$ASPENS_LOG" && ${aspensCmd} doc sync --commits 1 "\$REPO_ROOT" >> "\$ASPENS_LOG" 2>&1; echo "[sync] \$(date '+%Y-%m-%d %H:%M:%S') finished (exit \$?)" >> "\$ASPENS_LOG") >/dev/null 2>&1 &
+  (echo "[sync] \$(date '+%Y-%m-%d %H:%M:%S') started (${projectLabel})" >> "\$ASPENS_LOG" && ${aspensCmd} doc sync --commits 1 "\$PROJECT_PATH" >> "\$ASPENS_LOG" 2>&1; echo "[sync] \$(date '+%Y-%m-%d %H:%M:%S') finished (exit \$?)" >> "\$ASPENS_LOG") >/dev/null 2>&1 &
 }
 
-__aspens_doc_sync
-# <<< aspens doc-sync hook <<<
+__aspens_doc_sync_${projectSlug}
+# <<< aspens doc-sync hook (${projectLabel}) <<<
 `;
 
-  // Check for existing hook
   if (existsSync(hookPath)) {
     const existing = readFileSync(hookPath, 'utf8');
-    if (existing.includes('aspens doc-sync hook') || existing.includes('aspens doc sync')) {
-      console.log(pc.yellow('\n  Hook already installed.\n'));
+    if (existing.includes(`# >>> aspens doc-sync hook (${projectLabel})`)) {
+      console.log(pc.yellow(`\n  Hook already installed for ${projectLabel}.\n`));
       return;
     }
-    // Append to existing hook (outside shebang)
     writeFileSync(hookPath, existing + '\n' + hookBlock, 'utf8');
-    console.log(pc.green('\n  Appended aspens doc-sync to existing post-commit hook.\n'));
+    console.log(pc.green(`\n  Appended aspens doc-sync for ${projectLabel} to existing post-commit hook.\n`));
   } else {
     writeFileSync(hookPath, '#!/bin/sh\n' + hookBlock, 'utf8');
     chmodSync(hookPath, 0o755);
-    console.log(pc.green('\n  Installed post-commit hook.\n'));
+    console.log(pc.green(`\n  Installed post-commit hook for ${projectLabel}.\n`));
   }
 
   console.log(pc.dim('  Skills will auto-update after every commit.'));
@@ -91,7 +100,14 @@ __aspens_doc_sync
 }
 
 export function removeGitHook(repoPath) {
-  const hookPath = join(repoPath, '.git', 'hooks', 'post-commit');
+  const gitRoot = getGitRoot(repoPath);
+  if (!gitRoot) {
+    console.log(pc.yellow('\n  No post-commit hook found.\n'));
+    return;
+  }
+
+  const projectLabel = toPosixRelative(gitRoot, repoPath) || '.';
+  const hookPath = join(gitRoot, '.git', 'hooks', 'post-commit');
 
   if (!existsSync(hookPath)) {
     console.log(pc.yellow('\n  No post-commit hook found.\n'));
@@ -108,16 +124,19 @@ export function
removeGitHook(repoPath) { } if (hasMarkers) { - const cleaned = content - .replace(/\n?# >>> aspens doc-sync hook \(do not edit\) >>>[\s\S]*?# <<< aspens doc-sync hook <<<\n?/, '') - .trim(); + const cleaned = content.replace(buildMarkerRegex(projectLabel), '').trim(); + + if (cleaned === content.trim()) { + console.log(pc.yellow(`\n No aspens hook found for ${projectLabel}.\n`)); + return; + } if (!cleaned || cleaned === '#!/bin/sh') { unlinkSync(hookPath); - console.log(pc.green('\n Removed post-commit hook.\n')); + console.log(pc.green(`\n Removed post-commit hook for ${projectLabel}.\n`)); } else { writeFileSync(hookPath, cleaned + '\n', 'utf8'); - console.log(pc.green('\n Removed aspens doc-sync from post-commit hook.\n')); + console.log(pc.green(`\n Removed aspens doc-sync for ${projectLabel} from post-commit hook.\n`)); } } else { console.log(pc.yellow('\n Legacy aspens hook detected (no removal markers).')); @@ -125,3 +144,24 @@ export function removeGitHook(repoPath) { console.log(pc.dim(' Or edit manually: .git/hooks/post-commit\n')); } } + +function toPosixRelative(from, to) { + const rel = relative(from, to); + if (!rel || rel === '.') return ''; + return rel.split('\\').join('/'); +} + +function escapeForSingleQuotes(value) { + return value.replace(/'/g, `'\\''`); +} + +function escapeForRegex(value) { + return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); +} + +function buildMarkerRegex(projectLabel) { + const escaped = escapeForRegex(projectLabel); + return new RegExp( + `\\n?# >>> aspens doc-sync hook \\(${escaped}\\) \\(do not edit\\) >>>[\\s\\S]*?# <<< aspens doc-sync hook \\(${escaped}\\) <<<\\n?` + ); +} diff --git a/src/lib/impact.js b/src/lib/impact.js new file mode 100644 index 0000000..df030f5 --- /dev/null +++ b/src/lib/impact.js @@ -0,0 +1,352 @@ +import { existsSync, readFileSync, readdirSync, statSync } from 'fs'; +import { join, extname, relative } from 'path'; +import { scanRepo } from './scanner.js'; +import { buildRepoGraph } from 
'./graph-builder.js'; +import { loadConfig, TARGETS } from './target.js'; +import { findSkillFiles } from './skill-reader.js'; + +const SOURCE_EXTS = new Set([ + '.js', '.jsx', '.ts', '.tsx', '.mjs', '.cjs', + '.py', '.rb', '.go', '.rs', '.java', '.cs', + '.php', '.swift', '.kt', '.kts', '.scala', + '.clj', '.ex', '.exs', '.elm', '.vue', '.svelte', +]); + +export async function analyzeImpact(repoPath, options = {}) { + const scan = scanRepo(repoPath); + const { config } = loadConfig(repoPath, { persist: false }); + const targetIds = config?.targets?.length ? config.targets : inferTargetsFromScan(scan); + const targets = targetIds.map(id => TARGETS[id]).filter(Boolean); + const sourceState = collectSourceState(repoPath); + + let graph = null; + if (options.graph !== false) { + try { + graph = await buildRepoGraph(repoPath, scan.languages); + } catch { + graph = null; + } + } + + const targetReports = targets.map(target => summarizeTarget(repoPath, target, scan, graph, sourceState)); + const summary = summarizeReport(targetReports, sourceState); + + return { + scan, + sourceState, + targets: targetReports, + summary, + graph, + }; +} + +export function summarizeTarget(repoPath, target, scan, graph, sourceState) { + const skillFiles = findSkillFiles(join(repoPath, target.skillsDir), { + skillFilename: target.skillFilename, + }); + const instructionPath = join(repoPath, target.instructionsFile); + const instructionExists = existsSync(instructionPath); + const contextText = buildContextText(repoPath, target, skillFiles); + const topHubs = Array.isArray(graph?.hubs) ? graph.hubs.slice(0, 5).map(hub => hub.path) : []; + const lastUpdated = latestMtime([ + ...(instructionExists ? 
[instructionPath] : []), + ...skillFiles.map(skill => skill.path), + ]); + const domainCoverage = computeDomainCoverage(scan.domains, skillFiles); + const hubCoverage = computeHubCoverage(topHubs, contextText); + const drift = computeDrift(sourceState, lastUpdated, scan.domains); + const status = computeTargetStatus({ + instructionExists, + skillCount: skillFiles.length, + hooksInstalled: target.supportsHooks + ? existsSync(join(repoPath, '.claude', 'hooks', 'skill-activation-prompt.sh')) + : false, + domainCoverage, + drift, + }, target); + const health = computeHealthScore({ + instructionExists, + skillCount: skillFiles.length, + hooksInstalled: status.hooksInstalled, + domainCoverage, + hubCoverage, + drift, + }, target); + const actions = recommendActions({ status, drift }); + + return { + id: target.id, + label: target.label, + instructionsFile: target.instructionsFile, + instructionExists, + skillCount: skillFiles.length, + hooksInstalled: status.hooksInstalled, + lastUpdated, + drift, + domainCoverage, + hubCoverage, + status, + health, + actions, + }; +} + +export function computeDomainCoverage(domains, skills) { + const details = (domains || []) + .map(domain => domain?.name?.toLowerCase()) + .filter(Boolean) + .map(name => { + const match = findMatchingSkill(skills || [], name); + return { + domain: name, + status: match ? 
'covered' : 'missing', + reason: match?.reason || 'no matching skill or activation rule', + skill: match?.skillName || null, + }; + }); + + return { + covered: details.filter(detail => detail.status === 'covered').length, + total: details.length, + missing: details.filter(detail => detail.status === 'missing').map(detail => detail.domain), + details, + }; +} + +export function computeHubCoverage(hubPaths, contextText) { + const haystack = (contextText || '').toLowerCase(); + const mentioned = (hubPaths || []).filter(path => haystack.includes(path.toLowerCase())); + return { + mentioned: mentioned.length, + total: hubPaths?.length || 0, + paths: mentioned, + }; +} + +export function computeHealthScore(input, target) { + let score = 100; + + if (!input.instructionExists) score -= 35; + if (input.skillCount === 0) score -= 25; + if (input.domainCoverage.total > 0) { + const missingRatio = (input.domainCoverage.total - input.domainCoverage.covered) / input.domainCoverage.total; + score -= Math.round(missingRatio * 25); + } + if (input.hubCoverage.total > 0) { + const missedHubs = input.hubCoverage.total - input.hubCoverage.mentioned; + score -= missedHubs * 4; + } + if (input.drift.changedFiles.length > 0) { + score -= Math.min(20, input.drift.changedFiles.length * 3); + } + if (target.supportsHooks && !input.hooksInstalled) { + score -= 10; + } + + return Math.max(0, Math.min(100, score)); +} + +export function computeDrift(sourceState, lastUpdated, domains = []) { + const changedFiles = (sourceState?.files || []) + .filter(file => !lastUpdated || file.mtimeMs > lastUpdated) + .sort((a, b) => b.mtimeMs - a.mtimeMs); + + const affectedDomains = new Set(); + for (const file of changedFiles) { + const hit = (domains || []).find(domain => + (domain.directories || []).some(dir => file.path.startsWith(dir + '/') || file.path === dir) + ); + if (hit?.name) affectedDomains.add(hit.name.toLowerCase()); + } + + const latestChange = changedFiles[0]?.mtimeMs || 
sourceState?.newestSourceMtime || 0; + + return { + changedFiles, + changedCount: changedFiles.length, + affectedDomains: [...affectedDomains], + latestChange, + driftMs: lastUpdated && latestChange ? Math.max(0, latestChange - lastUpdated) : 0, + }; +} + +export function computeTargetStatus(input, target) { + const instructions = !input.instructionExists ? 'missing' + : input.drift.changedCount > 0 ? 'stale' + : 'healthy'; + const domains = input.domainCoverage.total === 0 ? 'n/a' + : input.domainCoverage.covered === input.domainCoverage.total ? 'healthy' + : input.domainCoverage.covered === 0 ? 'missing' + : 'partial'; + const hooksInstalled = target.supportsHooks ? !!input.hooksInstalled : false; + const hooks = target.supportsHooks ? (hooksInstalled ? 'healthy' : 'missing') : 'n/a'; + + return { + instructions, + domains, + hooks, + hooksInstalled, + }; +} + +export function recommendActions(target) { + const actions = []; + if (target.status.instructions === 'missing' || target.status.domains === 'missing') { + actions.push('aspens doc init --recommended'); + } else if (target.status.instructions === 'stale' || target.drift.changedCount > 0) { + actions.push('aspens doc sync'); + } + if (target.status.hooks === 'missing') { + actions.push('aspens doc init --hooks-only'); + } + if (target.status.domains === 'partial' && !actions.includes('aspens doc init --recommended')) { + actions.push('aspens doc init --recommended'); + } + return [...new Set(actions)]; +} + +export function summarizeReport(targets, sourceState) { + const staleTargets = targets.filter(target => target.status.instructions === 'stale'); + const missingTargets = targets.filter(target => + target.status.instructions === 'missing' || + target.status.domains === 'missing' || + target.status.hooks === 'missing' + ); + const partialTargets = targets.filter(target => target.status.domains === 'partial'); + const actions = [...new Set(targets.flatMap(target => target.actions))]; + + return { + 
repoStatus: + missingTargets.length > 0 ? 'missing context' + : staleTargets.length > 0 ? 'partially stale' + : partialTargets.length > 0 ? 'partial coverage' + : 'healthy', + changedFiles: Math.max(...targets.map(target => target.drift.changedCount), 0), + affectedTargets: targets.filter(target => target.drift.changedCount > 0 || target.status.domains !== 'healthy' || target.status.instructions !== 'healthy').length, + actions, + averageHealth: targets.length > 0 + ? Math.round(targets.reduce((sum, target) => sum + target.health, 0) / targets.length) + : 0, + latestSourceMtime: sourceState.newestSourceMtime, + }; +} + +function inferTargetsFromScan(scan) { + const targets = []; + if (scan.hasClaudeConfig || scan.hasClaudeMd) targets.push('claude'); + if (scan.hasCodexConfig || scan.hasAgentsMd) targets.push('codex'); + return targets.length > 0 ? targets : ['claude']; +} + +function buildContextText(repoPath, target, skillFiles) { + const parts = []; + const instructionPath = join(repoPath, target.instructionsFile); + if (existsSync(instructionPath)) { + try { + parts.push(readFileSync(instructionPath, 'utf8')); + } catch { /* ignore unreadable artifact */ } + } + + for (const skill of skillFiles) { + const name = (skill.frontmatter?.name || skill.name || '').toLowerCase(); + if (name === 'base') { + parts.push(skill.content); + } + } + + return parts.join('\n\n'); +} + +function findMatchingSkill(skills, domainName) { + for (const skill of skills) { + const skillName = (skill.frontmatter?.name || skill.name || '').toLowerCase(); + if (skillName === domainName || skillName.includes(domainName)) { + return { skillName, reason: `skill "${skillName}"` }; + } + + const activationPatterns = Array.isArray(skill.activationPatterns) ? 
skill.activationPatterns : []; + const matchingPattern = activationPatterns.find(pattern => { + const lower = pattern.toLowerCase(); + return ( + lower.includes(`/${domainName}/`) || + lower.includes(`/${domainName}.`) || + lower.endsWith(`/${domainName}`) || + lower.includes(domainName) + ); + }); + if (matchingPattern) { + return { skillName, reason: `activation "${matchingPattern}"` }; + } + } + return null; +} + +function collectSourceState(repoPath) { + const files = []; + let newestSourceMtime = 0; + + function walk(dir, depth) { + if (depth > 5) return; + let entries = []; + try { + entries = readdirSync(dir); + } catch { + return; + } + + for (const entry of entries) { + if ( + entry.startsWith('.') || + entry === 'node_modules' || + entry === 'dist' || + entry === 'build' || + entry === 'coverage' || + entry === '.git' + ) { + continue; + } + + const full = join(dir, entry); + let stat; + try { + stat = statSync(full); + } catch { + continue; + } + + if (stat.isDirectory()) { + walk(full, depth + 1); + continue; + } + + if (!SOURCE_EXTS.has(extname(entry))) continue; + + const relPath = relative(repoPath, full); + files.push({ path: relPath, mtimeMs: stat.mtimeMs }); + if (stat.mtimeMs > newestSourceMtime) { + newestSourceMtime = stat.mtimeMs; + } + } + } + + walk(repoPath, 0); + files.sort((a, b) => b.mtimeMs - a.mtimeMs); + return { + newestSourceMtime, + sourceFiles: files.length, + files, + }; +} + +function latestMtime(paths) { + let newest = 0; + for (const filePath of paths) { + try { + const mtime = statSync(filePath).mtimeMs; + if (mtime > newest) newest = mtime; + } catch { + // Ignore unreadable files. 
+ } + } + return newest; +} diff --git a/src/templates/hooks/graph-context-prompt.mjs b/src/templates/hooks/graph-context-prompt.mjs index 457e2e8..971a1b4 100644 --- a/src/templates/hooks/graph-context-prompt.mjs +++ b/src/templates/hooks/graph-context-prompt.mjs @@ -330,7 +330,7 @@ async function main() { process.exit(0); } - const projectDir = process.env.CLAUDE_PROJECT_DIR; + const projectDir = process.env.ASPENS_PROJECT_DIR || process.env.CLAUDE_PROJECT_DIR; if (!projectDir) { process.exit(0); } diff --git a/src/templates/hooks/graph-context-prompt.sh b/src/templates/hooks/graph-context-prompt.sh index 2e53c56..181df4c 100644 --- a/src/templates/hooks/graph-context-prompt.sh +++ b/src/templates/hooks/graph-context-prompt.sh @@ -33,7 +33,9 @@ get_script_dir() { } SCRIPT_DIR="$(get_script_dir)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" log_debug "SCRIPT_DIR=$SCRIPT_DIR" +log_debug "ASPENS_PROJECT_DIR=$PROJECT_DIR" cd "$SCRIPT_DIR" || { echo "[Graph] Failed to cd to $SCRIPT_DIR" >&2; exit 0; } @@ -50,7 +52,7 @@ STDOUT_FILE=$(mktemp) STDERR_FILE=$(mktemp) trap 'rm -f "$STDOUT_FILE" "$STDERR_FILE"' EXIT -printf '%s' "$INPUT" | NODE_NO_WARNINGS=1 node graph-context-prompt.mjs \ +printf '%s' "$INPUT" | ASPENS_PROJECT_DIR="$PROJECT_DIR" NODE_NO_WARNINGS=1 node graph-context-prompt.mjs \ >"$STDOUT_FILE" 2>"$STDERR_FILE" EXIT_CODE=$? diff --git a/src/templates/hooks/post-tool-use-tracker.sh b/src/templates/hooks/post-tool-use-tracker.sh index c4c9242..3c20ac0 100755 --- a/src/templates/hooks/post-tool-use-tracker.sh +++ b/src/templates/hooks/post-tool-use-tracker.sh @@ -12,8 +12,22 @@ if ! 
command -v jq &> /dev/null; then exit 0 fi +get_script_dir() { + local source="${BASH_SOURCE[0]}" + while [ -h "$source" ]; do + local dir + dir="$(cd -P "$(dirname "$source")" && pwd)" || return 1 + source="$(readlink "$source")" + [[ $source != /* ]] && source="$dir/$source" + done + cd -P "$(dirname "$source")" && pwd +} + +SCRIPT_DIR="$(get_script_dir)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" + # Exit early if CLAUDE_PROJECT_DIR is not set -if [[ -z "$CLAUDE_PROJECT_DIR" ]]; then +if [[ -z "$CLAUDE_PROJECT_DIR" ]] && [[ -z "$PROJECT_DIR" ]]; then exit 0 fi @@ -38,13 +52,13 @@ if [[ "$file_path" =~ \.(md|markdown)$ ]]; then fi # Create cache directory in project -cache_dir="$CLAUDE_PROJECT_DIR/.claude/tsc-cache/${session_id:-default}" +cache_dir="$PROJECT_DIR/.claude/tsc-cache/${session_id:-default}" mkdir -p "$cache_dir" # Function to detect repo from file path detect_repo() { local file="$1" - local project_root="$CLAUDE_PROJECT_DIR" + local project_root="$PROJECT_DIR" # Remove project root from path local relative_path="${file#$project_root/}" @@ -100,7 +114,7 @@ detect_repo() { # Function to get build command for repo get_build_command() { local repo="$1" - local project_root="$CLAUDE_PROJECT_DIR" + local project_root="$PROJECT_DIR" # Map special repo names to actual paths local repo_path @@ -142,7 +156,7 @@ get_build_command() { # Function to get TSC command for repo get_tsc_command() { local repo="$1" - local project_root="$CLAUDE_PROJECT_DIR" + local project_root="$PROJECT_DIR" # Map special repo names to actual paths local repo_path @@ -290,7 +304,7 @@ add_skill_to_session() { # Track skill domain for session-sticky behavior skill_domain=$(detect_skill_domain "$file_path") if [[ -n "$skill_domain" ]]; then - session_file=$(get_session_file "$CLAUDE_PROJECT_DIR") + session_file=$(get_session_file "$PROJECT_DIR") add_skill_to_session "$skill_domain" "$session_file" "$repo" fi diff --git a/src/templates/hooks/skill-activation-prompt.mjs 
b/src/templates/hooks/skill-activation-prompt.mjs index 55e6e53..fe60f38 100644 --- a/src/templates/hooks/skill-activation-prompt.mjs +++ b/src/templates/hooks/skill-activation-prompt.mjs @@ -296,7 +296,7 @@ async function main() { } // Determine project directory - const projectDir = process.env.CLAUDE_PROJECT_DIR; + const projectDir = process.env.ASPENS_PROJECT_DIR || process.env.CLAUDE_PROJECT_DIR; if (!projectDir) { process.exit(0); } diff --git a/src/templates/hooks/skill-activation-prompt.sh b/src/templates/hooks/skill-activation-prompt.sh index 266fe60..9eb801f 100755 --- a/src/templates/hooks/skill-activation-prompt.sh +++ b/src/templates/hooks/skill-activation-prompt.sh @@ -31,8 +31,10 @@ get_script_dir() { } SCRIPT_DIR="$(get_script_dir)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" log_debug "SCRIPT_DIR=$SCRIPT_DIR" log_debug "CLAUDE_PROJECT_DIR=$CLAUDE_PROJECT_DIR" +log_debug "ASPENS_PROJECT_DIR=$PROJECT_DIR" cd "$SCRIPT_DIR" || { echo "⚡ [Skills] Failed to cd to $SCRIPT_DIR" >&2; exit 0; } @@ -49,7 +51,7 @@ STDOUT_FILE=$(mktemp) STDERR_FILE=$(mktemp) trap 'rm -f "$STDOUT_FILE" "$STDERR_FILE"' EXIT -printf '%s' "$INPUT" | NODE_NO_WARNINGS=1 node skill-activation-prompt.mjs \ +printf '%s' "$INPUT" | ASPENS_PROJECT_DIR="$PROJECT_DIR" NODE_NO_WARNINGS=1 node skill-activation-prompt.mjs \ >"$STDOUT_FILE" 2>"$STDERR_FILE" EXIT_CODE=$? 
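The template-hook changes above all follow one pattern: each shell wrapper derives the project directory from its own location and exports it as `ASPENS_PROJECT_DIR`, which the `.mjs` entry points now prefer over Claude's `CLAUDE_PROJECT_DIR`. A minimal sketch of that resolution order (the function name is illustrative, not part of the patch):

```javascript
// Resolution order used by the updated hooks: prefer the aspens-provided
// project dir (correct for monorepo subprojects), fall back to Claude's
// env var, and signal "not set" with null (the real hooks then exit 0).
function resolveProjectDir(env) {
  return env.ASPENS_PROJECT_DIR || env.CLAUDE_PROJECT_DIR || null;
}

console.log(resolveProjectDir({ CLAUDE_PROJECT_DIR: '/repo' }));
// → '/repo'
console.log(resolveProjectDir({
  ASPENS_PROJECT_DIR: '/repo/backend',
  CLAUDE_PROJECT_DIR: '/repo',
}));
// → '/repo/backend'
```

This is why the wrappers log both variables in debug mode: when a subproject hook misfires, the first thing to check is which of the two paths won.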
diff --git a/tests/git-hook.test.js b/tests/git-hook.test.js index 1d44148..136a30f 100644 --- a/tests/git-hook.test.js +++ b/tests/git-hook.test.js @@ -1,15 +1,19 @@ import { describe, it, expect, beforeEach, afterAll } from 'vitest'; import { existsSync, readFileSync, rmSync, mkdirSync, writeFileSync, statSync } from 'fs'; import { join } from 'path'; +import { execFileSync } from 'child_process'; import { installGitHook, removeGitHook } from '../src/lib/git-hook.js'; const TEST_DIR = join(import.meta.dirname, 'tmp-hook'); const HOOKS_DIR = join(TEST_DIR, '.git', 'hooks'); const HOOK_PATH = join(HOOKS_DIR, 'post-commit'); +const SUBPROJECT_DIR = join(TEST_DIR, 'backend'); beforeEach(() => { if (existsSync(TEST_DIR)) rmSync(TEST_DIR, { recursive: true }); - mkdirSync(HOOKS_DIR, { recursive: true }); + mkdirSync(TEST_DIR, { recursive: true }); + execFileSync('git', ['init'], { cwd: TEST_DIR, stdio: 'pipe' }); + mkdirSync(SUBPROJECT_DIR, { recursive: true }); }); afterAll(() => { @@ -21,9 +25,9 @@ describe('installGitHook', () => { installGitHook(TEST_DIR); const content = readFileSync(HOOK_PATH, 'utf8'); expect(content).toContain('#!/bin/sh'); - expect(content).toContain('# >>> aspens doc-sync hook (do not edit) >>>'); - expect(content).toContain('__aspens_doc_sync()'); - expect(content).toContain('# <<< aspens doc-sync hook <<<'); + expect(content).toContain('# >>> aspens doc-sync hook (.) (do not edit) >>>'); + expect(content).toContain('__aspens_doc_sync_root()'); + expect(content).toContain('# <<< aspens doc-sync hook (.) 
<<<'); }); it('makes hook executable', () => { @@ -63,6 +67,24 @@ describe('installGitHook', () => { // Only one shebang expect(content.match(/^#!\/bin\/sh/gm)).toHaveLength(1); }); + + it('installs subproject hooks at the git root and syncs the subproject path', () => { + installGitHook(SUBPROJECT_DIR); + const content = readFileSync(HOOK_PATH, 'utf8'); + expect(content).toContain('# >>> aspens doc-sync hook (backend) (do not edit) >>>'); + expect(content).toContain('PROJECT_PATH="${REPO_ROOT}/backend"'); + expect(content).toContain('doc sync --commits 1 "$PROJECT_PATH"'); + }); + + it('supports installing hooks for multiple subprojects', () => { + installGitHook(SUBPROJECT_DIR); + const frontendDir = join(TEST_DIR, 'frontend'); + mkdirSync(frontendDir, { recursive: true }); + installGitHook(frontendDir); + const content = readFileSync(HOOK_PATH, 'utf8'); + expect(content).toContain('# >>> aspens doc-sync hook (backend) (do not edit) >>>'); + expect(content).toContain('# >>> aspens doc-sync hook (frontend) (do not edit) >>>'); + }); }); describe('removeGitHook', () => { @@ -102,4 +124,15 @@ describe('removeGitHook', () => { removeGitHook(TEST_DIR); expect(existsSync(HOOK_PATH)).toBe(true); }); + + it('removes only the matching subproject hook block', () => { + installGitHook(SUBPROJECT_DIR); + const frontendDir = join(TEST_DIR, 'frontend'); + mkdirSync(frontendDir, { recursive: true }); + installGitHook(frontendDir); + removeGitHook(SUBPROJECT_DIR); + const content = readFileSync(HOOK_PATH, 'utf8'); + expect(content).not.toContain('# >>> aspens doc-sync hook (backend) (do not edit) >>>'); + expect(content).toContain('# >>> aspens doc-sync hook (frontend) (do not edit) >>>'); + }); }); diff --git a/tests/impact.test.js b/tests/impact.test.js new file mode 100644 index 0000000..719d8f9 --- /dev/null +++ b/tests/impact.test.js @@ -0,0 +1,128 @@ +import { describe, it, expect } from 'vitest'; +import { + computeDomainCoverage, + computeHubCoverage, + computeDrift, + 
computeHealthScore, + computeTargetStatus, + recommendActions, + summarizeReport, +} from '../src/lib/impact.js'; + +describe('computeDomainCoverage', () => { + it('counts covered and missing domains with reasons', () => { + const coverage = computeDomainCoverage( + [{ name: 'auth' }, { name: 'billing' }, { name: 'profile' }], + [ + { name: 'base', activationPatterns: [] }, + { name: 'auth', activationPatterns: [] }, + { name: 'payments-skill', activationPatterns: ['src/billing/**'] }, + ] + ); + + expect(coverage.covered).toBe(2); + expect(coverage.total).toBe(3); + expect(coverage.missing).toEqual(['profile']); + expect(coverage.details.find(d => d.domain === 'auth')?.reason).toContain('skill'); + expect(coverage.details.find(d => d.domain === 'billing')?.reason).toContain('activation'); + }); +}); + +describe('computeHubCoverage', () => { + it('counts hub paths mentioned in context text', () => { + const coverage = computeHubCoverage( + ['src/lib/runner.js', 'src/lib/target.js', 'src/lib/errors.js'], + 'Read src/lib/runner.js first. See src/lib/errors.js for failures.' 
+ ); + + expect(coverage.mentioned).toBe(2); + expect(coverage.total).toBe(3); + expect(coverage.paths).toEqual(['src/lib/runner.js', 'src/lib/errors.js']); + }); +}); + +describe('computeDrift', () => { + it('finds changed files and affected domains since last update', () => { + const drift = computeDrift( + { + newestSourceMtime: 300, + files: [ + { path: 'src/auth/session.ts', mtimeMs: 300 }, + { path: 'src/lib/db.ts', mtimeMs: 250 }, + { path: 'src/billing/stripe.ts', mtimeMs: 200 }, + ], + }, + 225, + [ + { name: 'auth', directories: ['src/auth'] }, + { name: 'billing', directories: ['src/billing'] }, + ] + ); + + expect(drift.changedCount).toBe(2); + expect(drift.changedFiles.map(file => file.path)).toEqual(['src/auth/session.ts', 'src/lib/db.ts']); + expect(drift.affectedDomains).toEqual(['auth']); + expect(drift.driftMs).toBe(75); + }); +}); + +describe('target status and actions', () => { + it('marks stale partial context and recommends sync + init', () => { + const status = computeTargetStatus({ + instructionExists: true, + skillCount: 3, + hooksInstalled: true, + domainCoverage: { covered: 2, total: 3 }, + drift: { changedCount: 4 }, + }, { supportsHooks: true }); + + expect(status.instructions).toBe('stale'); + expect(status.domains).toBe('partial'); + expect(status.hooks).toBe('healthy'); + + const actions = recommendActions({ + status, + drift: { changedCount: 4 }, + }); + expect(actions).toEqual(['aspens doc sync', 'aspens doc init --recommended']); + }); +}); + +describe('computeHealthScore', () => { + it('penalizes missing instructions, coverage gaps, and drift', () => { + const score = computeHealthScore({ + instructionExists: false, + skillCount: 1, + hooksInstalled: false, + domainCoverage: { covered: 1, total: 4 }, + hubCoverage: { mentioned: 1, total: 3 }, + drift: { changedFiles: [{}, {}, {}] }, + }, { supportsHooks: true }); + + expect(score).toBeLessThan(50); + }); +}); + +describe('summarizeReport', () => { + it('summarizes repo status and 
deduplicates actions', () => { + const summary = summarizeReport([ + { + health: 70, + drift: { changedCount: 3 }, + status: { instructions: 'stale', domains: 'healthy', hooks: 'healthy' }, + actions: ['aspens doc sync'], + }, + { + health: 90, + drift: { changedCount: 0 }, + status: { instructions: 'healthy', domains: 'partial', hooks: 'n/a' }, + actions: ['aspens doc init --recommended'], + }, + ], { newestSourceMtime: 1234 }); + + expect(summary.repoStatus).toBe('partially stale'); + expect(summary.changedFiles).toBe(3); + expect(summary.averageHealth).toBe(80); + expect(summary.actions).toEqual(['aspens doc sync', 'aspens doc init --recommended']); + }); +}); From 14312151c0e713e5969d726c6327e67e5f71e916 Mon Sep 17 00:00:00 2001 From: mvoutov Date: Wed, 8 Apr 2026 11:55:10 -0700 Subject: [PATCH 02/17] skills --- .../architecture/references/code-map.md | 12 ++--- .agents/skills/base/SKILL.md | 15 +++--- .agents/skills/doc-impact/SKILL.md | 47 ++++++++++++++++++ .agents/skills/doc-sync/SKILL.md | 12 +++-- .agents/skills/skill-generation/SKILL.md | 21 ++++---- .agents/skills/template-library/SKILL.md | 6 +-- .claude/skills/base/skill.md | 15 +++--- .claude/skills/doc-impact/skill.md | 47 ++++++++++++++++++ .claude/skills/doc-sync/skill.md | 12 +++-- .claude/skills/skill-generation/skill.md | 21 ++++---- .claude/skills/skill-rules.json | 41 ++++++++++++++++ .claude/skills/template-library/skill.md | 6 +-- AGENTS.md | 49 ++++++++++++++++--- CLAUDE.md | 5 +- README.md | 5 +- 15 files changed, 253 insertions(+), 61 deletions(-) create mode 100644 .agents/skills/doc-impact/SKILL.md create mode 100644 .claude/skills/doc-impact/skill.md diff --git a/.agents/skills/architecture/references/code-map.md b/.agents/skills/architecture/references/code-map.md index 91cf850..d7b94d6 100644 --- a/.agents/skills/architecture/references/code-map.md +++ b/.agents/skills/architecture/references/code-map.md @@ -4,19 +4,19 @@ **Hub files (most depended-on):** - `src/lib/runner.js` - 8 
dependents +- `src/lib/scanner.js` - 8 dependents +- `src/lib/target.js` - 8 dependents - `src/lib/errors.js` - 7 dependents -- `src/lib/scanner.js` - 7 dependents -- `src/lib/target.js` - 7 dependents -- `src/lib/skill-writer.js` - 6 dependents +- `src/lib/graph-builder.js` - 6 dependents **Domain clusters:** | Domain | Files | Top entries | |--------|-------|-------------| -| src | 37 | `src/lib/runner.js`, `src/commands/doc-init.js`, `src/commands/doc-sync.js` | +| src | 40 | `src/lib/runner.js`, `src/commands/doc-init.js`, `src/lib/target.js` | **High-churn hotspots:** -- `src/commands/doc-init.js` - 27 changes -- `src/commands/doc-sync.js` - 19 changes +- `src/commands/doc-init.js` - 28 changes +- `src/commands/doc-sync.js` - 20 changes - `src/lib/runner.js` - 16 changes diff --git a/.agents/skills/base/SKILL.md b/.agents/skills/base/SKILL.md index 04ef8c3..e5610b7 100644 --- a/.agents/skills/base/SKILL.md +++ b/.agents/skills/base/SKILL.md @@ -9,7 +9,7 @@ This is a **base skill** that always loads when working in this repository. --- -You are working in **aspens** — a CLI tool that generates and maintains AI-ready documentation (skill files + AGENTS.md) for any codebase. Supports multiple output targets (Claude Code, Codex CLI). +You are working in **aspens** — a CLI that keeps coding-agent context accurate as your codebase changes. Scans repos, generates project-specific instructions and skills for Claude Code and Codex CLI, and keeps them fresh. 
## Tech Stack Node.js (ESM) | Commander | Vitest | es-module-lexer | @clack/prompts | picocolors @@ -18,7 +18,8 @@ Node.js (ESM) | Commander | Vitest | es-module-lexer | @clack/prompts | picocolo - `npm test` — Run vitest suite - `npm start` / `node bin/cli.js` — Run CLI - `aspens scan [path]` — Deterministic repo analysis (no LLM) -- `aspens doc init [path]` — Generate skills + hooks + AGENTS.md (supports `--target claude|codex|all`, `--backend claude|codex`) +- `aspens doc init [path]` — Generate skills + hooks + AGENTS.md (`--target claude|codex|all`, `--recommended` for smart defaults) +- `aspens doc impact [path]` — Show freshness, coverage, and drift of generated context - `aspens doc sync [path]` — Incremental skill updates from git diffs - `aspens doc graph [path]` — Rebuild import graph cache (`.claude/graph.json`) - `aspens add [name]` — Install templates (agents, commands, hooks) @@ -35,8 +36,9 @@ CLI entry (`bin/cli.js`) → command handlers (`src/commands/`) → lib modules - `src/lib/skill-writer.js` — Writes skill files and directory-scoped files, generates skill-rules.json, merges settings - `src/lib/skill-reader.js` — Parses skill files, frontmatter, activation patterns, keywords - `src/lib/diff-helpers.js` — Targeted file diffs and prioritized diff truncation for doc-sync -- `src/lib/git-helpers.js` — Git repo detection, diff retrieval, log formatting -- `src/lib/git-hook.js` — Post-commit git hook installation/removal for auto doc-sync +- `src/lib/git-helpers.js` — Git repo detection, git root resolution, diff retrieval, log formatting +- `src/lib/git-hook.js` — Post-commit git hook installation/removal for auto doc-sync (monorepo-aware) +- `src/lib/impact.js` — Context health analysis: domain coverage, hub surfacing, drift detection - `src/lib/timeout.js` — Timeout resolution (`--timeout` flag > `ASPENS_TIMEOUT` env > default) - `src/lib/errors.js` — `CliError` class (structured errors caught by CLI top-level handler) - `src/lib/target.js` — Target 
definitions (claude/codex), config persistence (`.aspens.json`) @@ -55,14 +57,15 @@ CLI entry (`bin/cli.js`) → command handlers (`src/commands/`) → lib modules - **Target/Backend distinction** — Target = output format/location; Backend = which LLM CLI generates content. Config persisted in `.aspens.json` - **Scanner is deterministic** — no LLM calls; pure filesystem analysis - **CliError pattern** — command handlers throw `CliError` instead of calling `process.exit()`; caught at top level in `bin/cli.js` +- **Monorepo support** — `getGitRoot()` resolves the actual git root; hooks, sync, and impact scope to the subdirectory project path ## Structure - `bin/` — CLI entry point (commander setup, CliError handler) -- `src/commands/` — Command handlers (scan, doc-init, doc-sync, doc-graph, add, customize) +- `src/commands/` — Command handlers (scan, doc-init, doc-impact, doc-sync, doc-graph, add, customize) - `src/lib/` — Core library modules - `src/prompts/` — Prompt templates + partials - `src/templates/` — Installable agents, commands, hooks, settings - `tests/` — Vitest test files --- -**Last Updated:** 2026-04-02 +**Last Updated:** 2026-04-08 diff --git a/.agents/skills/doc-impact/SKILL.md b/.agents/skills/doc-impact/SKILL.md new file mode 100644 index 0000000..103339c --- /dev/null +++ b/.agents/skills/doc-impact/SKILL.md @@ -0,0 +1,47 @@ +--- +name: doc-impact +description: Context health analysis — freshness, domain coverage, hub surfacing, drift detection for generated agent context +--- + +## Activation + +This skill triggers when editing doc-impact files: +- `src/commands/doc-impact.js` +- `src/lib/impact.js` +- `tests/impact.test.js` + +Keywords: impact, freshness, coverage, drift, health score, context health + +--- + +You are working on **doc impact** — the command that shows whether generated agent context is keeping up with the codebase. 
+ +## Key Files +- `src/commands/doc-impact.js` — CLI command: calls `analyzeImpact()`, renders per-target report with health scores, coverage, drift, and recommended actions +- `src/lib/impact.js` — Core analysis: `analyzeImpact()` orchestrates scan + config + graph + per-target summarization +- `tests/impact.test.js` — Unit tests for coverage, drift, health score, status, and report summarization + +## Key Concepts +- **`analyzeImpact(repoPath, options)`** — Main entry point. Runs `scanRepo()`, loads config from `.aspens.json`, infers targets if not configured, collects source file state, optionally builds import graph, then produces per-target reports. +- **Target inference:** If no `.aspens.json` config, infers targets from scan results (`.claude/` → claude, `.agents/` → codex). Falls back to `['claude']`. +- **`summarizeTarget()`** — Per-target analysis: finds skills, checks instruction file existence, computes domain coverage, hub coverage, drift, status, health score, and recommended actions. +- **Domain coverage:** `computeDomainCoverage()` matches scan-detected domains against installed skills by name match or activation pattern match. Returns covered/missing counts with reasons. +- **Hub coverage:** `computeHubCoverage()` checks if top 5 graph hub file paths appear in the instruction file + base skill text. +- **Drift detection:** `computeDrift()` finds source files modified after the latest generated context mtime. Maps changed files to affected domains via directory matching. +- **Health score:** `computeHealthScore()` starts at 100, deducts for: missing instructions (-35), no skills (-25), domain gaps (up to -25), missed hubs (-4 each), drift (-3 per file, max -20), missing hooks (-10 for Claude). +- **Source state collection:** `collectSourceState()` walks repo (depth 5, skips dotfiles/node_modules/dist/build/coverage), collects mtime for source extensions only (`SOURCE_EXTS` set). 
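The health-score arithmetic above is mechanical enough to sketch. A minimal sketch, assuming a plain options object and clamping at zero; the deduction values come from the description, while the signature, field names, and clamping are hypothetical:

```javascript
// Rough sketch of the documented deductions; the real computeHealthScore()
// in src/lib/impact.js may differ in signature and rounding.
function computeHealthScore({ hasInstructions, skillCount, domainGapPenalty, missedHubs, driftedFiles, missingHooks }) {
  let score = 100;
  if (!hasInstructions) score -= 35;        // missing instructions file
  if (skillCount === 0) score -= 25;        // no skills installed
  score -= Math.min(25, domainGapPenalty);  // domain gaps, up to -25
  score -= missedHubs * 4;                  // -4 per missed hub
  score -= Math.min(20, driftedFiles * 3);  // -3 per drifted file, max -20
  if (missingHooks) score -= 10;            // -10 for Claude without hooks
  return Math.max(0, score);                // clamp is an assumption
}
```
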
+ +- **Recommended actions:** `recommendActions()` suggests `aspens doc init --recommended` for missing context, `aspens doc sync` for stale context, `aspens doc init --hooks-only` for missing hooks. + +## Critical Rules +- **No LLM calls** — impact analysis is fully deterministic (scan + filesystem + optional graph). +- **`--no-graph` flag** — skips import graph build; hub coverage section shows `n/a`. +- **Graph failure is non-fatal** — if `buildRepoGraph` throws, graph is set to null and analysis continues without hub data. +- **`SOURCE_EXTS` set** — only these extensions count as source files for drift detection. Adding a language requires updating this set. +- **Walk depth capped at 5** — deeply nested source files won't appear in drift analysis. +- **Exported functions** — `computeDomainCoverage`, `computeHubCoverage`, `computeDrift`, `computeHealthScore`, `computeTargetStatus`, `recommendActions`, `summarizeReport` are all individually exported for testing. + +## References +- **Patterns:** `src/lib/skill-reader.js` — `findSkillFiles()` used for skill discovery per target + +--- +**Last Updated:** 2026-04-08 diff --git a/.agents/skills/doc-sync/SKILL.md b/.agents/skills/doc-sync/SKILL.md index a7f2f75..0b22073 100644 --- a/.agents/skills/doc-sync/SKILL.md +++ b/.agents/skills/doc-sync/SKILL.md @@ -23,18 +23,19 @@ You are working on **doc-sync**, the incremental skill update command (`aspens d - `src/commands/doc-sync.js` — Main command: git diff → graph rebuild → skill mapping → LLM update → publish for targets → write. Also contains refresh mode and `skillToDomain()` export.
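The source-state walk and its skip rules described above can be sketched as follows. The depth cap and skipped directories match the documented rules; the helper shape and the exact extension set are assumptions, not the real `SOURCE_EXTS` from `impact.js`:

```javascript
import { readdirSync, statSync } from 'node:fs';
import { join, extname } from 'node:path';

// Illustrative sketch only; names and extension set are assumptions.
const SOURCE_EXTS = new Set(['.js', '.mjs', '.ts', '.tsx', '.jsx']);
const SKIP_DIRS = new Set(['node_modules', 'dist', 'build', 'coverage']);

function collectSourceState(dir, depth = 0, out = new Map()) {
  if (depth > 5) return out; // walk depth capped at 5
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    if (entry.name.startsWith('.') || SKIP_DIRS.has(entry.name)) continue;
    const full = join(dir, entry.name);
    if (entry.isDirectory()) {
      collectSourceState(full, depth + 1, out);
    } else if (SOURCE_EXTS.has(extname(entry.name))) {
      out.set(full, statSync(full).mtimeMs); // mtime compared later for drift
    }
  }
  return out;
}
```
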
- `src/prompts/doc-sync.md` — System prompt for diff-based sync (uses `{{skill-format}}` partial, target-specific path variables) - `src/prompts/doc-sync-refresh.md` — System prompt for `--refresh` mode (full skill review) -- `src/lib/git-helpers.js` — `isGitRepo()`, `getGitDiff()`, `getGitLog()`, `getChangedFiles()` — git primitives +- `src/lib/git-helpers.js` — `getGitRoot()`, `isGitRepo()`, `getGitDiff()`, `getGitLog()`, `getChangedFiles()` — git primitives - `src/lib/diff-helpers.js` — `getSelectedFilesDiff()`, `buildPrioritizedDiff()`, `truncateDiff()`, `truncate()` — diff budgeting -- `src/lib/git-hook.js` — `installGitHook()` / `removeGitHook()` for post-commit auto-sync +- `src/lib/git-hook.js` — `installGitHook()` / `removeGitHook()` for post-commit auto-sync (monorepo-aware) - `src/lib/context-builder.js` — `buildDomainContext()`, `buildBaseContext()` used by refresh mode - `src/lib/runner.js` — `runLLM()`, `loadPrompt()`, `parseFileOutput()` shared across commands - `src/lib/skill-writer.js` — `writeSkillFiles()`, `writeTransformedFiles()`, `extractRulesFromSkills()` for output - `src/lib/target-transform.js` — `projectCodexDomainDocs()`, `transformForTarget()` for multi-target publish ## Key Concepts +- **Monorepo-aware:** `getGitRoot(repoPath)` resolves the actual git root. `projectPrefix` (`toGitRelative`) computes the subdirectory offset. `scopeProjectFiles()` filters changed files to the project subdirectory. Diffs are fetched from `gitRoot` but file paths are project-relative. - **Multi-target publish:** `configuredTargets()` reads `.aspens.json` for all configured targets. `chooseSyncSourceTarget()` picks the best source (prefers Claude if both exist). LLM generates for the source target; `publishFilesForTargets()` transforms output for all other configured targets. `graphSerialized` is passed through to control conditional architecture references. 
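The project scoping described above (diffs fetched from the git root, file paths made project-relative) can be sketched as follows; the function name matches the text, but the exact signature is an assumption:

```javascript
// Hypothetical sketch of scopeProjectFiles(); the real implementation in
// src/commands/doc-sync.js may differ.
function scopeProjectFiles(changedFiles, projectPrefix) {
  if (!projectPrefix) return changedFiles; // project is the git root itself
  const prefix = projectPrefix.endsWith('/') ? projectPrefix : projectPrefix + '/';
  return changedFiles
    .filter((f) => f.startsWith(prefix))   // keep files under the subproject
    .map((f) => f.slice(prefix.length));   // git-root-relative → project-relative
}
```
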
- **Backend routing:** `runLLM()` from `runner.js` dispatches to `runClaude()` or `runCodex()` based on `config.backend` (defaults to source target's id). -- **Diff-based flow:** Gets `git diff HEAD~N..HEAD` and `git log`, feeds them plus existing skill contents and graph context to the selected backend. +- **Diff-based flow:** Gets `git diff HEAD~N..HEAD` from git root, scopes changed files to project prefix, then feeds diff plus existing skill contents and graph context to the selected backend. - **Prompt path variables:** Passes `{ skillsDir, skillFilename, instructionsFile, configDir }` from source target to `loadPrompt()` for path substitution in prompts. - **Refresh mode (`--refresh`):** Skips diff entirely. Reviews every skill against the current codebase. Base skill refreshed first, then domain skills in parallel batches of `PARALLEL_LIMIT` (3). Also refreshes instructions file and reports uncovered domains. - **Graph rebuild on every sync:** Calls `buildRepoGraph` + `persistGraphArtifacts` (with source target) to keep graph fresh. `graphSerialized` return value is captured and forwarded to `publishFilesForTargets` for conditional Codex architecture refs. Graph failure is non-fatal. @@ -46,7 +47,7 @@ You are working on **doc-sync**, the incremental skill update command (`aspens d - **Split writes:** Direct-write files (`.claude/`, `AGENTS.md`, root `AGENTS.md`) use `writeSkillFiles()`. Directory-scoped `AGENTS.md` files (e.g. `src/AGENTS.md`) use `writeTransformedFiles()`. - **Skill-rules regeneration:** After writing, regenerates `skill-rules.json` via `extractRulesFromSkills()` — only for targets with `supportsHooks: true` (Claude). Uses `hookTarget` from publish targets list. - **`findExistingSkills` is target-aware:** Uses `target.skillsDir` and `target.skillFilename` to locate skills for any target. -- **Git hook:** `installGitHook()` creates a `post-commit` hook with 5-minute cooldown lock file. 
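Refresh mode's parallel batches can be sketched as a generic batch runner; `PARALLEL_LIMIT` matches the documented value of 3, while the helper name and shape are hypothetical:

```javascript
// Sketch of batched parallel execution as used by refresh mode;
// the real code in src/commands/doc-sync.js may be structured differently.
const PARALLEL_LIMIT = 3;

async function runInBatches(items, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += PARALLEL_LIMIT) {
    const batch = items.slice(i, i + PARALLEL_LIMIT);
    // Each batch runs concurrently; batches run sequentially.
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}
```
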
Hook skips aspens-only commits (filters `.claude/`, `.codex/`, `.agents/`, `AGENTS.md`, `AGENTS.md`, `.aspens.json`). Works for all configured targets. +- **Git hook (monorepo-aware):** `installGitHook()` installs at the git root with per-project scoping. Hook uses `PROJECT_PATH` derived from project-relative offset. Each subproject gets its own labeled hook block (`# >>> aspens doc-sync hook (label) >>>`) with a unique function name (`__aspens_doc_sync_`). Multiple subprojects can coexist in one post-commit hook. Hook skips aspens-only commits scoped to the project prefix. - **Force writes:** doc-sync always calls `writeSkillFiles` with `force: true`. ## Critical Rules @@ -57,9 +58,10 @@ You are working on **doc-sync**, the incremental skill update command (`aspens d - The command exits early with `CliError` if the source target's skills directory doesn't exist. - `checkMissingHooks()` in `bin/cli.js` only checks for Claude skills (not Codex — Codex doesn't use hooks). - `dedupeFiles()` ensures no duplicate paths when publishing across multiple targets. +- **Git operations use `gitRoot`** — diffs, logs, and changed files are fetched from git root, not `repoPath`. File paths are then scoped via `projectPrefix`. 
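A labeled per-project hook block like the one described above could be generated roughly as follows. The opening marker text and the `__aspens_doc_sync_` function-name prefix come from the description; the closing marker, sanitization, and body are hypothetical:

```javascript
// Sketch of one labeled block inside the shared post-commit hook;
// src/lib/git-hook.js may render it differently.
function hookBlock(label, projectPath) {
  const fn = `__aspens_doc_sync_${label.replace(/[^A-Za-z0-9]/g, '_')}`;
  return [
    `# >>> aspens doc-sync hook (${label}) >>>`,
    `${fn}() {`,
    `  PROJECT_PATH="${projectPath}"`,
    `  # cooldown lock + skip for aspens-only commits under $PROJECT_PATH`,
    `}`,
    fn,
    `# <<< aspens doc-sync hook (${label}) <<<`, // closing marker is an assumption
  ].join('\n');
}
```

Because each subproject gets its own marker pair and function name, multiple blocks can coexist in one post-commit hook without colliding.
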
## References - **Patterns:** `src/lib/skill-reader.js` — `GENERIC_PATH_SEGMENTS`, `fileMatchesActivation()`, `getActivationBlock()` --- -**Last Updated:** 2026-04-07 +**Last Updated:** 2026-04-08 diff --git a/.agents/skills/skill-generation/SKILL.md b/.agents/skills/skill-generation/SKILL.md index 6db7f3d..7371c9d 100644 --- a/.agents/skills/skill-generation/SKILL.md +++ b/.agents/skills/skill-generation/SKILL.md @@ -14,7 +14,7 @@ This skill triggers when editing skill-generation files: - `src/lib/timeout.js` - `src/prompts/**/*` -Keywords: doc-init, generate skills, discovery agents, chunked generation +Keywords: doc-init, generate skills, discovery agents, chunked generation, recommended --- @@ -25,34 +25,37 @@ You are working on **aspens' skill generation pipeline** — the system that sca - `src/lib/runner.js` — `runClaude()`, `runCodex()`, `runLLM()`, `loadPrompt()`, `parseFileOutput()`, `validateSkillFiles()` - `src/lib/skill-writer.js` — Writes files, generates `skill-rules.json`, domain bash patterns, merges `settings.json` - `src/lib/skill-reader.js` — Parses skill frontmatter, activation patterns, keywords (used by skill-writer) -- `src/lib/git-hook.js` — `installGitHook()` / `removeGitHook()` for post-commit auto-sync +- `src/lib/git-hook.js` — `installGitHook()` / `removeGitHook()` for post-commit auto-sync (monorepo-aware) - `src/lib/timeout.js` — `resolveTimeout()` for auto-scaled + user-override timeouts -- `src/lib/target.js` — Target definitions, `resolveTarget()`, `getAllowedPaths()`, `writeConfig()` +- `src/lib/target.js` — Target definitions, `resolveTarget()`, `getAllowedPaths()`, `writeConfig()`, `loadConfig()` - `src/lib/backend.js` — Backend detection/resolution (`detectAvailableBackends()`, `resolveBackend()`) - `src/lib/target-transform.js` — `transformForTarget()` converts Claude output to other target formats - `src/prompts/` — `doc-init.md` (base), `doc-init-domain.md`, `doc-init-claudemd.md`, `discover-domains.md`, 
`discover-architecture.md` ## Key Concepts - **Pipeline steps:** (1) detect backends (2) **backend selection** (3) **target selection** (4) scan + graph (5) existing docs discovery check (6) parallel discovery agents (7) strategy (8) mode (9) generate (10) validate (11) transform for non-Claude targets (12) show files + dry-run (13) write (14) install hooks (Claude-only) (15) persist config to `.aspens.json` -- **Backend before target:** Backend selection (step 2) happens before target selection (step 3). If both CLIs available, user picks backend first, then targets. Pre-selects matching target in the multiselect. +- **`--recommended` flag:** Skips interactive prompts with smart defaults. Reuses existing target config from `.aspens.json`. Auto-selects backend from target. Defaults strategy to `improve` when existing docs found. Auto-picks discovery skip when docs exist. Auto-selects generation mode based on repo size (chunked for large repos or >6 domains, all-at-once otherwise). +- **Backend before target:** Backend selection (step 2) happens before target selection (step 3). If both CLIs available, user picks backend first, then targets. Pre-selects matching target in the multiselect. With `--recommended`, backend is inferred from existing target config. - **Canonical generation:** All prompts receive `CANONICAL_VARS` (hardcoded Claude paths: `.claude/skills/`, `skill.md`, `AGENTS.md`). Generation always produces Claude-canonical format regardless of target. Non-Claude targets are produced by post-generation transform. - **`parseLLMOutput` with strict single-file fallback:** Codex often returns plain markdown without `` tags. `parseLLMOutput(text, allowedPaths, expectedPath)` only wraps tagless text as the expected file for **true single-file prompts** (exactly one `exactFile` in allowedPaths, no `dirPrefixes`). Multi-file prompts require proper `` tags. 
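The strict single-file fallback rule above can be sketched as a predicate; the shape of `allowedPaths` entries (`exactFile`, `dirPrefix`) follows the terminology in the text but is otherwise an assumption:

```javascript
// Hypothetical sketch of the fallback check inside parseLLMOutput();
// the real allowedPaths structure may differ.
function isTrueSingleFilePrompt(allowedPaths) {
  const exactFiles = allowedPaths.filter((p) => p.exactFile);
  const hasDirPrefixes = allowedPaths.some((p) => p.dirPrefix);
  // Tagless output is wrapped as the expected file only in this case.
  return exactFiles.length === 1 && !hasDirPrefixes;
}
```
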
- **Existing docs reuse:** When existing Claude docs are found and strategy is `improve`, reuse is handled as improvement context without a separate loading spinner. Supports cross-target reuse (e.g., existing Claude docs → generate Codex output). - **Domain reuse helpers:** `loadReusableDomains()` tries `loadReusableDomainsFromRules()` (reads `skill-rules.json` from source target, falls back to `.claude/skills/` for non-Claude targets) first. Falls back to `findSkillFiles()` with `extractKeyFilePatterns()` to derive file patterns from `## Key Files` sections when activation patterns are missing. -- **Target selection:** `--target claude|codex|all` or interactive multiselect if both CLIs available. Stored in `.aspens.json`. +- **Target selection:** `--target claude|codex|all` or interactive multiselect if both CLIs available. With `--recommended`, reuses `.aspens.json` targets or falls back to backend id. Stored in `.aspens.json`. - **Backend routing:** `runLLM()` imported from `runner.js` dispatches to `runClaude()` or `runCodex()` based on `_backendId`. `--backend` flag overrides auto-detection. - **Content transform (step 11):** Canonical files preserved as originals. Non-Claude targets get `transformForTarget()` applied. If Claude not in target list, canonical files are filtered out of final output. - **Split writes:** Direct-write files (`.claude/`, `.agents/`, `AGENTS.md`, root `AGENTS.md`) use `writeSkillFiles()`. Directory-scoped files (e.g., `src/billing/AGENTS.md`) use `writeTransformedFiles()` with warn-and-skip policy. - **Dynamic labels:** `baseArtifactLabel()` and `instructionsArtifactLabel()` return target-appropriate names ("base skill" vs "root AGENTS.md") for spinner messages. - **Parallel discovery:** Two agents run via `Promise.all` — domain discovery and architecture analysis — before any user prompt. 
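The `--recommended` mode auto-selection described above can be sketched as follows; the ">6 domains" rule comes from the text, while the file-count threshold and function name are hypothetical:

```javascript
// Sketch of --recommended generation-mode choice; the real heuristic
// in src/commands/doc-init.js may use different inputs.
function pickGenerationMode({ fileCount, domainCount }) {
  const LARGE_REPO_FILES = 400; // hypothetical size threshold
  if (domainCount > 6 || fileCount > LARGE_REPO_FILES) return 'chunked';
  return 'all-at-once';
}
```
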
-- **Generation modes:** `all-at-once` = single call; `chunked` = base + per-domain (up to 3 parallel) + instructions file; `base-only` = just base skill; `pick` = interactive domain picker +- **Generation modes:** `all-at-once` = single call; `chunked` = base + per-domain (up to 3 parallel) + instructions file; `base-only` = just base skill; `pick` = interactive domain picker. With `--recommended`, mode is auto-selected based on repo size. - **Retry logic:** Base skill and instructions file retry up to 2 times if `parseLLMOutput` returns empty (format correction prompt asking for `` tags). -- **Hook installation:** Only for targets with `supportsHooks: true` (Claude). Generates `skill-rules.json`, copies hook scripts, merges `settings.json`. +- **Monorepo hook support:** `createHookSettings()` adjusts hook command paths for subdirectory projects by replacing `$CLAUDE_PROJECT_DIR` with the subdirectory-scoped prefix. `getGitRoot()` resolves the actual git root for hook installation. +- **Hook installation:** Only for targets with `supportsHooks: true` (Claude). Generates `skill-rules.json`, copies hook scripts, merges `settings.json`. Git hook offer checks for project-specific marker (`aspens doc-sync hook (