diff --git a/CLAUDE.md b/CLAUDE.md index 4564b2ce..f11c8bde 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -4,18 +4,19 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co ## Overview -ArcKit is an **Enterprise Architecture Governance & Vendor Procurement Toolkit** providing 68 slash commands for AI coding assistants (Claude Code, Codex CLI, Gemini CLI, OpenCode CLI) to generate architecture artifacts. It transforms architecture governance from scattered documents into a systematic, template-driven process. +ArcKit is an **Enterprise Architecture Governance & Vendor Procurement Toolkit** providing 68 slash commands for AI coding assistants (Claude Code, OpenAI Codex CLI, Gemini CLI, OpenCode CLI, GitHub Copilot, Roo Code) to generate architecture artifacts. It transforms architecture governance from scattered documents into a systematic, template-driven process. -**Six distribution formats** exist side-by-side in this repo: +**Seven distribution formats** exist side-by-side in this repo: -1. **CLI package** (`src/arckit_cli/`) -- Python CLI installed via `pip`/`uv`, runs `arckit init` to scaffold projects for **Codex CLI** or **OpenCode CLI** (copies templates/commands into them) +1. **CLI package** (`src/arckit_cli/`) -- Python CLI installed via `pip` or `uv`, runs `arckit init` to scaffold projects for **OpenAI Codex CLI**, **OpenCode CLI**, **GitHub Copilot**, or **Roo Code** (copies templates/commands into them) 2. **Claude Code plugin** (`arckit-claude/`) -- Standalone plugin for **Claude Code**, installed via marketplace (`/plugin marketplace add tractorjuice/arc-kit`) 3. **Gemini CLI extension** (`arckit-gemini/`) -- Native extension for **Gemini CLI** with sub-agents, hooks, policies, and GDS theme, published as a separate repo (`tractorjuice/arckit-gemini`) and installed via `gemini extensions install` 4. 
**OpenCode CLI extension** (`arckit-opencode/`) -- Extension for **OpenCode CLI**, scaffolded via `arckit init --ai opencode` 5. **Codex CLI extension** (`arckit-codex/`) -- Standalone extension for **Codex CLI** with skills, agents, and MCP servers, published as a separate repo (`tractorjuice/arckit-codex`) 6. **Copilot extension** (`arckit-copilot/`) -- Prompt files + custom agents for **GitHub Copilot** in VS Code, scaffolded via `arckit init --ai copilot` +7. **Roo Code bundle** (`arckit-roocode/`) -- `.roomodes`, `.roo/rules/`, and `.roo/skills/` for **Roo Code** in VS Code, scaffolded via `arckit init --ai roocode` -The CLI and plugin have independent version numbers (CLI: `VERSION` + `pyproject.toml`, Plugin: `arckit-claude/VERSION` + `arckit-claude/.claude-plugin/plugin.json`). The Gemini extension version tracks the plugin version (`arckit-gemini/VERSION` + `arckit-gemini/gemini-extension.json`). The OpenCode extension version also tracks the plugin version (`arckit-opencode/VERSION`). The Codex extension version also tracks the plugin version (`arckit-codex/VERSION`). The Copilot extension version also tracks the plugin version (`arckit-copilot/VERSION`). Claude Code support was removed from the CLI in favour of the plugin, which references files in-place via `${CLAUDE_PLUGIN_ROOT}`. The Gemini, OpenCode, Codex, and Copilot extensions are generated by `scripts/converter.py` which rewrites paths, copies supporting files, generates `config.toml` (Codex MCP + agent roles), and rewrites skill command references. +The CLI and plugin have independent version numbers (CLI: `VERSION` + `pyproject.toml`, Plugin: `arckit-claude/VERSION` + `arckit-claude/.claude-plugin/plugin.json`). The Gemini extension version tracks the plugin version (`arckit-gemini/VERSION` + `arckit-gemini/gemini-extension.json`). The OpenCode extension version also tracks the plugin version (`arckit-opencode/VERSION`). 
The Codex (`arckit-codex/VERSION`) and Copilot (`arckit-copilot/VERSION`) extension versions likewise track the plugin version. The Roo Code bundle tracks the CLI release and is generated into `arckit-roocode/` from the Claude source set. Claude Code support was removed from the CLI in favour of the plugin, which references files in-place via `${CLAUDE_PLUGIN_ROOT}`. The Gemini, OpenCode, Codex, Copilot, and Roo Code outputs are generated by `scripts/converter.py` which rewrites paths, copies supporting files, generates `config.toml` (Codex MCP + agent roles), and updates skill command references. ## Build & Development Commands @@ -268,7 +269,7 @@ project/ **Init flags:** -- `--ai codex` / `--ai opencode` / `--ai copilot` - Select AI assistant. `--ai claude` redirects to plugin installation. `--ai gemini` redirects to extension installation. +- `--ai codex` / `--ai opencode` / `--ai copilot` / `--ai roocode` - Select AI assistant. `--ai claude` redirects to plugin installation. `--ai gemini` redirects to extension installation. - `--minimal` - Skip docs, guides, and reference files - `--no-git` - Skip git repository initialization - `--here` - Initialize in current directory @@ -295,7 +296,7 @@ project/ 2. Create `.arckit/templates/{name}-template.md` with document control section (also copy to `arckit-claude/templates/`) 3. Create `docs/guides/{name}.md` with usage guide (also copy to `arckit-claude/guides/`) 4. If command needs heavy web research (>10 WebSearch/WebFetch calls), also create `arckit-claude/agents/arckit-{name}.md` and make the slash command a thin wrapper that delegates to the agent -5. Run `python scripts/converter.py` to generate Codex Markdown (`.codex/`), OpenCode Markdown (`.opencode/` + `arckit-opencode/`), Gemini extension TOML (`arckit-gemini/`), and Copilot prompts (`arckit-copilot/`) +5. 
Run `python scripts/converter.py` to generate Codex Markdown (`.codex/`), OpenCode Markdown (`.opencode/` + `arckit-opencode/`), Gemini extension TOML (`arckit-gemini/`), Copilot prompts (`arckit-copilot/`), and Roo Code mode bundles (`arckit-roocode/`) 6. Test plugin: Open a test repo with the plugin enabled and run the command 7. Test CLI: `arckit init test --ai codex --no-git && cd test && codex` (or `--ai opencode`) 8. Update documentation: README.md, docs/index.html, docs/DEPENDENCY-MATRIX.md, CHANGELOG.md @@ -410,6 +411,7 @@ Every template must start with a **Document Control** table followed by **Revisi - `arckit-opencode/VERSION` - `arckit-codex/VERSION` - `arckit-copilot/VERSION` +- `arckit-roocode/VERSION` **Release automation**: diff --git a/README.md b/README.md index e28e1ff0..6d86a201 100644 --- a/README.md +++ b/README.md @@ -49,6 +49,16 @@ A comprehensive book-length guide to ArcKit — covering every subsystem (comman ### Installation +**ArcKit CLI** (for OpenAI Codex CLI, OpenCode CLI, GitHub Copilot, and Roo Code scaffolding): + +```bash +# Install with pip +pip install git+https://github.com/tractorjuice/arc-kit.git + +# Or install with uv +uv tool install arckit-cli --from git+https://github.com/tractorjuice/arc-kit.git +``` + **Claude Code** (premier experience) — install the ArcKit plugin (requires **v2.1.112+**): ```text @@ -70,15 +80,21 @@ Zero-config: all 68 commands, templates, scripts, and bundled MCP servers (AWS K **GitHub Copilot** (VS Code) — install the ArcKit CLI and scaffold prompt files: ```bash -# Install with pip -pip install git+https://github.com/tractorjuice/arc-kit.git - # Scaffold a project with Copilot prompt files arckit init my-project --ai copilot ``` Creates `.github/prompts/arckit-*.prompt.md` (68 prompt files), `.github/agents/arckit-*.agent.md` (10 custom agents), and `.github/copilot-instructions.md` (repo-wide context). Invoke commands in Copilot Chat as `/arckit-requirements`, `/arckit-stakeholders`, etc. 
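A quick sanity check of the scaffold, using the counts and paths stated above (68 prompt files, 10 agents), run from the project root — a sketch, not part of the CLI:

```shell
# Count the scaffolded Copilot assets; paths are those created by `arckit init --ai copilot`
prompts=$(ls .github/prompts/arckit-*.prompt.md 2>/dev/null | wc -l)
agents=$(ls .github/agents/arckit-*.agent.md 2>/dev/null | wc -l)
echo "prompt files: $prompts (expect 68), agents: $agents (expect 10)"
```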
+**Roo Code** (VS Code) — install the ArcKit CLI and scaffold Roo project files: + +```bash +# Scaffold a project with Roo Code mode files and rules +arckit init my-project --ai roocode +``` + +Creates `.roomodes`, `.roo/rules/`, `.roo/skills/`, and a project README. Open the workspace in VS Code and select an ArcKit custom mode from the Roo Code mode picker. + **Codex CLI** — install the ArcKit CLI: ```bash @@ -96,14 +112,16 @@ uvx --from git+https://github.com/tractorjuice/arc-kit.git arckit init my-projec ### Platform Support -| Platform | Claude Code Plugin | Gemini CLI Extension | GitHub Copilot | Codex / OpenCode CLI | -|----------|-------------------|---------------------|----------------|---------------------| -| macOS | Full support | Full support | Full support | Full support | -| Linux | Full support | Full support | Full support | Full support | -| Windows (WSL2) | Full support | Full support | Full support | Full support | -| Windows (native) | Full support | Full support | Full support | Partial | +| Platform | Claude Code Plugin | OpenAI Codex CLI | Gemini CLI Extension | OpenCode CLI | GitHub Copilot | Roo Code | +|----------|-------------------|------------------|---------------------|--------------|----------------|----------| +| macOS | Full support | Full support | Full support | Full support | Full support | Full support | +| Linux | Full support | Full support | Full support | Full support | Full support | Full support | +| Windows (WSL2) | Full support | Full support | Full support | Full support | Full support | Full support | +| Windows (native) | Full support | Partial | Full support | Partial | Full support | Full support | -**Windows users**: The Claude Code plugin, Gemini CLI extension, and GitHub Copilot prompt files work natively on all platforms. 
For Codex CLI / OpenCode CLI on native Windows (without WSL), some commands containing inline bash snippets may require [Git Bash](https://git-scm.com/downloads/win) or [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install). We recommend WSL2 for the best experience. +**Windows users**: The Claude Code plugin, Gemini CLI extension, GitHub Copilot prompt files, and Roo Code workspace files work natively on all platforms. For Codex CLI / OpenCode CLI on native Windows (without WSL), some commands containing inline bash snippets may require [Git Bash](https://git-scm.com/downloads/win) or [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install). We recommend WSL2 for the best experience. + +Roo Code project scaffolding is also available via `arckit init --ai roocode` and uses `.roomodes`, `.roo/rules/`, and `.roo/skills/` in the workspace root. ### Initialize a Project @@ -119,6 +137,16 @@ arckit init payment-modernization --ai copilot arckit init . --ai copilot ``` +**Roo Code** (VS Code): + +```bash +# Create a new architecture governance project +arckit init payment-modernization --ai roocode + +# Or initialize in current directory +arckit init . --ai roocode +``` + **OpenCode CLI**: ```bash @@ -151,6 +179,11 @@ cd payment-modernization && code . /arckit-principles Create principles for a financial services company /arckit-requirements Build a payment processing system... +# Roo Code (VS Code) +cd payment-modernization && code . +# In Roo Code, choose the matching ArcKit mode from the mode picker: +# ArcKit Plan, ArcKit Principles, ArcKit Requirements, etc. + # Codex CLI cd payment-modernization codex @@ -168,6 +201,8 @@ codex **GitHub Copilot**: Re-run `arckit init --here --ai copilot` to update prompt files, agents, and instructions. +**Roo Code**: Re-run `arckit init --here --ai roocode` to refresh `.roomodes`, `.roo/rules/`, `.roo/skills/`, and the project README. 
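A minimal sketch to confirm the refresh produced the Roo Code workspace files (paths as listed above):

```shell
# Check the files that `arckit init --here --ai roocode` should have written
for path in .roomodes .roo/rules .roo/skills; do
  if [ -e "$path" ]; then
    echo "$path: present"
  else
    echo "$path: missing"
  fi
done
```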
+ **Codex CLI**: ```bash @@ -1025,10 +1060,11 @@ Publish all project documentation as an interactive website: | Assistant | Support | Notes | |-----------|---------|-------| | [Claude Code](https://www.anthropic.com/claude-code) | ✅ Premier | **Primary platform.** Plugin with agents, hooks, MCP servers, and auto-updates | -| [Gemini CLI](https://github.com/google-gemini/gemini-cli) | ✅ Full | Extension with commands, MCP servers, and auto-updates | -| [GitHub Copilot](https://github.com/features/copilot) | ✅ Core | VS Code prompt files, custom agents, and repo-wide instructions (`arckit init --ai copilot`) | | [OpenAI Codex CLI](https://chatgpt.com/features/codex) | ✅ Core | CLI with commands and templates. ChatGPT Plus/Pro/Enterprise ([Setup Guide](.codex/README.md)) | +| [Gemini CLI](https://github.com/google-gemini/gemini-cli) | ✅ Full | Extension with commands, MCP servers, and auto-updates | | [OpenCode CLI](https://opencode.net/cli) | ✅ Core | CLI with commands and templates | +| [GitHub Copilot](https://github.com/features/copilot) | ✅ Core | VS Code prompt files, custom agents, and repo-wide instructions (`arckit init --ai copilot`) | +| [Roo Code](https://docs.roocode.com/) | ✅ Core | VS Code custom modes, rules, skills, and repo README (`arckit init --ai roocode`) | > **Platform Support**: ArcKit is developed and tested on **Linux**. Windows has limited support — hooks (session init, project context, filename validation, MCP auto-allow) require bash and jq which are not available on stock Windows. For the best experience on Windows, use a **devcontainer** or **WSL2**. 
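The hook prerequisites called out in the note above (bash and jq) can be checked with a short sketch — nothing ArcKit-specific is assumed, just the two tool names:

```shell
# The Claude Code hooks rely on bash and jq being on PATH; report what's present
for tool in bash jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing (hooks that depend on it will not run)"
  fi
done
```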
@@ -1036,27 +1072,27 @@ Publish all project documentation as an interactive website: Claude Code is the **primary development platform** for ArcKit and provides capabilities not available in other formats: -| Feature | Claude Code | Gemini CLI | Copilot | Codex / OpenCode | -|---------|:-----------:|:----------:|:-------:|:----------------:| -| 68 slash commands | ✅ | ✅ | ✅ | ✅ | -| Templates & scripts | ✅ | ✅ | ✅ | ✅ | -| Bundled MCP servers (AWS, Azure, GCP, DataCommons, govreposcrape) | ✅ | ✅ (3 servers) | — | Manual setup | -| **Autonomous research agents** (10 agents for research, datascout, cloud research, gov code discovery, grants, framework) | ✅ | — | ✅ (10 agents) | — | -| **SessionStart hook** (auto-detect version + projects) | ✅ | — | — | — | -| **UserPromptSubmit hook** (project context injection on every prompt) | ✅ | — | — | — | -| **PreToolUse hook** (ARC filename auto-correction) | ✅ | — | — | — | -| **PermissionRequest hook** (auto-allow MCP documentation tools) | ✅ | — | — | — | -| **Per-command Stop hooks** (output validation, e.g. 
Wardley Map math checks) | ✅ | — | — | — | -| Wardley Mapping skill (with Pinecone MCP book corpus) | ✅ | — | — | — | -| Mermaid Syntax Reference skill (23 diagram types + config) | ✅ | ✅ | — | ✅ | -| Automatic marketplace updates | ✅ | ✅ | Manual reinstall | Manual reinstall | -| Zero-config installation | ✅ | ✅ | `arckit init` required | `arckit init` required | +| Feature | Claude Code | Gemini CLI | Copilot | Codex / OpenCode | Roo Code | +|---------|:-----------:|:----------:|:-------:|:----------------:|:-------:| +| 68 slash commands | ✅ | ✅ | ✅ | ✅ | ✅ | +| Templates & scripts | ✅ | ✅ | ✅ | ✅ | ✅ | +| Bundled MCP servers (AWS, Azure, GCP, DataCommons, govreposcrape) | ✅ | ✅ (3 servers) | — | Manual setup | — | +| **Autonomous research agents** (10 agents for research, datascout, cloud research, gov code discovery, grants, framework) | ✅ | — | ✅ (10 agents) | — | — | +| **SessionStart hook** (auto-detect version + projects) | ✅ | — | — | — | — | +| **UserPromptSubmit hook** (project context injection on every prompt) | ✅ | — | — | — | — | +| **PreToolUse hook** (ARC filename auto-correction) | ✅ | — | — | — | — | +| **PermissionRequest hook** (auto-allow MCP documentation tools) | ✅ | — | — | — | — | +| **Per-command Stop hooks** (output validation, e.g. Wardley Map math checks) | ✅ | — | — | — | — | +| Wardley Mapping skill (with Pinecone MCP book corpus) | ✅ | — | — | — | ✅ | +| Mermaid Syntax Reference skill (23 diagram types + config) | ✅ | ✅ | — | ✅ | ✅ | +| Automatic marketplace updates | ✅ | ✅ | Manual reinstall | Manual reinstall | — | +| Zero-config installation | ✅ | ✅ | `arckit init` required | `arckit init` required | `arckit init` required | **Agents** run research-heavy commands (market research, data source discovery, cloud service evaluation) in isolated context windows, keeping the main conversation clean and enabling dozens of WebSearch/WebFetch/MCP calls without context bloat. 
**Hooks** provide automated governance: filenames are auto-corrected to ArcKit conventions, project context is injected into every prompt so commands know what artifacts exist, MCP tools are auto-approved, and generated outputs like Wardley Maps are validated for mathematical consistency before being finalized. -Gemini CLI provides a strong experience with all commands and MCP servers but lacks agent delegation and hooks. GitHub Copilot provides all 68 commands as prompt files and 10 custom agents but lacks hooks and MCP servers. Codex CLI and OpenCode CLI provide core command functionality but require manual setup and `arckit init` scaffolding. +Gemini CLI provides a strong experience with all commands and MCP servers but lacks agent delegation and hooks. GitHub Copilot provides all 68 commands as prompt files and 10 custom agents but lacks hooks and MCP servers. Codex CLI and OpenCode CLI provide core command functionality but require manual setup and `arckit init` scaffolding. Roo Code provides custom modes, rules, skills, and a project README, but not the hook and MCP automation that ships with the Claude Code plugin. ### Why Commands, Not Skills @@ -1073,7 +1109,12 @@ For GitHub Copilot users in VS Code, ArcKit commands are delivered as prompt fil ```bash # Install and create project (3 steps, zero config) +# Install with pip pip install git+https://github.com/tractorjuice/arc-kit.git + +# Or install with uv +uv tool install arckit-cli --from git+https://github.com/tractorjuice/arc-kit.git + arckit init my-project --ai copilot cd my-project && code . 
@@ -1091,7 +1132,12 @@ For OpenAI Codex CLI users, ArcKit commands are delivered as skills and auto-dis ```bash # Install and create project (3 steps, zero config) +# Install with pip pip install git+https://github.com/tractorjuice/arc-kit.git + +# Or install with uv +uv tool install arckit-cli --from git+https://github.com/tractorjuice/arc-kit.git + arckit init my-project --ai codex cd my-project && codex @@ -1445,7 +1491,7 @@ Key references live in `docs/` and top-level guides: - **Python 3.11+** - **Git** (optional but recommended) -- **AI Coding Agent**: [Claude Code](https://www.anthropic.com/claude-code) v2.1.112+ (via plugin), [Gemini CLI](https://github.com/google-gemini/gemini-cli) (via extension), [OpenCode CLI](https://opencode.net/cli) (via CLI), or [OpenAI Codex CLI](https://chatgpt.com/features/codex) (via CLI) +- **AI Coding Agent**: [Claude Code](https://www.anthropic.com/claude-code) v2.1.112+ (via plugin), [OpenAI Codex CLI](https://chatgpt.com/features/codex) (via CLI), [Gemini CLI](https://github.com/google-gemini/gemini-cli) (via extension), [OpenCode CLI](https://opencode.net/cli) (via CLI), [GitHub Copilot](https://github.com/features/copilot) (via VS Code), or [Roo Code](https://docs.roocode.com/) (via VS Code scaffolding) - **uv** for package management: [Install uv](https://docs.astral.sh/uv/) --- @@ -1460,6 +1506,9 @@ cd arc-kit # Install in development mode pip install -e . +# Or using uv +uv pip install -e . 
+ # Run the CLI arckit init my-project ``` @@ -1572,6 +1621,9 @@ ls .github/prompts/arckit-*.prompt.md # For OpenCode CLI, check if commands directory exists ls .opencode/commands/ + +# For Roo Code, check if the workspace files exist +ls .roomodes .roo/rules/ ``` **Template not found**: Ensure you've run `/arckit.principles` first diff --git a/arckit-copilot/README.md b/arckit-copilot/README.md new file mode 100644 index 00000000..b6e4c9d1 --- /dev/null +++ b/arckit-copilot/README.md @@ -0,0 +1,35 @@ +# ArcKit for GitHub Copilot + +**Enterprise Architecture Governance & Vendor Procurement Toolkit for GitHub Copilot** + +ArcKit transforms GitHub Copilot into a powerful Architecture Governance platform, providing specialized prompts and instructions for generating architecture artifacts, vendor procurement documents, and UK Government compliance assessments. + +## Features + +- **Project Context Awareness**: Automatically reads project artifacts (Requirements, Risks, Principles) to inform new documents. +- **UK Government Aligned**: Built-in support for GDS Service Standard, Technology Code of Practice (TCoP), and Secure by Design. +- **Cloud Native**: Integrated research instructions for AWS, Azure, and GCP. +- **Traceability**: Maintains a strict traceability chain from stakeholders to user stories. + +## Usage + +Use the instructions in `copilot-instructions.md` to configure your GitHub Copilot custom instructions or use them as a reference in your chat sessions. + +## Directory Structure + +```text +. 
+├── copilot-instructions.md # Core instructions for GitHub Copilot +├── agents/ # Autonomous research agents (Markdown) +├── commands/ # Command reference (Markdown) +├── prompts/ # Reusable prompt snippets +├── skills/ # Reusable ArcKit skills +├── templates/ # Document templates +├── references/ # Quality checklists and guides +├── scripts/ # Helper scripts +└── docs/ # Documentation and guides +``` + +## License + +MIT License - see [LICENSE](LICENSE) for details. diff --git a/arckit-copilot/commands/adr.md b/arckit-copilot/commands/adr.md new file mode 100644 index 00000000..e79cee2d --- /dev/null +++ b/arckit-copilot/commands/adr.md @@ -0,0 +1,538 @@ +--- +description: Document architectural decisions with options analysis and traceability +argument-hint: "" +effort: high +handoffs: + - command: hld-review + description: Reflect decision in High-Level Design + - command: diagram + description: Update architecture diagrams + - command: traceability + description: Update traceability matrix with decision links +--- + +You are helping an enterprise architect create an Architecture Decision Record (ADR) following MADR v4.0 format enhanced with UK Government requirements. + +## User Input + +```text +$ARGUMENTS +``` + +## Instructions + +> **Note**: The ArcKit Project Context hook has already detected all projects, artifacts, external documents, and global policies. Use that context below — no need to scan directories manually. + +### 1. 
**Read existing artifacts from the project context:** + +**MANDATORY** (warn if missing): + +- **PRIN** (Architecture Principles, in 000-global) + - Extract: Technology standards, constraints, compliance requirements that inform decision drivers + - If missing: warn user to run `/arckit:principles` first +- **REQ** (Requirements) + - Extract: BR/FR/NFR/INT/DR IDs that this decision addresses + - If missing: warn user to run `/arckit:requirements` first + +**RECOMMENDED** (read if available, note if missing): + +- **RISK** (Risk Register) + - Extract: Risks this decision mitigates, risk appetite context + +**OPTIONAL** (read if available, skip silently if missing): + +- **RSCH** (Research Findings) or **AWSR** / **AZUR** (Cloud Research) + - Extract: Options already analyzed, vendor comparisons, TCO data +- **STKE** (Stakeholder Analysis) + - Extract: Stakeholder goals, decision authority, RACI context +- **WARD** (Wardley Map) + - Extract: Evolution stage influences on build vs buy choices + +### 1b. **Read external documents and policies** + +- Read any **external documents** listed in the project context (`external/` files) — extract previous architectural decisions, decision rationale, options considered, decision outcomes +- Read any **enterprise standards** in `projects/000-global/external/` — extract enterprise decision frameworks, architecture review board templates, cross-project decision logs +- If no external docs exist but they would improve context, ask: "Do you have any previous ADRs from legacy systems or decision logs? I can read PDFs directly. Place them in `projects/{project-dir}/external/` and re-run, or skip." +- **Citation traceability**: When referencing content from external documents, follow the citation instructions in `${CLAUDE_PLUGIN_ROOT}/references/citation-instructions.md`. Place inline citation markers (e.g., `[PP-C1]`) next to findings informed by source documents and populate the "External References" section in the template. 
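The external-document workflow above comes down to dropping source files into the project's `external/` directory before re-running the command. A sketch — the `001-payments` slug and the legacy PDF path are purely illustrative:

```shell
# Hypothetical project directory; substitute your own NNN-slug
mkdir -p projects/001-payments/external
# Copy any legacy ADRs or decision logs for the command to read (source path illustrative)
cp ~/legacy-docs/ADR-*.pdf projects/001-payments/external/ 2>/dev/null || true
ls projects/001-payments/external/
```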
+ +### 1c. **Interactive Configuration** + +Before creating the ADR, use the **AskUserQuestion** tool to gather key decision parameters. **Skip any question where the user has already provided a clear answer in their arguments.** + +**Gathering rules** (apply to all questions in this section): + +- Ask the most important question first; fill in secondary details from context or reasonable defaults. +- **Maximum 2 rounds of questions.** After that, pick the best option from available context. +- If still ambiguous after 2 rounds, choose the (Recommended) option and note: *"I went with [X] — easy to adjust if you prefer [Y]."* + +**Question 1** — header: `Escalation`, multiSelect: false +> "What escalation level does this architectural decision require?" + +- **Team**: Local implementation decision (frameworks, libraries, testing approaches) +- **Cross-team**: Affects multiple teams (integration patterns, shared services, APIs) +- **Department (Recommended)**: Department-wide impact (technology standards, cloud providers, security frameworks) +- **Cross-government**: National infrastructure or cross-department interoperability + +**Question 2** — header: `Options`, multiSelect: false +> "How many options should be evaluated (plus a 'Do Nothing' baseline)?" + +- **3 options (Recommended)**: Standard analysis — Do Nothing + 2 alternatives provides clear comparison +- **2 options**: Quick decision — Do Nothing + 1 proposed approach for straightforward choices +- **4+ options**: Comprehensive analysis — Do Nothing + 3+ alternatives for complex technology selections + +Apply the user's selections: the escalation level determines the governance forum and stakeholder RACI in the ADR. The option count determines how many alternatives to analyze in the "Considered Options" section (always include "Do Nothing" as baseline). + +### 2. 
**Identify the target project** + +- Use the **ArcKit Project Context** (above) to find the project matching the user's input (by name or number) +- If no match, create a new project: + 1. Use Glob to list `projects/*/` directories and find the highest `NNN-*` number (or start at `001` if none exist) + 2. Calculate the next number (zero-padded to 3 digits, e.g., `002`) + 3. Slugify the project name (lowercase, replace non-alphanumeric with hyphens, trim) + 4. Use the Write tool to create `projects/{NNN}-{slug}/README.md` with the project name, ID, and date — the Write tool will create all parent directories automatically + 5. Also create `projects/{NNN}-{slug}/external/README.md` with a note to place external reference documents here + 6. Set `PROJECT_ID` = the 3-digit number, `PROJECT_PATH` = the new directory path + +### 3. **Create decisions directory and determine ADR number** + +- Use Glob to find existing `projects/{project-slug}/decisions/ADR-*.md` files +- If none found, the next ADR number is `ADR-001` +- If found, extract the highest ADR number and increment by 1 (e.g., `ADR-003` → `ADR-004`), zero-padded to 3 digits +- The decisions directory will be created automatically when saving the file with the Write tool + +### 4. **Read the template** (with user override support) + +- **First**, check if `.arckit/templates/adr-template.md` exists in the project root +- **If found**: Read the user's customized template (user override takes precedence) +- **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/adr-template.md` (default) + + > **Tip**: Users can customize templates with `/arckit:customize adr` + +### 5. **Gather decision information from user** + +- **Decision title**: Short noun phrase (e.g., "Use PostgreSQL for Data Persistence") +- **Problem statement**: What architectural decision needs to be made? +- **Context**: Why is this decision needed? Business/technical drivers? 
+- **Status**: Proposed (default) / Accepted / Deprecated / Superseded +- **Escalation level**: Team / Cross-team / Department / Cross-government +- **Governance forum**: Architecture Review Board, TDA, Programme Board, etc. + +### 6. **Generate comprehensive ADR** following MADR v4.0 + UK Gov framework + + **Document Control** (see "Auto-Populate Document Control Fields" section below for full details): + +- Document ID: `ARC-{PROJECT_ID}-ADR-{NUM}-v{VERSION}` (e.g., `ARC-001-ADR-001-v1.0`) +- ADR Number: ADR-{NUM} (e.g., ADR-001, ADR-002) +- Version: ${VERSION} (from Step 0: Detect Version) +- Status: Proposed (or as user specified) +- Date: Current date (YYYY-MM-DD) +- Escalation Level: Based on decision scope +- Governance Forum: Based on escalation level + + **Stakeholders**: + +- **Deciders**: Who has authority to approve this ADR? +- **Consulted**: Subject matter experts to involve (two-way communication) +- **Informed**: Stakeholders to keep updated (one-way communication) +- **UK Government Escalation Context**: + - Team: Local implementation (frameworks, libraries, testing) + - Cross-team: Integration patterns, shared services, APIs + - Department: Technology standards, cloud providers, security + - Cross-government: National infrastructure, cross-department interoperability + + **Context and Problem Statement**: + +- Problem description (2-3 sentences or story format) +- Why is this decision needed? 
+- Business context (link to BR-xxx requirements) +- Technical context (link to FR-xxx, NFR-xxx requirements) +- Regulatory context (GDPR, GDS Service Standard, Cyber Essentials) +- Supporting links (user stories, requirements, research) + + **Decision Drivers (Forces)**: + +- **Technical drivers**: Performance, scalability, maintainability, security + - Link to NFR-xxx requirements + - Reference architecture principles +- **Business drivers**: Cost, time to market, risk reduction + - Link to BR-xxx requirements + - Link to stakeholder goals +- **Regulatory & compliance drivers**: + - GDS Service Standard (which points apply?) + - Technology Code of Practice (Point 5: Cloud first, Point 8: Reuse, Point 13: AI) + - NCSC Cyber Security (Cyber Essentials, CAF principles) + - Data Protection (UK GDPR Article 25, 35) +- **Alignment to architecture principles**: Create table showing which principles support/conflict + + **Considered Options** (MINIMUM 2-3 options, always include "Do Nothing"): + + For each option: + +- **Description**: What is this option? +- **Implementation approach**: How would it be implemented? 
+- **Wardley Evolution Stage**: Genesis / Custom-Built / Product / Commodity +- **Good (Pros)**: + - ✅ Benefits, requirements met, principles supported + - ✅ Quantify where possible (performance, cost savings) +- **Bad (Cons)**: + - ❌ Drawbacks, requirements not met, risks + - ❌ Trade-offs and negative consequences +- **Cost Analysis**: + - CAPEX: One-time costs (licenses, hardware, migration) + - OPEX: Ongoing costs (support, training, maintenance per year) + - TCO (3-year): Total cost of ownership +- **GDS Service Standard Impact**: Create table showing impact on relevant points + + **Option: Do Nothing (Baseline)**: + +- Always include this as baseline comparison +- Pros: No immediate cost, no risk +- Cons: Technical debt accumulates, opportunity cost, compliance risk + + **Decision Outcome**: + +- **Chosen Option**: Which option was selected +- **Y-Statement** (structured justification): + > In the context of [use case], + > facing [concern], + > we decided for [option], + > to achieve [quality/benefit], + > accepting [downside/trade-off]. +- **Justification**: Why this option over alternatives? 
+ - Key reasons with evidence + - Stakeholder consensus or dissenting views + - Risk appetite alignment + + **Consequences**: + +- **Positive**: Benefits, capabilities enabled, compliance achieved + - Include measurable outcomes (metrics: baseline → target) +- **Negative**: Accepted trade-offs, limitations, technical debt + - Include mitigation strategies +- **Neutral**: Changes needed (training, infrastructure, process, vendors) +- **Risks and Mitigations**: Create table with risk, likelihood, impact, mitigation, owner + - Link to risk register (RISK-xxx) + + **Validation & Compliance**: + +- **How will implementation be verified?** + - Design review requirements (HLD, DLD include this decision) + - Code review checklist (PR checklist includes ADR compliance) + - Testing strategy (unit, integration, performance, security tests) +- **Monitoring & Observability**: + - Success metrics (how to measure if goals achieved) + - Alerts and dashboards +- **Compliance verification**: + - GDS Service Assessment: Which points addressed, evidence prepared + - Technology Code of Practice: Which points addressed + - Security assurance: NCSC principles, Cyber Essentials, security testing + - Data protection: DPIA updated, data flows, privacy notice + + **Links to Supporting Documents**: + +- **Requirements traceability**: + - Business: BR-xxx requirements addressed + - Functional: FR-xxx requirements addressed + - Non-functional: NFR-xxx requirements addressed +- **Architecture artifacts**: + - Architecture principles: Which influenced this decision + - Stakeholder drivers: Which stakeholder goals supported + - Risk register: Which risks mitigated (RISK-xxx) + - Research findings: Which research sections analyzed these options + - Wardley Maps: Which maps show evolution stage + - Architecture diagrams: Which C4/deployment/sequence diagrams show this + - Strategic roadmap: Which theme/initiative this supports +- **Design documents**: + - High-Level Design: HLD section implementing 
this + - Detailed Design: DLD specifications + - Data model: If decision affects data structure +- **External references**: + - Standards and RFCs + - Vendor documentation + - UK Government guidance (GDS Service Manual, NCSC, GOV.UK patterns) + - Research and evidence + + **Implementation Plan**: + +- **Dependencies**: Prerequisite ADRs, infrastructure, team skills +- **Implementation timeline**: Phases, activities, duration, owners +- **Rollback plan**: Trigger, procedure, owner + + **Review and Updates**: + +- **Review schedule**: Initial (3-6 months), periodic (annually) +- **Review criteria**: Metrics met? Assumptions changed? Still optimal? +- **Trigger events**: Version changes, cost changes, security incidents, regulatory changes + + **Related Decisions**: + +- **Depends on**: ADR-xxx +- **Depended on by**: ADR-yyy +- **Conflicts with**: ADR-zzz (how resolved) + + **Appendices** (optional): + +- **Options analysis details**: Benchmarks, PoC results +- **Stakeholder consultation log**: Date, stakeholder, feedback, action +- **Mermaid decision flow diagram**: Visual representation of decision logic + +### 7. **Ensure comprehensive traceability** + +- Link decision drivers to requirements (BR-xxx, FR-xxx, NFR-xxx) +- Link to architecture principles (show alignment/conflicts) +- Link to stakeholder goals (from ARC-{PROJECT_ID}-STKE-v*.md) +- Link to risk mitigations (from ARC-{PROJECT_ID}-RISK-v*.md) +- Link to research findings (which sections analyzed these options) +- Link to Wardley maps (evolution stage influences choice) +- Link to roadmap (which theme/initiative this supports) +- Create bidirectional traceability chain + +### 8. 
**Create file naming** + +- **Format**: `ARC-{PROJECT_ID}-ADR-{NUM}-v{VERSION}.md` +- **Example**: `ARC-001-ADR-001-v1.0.md`, `ARC-001-ADR-002-v1.0.md` +- **Path**: `projects/{PROJECT_ID}-{project-name}/decisions/ARC-{PROJECT_ID}-ADR-{NUM}-v{VERSION}.md` +- Sequence number auto-assigned from existing files in the directory + +Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **ADR** per-type checks pass. Fix any failures before proceeding. + +### 9. **Use Write tool to create the ADR file** + +- **CRITICAL**: Because ADRs are very large documents (500+ lines), you MUST use the Write tool to create the file +- Do NOT output the full ADR content in your response (this will exceed token limits) +- Use Write tool with the full ADR content +- Path: `projects/{PROJECT_ID}-{project-name}/decisions/ARC-{PROJECT_ID}-ADR-{NUM}-v{VERSION}.md` + +**CRITICAL - Auto-Populate Document Control Fields**: + +Before completing the document, populate ALL document control fields in the header: + +### Step 0: Detect Version + +Before generating the document ID, check whether a previous version exists. + +ADRs are multi-instance documents. Version detection depends on whether you are creating a **new** ADR or **updating** an existing one: + +**Creating a new ADR** (default): Use `VERSION="1.0"` — the ADR number is auto-incremented by `--next-num`. + +**Updating an existing ADR** (user explicitly references an existing ADR number, e.g., "update ADR-001", "revise ADR-003"): + +1. Look for existing `ARC-{PROJECT_ID}-ADR-{NUM}-v*.md` files in `projects/{project-dir}/decisions/` +2. **If no existing file**: Use `VERSION="1.0"` +3.
**If existing file found**: + - Read the existing document to understand its current state + - Compare against current inputs and the decision being made + - **Minor increment** (e.g., 1.0 → 1.1): Status change, updated evidence, corrected details, same decision outcome + - **Major increment** (e.g., 1.0 → 2.0): Decision outcome changed, options re-evaluated, fundamentally different justification +4. Use the determined version for document ID, filename, Document Control, and Revision History +5. For v1.1+/v2.0+: Add a Revision History entry describing what changed from the previous version + +### Step 1: Construct Document ID + +- **Document ID**: `ARC-{PROJECT_ID}-ADR-{NNN}-v{VERSION}` (e.g., `ARC-001-ADR-001-v1.0`) +- Sequence number `{NNN}`: Check existing files in `decisions/` and use the next number (001, 002, ...) + +### Step 2: Populate Required Fields + +**Auto-populated fields** (populate these automatically): + +- `[PROJECT_ID]` → Extract from project path (e.g., "001" from "projects/001-project-name") +- `[VERSION]` → Determined version from Step 0 +- `[DATE]` / `[YYYY-MM-DD]` → Current date in YYYY-MM-DD format +- `[DOCUMENT_TYPE_NAME]` → "Architecture Decision Record" +- `ARC-[PROJECT_ID]-ADR-[NUM]-v[VERSION]` → Construct using format from Step 1 +- `[COMMAND]` → "arckit.adr" + +**User-provided fields** (extract from project metadata or user input): + +- `[PROJECT_NAME]` → Full project name from project metadata or user input +- `[OWNER_NAME_AND_ROLE]` → Document owner (prompt user if not in metadata) +- `[CLASSIFICATION]` → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user) + +**Calculated fields**: + +- `[YYYY-MM-DD]` for Review Date → Current date + 30 days (requirements, research, risks) +- `[YYYY-MM-DD]` for Review Date → Phase gate dates (Alpha/Beta/Live for compliance docs) + +**Pending fields** (leave as [PENDING] until manually updated): + +- `[REVIEWER_NAME]` → [PENDING] +- `[APPROVER_NAME]` → [PENDING] +- 
`[DISTRIBUTION_LIST]` → Default to "Project Team, Architecture Team" or [PENDING] + +### Step 3: Populate Revision History + +```markdown +| 1.0 | {DATE} | ArcKit AI | Initial creation from `/arckit:adr` command | [PENDING] | [PENDING] | +``` + +### Step 4: Populate Generation Metadata Footer + +The footer should be populated with: + +```markdown +**Generated by**: ArcKit `/arckit:adr` command +**Generated on**: {DATE} {TIME} GMT +**ArcKit Version**: {ARCKIT_VERSION} +**Project**: {PROJECT_NAME} (Project {PROJECT_ID}) +**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"] +**Generation Context**: [Brief note about source documents used] +``` + +### Example Fully Populated Document Control Section + +```markdown +## Document Control + +| Field | Value | +|-------|-------| +| **Document ID** | ARC-001-ADR-003-v1.0 | +| **Document Type** | Architecture Decision Record | +| **Project** | Windows 10 to Windows 11 Migration (Project 001) | +| **Classification** | OFFICIAL-SENSITIVE | +| **Status** | DRAFT | +| **Version** | 1.0 | +| **Created Date** | 2025-10-29 | +| **Last Modified** | 2025-10-29 | +| **Review Date** | 2025-11-30 | +| **Owner** | John Smith (Enterprise Architect) | +| **Reviewed By** | [PENDING] | +| **Approved By** | [PENDING] | +| **Distribution** | PM Team, Architecture Team, Dev Team | + +## Revision History + +| Version | Date | Author | Changes | Approved By | Approval Date | +|---------|------|--------|---------|-------------|---------------| +| 1.0 | 2025-10-29 | ArcKit AI | Initial creation from `/arckit:adr` command | [PENDING] | [PENDING] | +``` + +### 10. 
**Show summary to user** (NOT full document) + + ```markdown + ## Architecture Decision Record Created + + **ADR Number**: ADR-{NUM} + **Title**: {Decision title} + **Status**: {Proposed/Accepted/etc} + **File**: `projects/{PROJECT_ID}-{project-name}/decisions/ARC-{PROJECT_ID}-ADR-{NUM}-v{VERSION}.md` + + ### Chosen Option + {Option name} + + ### Y-Statement + > In the context of {use case}, + > facing {concern}, + > we decided for {option}, + > to achieve {quality}, + > accepting {downside}. + + ### Options Considered + - Option 1: {Name} - {Brief summary} + - Option 2: {Name} - {Brief summary} + - Option 3: Do Nothing - Baseline comparison + + ### Key Consequences + **Positive**: + - {Benefit 1} + - {Benefit 2} + + **Negative** (accepted trade-offs): + - {Trade-off 1} + - {Trade-off 2} + + ### Decision Drivers + - {Driver 1}: {Brief description} + - {Driver 2}: {Brief description} + + ### Requirements Addressed + - BR-XXX: {Business requirement} + - FR-XXX: {Functional requirement} + - NFR-XXX: {Non-functional requirement} + + ### Traceability Links + - Architecture principles: {Count} principles referenced + - Stakeholder goals: {Count} goals supported + - Requirements: {Count} requirements addressed + - Risks: {Count} risks mitigated + + ### Next Steps + - [ ] Stakeholder review and approval + - [ ] Update status to "Accepted" once approved + - [ ] Reflect decision in HLD/DLD + - [ ] Update architecture diagrams + - [ ] Implement decision + - [ ] Verify with testing + - [ ] Schedule ADR review ({Date}) + + ### UK Government Compliance + **Escalation Level**: {Level} + **Governance Forum**: {Forum} + **GDS Service Standard**: Points {X, Y, Z} addressed + **Technology Code of Practice**: Points {A, B, C} addressed + ``` + +### 11.
**Provide guidance on ADR lifecycle** + +- **Status transitions**: + - Proposed → Accepted (after approval) + - Accepted → Superseded (when replaced by new ADR) + - Accepted → Deprecated (when no longer recommended but not replaced) +- **When to create new ADR**: + - Significant architectural decision affecting structure, behavior, or quality attributes + - Technology choices (databases, frameworks, cloud services, APIs) + - Integration patterns and protocols + - Security and compliance approaches + - Deployment and infrastructure decisions + - Data management and privacy decisions +- **When NOT to create ADR**: + - Minor implementation details (variable names, coding style) + - Temporary workarounds or fixes + - Decisions that don't affect other teams or systems +- **ADR numbering**: + - Sequential: ADR-001, ADR-002, ADR-003, etc. + - Never reuse numbers (even if ADR is superseded) + - Superseded ADRs remain in place with updated status + +## Important Notes + +- **Token Limit**: ADRs are very large documents. Always use Write tool to create the file, never output full content +- **Minimum Options**: Always analyze at least 2-3 options plus "Do Nothing" baseline +- **Y-Statement**: This is the concise justification format - always include it +- **Traceability**: Every ADR must link to requirements, principles, stakeholders, risks +- **UK Government**: Include escalation level and governance forum for compliance +- **MADR Format**: Follow MADR v4.0 structure (Context, Decision Drivers, Options, Outcome, Consequences) +- **Evidence-Based**: Decisions should be supported by research findings, benchmarks, PoCs +- **Wardley Evolution**: Consider evolution stage (Genesis/Custom/Product/Commodity) when choosing options +- **GDS Service Standard**: Document which Service Standard points the decision addresses +- **Technology Code of Practice**: Show TCoP compliance (Point 5: Cloud first, Point 8: Reuse, etc.) 
+- **Security**: Include NCSC guidance, Cyber Essentials, security testing requirements +- **Review Schedule**: Every ADR needs review schedule and trigger events for re-evaluation +- **Rollback Plan**: Document how to rollback if decision proves wrong +- **Cost Analysis**: Always include CAPEX, OPEX, TCO for each option +- **Consequences**: Be explicit about both positive and negative consequences +- **Validation**: Define how implementation will be verified (review, testing, monitoring) + +- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji + +## Example Decision Titles + +- "Use PostgreSQL for Transactional Data Persistence" +- "Adopt API Gateway Pattern for Service Integration" +- "Deploy on Azure Government Cloud" +- "Implement OAuth 2.0 with Azure AD for Authentication" +- "Use Event-Driven Architecture for Real-Time Processing" +- "Choose React with TypeScript for Frontend Development" +- "Implement Microservices over Monolithic Architecture" +- "Use Terraform for Infrastructure as Code" +- "Adopt Kubernetes for Container Orchestration" +- "Implement CQRS Pattern for Read/Write Separation" + +## UK Government Escalation Guidance + +| Level | Decision Makers | Example Decisions | Governance Forum | +|-------|----------------|-------------------|------------------| +| **Team** | Tech Lead, Senior Developers | Framework choice, testing strategy, code patterns | Team standup, Sprint review | +| **Cross-team** | Technical Architects, Lead Engineers | Integration patterns, API standards, shared libraries | Architecture Forum, Technical Design Review | +| **Department** | Enterprise Architects, CTO, Architecture Board | Cloud provider, security framework, technology standards | Architecture Review Board, Enterprise Architecture Board | +| **Cross-government** | Technical Design Authority, 
GDS | National infrastructure, cross-department APIs, GOV.UK standards | Technical Design Council, GDS Architecture Community | diff --git a/arckit-copilot/commands/ai-playbook.md b/arckit-copilot/commands/ai-playbook.md new file mode 100644 index 00000000..2dea0471 --- /dev/null +++ b/arckit-copilot/commands/ai-playbook.md @@ -0,0 +1,508 @@ +--- +description: Assess UK Government AI Playbook compliance for responsible AI deployment +argument-hint: "" +effort: max +--- + +You are helping a UK government organization assess compliance with the UK Government AI Playbook for responsible AI deployment. + +## User Input + +```text +$ARGUMENTS +``` + +## Instructions + +> **Note**: The ArcKit Project Context hook has already detected all projects, artifacts, external documents, and global policies. Use that context below — no need to scan directories manually. + +1. **Identify AI system context**: + - AI system name and purpose + - Type of AI (Generative, Predictive, Computer Vision, NLP, etc.) + - Use case in government operations + - Users (internal staff, citizens, affected population) + - Decision authority level + +2. **Determine risk level**: + +**HIGH-RISK AI** (requires strictest oversight): + +- Fully automated decisions affecting: + - Health and safety + - Fundamental rights + - Access to services + - Legal status + - Employment + - Financial circumstances +- Examples: Benefit eligibility, immigration decisions, medical diagnosis, predictive policing + +**MEDIUM-RISK AI** (significant impact with human oversight): + +- Semi-automated decisions with human review +- Significant resource allocation +- Examples: Case prioritization, fraud detection scoring, resource allocation + +**LOW-RISK AI** (productivity/administrative): + +- Recommendation systems with human control +- Administrative automation +- Examples: Email categorization, meeting scheduling, document summarization + +3. 
**Read existing artifacts from the project context:** + + **MANDATORY** (warn if missing): + - **PRIN** (Architecture Principles, in 000-global) + - Extract: AI/ML governance standards, technology constraints, compliance requirements + - If missing: warn user to run `/arckit:principles` first + - **REQ** (Requirements) + - Extract: AI/ML-related FR requirements, NFR (security, compliance, fairness), DR (data requirements) + - If missing: warn user to run `/arckit:requirements` first + + **RECOMMENDED** (read if available, note if missing): + - **DATA** (Data Model) + - Extract: Training data sources, personal data, special category data, data quality + - **RISK** (Risk Register) + - Extract: AI-specific risks, bias risks, security risks, mitigation strategies + + **OPTIONAL** (read if available, skip silently if missing): + - **STKE** (Stakeholder Analysis) + - Extract: Affected populations, decision authority, accountability + - **DPIA** (Data Protection Impact Assessment) + - Extract: Data protection context, lawful basis, privacy risks + + **Read the template** (with user override support): + - **First**, check if `.arckit/templates/uk-gov-ai-playbook-template.md` exists in the project root + - **If found**: Read the user's customized template (user override takes precedence) + - **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/uk-gov-ai-playbook-template.md` (default) + + > **Tip**: Users can customize templates with `/arckit:customize ai-playbook` + +4. 
**Read external documents and policies**: + - Read any **external documents** listed in the project context (`external/` files) — extract AI ethics policies, model cards, algorithmic impact assessments, bias testing results + - Read any **global policies** listed in the project context (`000-global/policies/`) — extract AI governance framework, approved AI/ML platforms, responsible AI guidelines + - Read any **enterprise standards** in `projects/000-global/external/` — extract enterprise AI strategy, responsible AI frameworks, cross-project AI maturity assessments + - If no external docs exist but they would improve the output, ask: "Do you have any AI governance policies, model cards, or ethical AI assessments? I can read PDFs directly. Place them in `projects/{project-dir}/external/` and re-run, or skip." + - **Citation traceability**: When referencing content from external documents, follow the citation instructions in `${CLAUDE_PLUGIN_ROOT}/references/citation-instructions.md`. Place inline citation markers (e.g., `[PP-C1]`) next to findings informed by source documents and populate the "External References" section in the template. + +5. 
**Assess the 10 Core Principles**: + +### Principle 1: Understanding AI + +- Team understands AI limitations (no reasoning, contextual awareness) +- Realistic expectations (hallucinations, biases, edge cases) +- Appropriate use case for AI capabilities + +### Principle 2: Lawful and Ethical Use + +- **CRITICAL**: DPIA, EqIA, Human Rights assessment completed +- UK GDPR compliance +- Equality Act 2010 compliance +- Data Ethics Framework applied +- Legal/compliance team engaged early + +### Principle 3: Security + +- Cyber security assessment (NCSC guidance) +- AI-specific threats assessed: + - Prompt injection + - Data poisoning + - Model theft + - Adversarial attacks + - Model inversion +- Security controls implemented +- Red teaming conducted (for high-risk) + +### Principle 4: Human Control + +- **CRITICAL for HIGH-RISK**: Human-in-the-loop required +- Human override capability +- Escalation process documented +- Staff trained on AI limitations +- Clear responsibilities assigned + +**Human Oversight Models**: + +- **Human-in-the-loop**: Review EVERY decision (required for high-risk) +- **Human-on-the-loop**: Periodic/random review +- **Human-in-command**: Can override at any time +- **Fully automated**: AI acts autonomously (HIGH-RISK - justify!) 
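The oversight models above form an ordered ladder, and the risk tiers defined earlier each imply a minimum rung. A minimal, hypothetical sketch of that gate is shown below — it is not part of ArcKit, and the tier-to-model mapping for medium and low risk is an illustrative assumption (only the high-risk/human-in-the-loop pairing is stated by the playbook guidance above):

```python
from enum import IntEnum

class Oversight(IntEnum):
    # Ordered from weakest to strongest human control
    FULLY_AUTOMATED = 0    # AI acts autonomously (must be justified)
    HUMAN_IN_COMMAND = 1   # human can override at any time
    HUMAN_ON_THE_LOOP = 2  # periodic/random human review
    HUMAN_IN_THE_LOOP = 3  # human reviews every decision

# Assumed minimum oversight per risk tier: high-risk mandates
# human-in-the-loop; the medium/low mappings are illustrative.
MINIMUM_OVERSIGHT = {
    "high": Oversight.HUMAN_IN_THE_LOOP,
    "medium": Oversight.HUMAN_ON_THE_LOOP,
    "low": Oversight.HUMAN_IN_COMMAND,
}

def oversight_gap(risk_level: str, model: Oversight) -> bool:
    """True if the chosen oversight model falls short of the minimum
    required for the system's risk level."""
    required = MINIMUM_OVERSIGHT[risk_level.lower()]
    return model < required
```

For example, a fully automated high-risk system fails this gate (`oversight_gap("high", Oversight.FULLY_AUTOMATED)` is `True`), which mirrors the blocking finding in the benefits-chatbot walkthrough later in this command.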
+ +### Principle 5: Lifecycle Management + +- Lifecycle plan documented (selection → decommissioning) +- Model versioning and change management +- Monitoring and performance tracking +- Model drift detection +- Retraining schedule +- Decommissioning plan + +### Principle 6: Right Tool Selection + +- Problem clearly defined +- Alternatives considered (non-AI, simpler solutions) +- Cost-benefit analysis +- AI adds genuine value +- Success metrics defined +- NOT using AI just because it's trendy + +### Principle 7: Collaboration + +- Cross-government collaboration (GDS, CDDO, AI Standards Hub) +- Academia, industry, civil society engagement +- Knowledge sharing +- Contributing to government AI community + +### Principle 8: Commercial Partnership + +- Procurement team engaged early +- Contract includes AI-specific terms: + - Performance metrics and SLAs + - Explainability requirements + - Bias audits + - Data rights and ownership + - Exit strategy (data portability) + - Liability for AI failures + +### Principle 9: Skills and Expertise + +- Team composition verified: + - AI/ML technical expertise + - Data science + - Ethical AI expertise + - Domain expertise + - User research + - Legal/compliance + - Cyber security +- Training provided on AI fundamentals, ethics, bias + +### Principle 10: Organizational Alignment + +- AI Governance Board approval +- AI strategy alignment +- Senior Responsible Owner (SRO) assigned +- Assurance team engaged +- Risk management process followed + +6. 
**Assess the 6 Ethical Themes**: + +### Theme 1: Safety, Security, and Robustness + +- Safety testing (no harmful outputs) +- Robustness testing (edge cases) +- Fail-safe mechanisms +- Incident response plan + +### Theme 2: Transparency and Explainability + +- **MANDATORY**: Algorithmic Transparency Recording Standard (ATRS) published +- System documented publicly (where appropriate) +- Decision explanations available to affected persons +- Model card/factsheet published + +### Theme 3: Fairness, Bias, and Discrimination + +- Bias assessment completed +- Training data reviewed for bias +- Fairness metrics calculated across protected characteristics: + - Gender + - Ethnicity + - Age + - Disability + - Religion + - Sexual orientation +- Bias mitigation techniques applied +- Ongoing monitoring for bias drift + +### Theme 4: Accountability and Responsibility + +- Clear ownership (SRO, Product Owner) +- Decision-making process documented +- Audit trail of all AI decisions +- Incident response procedures +- Accountability for errors defined + +### Theme 5: Contestability and Redress + +- Right to contest AI decisions enabled +- Human review process for contested decisions +- Appeal mechanism documented +- Redress process for those harmed +- Response times defined (e.g., 28 days) + +### Theme 6: Societal Wellbeing and Public Good + +- Positive societal impact assessment +- Environmental impact considered (carbon footprint) +- Benefits distributed fairly +- Negative impacts mitigated +- Alignment with public values + +7. 
**Generate comprehensive assessment**: + +Create detailed report with: + +**Executive Summary**: + +- Overall score (X/160 points, Y%) +- Risk level (High/Medium/Low) +- Compliance status (Excellent/Good/Adequate/Poor) +- Critical issues +- Go/No-Go decision + +**10 Principles Assessment** (each 0-10): + +- Compliance status (✅/⚠️/❌) +- Evidence gathered +- Findings +- Gaps +- Score + +**6 Ethical Themes Assessment** (each 0-10): + +- Compliance status +- Evidence +- Findings +- Gaps +- Score + +**Risk-Based Decision**: + +- **HIGH-RISK**: MUST score ≥90%, ALL principles met, human-in-the-loop REQUIRED +- **MEDIUM-RISK**: SHOULD score ≥75%, critical principles met +- **LOW-RISK**: SHOULD score ≥60%, basic safeguards in place + +**Mandatory Documentation Checklist**: + +- [ ] ATRS (Algorithmic Transparency Recording Standard) +- [ ] DPIA (Data Protection Impact Assessment) +- [ ] EqIA (Equality Impact Assessment) +- [ ] Human Rights Assessment +- [ ] Security Risk Assessment +- [ ] Bias Audit Report +- [ ] User Research Report + +**Action Plan**: + +- High priority (before deployment) +- Medium priority (within 3 months) +- Low priority (continuous improvement) + +8. **Map to existing ArcKit artifacts**: + +**Link to Requirements**: + +- Principle 2 (Lawful) → NFR-C-xxx (GDPR compliance requirements) +- Principle 3 (Security) → NFR-S-xxx (security requirements) +- Principle 4 (Human Control) → FR-xxx (human review features) +- Theme 3 (Fairness) → NFR-E-xxx (equity/fairness requirements) + +**Link to Design Reviews**: + +- Check HLD addresses AI Playbook principles +- Verify DLD includes human oversight mechanisms +- Ensure security controls for AI-specific threats + +**Link to TCoP**: + +- AI Playbook complements TCoP +- TCoP Point 6 (Secure) aligns with Principle 3 +- TCoP Point 7 (Privacy) aligns with Principle 2 + +9. 
**Provide risk-appropriate guidance**: + +**For HIGH-RISK AI systems**: + +- **STOP**: Do NOT deploy without meeting ALL principles +- Human-in-the-loop MANDATORY (review every decision) +- ATRS publication MANDATORY +- DPIA, EqIA, Human Rights assessments MANDATORY +- Quarterly audits REQUIRED +- AI Governance Board approval REQUIRED +- Senior leadership sign-off REQUIRED + +**For MEDIUM-RISK AI**: + +- Strong human oversight required +- Critical principles must be met (2, 3, 4) +- ATRS recommended +- DPIA likely required +- Annual audits + +**For LOW-RISK AI**: + +- Basic safeguards sufficient +- Human oversight recommended +- Periodic review (annual) +- Continuous improvement mindset + +10. **Highlight mandatory requirements**: + +**ATRS (Algorithmic Transparency Recording Standard)**: + +- MANDATORY for central government departments +- MANDATORY for arm's length bodies +- Publish on department website +- Update when system changes significantly + +**DPIAs (Data Protection Impact Assessments)**: + +- MANDATORY for AI processing personal data +- Must be completed BEFORE deployment +- Must be reviewed and updated regularly + +**Equality Impact Assessments (EqIA)**: + +- MANDATORY to assess impact on protected characteristics +- Must document how discrimination is prevented + +**Human Rights Assessments**: + +- MANDATORY for decisions affecting rights +- Must consider ECHR (European Convention on Human Rights) +- Document how rights are protected + +--- + +**CRITICAL - Auto-Populate Document Control Fields**: + +Before completing the document, populate ALL document control fields in the header: + +**Construct Document ID**: + +- **Document ID**: `ARC-{PROJECT_ID}-AIPB-v{VERSION}` (e.g., `ARC-001-AIPB-v1.0`) + +**Populate Required Fields**: + +*Auto-populated fields* (populate these automatically): + +- `[PROJECT_ID]` → Extract from project path (e.g., "001" from "projects/001-project-name") +- `[VERSION]` → "1.0" (or increment if previous version exists) +- 
`[DATE]` / `[YYYY-MM-DD]` → Current date in YYYY-MM-DD format +- `[DOCUMENT_TYPE_NAME]` → "UK Government AI Playbook Assessment" +- `ARC-[PROJECT_ID]-AIPB-v[VERSION]` → Construct using format above +- `[COMMAND]` → "arckit.ai-playbook" + +*User-provided fields* (extract from project metadata or user input): + +- `[PROJECT_NAME]` → Full project name from project metadata or user input +- `[OWNER_NAME_AND_ROLE]` → Document owner (prompt user if not in metadata) +- `[CLASSIFICATION]` → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user) + +*Calculated fields*: + +- `[YYYY-MM-DD]` for Review Date → Current date + 30 days + +*Pending fields* (leave as [PENDING] until manually updated): + +- `[REVIEWER_NAME]` → [PENDING] +- `[APPROVER_NAME]` → [PENDING] +- `[DISTRIBUTION_LIST]` → Default to "Project Team, Architecture Team" or [PENDING] + +**Populate Revision History**: + +```markdown +| 1.0 | {DATE} | ArcKit AI | Initial creation from `/arckit:ai-playbook` command | [PENDING] | [PENDING] | +``` + +**Populate Generation Metadata Footer**: + +The footer should be populated with: + +```markdown +**Generated by**: ArcKit `/arckit:ai-playbook` command +**Generated on**: {DATE} {TIME} GMT +**ArcKit Version**: {ARCKIT_VERSION} +**Project**: {PROJECT_NAME} (Project {PROJECT_ID}) +**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"] +**Generation Context**: [Brief note about source documents used] +``` + +--- + +Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **AIPB** per-type checks pass. Fix any failures before proceeding. + +11. **Write comprehensive output**: + +Output location: `projects/{project-dir}/ARC-{PROJECT_ID}-AIPB-v1.0.md` + +Use template structure from `uk-gov-ai-playbook-template.md` + +12. 
**Provide next steps**: + +After assessment: + +- Summary of compliance level +- Critical blocking issues +- Recommended actions with priorities +- Timeline for remediation +- Next review date + +## Example Usage + +User: `/arckit:ai-playbook Assess AI Playbook compliance for benefits eligibility chatbot using GPT-4` + +You should: + +- Identify system: Benefits eligibility chatbot, Generative AI (LLM) +- Determine risk: **HIGH-RISK** (affects access to benefits - fundamental right) +- Assess 10 principles: + - 1. Understanding AI: ⚠️ PARTIAL - team aware of hallucinations, but risk of false advice + - 2. Lawful/Ethical: ❌ NON-COMPLIANT - DPIA not yet completed (BLOCKING) + - 3. Security: ✅ COMPLIANT - prompt injection defenses, content filtering + - 4. Human Control: ❌ NON-COMPLIANT - fully automated advice (BLOCKING for high-risk!) + - 5. Lifecycle: ✅ COMPLIANT - monitoring, retraining schedule defined + - 6. Right Tool: ⚠️ PARTIAL - AI appropriate but alternatives not fully explored + - 7. Collaboration: ✅ COMPLIANT - engaged with GDS, DWP + - 8. Commercial: ✅ COMPLIANT - OpenAI contract includes audit rights + - 9. Skills: ✅ COMPLIANT - multidisciplinary team + - 10. Organizational: ✅ COMPLIANT - SRO assigned, governance in place +- Assess 6 ethical themes: + - 1. Safety: ⚠️ PARTIAL - content filtering but some harmful outputs in testing + - 2. Transparency: ❌ NON-COMPLIANT - ATRS not yet published (MANDATORY) + - 3. Fairness: ⚠️ PARTIAL - bias testing started, gaps in demographic coverage + - 4. Accountability: ✅ COMPLIANT - clear ownership, audit trail + - 5. Contestability: ❌ NON-COMPLIANT - no human review process (BLOCKING) + - 6. 
Societal: ✅ COMPLIANT - improves access to benefits advice +- Calculate score: 92/160 (58%) - **POOR, NON-COMPLIANT** +- **CRITICAL ISSUES**: + - **BLOCKING-01**: No DPIA completed (legal requirement) + - **BLOCKING-02**: Fully automated advice (high-risk requires human-in-the-loop) + - **BLOCKING-03**: No ATRS published (mandatory for central government) + - **BLOCKING-04**: No contestability mechanism (right to human review) +- **DECISION**: ❌ **REJECTED - DO NOT DEPLOY** +- **Remediation required**: + 1. Complete DPIA immediately + 2. Implement human-in-the-loop (review all advice before shown to citizens) + 3. Publish ATRS + 4. Create contestability process + 5. Re-assess after remediation +- Write to `projects/NNN-benefits-chatbot/ARC-NNN-AIPB-v1.0.md` +- **Summary**: "HIGH-RISK AI system with 4 blocking issues. Cannot deploy until ALL principles met." + +## Important Notes + +- AI Playbook is **MANDATORY** guidance for all UK government AI systems +- HIGH-RISK AI cannot deploy without meeting ALL principles +- ATRS publication is MANDATORY for central government +- DPIAs are MANDATORY for AI processing personal data +- Human oversight is REQUIRED for high-risk decisions +- Non-compliance can result in legal challenges, ICO fines, public backlash +- "Move fast and break things" does NOT apply to government AI +- When in doubt, err on side of caution (add more safeguards) + +- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji + +## Related Frameworks + +- **Technology Code of Practice** (TCoP) - broader technology governance +- **Data Ethics Framework** - responsible data use +- **Service Standard** - service design and delivery +- **NCSC Guidance** - cyber security for AI systems +- **ICO AI Guidance** - data protection and AI + +## Resources + +- AI Playbook: 
https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government +- ATRS: https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard +- Data Ethics Framework: https://www.gov.uk/government/publications/data-ethics-framework +- ICO AI Guidance: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/ diff --git a/arckit-copilot/commands/analyze.md b/arckit-copilot/commands/analyze.md new file mode 100644 index 00000000..308b5d0d --- /dev/null +++ b/arckit-copilot/commands/analyze.md @@ -0,0 +1,1600 @@ +--- +description: Perform comprehensive governance quality analysis across architecture artifacts (requirements, principles, designs, assessments) +argument-hint: "" +effort: high +--- + +## User Input + +```text +$ARGUMENTS +``` + +## Goal + +Identify inconsistencies, gaps, ambiguities, and compliance issues across all architecture governance artifacts before implementation or procurement. This command performs **non-destructive analysis** and produces a structured report saved to the project directory for tracking and audit purposes. + +## Operating Constraints + +**Non-Destructive Analysis**: Do **not** modify existing artifacts. Generate a comprehensive analysis report and save it to the project directory for tracking, sharing, and audit trail. + +**Architecture Principles Authority**: The architecture principles (`ARC-000-PRIN-*.md` in `projects/000-global/`) are **non-negotiable**. Any conflicts with principles are automatically CRITICAL and require adjustment of requirements, designs, or vendor proposals—not dilution or reinterpretation of the principles. + +**UK Government Compliance Authority** (if applicable): TCoP, AI Playbook, and ATRS compliance are mandatory for UK government projects. Non-compliance is CRITICAL. + +## Execution Steps + +### 0. 
Read the Template + +**Read the template** (with user override support): + +- **First**, check if `.arckit/templates/analysis-report-template.md` exists in the project root +- **If found**: Read the user's customized template (user override takes precedence) +- **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/analysis-report-template.md` (default) + +> **Tip**: Users can customize templates with `/arckit:customize analyze` + +### Hook-Aware Shortcut + +If the hook has injected a `## Governance Scan Pre-processor Complete` section in the context, follow this protocol. If no hook data is present, proceed with Steps 1-2 as normal. + +**Rule 1 — Hook tables are primary data.** Use them directly for all detection passes. Do NOT re-read any artifact file listed in the Artifact Inventory table. + +**Rule 2 — Targeted reads only.** When a detection pass needs evidence beyond hook tables, use Grep (search for specific patterns) or Read with offset/limit (specific sections). NEVER read an entire artifact file. + +**Rule 3 — Skip Steps 1-2 entirely.** Go directly to Step 3. Still read the template (Step 0) for output formatting. + +#### Hook Data to Detection Pass Mapping + +Use this table to identify the primary data source for each detection pass. Only perform a targeted read when the hook data is genuinely insufficient for a specific check. + +| Detection Pass | Primary Hook Data | Targeted Read (only if needed) | +|---|---|---| +| A. Requirements Quality | Requirements Inventory, Priority Distribution, Placeholder Counts | Hook data sufficient for all Pass A checks | +| B. Principles Alignment | Principles table + Requirements Inventory | Grep PRIN files for full validation criteria of specific principles flagged as violated | +| C. Req-Design Traceability | Coverage Summary, Orphan Requirements, Cross-Reference Map | Hook data sufficient for all Pass C checks | +| D. 
Vendor Procurement | Vendor Inventory + Cross-Reference Map | Grep vendor HLD/DLD for specific requirement IDs missing from cross-ref map | +| E. Stakeholder Traceability | Artifact Inventory (STKE presence) + Requirements Inventory | Grep STKE for driver-goal-outcome chains when validating orphan requirements | +| F. Risk Management | Risks table + Requirements Inventory | Grep RISK file for "Risk Appetite" section only (appetite thresholds) | +| G. Business Case | Artifact Inventory (SOBC presence) + Risks table | Grep SOBC for benefits table and option analysis section | +| H. Data Model Consistency | Requirements Inventory (DR-xxx) + Cross-Reference Map | Grep DATA file for entity catalog when validating DR-entity mapping | +| I. UK Gov Compliance | Compliance Artifact Presence | Grep TCOP for per-point scores; Grep AIPB for risk level and principle status | +| J. MOD SbD Compliance | Compliance Artifact Presence | Grep SECD-MOD for SbD principle scores and NIST CSF function scores | +| K. Cross-Artifact Consistency | All hook tables (Document Control, coverage, cross-refs) | Hook data sufficient for all Pass K checks | + +#### Targeted Read Examples + +Correct (surgical): + +- `Grep "Risk Appetite" in projects/001-*/ARC-*-RISK-*.md` then read only 10-20 lines around match +- `Grep "### 5\. Cloud" in projects/000-global/ARC-000-PRIN-*.md` to get one principle's full criteria +- `Read ARC-001-TCOP-v1.0.md offset=50 limit=30` to get just the scoring table + +Wrong (wasteful — this data is already in hook tables): + +- `Read ARC-001-REQ-v1.0.md` — entire requirements file (use Requirements Inventory table) +- `Read ARC-001-RISK-v1.0.md` — entire risk register (use Risks table) +- `Read ARC-000-PRIN-v1.0.md` — entire principles file (use Principles table, grep only for specific criteria) + +### 1. 
Discover Project Context + +Identify the project directory to analyze: + +- If user specifies project: Use specified project directory +- If only one project exists: Analyze that project +- If multiple projects: Ask user which project to analyze + +Expected structure: + +```text +projects/ +└── {project-dir}/ + ├── ARC-{PROJECT_ID}-STKE-v*.md (RECOMMENDED - stakeholder analysis) + ├── ARC-{PROJECT_ID}-RISK-v*.md (RECOMMENDED - risk register) + ├── ARC-{PROJECT_ID}-SOBC-v*.md (RECOMMENDED - business case) + ├── ARC-{PROJECT_ID}-REQ-v*.md (requirements) + ├── ARC-{PROJECT_ID}-DATA-v*.md (if DR-xxx requirements exist - data model) + ├── ARC-*-SOW-*.md (if vendor procurement) + ├── ARC-*-EVAL-*.md (if vendor procurement) + ├── vendors/ + │ └── {vendor-name}/ + │ ├── hld-v1.md + │ ├── dld-v1.md + │ └── reviews/ + ├── ARC-*-TCOP-*.md (if UK Gov) + ├── ARC-*-AIPB-*.md (if UK Gov AI) + ├── ARC-*-ATRS-*.md (if UK Gov AI) + ├── ARC-*-SECD-MOD-*.md (if MOD project) + └── ARC-{PROJECT_ID}-TRAC-v*.md (traceability matrix) +``` + +### 2. Load Artifacts (Progressive Disclosure) + +Load only minimal necessary context from each artifact: + +**From any `ARC-000-PRIN-*.md` file in `projects/000-global/`** (if exists): + +- Strategic principles (Cloud-First, API-First, etc.) 
+- Security principles +- Data principles +- Technology standards +- Compliance requirements + +**From any `ARC-*-STKE-*.md` file in `projects/{project-dir}/`** (if exists): + +- Stakeholder roster with power-interest grid +- Driver types (STRATEGIC, OPERATIONAL, FINANCIAL, COMPLIANCE, PERSONAL, RISK, CUSTOMER) +- Driver → Goal → Outcome traceability +- Conflicts and resolutions +- RACI matrix for governance + +**From any `ARC-*-RISK-*.md` file in `projects/{project-dir}/`** (if exists): + +- Risk categories (Strategic, Operational, Financial, Compliance, Reputational, Technology) +- Inherent vs Residual risk scores (5×5 matrix) +- Risk responses (4Ts: Tolerate, Treat, Transfer, Terminate) +- Risk owners (should align with RACI matrix) +- Risk appetite and tolerance levels + +**From any `ARC-*-SOBC-*.md` file in `projects/{project-dir}/`** (if exists): + +- Strategic Case (problem, drivers, stakeholder goals) +- Economic Case (options, benefits, NPV, ROI) +- Commercial Case (procurement strategy) +- Financial Case (budget, TCO) +- Management Case (governance, delivery, change, risks, benefits realization) + +**From any `ARC-*-REQ-*.md` file in `projects/{project-dir}/`** (if exists): + +- Business requirements (BR-xxx) +- Functional requirements (FR-xxx) +- Non-functional requirements (NFR-xxx) + - Security (NFR-S-xxx) + - Performance (NFR-P-xxx) + - Compliance (NFR-C-xxx) + - Accessibility (NFR-A-xxx) +- Integration requirements (INT-xxx) +- Data requirements (DR-xxx) +- Success criteria + +**From any `ARC-*-DATA-*.md` file in `projects/{project-dir}/`** (if exists): + +- Entity-Relationship Diagram (ERD) +- Entity catalog (E-001, E-002, etc.) 
+- PII identification and GDPR compliance +- Data governance matrix (owners, stewards, custodians) +- CRUD matrix (component access patterns) +- Data integration mapping (upstream/downstream) +- DR-xxx requirement traceability to entities + +**From `projects/{project-dir}/ARC-*-SOW-*.md`** (if exists): + +- Scope of work +- Deliverables +- Technical requirements +- Timeline and budget + +**From `projects/{project-dir}/vendors/{vendor}/hld-v*.md`** (if exists): + +- Architecture overview +- Component design +- Technology stack +- Security architecture +- Data architecture + +**From `projects/{project-dir}/vendors/{vendor}/dld-v*.md`** (if exists): + +- Component specifications +- API contracts +- Database schemas +- Security implementation + +**From UK Government Assessments** (if exist): + +- `ARC-*-TCOP-*.md`: TCoP compliance status +- `ARC-*-AIPB-*.md`: AI Playbook compliance status +- `ARC-*-ATRS-*.md`: ATRS record completeness + +**From MOD Assessment** (if exists): + +- `ARC-*-SECD-MOD-*.md`: MOD SbD compliance status + - 7 SbD Principles assessment + - NIST CSF (Identify, Protect, Detect, Respond, Recover) + - CAAT registration and self-assessment completion + - Three Lines of Defence + - Delivery Team Security Lead (DTSL) appointment + - Supplier attestation (for vendor-delivered systems) + +### 3. Build Semantic Models + +Create internal representations (do not include raw artifacts in output): + +**Stakeholder Traceability Matrix** (if ARC-*-STKE-*.md exists): + +- Each stakeholder with drivers, goals, outcomes +- RACI roles for governance +- Conflicts and resolutions +- Which requirements trace to which stakeholder goals? + +**Risk Coverage Matrix** (if ARC-*-RISK-*.md exists): + +- Each risk with category, inherent/residual scores, response +- Risk owners from RACI matrix +- Which requirements address risk mitigation? +- Which design elements mitigate risks? 
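These matrices need not be elaborate; a minimal sketch of the risk and requirements models (illustrative Python — the field names, IDs, and severity bands are assumptions for illustration, not an ArcKit schema):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                       # e.g. "NFR-S-001"
    priority: str                     # MUST / SHOULD / MAY
    design_refs: list = field(default_factory=list)  # HLD/DLD sections covering it

@dataclass
class Risk:
    risk_id: str
    inherent: str                     # 5x5 matrix band, e.g. "Very High"
    mitigating_reqs: list = field(default_factory=list)

def uncovered_musts(reqs):
    """MUST requirements with zero design coverage (CRITICAL findings)."""
    return [r.req_id for r in reqs if r.priority == "MUST" and not r.design_refs]

def unmitigated_high_risks(risks):
    """High/Very High inherent risks with no mitigation requirement."""
    return [r.risk_id for r in risks
            if r.inherent in ("High", "Very High") and not r.mitigating_reqs]

reqs = [
    Requirement("FR-001", "MUST", design_refs=["HLD section 3"]),
    Requirement("NFR-S-001", "MUST"),
]
risks = [Risk("R-005", "High"), Risk("R-001", "Medium", mitigating_reqs=["BR-003"])]
print(uncovered_musts(reqs))          # ['NFR-S-001']
print(unmitigated_high_risks(risks))  # ['R-005']
```

Holding the models as structures like these keeps the later detection passes to simple list comprehensions rather than repeated file reads.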
+ +**Business Case Alignment Matrix** (if ARC-*-SOBC-*.md exists): + +- Benefits mapping to stakeholder goals +- Benefits mapping to requirements +- Costs mapping to requirements scope +- Risks from risk register reflected in Management Case + +**Requirements Inventory**: + +- Each requirement with ID, type, priority (MUST/SHOULD/MAY) +- Map to principles (which principles does this requirement satisfy?) +- Map to stakeholder goals (which goals does this requirement address?) +- Map to success criteria + +**Data Model Coverage Matrix** (if ARC-*-DATA-*.md exists): + +- Each DR-xxx requirement mapped to entities +- Each entity with PII flags, governance owners, CRUD access +- Data owners from stakeholder RACI matrix +- Database schema in DLD matches data model entities + +**Principles Compliance Matrix**: + +- Each principle with validation criteria +- Which requirements/designs satisfy each principle? + +**Design Coverage Matrix**: + +- Which requirements are addressed in HLD/DLD? +- Which components implement which requirements? + +**UK Government Compliance Matrix** (if applicable): + +- TCoP: 13 points with compliance status +- AI Playbook: 10 principles + 6 themes with compliance status +- ATRS: Mandatory fields completion status + +**MOD Compliance Matrix** (if ARC-*-SECD-MOD-*.md exists): + +- 7 SbD Principles with compliance status +- NIST CSF functions (Identify, Protect, Detect, Respond, Recover) +- CAAT registration status +- Three Lines of Defence implementation + +### 4. Detection Passes (Token-Efficient Analysis) + +Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary. + +#### A. 
Requirements Quality Analysis + +**Duplication Detection**: + +- Near-duplicate requirements across BR/FR/NFR categories +- Redundant requirements that should be consolidated + +**Ambiguity Detection**: + +- Vague adjectives lacking measurable criteria ("fast", "secure", "scalable", "intuitive") +- Missing acceptance criteria for functional requirements +- Unresolved placeholders (TODO, TBD, TBC, ???, ``) + +**Underspecification**: + +- Requirements with verbs but missing measurable outcomes +- Missing non-functional requirements (no security, no performance, no compliance) +- Missing data requirements (system handles sensitive data but no DR-xxx) +- Missing integration requirements (integrates with external systems but no INT-xxx) + +**Priority Issues**: + +- All requirements marked as MUST (no prioritization) +- No MUST requirements (everything is optional) +- Conflicting priorities + +#### B. Architecture Principles Alignment + +**Principle Violations** (CRITICAL): + +- Requirements or designs that violate architecture principles +- Technology choices that conflict with approved stack +- Security approaches that violate security-by-design principle +- Cloud architecture that violates Cloud-First principle + +**Missing Principle Coverage**: + +- Principles not reflected in requirements +- Principles not validated in design reviews + +**Principle Drift**: + +- Inconsistent interpretation of principles across artifacts + +#### C. 
Requirements → Design Traceability + +**Coverage Gaps**: + +- Requirements with zero design coverage (not addressed in HLD/DLD) +- Critical MUST requirements not covered +- Security requirements (NFR-S-xxx) not reflected in security architecture +- Performance requirements (NFR-P-xxx) not validated in design +- Compliance requirements (NFR-C-xxx) not addressed + +**Orphan Design Elements**: + +- Components in HLD/DLD not mapped to any requirement +- Technology choices not justified by requirements +- Architecture complexity not justified by requirements + +**Traceability Completeness**: + +- Does traceability matrix exist? +- Are all requirements mapped? +- Are all design elements mapped? + +#### D. Vendor Procurement Analysis (if applicable) + +**SOW Quality**: + +- SOW requirements match ARC-*-REQ-*.md? +- All technical requirements from ARC-*-REQ-*.md included in SOW? +- Missing evaluation criteria? +- Ambiguous acceptance criteria? + +**Vendor Evaluation**: + +- Evaluation criteria align with requirements priorities? +- Scoring methodology fair and unbiased? +- All critical requirements included in evaluation? + +**Vendor Design Review**: + +- HLD addresses all SOW requirements? +- Technology stack matches approved standards? +- Security architecture meets NFR-S requirements? +- Performance architecture meets NFR-P requirements? + +#### E. Stakeholder Traceability Analysis (if ARC-*-STKE-*.md exists) + +**Stakeholder Coverage**: + +- All requirements traced to stakeholder goals? +- Orphan requirements (not linked to any stakeholder goal)? +- Requirements missing stakeholder justification? + +**Conflict Resolution**: + +- Requirement conflicts documented and resolved? +- Stakeholder impact of conflict resolutions documented? +- Decision authority identified for conflicting requirements? + +**RACI Governance Alignment**: + +- Risk owners from stakeholder RACI matrix? +- Data owners from stakeholder RACI matrix? +- Delivery roles aligned with RACI assignments? 
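The orphan-requirement check above reduces to a set difference; a sketch (illustrative Python — the shape of the goal-to-requirement mapping is an assumption):

```python
# Requirements cited by at least one stakeholder goal (from STKE traceability)
goal_traces = {
    "G-1": ["BR-002", "NFR-P-003"],
    "G-5": ["FR-008"],
}
all_requirements = {"BR-002", "FR-008", "FR-009", "NFR-P-003", "NFR-S-001"}

traced = {req for reqs in goal_traces.values() for req in reqs}
orphans = sorted(all_requirements - traced)  # no stakeholder justification
print(orphans)  # ['FR-009', 'NFR-S-001']
```

Any requirement in `orphans` lacks stakeholder justification and should surface as a finding in this pass.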
+ +**Missing Stakeholder Analysis**: + +- Project has requirements but no stakeholder analysis document (RECOMMENDED to run `/arckit:stakeholders`) + +#### F. Risk Management Analysis (if ARC-*-RISK-*.md exists) + +**Risk Coverage**: + +- High/Very High inherent risks have mitigation requirements? +- Risks reflected in design (risk mitigation controls in HLD/DLD)? +- Risk owners assigned and aligned with RACI matrix? +- Risk responses appropriate (4Ts: Tolerate, Treat, Transfer, Terminate)? + +**Risk-SOBC Alignment** (if ARC-*-SOBC-*.md exists): + +- Strategic risks reflected in Strategic Case urgency? +- Financial risks reflected in Economic Case cost contingency? +- Risks from risk register included in Management Case Part E? + +**Risk-Requirements Alignment**: + +- Risk mitigation actions translated into requirements? +- Security risks addressed by NFR-S-xxx requirements? +- Compliance risks addressed by NFR-C-xxx requirements? + +**Missing Risk Assessment**: + +- Project has requirements but no risk register document (RECOMMENDED to run `/arckit:risk`) + +#### G. Business Case Alignment (if ARC-*-SOBC-*.md exists) + +**Benefits Traceability**: + +- All benefits in Economic Case mapped to stakeholder goals? +- All benefits supported by requirements? +- Benefits measurable and verifiable? +- Benefits realization plan in Management Case? + +**Option Analysis Quality**: + +- Do Nothing baseline included? +- Options analysis covers build vs buy? +- Recommended option justified by requirements scope? +- Costs realistic for requirements complexity? + +**SOBC-Requirements Alignment**: + +- Strategic Case drivers reflected in requirements? +- Economic Case benefits delivered by requirements? +- Financial Case budget adequate for requirements scope? +- Management Case delivery plan realistic for requirements? + +**SOBC-Risk Alignment**: + +- Risks from risk register included in Management Case? +- Cost contingency reflects financial risks? 
+- Strategic risks justify urgency ("Why Now?")? + +**Missing Business Case**: + +- Project has requirements but no SOBC (RECOMMENDED for major investments to run `/arckit:sobc`) + +#### H. Data Model Consistency (if ARC-*-DATA-*.md exists) + +**DR-xxx Requirements Coverage**: + +- All DR-xxx requirements mapped to entities? +- All entities traced back to DR-xxx requirements? +- Missing data requirements (system handles data but no DR-xxx)? + +**Data Model-Design Alignment**: + +- Database schemas in DLD match data model entities? +- CRUD matrix aligns with component design in HLD? +- Data integration flows in HLD match data model upstream/downstream mappings? + +**Data Governance Alignment**: + +- Data owners from stakeholder RACI matrix? +- Data stewards and custodians assigned? +- PII identified and GDPR compliance documented? + +**Data Model Quality**: + +- ERD exists and renderable (Mermaid syntax)? +- Entities have complete attribute specifications? +- Relationships properly defined (cardinality, foreign keys)? +- Data quality metrics defined and measurable? + +**Missing Data Model**: + +- Project has DR-xxx requirements but no data model (RECOMMENDED to run `/arckit:data-model`) + +#### I. UK Government Compliance (if applicable) + +**Technology Code of Practice (TCoP)**: + +- Assessment exists? +- All 13 points assessed? +- Critical issues resolved? +- Evidence provided for each point? + +**AI Playbook** (for AI systems): + +- Assessment exists for AI/ML systems? +- Risk level determined (High/Medium/Low)? +- All 10 principles assessed? +- All 6 ethical themes assessed? +- Mandatory assessments completed (DPIA, EqIA, Human Rights)? +- Bias testing completed? +- Human oversight model defined? + +**ATRS** (for AI systems): + +- ATRS record exists for algorithmic tools? +- Tier 1 (public summary) completed? +- Tier 2 (technical details) completed? +- All mandatory fields filled? +- Ready for GOV.UK publication? 
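A rough ATRS completeness check can be mechanised by scanning the record for unchecked mandatory fields (illustrative Python — the checkbox convention mirrors the report format below; the sample field names are assumptions):

```python
import re

atrs_text = """\
- [x] Senior Responsible Owner
- [ ] Bias testing results
- [x] Tier 1 public summary
"""

# Unchecked "- [ ]" items are treated as missing mandatory fields
missing = re.findall(r"^- \[ \] (.+)$", atrs_text, flags=re.MULTILINE)
total = len(re.findall(r"^- \[[ x]\]", atrs_text, flags=re.MULTILINE))
completeness = 100 * (total - len(missing)) // total
print(missing)        # ['Bias testing results']
print(completeness)   # 66
```

Any non-empty `missing` list blocks the "Ready for GOV.UK publication" status for a central government AI system.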
+ +**Compliance Alignment**: + +- Requirements aligned with TCoP? +- Design complies with TCoP (Cloud First, Open Standards, Secure)? +- AI requirements comply with AI Playbook? +- ATRS record reflects requirements and design? + +#### J. MOD Secure by Design Compliance (if ARC-*-SECD-MOD-*.md exists) + +**7 SbD Principles Assessment**: + +- Principle 1 (Understand and Define Context): Context documented, data classification determined? +- Principle 2 (Apply Security from the Start): Security embedded from inception, not bolt-on? +- Principle 3 (Apply Defence in Depth): Layered security controls implemented? +- Principle 4 (Follow Secure Design Patterns): NCSC/NIST guidance applied? +- Principle 5 (Continuously Manage Risk): Risk register maintained, continuous testing? +- Principle 6 (Secure the Supply Chain): SBOM maintained, supplier attestations obtained? +- Principle 7 (Enable Through-Life Assurance): Continuous monitoring, incident response capability? + +**NIST Cybersecurity Framework Coverage**: + +- **Identify**: Asset inventory, business environment, governance, risk assessment? +- **Protect**: Access control, data security, protective technology, training? +- **Detect**: Continuous monitoring, anomaly detection, security testing? +- **Respond**: Incident response plan, communications to MOD CERT, analysis? +- **Recover**: Recovery planning, backup/DR/BC, post-incident improvements? + +**Continuous Assurance Process** (replaced RMADS August 2023): + +- CAAT (Cyber Activity and Assurance Tracker) registration completed? +- CAAT self-assessment question sets completed based on 7 SbD Principles? +- CAAT continuously updated (not one-time submission)? +- Delivery Team Security Lead (DTSL) appointed? +- Security Assurance Coordinator (SAC) appointed (if applicable)? +- Project Security Officer (PSyO) appointed for SECRET+ systems? + +**Three Lines of Defence Implementation**: + +- **First Line**: Delivery team owns security, DTSL leads day-to-day management? 
+- **Second Line**: Technical Coherence assurance, security policies, independent reviews?
+- **Third Line**: Independent audit, penetration testing, external audit (NAO, GIAA)?
+
+**Supplier Attestation** (if vendor-delivered system):
+
+- Suppliers attest systems are secure (ISN 2023/10)?
+- Supplier-owned continuous assurance (not MOD accreditation)?
+- Supplier security requirements in contracts?
+
+**Classification-Specific Requirements**:
+
+- OFFICIAL: Cyber Essentials baseline, basic access controls?
+- OFFICIAL-SENSITIVE: Cyber Essentials Plus, MFA, enhanced logging, DPIA?
+- SECRET: SC personnel, CESG crypto, air-gap/assured network, enhanced physical security?
+- TOP SECRET: DV personnel, compartmented security, strict access control?
+
+**Critical Issues (Deployment Blockers)**:
+
+- SECRET+ data without appropriate controls?
+- No encryption at rest or in transit?
+- Personnel lacking security clearances?
+- No threat model or risk assessment?
+- Critical vulnerabilities unpatched?
+
+**Missing MOD SbD Assessment**:
+
+- Project for MOD but no SbD assessment (MANDATORY to run `/arckit:mod-secure`)
+
+#### K. Consistency Across Artifacts
+
+**Terminology Drift**:
+
+- Same concept named differently across files
+- Inconsistent capitalization/formatting of terms
+- Conflicting definitions
+
+**Data Model Consistency**:
+
+- Data entities referenced in requirements match design
+- Database schemas in DLD match data requirements (DR-xxx)
+- Data sharing agreements align across artifacts
+
+**Technology Stack Consistency**:
+
+- Stack choices in HLD match principles
+- Technology in DLD matches HLD
+- Third-party dependencies consistently listed
+
+**Timeline/Budget Consistency** (if vendor procurement):
+
+- SOW timeline realistic for requirements scope?
+- Budget adequate for requirements complexity?
+- Vendor proposal timeline/budget match SOW?
+
+#### L. 
Security & Compliance Analysis + +**Security Coverage**: + +- Security requirements (NFR-S-xxx) exist? +- Threat model documented? +- Security architecture in HLD? +- Security implementation in DLD? +- Security testing plan? + +**Compliance Coverage**: + +- Compliance requirements (NFR-C-xxx) exist? +- Regulatory requirements identified (GDPR, PCI-DSS, HIPAA, etc.)? +- Compliance validated in design? +- Audit requirements addressed? + +**Data Protection**: + +- Personal data handling defined? +- GDPR/UK GDPR compliance addressed? +- Data retention policy defined? +- Data breach procedures defined? + +### 5. Severity Assignment + +Use this heuristic to prioritise findings: + +**CRITICAL**: + +- Violates architecture principles (MUST) +- Missing core artifact (no ARC-*-REQ-*.md) +- MUST requirement with zero design coverage +- Stakeholder: Orphan requirements (not linked to any stakeholder goal) +- Risk: High/Very High risks with no mitigation in requirements or design +- Risk: Risk owners not from stakeholder RACI matrix (governance gap) +- SOBC: Benefits not traced to stakeholder goals or requirements +- SOBC: Costs inadequate for requirements scope (budget shortfall) +- Data Model: DR-xxx requirements with no entity mapping +- Data Model: PII not identified (GDPR compliance failure) +- Data Model: Data owners not from stakeholder RACI matrix +- UK Gov: TCoP non-compliance for mandatory points +- UK Gov: AI Playbook blocking issues for high-risk AI +- UK Gov: Missing mandatory ATRS for central government AI +- MOD: CAAT not registered (MANDATORY for all programmes) +- MOD: No DTSL appointed (required from Discovery phase) +- MOD: SECRET+ data without classification-specific controls +- MOD: Supplier attestation missing for vendor-delivered system +- Security requirement with no design coverage +- Compliance requirement with no validation + +**HIGH**: + +- Duplicate or conflicting requirements +- Ambiguous security/performance attribute +- Untestable acceptance 
criterion +- Missing non-functional requirements category (no security, no performance) +- Stakeholder: Requirement conflicts not documented or resolved +- Risk: Medium risks with no mitigation plan +- Risk: Risk responses not appropriate (4Ts misapplied) +- SOBC: Benefits not measurable or verifiable +- SOBC: Option analysis missing Do Nothing baseline +- Data Model: Database schema in DLD doesn't match data model entities +- Data Model: CRUD matrix doesn't align with HLD component design +- Vendor design doesn't address SOW requirements +- UK Gov: TCoP partial compliance with gaps +- UK Gov: AI Playbook non-compliance for medium-risk AI +- MOD: SbD Principles partially compliant with significant gaps +- MOD: NIST CSF functions not fully covered + +**MEDIUM**: + +- Terminology drift +- Missing optional non-functional requirement coverage +- Underspecified edge case +- Minor traceability gaps +- Documentation incomplete +- Stakeholder: Missing stakeholder analysis (recommended to add) +- Risk: Missing risk register (recommended to add) +- SOBC: Missing business case (recommended for major investments) +- Data Model: Missing data model (recommended if DR-xxx exist) +- Data Model: Data quality metrics not defined +- UK Gov: TCoP minor gaps +- MOD: CAAT self-assessment incomplete (some question sets missing) +- MOD: Third Line of Defence not fully implemented + +**LOW**: + +- Style/wording improvements +- Minor redundancy not affecting execution +- Documentation formatting +- Non-critical missing optional fields + +### 6. 
Produce Comprehensive Analysis Report + +Generate a comprehensive Markdown report and save it to `projects/{project-dir}/ARC-{PROJECT_ID}-ANAL-v1.0.md` with the following structure: + +```markdown +# Architecture Governance Analysis Report + +**Project**: {project-name} +**Date**: {current-date} +**Analyzed By**: ArcKit v{version} + +--- + +## Executive Summary + +**Overall Status**: ✅ Ready / ⚠️ Issues Found / ❌ Critical Issues + +**Key Metrics**: +- Total Requirements: {count} +- Requirements Coverage: {percentage}% +- Critical Issues: {count} +- High Priority Issues: {count} +- Medium Priority Issues: {count} +- Low Priority Issues: {count} + +**Recommendation**: [PROCEED / RESOLVE CRITICAL ISSUES FIRST / MAJOR REWORK NEEDED] + +--- + +## Findings Summary + +| ID | Category | Severity | Location(s) | Summary | Recommendation | +|----|----------|----------|-------------|---------|----------------| +| R1 | Requirements Quality | HIGH | ARC-*-REQ-*.md:L45-52 | Duplicate security requirements | Merge NFR-S-001 and NFR-S-005 | +| P1 | Principles Alignment | CRITICAL | ARC-*-REQ-*.md:L120 | Violates Cloud-First principle | Change to cloud-native architecture | +| T1 | Traceability | HIGH | No HLD coverage | NFR-P-002 (10K TPS) not addressed | Add performance architecture section to HLD | +| UK1 | UK Gov Compliance | CRITICAL | Missing DPIA | AI system requires DPIA before deployment | Complete DPIA for AI Playbook compliance | + +--- + +## Requirements Analysis + +### Requirements Coverage Matrix + +| Requirement ID | Type | Priority | Design Coverage | Tests Coverage | Status | +|----------------|------|----------|-----------------|----------------|--------| +| BR-001 | Business | MUST | ✅ HLD | ❌ Missing | ⚠️ Partial | +| FR-001 | Functional | MUST | ✅ HLD, DLD | ✅ Tests | ✅ Complete | +| NFR-S-001 | Security | MUST | ❌ Missing | ❌ Missing | ❌ Not Covered | + +**Statistics**: +- Total Requirements: {count} +- Fully Covered: {count} ({percentage}%) +- Partially 
Covered: {count} ({percentage}%) +- Not Covered: {count} ({percentage}%) + +### Uncovered Requirements (CRITICAL) + +| Requirement ID | Priority | Description | Why Critical | +|----------------|----------|-------------|--------------| +| NFR-S-003 | MUST | Encrypt data at rest | Security requirement | +| NFR-P-002 | MUST | Support 10K TPS | Performance critical | + +--- + +## Architecture Principles Compliance + +| Principle | Status | Evidence | Issues | +|-----------|--------|----------|--------| +| Cloud-First | ✅ COMPLIANT | AWS architecture in HLD | None | +| API-First | ⚠️ PARTIAL | REST APIs defined, missing OpenAPI specs | Document API contracts | +| Security-by-Design | ❌ NON-COMPLIANT | No threat model, missing security architecture | Add security sections | + +**Critical Principle Violations**: {count} + +--- + +## Stakeholder Traceability Analysis + +**Stakeholder Analysis Exists**: ✅ Yes / ❌ No (RECOMMENDED) + +**Stakeholder-Requirements Coverage**: +- Requirements traced to stakeholder goals: {percentage}% +- Orphan requirements (no stakeholder justification): {count} +- Requirement conflicts documented and resolved: ✅ Yes / ⚠️ Partial / ❌ No + +**RACI Governance Alignment**: +| Artifact | Role | Aligned with RACI? | Issues | +|----------|------|-------------------|--------| +| Risk Register | Risk Owners | ✅ Yes / ❌ No | Missing 3 risk owners from RACI | +| Data Model | Data Owners | ✅ Yes / ❌ No | None | +| SOBC | Benefits Owners | ✅ Yes / ❌ No | 2 benefits lack owner assignment | + +**Critical Issues**: +- Orphan requirements: {count} requirements not linked to stakeholder goals +- Unresolved conflicts: {count} requirement conflicts without resolution + +--- + +## Risk Management Analysis + +**Risk Register Exists**: ✅ Yes / ❌ No (RECOMMENDED) + +**Risk Coverage**: +| Risk ID | Category | Inherent | Residual | Response | Mitigation in Req? | Mitigation in Design? 
| +|---------|----------|----------|----------|----------|-------------------|---------------------| +| R-001 | Strategic | Very High | High | Treat | ✅ BR-003 | ✅ HLD Section 4 | +| R-005 | Technology | High | Medium | Treat | ❌ Missing | ❌ Missing | + +**High/Very High Risks Requiring Attention**: +| Risk ID | Description | Current Status | Required Action | +|---------|-------------|----------------|-----------------| +| R-005 | Cloud provider lock-in | No mitigation | Add multi-cloud requirements | +| R-012 | Data breach | Partial mitigation | Complete security architecture in HLD | + +**Risk-SOBC Alignment** (if SOBC exists): +- Strategic risks reflected in Strategic Case: ✅ Yes / ❌ No +- Financial risks in Economic Case cost contingency: ✅ Yes / ❌ No +- Risks included in Management Case Part E: ✅ Yes / ❌ No + +**Risk Governance**: +- Risk owners from stakeholder RACI: ✅ Yes / ⚠️ Partial / ❌ No +- Risk appetite compliance: {count} risks within tolerance + +--- + +## Business Case Analysis + +**SOBC Exists**: ✅ Yes / ❌ No (RECOMMENDED for major investments) + +**Benefits Traceability**: +| Benefit ID | Description | Stakeholder Goal | Requirements | Measurable? 
| Status | +|------------|-------------|------------------|--------------|-------------|--------| +| B-001 | Reduce costs 40% | CFO Goal G-1 | BR-002, NFR-P-003 | ✅ Yes | ✅ Complete | +| B-003 | Improve UX | CTO Goal G-5 | FR-008, NFR-A-001 | ❌ No | ❌ Not measurable | + +**Benefits Coverage**: +- Total benefits: {count} +- Benefits traced to stakeholder goals: {percentage}% +- Benefits supported by requirements: {percentage}% +- Benefits measurable and verifiable: {percentage}% + +**Option Analysis Quality**: +- Do Nothing baseline included: ✅ Yes / ❌ No +- Options analyzed: {count} options +- Recommended option: {option name} +- Justification: ✅ Strong / ⚠️ Weak / ❌ Missing + +**SOBC-Requirements Alignment**: +- Strategic Case drivers in requirements: ✅ Yes / ⚠️ Partial / ❌ No +- Economic Case benefits achievable with requirements: ✅ Yes / ⚠️ Questionable / ❌ No +- Financial Case budget adequate: ✅ Yes / ⚠️ Tight / ❌ Insufficient + +**Critical Issues**: +- Non-measurable benefits: {count} +- Benefits without requirement support: {count} +- Budget shortfall: £{amount} (requirements scope exceeds budget) + +--- + +## Data Model Analysis + +**Data Model Exists**: ✅ Yes / ❌ No (RECOMMENDED if DR-xxx exist) + +**DR-xxx Requirements Coverage**: +| Requirement ID | Description | Entities | Attributes | Status | +|----------------|-------------|----------|------------|--------| +| DR-001 | Store customer data | E-001: Customer | customer_id, email, name | ✅ Complete | +| DR-005 | GDPR erasure | E-001: Customer | [All PII] | ✅ Complete | +| DR-008 | Payment history | ❌ No entity | N/A | ❌ Missing | + +**Data Requirements Coverage**: +- Total DR-xxx requirements: {count} +- DR-xxx mapped to entities: {percentage}% +- Entities traced to DR-xxx: {percentage}% + +**Data Model Quality**: +- ERD exists and renderable: ✅ Yes / ❌ No +- Entities with complete specs: {count}/{total} +- PII identified: ✅ Yes / ⚠️ Partial / ❌ No +- GDPR compliance documented: ✅ Yes / ⚠️ Partial / ❌ No 
+ +**Data Governance**: +| Entity | Data Owner (from RACI) | Data Steward | Technical Custodian | Status | +|--------|------------------------|--------------|---------------------|--------| +| E-001: Customer | CFO (from stakeholder RACI) | Data Governance Lead | Database Team | ✅ Complete | +| E-003: Payment | ❌ Not assigned | ❌ Not assigned | Database Team | ❌ Missing owners | + +**Data Model-Design Alignment**: +- Database schemas in DLD match entities: ✅ Yes / ⚠️ Partial / ❌ No / N/A +- CRUD matrix aligns with HLD components: ✅ Yes / ⚠️ Partial / ❌ No / N/A +- Data integration flows match upstream/downstream: ✅ Yes / ⚠️ Partial / ❌ No / N/A + +**Critical Issues**: +- DR-xxx requirements with no entity mapping: {count} +- PII not identified (GDPR risk): {count} entities +- Data owners not from RACI matrix: {count} entities + +--- + +## UK Government Compliance Analysis + +### Technology Code of Practice (TCoP) + +**Overall Score**: {score}/130 ({percentage}%) +**Status**: ✅ Compliant / ⚠️ Partial / ❌ Non-Compliant + +| Point | Requirement | Status | Score | Issues | +|-------|-------------|--------|-------|--------| +| 1 | Define User Needs | ✅ | 9/10 | Minor: User research from 2023 (update) | +| 5 | Use Cloud First | ✅ | 10/10 | AWS cloud-native | +| 6 | Make Things Secure | ❌ | 3/10 | Missing: Cyber Essentials, threat model | + +**Critical TCoP Issues**: {count} + +### AI Playbook (if AI system) + +**Risk Level**: HIGH-RISK / MEDIUM-RISK / LOW-RISK +**Overall Score**: {score}/160 ({percentage}%) +**Status**: ✅ Compliant / ⚠️ Partial / ❌ Non-Compliant + +**Blocking Issues**: +- [ ] DPIA not completed (MANDATORY for high-risk) +- [ ] No human-in-the-loop (REQUIRED for high-risk) +- [ ] ATRS not published (MANDATORY for central government) + +### ATRS (if AI system) + +**Completeness**: {percentage}% +**Status**: ✅ Ready for Publication / ⚠️ Incomplete / ❌ Missing + +**Missing Mandatory Fields**: +- [ ] Senior Responsible Owner +- [ ] Bias testing results +- [ ] 
Fallback procedures + +--- + +## MOD Secure by Design Analysis + +**MOD SbD Assessment Exists**: ✅ Yes / ❌ No (MANDATORY for MOD projects) + +**Overall SbD Maturity**: Level {0-5} (Target: Level 3+ for operational systems) + +### 7 SbD Principles Compliance + +| Principle | Status | Score | Issues | +|-----------|--------|-------|--------| +| 1. Understand and Define Context | ✅ | 9/10 | Minor: Data classification pending final review | +| 2. Apply Security from the Start | ⚠️ | 6/10 | Security architecture not in initial specs | +| 3. Apply Defence in Depth | ❌ | 3/10 | Missing: Network segmentation, IDS/IPS | +| 4. Follow Secure Design Patterns | ✅ | 8/10 | NCSC guidance applied, minor OWASP gaps | +| 5. Continuously Manage Risk | ✅ | 9/10 | Risk register active, continuous monitoring planned | +| 6. Secure the Supply Chain | ⚠️ | 5/10 | Missing: SBOM, supplier attestations | +| 7. Enable Through-Life Assurance | ⚠️ | 6/10 | Monitoring planned, incident response incomplete | + +**Overall Score**: {score}/70 ({percentage}%) + +### NIST Cybersecurity Framework Coverage + +| Function | Status | Coverage | Critical Gaps | +|----------|--------|----------|---------------| +| Identify | ✅ | 90% | Asset inventory incomplete for contractor systems | +| Protect | ⚠️ | 65% | MFA not implemented, PAM missing | +| Detect | ❌ | 40% | No SIEM integration, limited monitoring | +| Respond | ⚠️ | 70% | Incident response plan exists, not tested | +| Recover | ✅ | 85% | Backup/DR tested, BC plan approved | + +**Overall CSF Score**: {percentage}% + +### Continuous Assurance Process + +**CAAT (Cyber Activity and Assurance Tracker)**: +- CAAT registered: ✅ Yes / ❌ No (MANDATORY) +- Registration date: {date} +- Self-assessment question sets completed: {count}/{total} +- Based on 7 SbD Principles: ✅ Yes / ⚠️ Partial / ❌ No +- Continuously updated: ✅ Yes / ⚠️ Sporadic / ❌ One-time only +- Last update: {date} + +**Key Roles**: +- Delivery Team Security Lead (DTSL) appointed: ✅ Yes / ❌ No 
(REQUIRED) +- DTSL name: {name} +- Security Assurance Coordinator (SAC) appointed: ✅ Yes / ❌ No / N/A +- Project Security Officer (PSyO) for SECRET+: ✅ Yes / ❌ No / N/A + +### Three Lines of Defence + +| Line | Responsibility | Implementation | Status | +|------|----------------|----------------|--------| +| First Line | Delivery team owns security (DTSL) | DTSL appointed, day-to-day management | ✅ Effective | +| Second Line | Technical Coherence assurance | Quarterly reviews scheduled | ⚠️ Partial | +| Third Line | Independent audit (NAO, GIAA) | Pen test planned Q2 | ⚠️ Planned | + +**Overall Governance**: ✅ Strong / ⚠️ Adequate / ❌ Weak + +### Supplier Attestation (if vendor-delivered) + +**Supplier Attestation Required**: ✅ Yes / ❌ No / N/A + +**Attestation Status**: +- Suppliers attest systems are secure (ISN 2023/10): ✅ Yes / ❌ No +- Supplier-owned continuous assurance: ✅ Yes / ❌ No +- Supplier security requirements in contracts: ✅ Yes / ⚠️ Partial / ❌ No +- Contract includes CAAT self-assessment obligations: ✅ Yes / ❌ No + +### Classification-Specific Requirements + +**Data Classification**: OFFICIAL / OFFICIAL-SENSITIVE / SECRET / TOP SECRET + +**Classification Requirements Met**: +| Requirement | Status | Evidence | +|-------------|--------|----------| +| Personnel security clearances | ✅ / ❌ | All SC cleared for OFFICIAL-SENSITIVE | +| Cryptography (CESG-approved) | ✅ / ❌ | AES-256, TLS 1.3 | +| Network security (air-gap/assured) | ✅ / ⚠️ / ❌ | Assured connectivity approved | +| Physical security | ✅ / ❌ | Enhanced access controls in place | +| Cyber Essentials / Cyber Essentials Plus | ✅ / ❌ | Cyber Essentials Plus certified | + +### Critical Issues (Deployment Blockers) + +**Blocking Issues**: +- [ ] CAAT not registered (MANDATORY for all programmes) +- [ ] No DTSL appointed (required from Discovery phase) +- [ ] SECRET+ data without SC cleared personnel +- [ ] No encryption at rest or in transit +- [ ] No threat model or risk assessment +- [ ] Critical 
vulnerabilities unpatched +- [ ] Supplier attestation missing for vendor-delivered system + +**Deployment Readiness**: ✅ Ready / ⚠️ Issues to resolve / ❌ BLOCKED + +--- + +## Traceability Analysis + +**Traceability Matrix**: ✅ Exists / ❌ Missing + +**Forward Traceability** (Requirements → Design → Tests): +- Requirements → HLD: {percentage}% +- HLD → DLD: {percentage}% +- DLD → Tests: {percentage}% + +**Backward Traceability** (Tests → Requirements): +- Orphan components (not linked to requirements): {count} + +**Gap Summary**: +- {count} requirements with no design coverage +- {count} design elements with no requirement justification +- {count} components with no test coverage + +--- + +## Vendor Procurement Analysis + +### SOW Quality +**Status**: ✅ Complete / ⚠️ Issues / ❌ Insufficient + +**Issues**: +- [ ] SOW missing NFR-P-xxx performance requirements +- [ ] Acceptance criteria ambiguous for deliverable 3 +- [ ] Timeline unrealistic for scope (6 months vs 50 requirements) + +### Vendor Evaluation +**Evaluation Criteria Defined**: ✅ Yes / ❌ No + +**Alignment Check**: +- All MUST requirements in scoring? ✅ Yes / ❌ No +- Scoring methodology fair? ✅ Yes / ⚠️ Issues / ❌ No +- Technical evaluation covers all areas? 
✅ Yes / ⚠️ Gaps / ❌ No + +### Vendor Design Review +**HLD Review Completed**: ✅ Yes / ❌ No +**DLD Review Completed**: ✅ Yes / ❌ No + +**Coverage Analysis**: +| SOW Requirement | HLD Coverage | DLD Coverage | Status | +|-----------------|--------------|--------------|--------| +| Cloud infrastructure | ✅ | ✅ | Complete | +| Security architecture | ❌ | ❌ | Missing | +| Performance (10K TPS) | ⚠️ | ❌ | Insufficient | + +--- + +## Security & Compliance Summary + +### Security Posture +- Security requirements defined: ✅ Yes / ❌ No +- Threat model documented: ✅ Yes / ❌ No +- Security architecture in HLD: ✅ Yes / ⚠️ Partial / ❌ No +- Security implementation in DLD: ✅ Yes / ⚠️ Partial / ❌ No +- Security testing plan: ✅ Yes / ❌ No + +**Security Coverage**: {percentage}% + +### Compliance Posture +- Regulatory requirements identified: ✅ Yes / ❌ No +- GDPR/UK GDPR compliance: ✅ Yes / ⚠️ Partial / ❌ No +- Industry compliance (PCI-DSS, HIPAA, etc.): ✅ Yes / ⚠️ Partial / ❌ No / N/A +- Audit readiness: ✅ Yes / ⚠️ Partial / ❌ No + +**Compliance Coverage**: {percentage}% + +--- + +## Recommendations + +### Critical Actions (MUST resolve before implementation/procurement) + +1. **[P1] Add Cloud-First architecture**: Current design violates Cloud-First principle. Redesign with AWS/Azure/GCP. +2. **[R1] Cover security requirements**: NFR-S-003, NFR-S-007, NFR-S-012 have no design coverage. Add security architecture to HLD. +3. **[UK1] Complete DPIA**: HIGH-RISK AI system requires completed DPIA before deployment (AI Playbook MANDATORY). + +### High Priority Actions (SHOULD resolve before implementation/procurement) + +1. **[T1] Document API contracts**: Add OpenAPI specifications for all REST APIs. +2. **[T2] Add performance architecture**: NFR-P-002 (10K TPS) not addressed in design. Add performance section to HLD. +3. **[V1] Update SOW acceptance criteria**: Deliverable 3 acceptance criteria too vague. Add measurable criteria. + +### Medium Priority Actions (Improve quality) + +1. 
**[Q1] Consolidate duplicate requirements**: Merge NFR-S-001 and NFR-S-005 (identical). +2. **[Q2] Fix terminology drift**: "User" vs "Customer" used inconsistently. Standardize. +3. **[D1] Complete traceability matrix**: Add backward traceability from tests to requirements. + +### Low Priority Actions (Optional improvements) + +1. **[S1] Improve requirement wording**: Replace "fast" with measurable criteria (e.g., "< 200ms p95"). +2. **[S2] Add edge case documentation**: Document edge cases for error handling. + +--- + +## Metrics Dashboard + +### Requirement Quality +- Total Requirements: {count} +- Ambiguous Requirements: {count} +- Duplicate Requirements: {count} +- Untestable Requirements: {count} +- **Quality Score**: {percentage}% + +### Architecture Alignment +- Principles Compliant: {count}/{total} +- Principles Violations: {count} +- **Alignment Score**: {percentage}% + +### Traceability +- Requirements Covered: {count}/{total} +- Orphan Components: {count} +- **Traceability Score**: {percentage}% + +### Stakeholder Traceability (if applicable) +- Requirements traced to stakeholder goals: {percentage}% +- Orphan requirements: {count} +- Conflicts resolved: {percentage}% +- RACI governance alignment: {percentage}% +- **Stakeholder Score**: {percentage}% + +### Risk Management (if applicable) +- High/Very High risks mitigated: {percentage}% +- Risk owners from RACI: {percentage}% +- Risks reflected in design: {percentage}% +- Risk-SOBC alignment: {percentage}% +- **Risk Management Score**: {percentage}% + +### Business Case (if applicable) +- Benefits traced to stakeholder goals: {percentage}% +- Benefits supported by requirements: {percentage}% +- Benefits measurable: {percentage}% +- Budget adequacy: ✅ Adequate / ⚠️ Tight / ❌ Insufficient +- **Business Case Score**: {percentage}% + +### Data Model (if applicable) +- DR-xxx requirements mapped to entities: {percentage}% +- Entities traced to DR-xxx: {percentage}% +- PII identified: {percentage}% +- Data 
governance complete: {percentage}% +- Data model-design alignment: {percentage}% +- **Data Model Score**: {percentage}% + +### UK Government Compliance (if applicable) +- TCoP Score: {score}/130 ({percentage}%) +- AI Playbook Score: {score}/160 ({percentage}%) +- ATRS Completeness: {percentage}% +- **UK Gov Compliance Score**: {percentage}% + +### MOD Compliance (if applicable) +- 7 SbD Principles Score: {score}/70 ({percentage}%) +- NIST CSF Coverage: {percentage}% +- CAAT registered and updated: ✅ Yes / ❌ No +- Three Lines of Defence: {percentage}% +- **MOD SbD Score**: {percentage}% + +### Overall Governance Health +**Score**: {percentage}% +**Grade**: A / B / C / D / F + +**Grade Thresholds**: +- A (90-100%): Excellent governance, ready to proceed +- B (80-89%): Good governance, minor issues +- C (70-79%): Adequate governance, address high-priority issues +- D (60-69%): Poor governance, major rework needed +- F (<60%): Insufficient governance, do not proceed + +--- + +## Next Steps + +### Immediate Actions + +1. **If CRITICAL issues exist**: ❌ **DO NOT PROCEED** with implementation/procurement until resolved. + - Run: `/arckit:requirements` to fix requirements issues + - Run: `/arckit:hld-review` to address design gaps + - Run: `/arckit:ai-playbook` (if AI system) to complete mandatory assessments + +2. **If only HIGH/MEDIUM issues**: ⚠️ **MAY PROCEED** with caution, but address issues in parallel. + - Document exceptions for HIGH issues + - Create remediation plan for MEDIUM issues + +3. 
**If only LOW issues**: ✅ **READY TO PROCEED** + - Address LOW issues during implementation as improvements + +### Suggested Commands + +Based on findings, consider running: + +**Governance Foundation**: +- `/arckit:principles` - Create/update architecture principles +- `/arckit:stakeholders` - Analyze stakeholder drivers, goals, conflicts (RECOMMENDED) +- `/arckit:risk` - Create risk register using Orange Book framework (RECOMMENDED) +- `/arckit:sobc` - Create Strategic Outline Business Case using Green Book 5-case model (RECOMMENDED for major investments) + +**Requirements & Design**: +- `/arckit:requirements` - Refine requirements to address ambiguity/gaps +- `/arckit:data-model` - Create data model with ERD, GDPR compliance (RECOMMENDED if DR-xxx exist) +- `/arckit:hld-review` - Re-review HLD after addressing issues +- `/arckit:dld-review` - Re-review DLD after addressing issues + +**UK Government Compliance**: +- `/arckit:tcop` - Complete TCoP assessment for UK Gov projects +- `/arckit:ai-playbook` - Complete AI Playbook assessment for AI systems +- `/arckit:atrs` - Generate ATRS record for algorithmic tools +- `/arckit:secure` - UK Government Secure by Design review + +**MOD Compliance**: +- `/arckit:mod-secure` - MOD Secure by Design assessment with CAAT (MANDATORY for MOD projects) + +**Vendor Procurement**: +- `/arckit:sow` - Generate statement of work for RFP +- `/arckit:evaluate` - Update vendor evaluation criteria + +**Analysis & Traceability**: +- `/arckit:traceability` - Generate/update traceability matrix +- `/arckit:analyze` - Re-run this analysis after fixes + +### Re-run Analysis + +After making changes, re-run analysis: +```bash +/arckit:analyze +``` + +Expected improvement in scores after addressing findings. 
+ +--- + +## Detailed Findings + +(Expand top findings with examples and specific recommendations) + +### Finding R1: Duplicate Security Requirements (HIGH) + +**Location**: `ARC-*-REQ-*.md:L45-52` and `ARC-*-REQ-*.md:L120-125` + +**Details**: + +```text +NFR-S-001: System MUST encrypt data at rest using AES-256 +NFR-S-005: All stored data SHALL be encrypted with AES-256 encryption +``` + +**Issue**: These are duplicate requirements with inconsistent language (MUST vs SHALL). + +**Impact**: Confuses implementation team, wastes evaluation points in vendor scoring. + +**Recommendation**: + +1. Keep NFR-S-001 (clearer wording) +2. Delete NFR-S-005 +3. Update traceability matrix + +**Estimated Effort**: 10 minutes + +--- + +### Finding P1: Violates Cloud-First Principle (CRITICAL) + +**Location**: `ARC-*-REQ-*.md:L120`, Architecture Principles violation + +**Details**: + +```text +FR-025: System SHALL deploy to on-premise servers in corporate datacenter +``` + +**Issue**: Violates "Cloud-First" architecture principle defined in `projects/000-global/ARC-000-PRIN-*.md`. Principle states "MUST use public cloud (AWS/Azure/GCP) unless explicitly justified exception." + +**Impact**: Architecture doesn't align with organization standards. Blocks procurement approval. + +**Recommendation**: + +1. Change FR-025 to require AWS/Azure/GCP deployment +2. OR: Document formal exception with justification (security, regulatory, etc.) +3. 
Get exception approved by Architecture Review Board + +**Estimated Effort**: 2 hours (requirement change + design update) + +--- + +(Continue with detailed findings for top 10-20 issues) + +--- + +## Appendix: Analysis Methodology + +**Artifacts Analyzed**: + +- {list of files} + +**Detection Rules Applied**: + +- {count} duplication checks +- {count} ambiguity patterns +- {count} principle validations +- {count} traceability checks + +**Analysis Runtime**: {duration} + +**Analysis Version**: ArcKit v{version} + +--- + +**END OF ANALYSIS REPORT** + + +``` + +--- + +**CRITICAL - Auto-Populate Document Control Fields**: + +Before completing the document, populate ALL document control fields in the header: + +**Construct Document ID**: + +- **Document ID**: `ARC-{PROJECT_ID}-ANAL-v{VERSION}` (e.g., `ARC-001-ANAL-v1.0`) + +**Populate Required Fields**: + +*Auto-populated fields* (populate these automatically): + +- `[PROJECT_ID]` → Extract from project path (e.g., "001" from "projects/001-project-name") +- `[VERSION]` → "1.0" (or increment if previous version exists) +- `[DATE]` / `[YYYY-MM-DD]` → Current date in YYYY-MM-DD format +- `[DOCUMENT_TYPE_NAME]` → "Governance Analysis Report" +- `ARC-[PROJECT_ID]-ANAL-v[VERSION]` → Construct using format above +- `[COMMAND]` → "arckit.analyze" + +*User-provided fields* (extract from project metadata or user input): + +- `[PROJECT_NAME]` → Full project name from project metadata or user input +- `[OWNER_NAME_AND_ROLE]` → Document owner (prompt user if not in metadata) +- `[CLASSIFICATION]` → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user) + +*Calculated fields*: + +- `[YYYY-MM-DD]` for Review Date → Current date + 30 days + +*Pending fields* (leave as [PENDING] until manually updated): + +- `[REVIEWER_NAME]` → [PENDING] +- `[APPROVER_NAME]` → [PENDING] +- `[DISTRIBUTION_LIST]` → Default to "Project Team, Architecture Team" or [PENDING] + +**Populate Revision History**: + +```markdown +| 1.0 | {DATE} | 
ArcKit AI | Initial creation from `/arckit:analyze` command | [PENDING] | [PENDING] | +``` + +**Populate Generation Metadata Footer**: + +The footer should be populated with: + +```markdown +**Generated by**: ArcKit `/arckit:analyze` command +**Generated on**: {DATE} {TIME} GMT +**ArcKit Version**: {ARCKIT_VERSION} +**Project**: {PROJECT_NAME} (Project {PROJECT_ID}) +**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"] +**Generation Context**: [Brief note about source documents used] +``` + +--- + +Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **ANAL** per-type checks pass. Fix any failures before proceeding. + +### 7. Write Analysis Report to File + +Save the complete analysis report generated in Step 6 to: + +**`projects/{project-dir}/ARC-{PROJECT_ID}-ANAL-v1.0.md`** + +The saved report must include: + +- ✅ All sections from Executive Summary to Detailed Findings +- ✅ Complete metrics dashboard +- ✅ Actionable recommendations with priorities +- ✅ Next steps and suggested commands +- ✅ Traceability to source artifacts + +**CRITICAL - Show Summary Only**: +After writing the file, show ONLY the concise summary below. Do NOT output the full analysis report content in your response, as analysis reports can be 1000+ lines with detailed findings and metrics tables. 
+ +After writing the file, provide a summary message to the user: + +```text +✅ Governance Analysis Complete + +**Project**: {project-name} +**Report Location**: projects/{project-dir}/ARC-{PROJECT_ID}-ANAL-v1.0.md + +**Overall Status**: ✅ Ready / ⚠️ Issues Found / ❌ Critical Issues +**Governance Health Score**: {score}/100 ({grade}) + +**Issue Summary**: +- Critical Issues: {count} +- High Priority Issues: {count} +- Medium Priority Issues: {count} +- Low Priority Issues: {count} + +**Key Metrics**: +- Requirements Coverage: {percentage}% +- Principles Compliance: {percentage}% +- Traceability Score: {percentage}% +- Stakeholder Alignment: {percentage}% +- Risk Management: {percentage}% +- UK Gov Compliance: {percentage}% (if applicable) +- MOD SbD Compliance: {percentage}% (if applicable) + +**Top 3 Critical Issues**: +1. {issue} - {location} +2. {issue} - {location} +3. {issue} - {location} + +**Recommendation**: {PROCEED / RESOLVE CRITICAL ISSUES FIRST / MAJOR REWORK NEEDED} + +**Next Steps**: +- {action} +- {action} +- {action} + +📄 Full analysis report saved to: projects/{project-dir}/ARC-{PROJECT_ID}-ANAL-v1.0.md +``` + +### 8. Provide Remediation Guidance + +After outputting the report, ask: + +> **Would you like me to suggest concrete remediation steps for the top {N} critical/high priority issues?** +> +> I can provide: +> +> 1. Specific edits to fix requirements +> 2. Design review guidance +> 3. Command sequences to address gaps +> 4. 
Templates for missing artifacts +> +> (I will NOT make changes automatically - you must approve each action) + +## Operating Principles + +### Context Efficiency + +- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation +- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis +- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow +- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts + +### Analysis Guidelines + +- **DO NOT modify existing artifacts** (non-destructive analysis) +- **DO write analysis report** to `projects/{project-dir}/ARC-{PROJECT_ID}-ANAL-v1.0.md` +- **NEVER hallucinate missing sections** (if absent, report them accurately) +- **Prioritize principle violations** (these are always CRITICAL) +- **Prioritize UK Gov compliance issues** (mandatory for public sector) +- **Use examples over exhaustive rules** (cite specific instances, not generic patterns) +- **Report zero issues gracefully** (emit success report with metrics) +- **Be specific**: Cite line numbers, requirement IDs, exact quotes +- **Be actionable**: Every finding should have a clear recommendation +- **Be fair**: Flag real issues, not nitpicks + +### Enterprise Architecture Focus + +Unlike Spec Kit's focus on code implementation, ArcKit analyze focuses on: + +- **Governance compliance**: Principles, standards, policies +- **Requirements quality**: Completeness, testability, traceability +- **Procurement readiness**: SOW quality, vendor evaluation fairness +- **Design alignment**: Requirements → design traceability +- **UK Government compliance**: TCoP, AI Playbook, ATRS (if applicable) +- **Security & compliance**: Not just mentioned, but architected +- **Decision quality**: Objective, defensible, auditable + +## Example Usage + +User: `/arckit:analyze` + +You should: + +1. Identify project (if multiple, ask which) +2. 
Load artifacts progressively: + - Architecture principles + - Stakeholder drivers (if exists - RECOMMENDED) + - Risk register (if exists - RECOMMENDED) + - SOBC business case (if exists - RECOMMENDED) + - Requirements (BR, FR, NFR, INT, DR) + - Data model (if exists - RECOMMENDED if DR-xxx) + - Designs (HLD, DLD) + - UK Gov assessments (TCoP, AI Playbook, ATRS) + - MOD assessment (SbD with CAAT) + - Traceability matrix +3. Run detection passes: + - Requirements quality (duplication, ambiguity, underspecification) + - Stakeholder traceability (requirements to goals, conflict resolution, RACI alignment) + - Risk coverage (high/very high risks mitigated, risk-requirements alignment, risk-SOBC alignment) + - Business case alignment (benefits to stakeholders, benefits to requirements, cost adequacy) + - Data model consistency (DR-xxx to entities, data governance, design alignment) + - Principles alignment (violations, coverage) + - Traceability (coverage gaps, orphans) + - UK Gov compliance (TCoP, AI Playbook, ATRS) + - MOD compliance (7 SbD Principles, NIST CSF, CAAT, Three Lines of Defence) + - Consistency (terminology, data model, tech stack) + - Security & compliance coverage +4. Assign severity (CRITICAL, HIGH, MEDIUM, LOW) +5. Generate comprehensive report with: + - Executive summary + - Findings table + - Coverage matrices + - Stakeholder traceability analysis + - Risk management analysis + - Business case analysis + - Data model analysis + - UK Gov compliance dashboard + - MOD compliance dashboard + - Metrics dashboard + - Next steps and recommendations +6. 
Ask if user wants remediation guidance + +Example output: "Architecture Governance Analysis Report" with 18 findings (3 CRITICAL, 6 HIGH, 7 MEDIUM, 2 LOW), 87% requirements coverage, 92% stakeholder traceability, 85% risk mitigation, TCoP score 98/130 (75%), MOD SbD score 58/70 (83%), recommendation: "Resolve 3 CRITICAL issues (1 stakeholder orphan, 2 high risks unmitigated) before procurement" + +## Important Notes + +- This is a **non-destructive analysis** - existing artifacts are not modified +- Analysis report is saved to `projects/{project-dir}/ARC-{PROJECT_ID}-ANAL-v1.0.md` for audit trail +- Run `/arckit:analyze` after major changes to requirements, designs, or assessments +- Ideal times to run: + - Before issuing SOW/RFP to vendors + - After receiving vendor proposals + - Before design review meetings + - Before implementation kickoff + - Before deployment to production +- Analysis identifies issues; you decide how to resolve them +- Re-run after fixing issues to verify improvements +- Target: 90%+ governance health score before proceeding + +- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji + +## Related Commands + +After analysis, you may need: + +**Governance Foundation**: + +- `/arckit:principles` - Create/update architecture principles +- `/arckit:stakeholders` - Analyze stakeholder drivers and conflicts +- `/arckit:risk` - Create Orange Book risk register +- `/arckit:sobc` - Create Green Book business case + +**Requirements & Data**: + +- `/arckit:requirements` - Fix requirements issues +- `/arckit:data-model` - Create data model with ERD and GDPR compliance + +**Design Reviews**: + +- `/arckit:hld-review` - Re-review high-level design +- `/arckit:dld-review` - Re-review detailed design + +**UK Government Compliance**: + +- `/arckit:tcop` - Complete TCoP assessment +- 
`/arckit:ai-playbook` - Complete AI Playbook assessment +- `/arckit:atrs` - Generate ATRS record +- `/arckit:secure` - UK Government Secure by Design review + +**MOD Compliance**: + +- `/arckit:mod-secure` - MOD Secure by Design assessment with CAAT + +**Traceability**: + +- `/arckit:traceability` - Update traceability matrix diff --git a/arckit-copilot/commands/atrs.md b/arckit-copilot/commands/atrs.md new file mode 100644 index 00000000..5c207716 --- /dev/null +++ b/arckit-copilot/commands/atrs.md @@ -0,0 +1,407 @@ +--- +description: Generate Algorithmic Transparency Recording Standard (ATRS) record for AI/algorithmic tools +argument-hint: "" +effort: high +--- + +You are helping a UK government organization create an Algorithmic Transparency Recording Standard (ATRS) record for an AI or algorithmic tool. + +## User Input + +```text +$ARGUMENTS +``` + +## Instructions + +> **Note**: The ArcKit Project Context hook has already detected all projects, artifacts, external documents, and global policies. Use that context below — no need to scan directories manually. + +1. **Understand ATRS requirements**: + - ATRS is **MANDATORY** for all central government departments and arm's length bodies + - Two-tier structure: Tier 1 (public summary) + Tier 2 (detailed technical) + - Published records on GOV.UK repository + - Must be clear, accurate, and comprehensive + +2. **Identify the algorithmic tool**: + - Tool name and purpose + - Type of algorithm (rule-based, ML, generative AI, etc.) + - Government function (benefits, healthcare, policing, etc.) + - Current phase (pre-deployment, beta, production, retired) + - Users (staff and/or citizens) + +3. **Determine risk level** (similar to AI Playbook): + - **HIGH-RISK**: Automated decisions affecting rights, benefits, legal status, healthcare + - **MEDIUM-RISK**: Semi-automated with human review, significant resource allocation + - **LOW-RISK**: Administrative, productivity tools, recommendations with human control + +4. 
**Read existing artifacts from the project context:** + + **MANDATORY** (warn if missing): + - **PRIN** (Architecture Principles, in 000-global) + - Extract: AI governance standards, technology constraints, compliance requirements + - If missing: warn user to run `/arckit:principles` first + - **REQ** (Requirements) + - Extract: AI/ML-related FR requirements, NFR (security, fairness), DR (data requirements) + - If missing: warn user to run `/arckit:requirements` first + + **RECOMMENDED** (read if available, note if missing): + - **AIPB** (AI Playbook Assessment) + - Extract: Risk level, human oversight model, ethical assessment scores, gaps + + **OPTIONAL** (read if available, skip silently if missing): + - **DATA** (Data Model) + - Extract: Training data sources, personal data, data quality, storage + - **DPIA** (Data Protection Impact Assessment) + - Extract: Data protection assessment, lawful basis, privacy risks + + **Read the template** (with user override support): + - **First**, check if `.arckit/templates/uk-gov-atrs-template.md` exists in the project root + - **If found**: Read the user's customized template (user override takes precedence) + - **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/uk-gov-atrs-template.md` (default) + + > **Tip**: Users can customize templates with `/arckit:customize atrs` + +5. **Read external documents and policies**: + - Read any **external documents** listed in the project context (`external/` files) — extract previous ATRS submissions, algorithmic impact assessments, model documentation, fairness testing results + - Read any **enterprise standards** in `projects/000-global/external/` — extract organization-wide algorithmic transparency policies, AI ethics frameworks, cross-project ATRS standards + - If no external docs exist but they would improve the record, ask: "Do you have any existing ATRS records from similar systems or algorithmic documentation? I can read PDFs directly. 
Place them in `projects/{project-dir}/external/` and re-run, or skip." + - **Citation traceability**: When referencing content from external documents, follow the citation instructions in `${CLAUDE_PLUGIN_ROOT}/references/citation-instructions.md`. Place inline citation markers (e.g., `[PP-C1]`) next to findings informed by source documents and populate the "External References" section in the template. + +6. **Complete TIER 1 - Summary Information** (for general public): + - Use clear, simple, jargon-free language + - Explain what the tool does in plain English + - Include basic contact information + - Make it accessible to non-technical readers + +**Key Tier 1 Fields**: + +- **Name**: Tool identifier +- **Description**: 1-2 sentence plain English summary +- **Website URL**: Link to more information +- **Contact Email**: Public contact +- **Organization**: Department/agency name +- **Function**: Area (benefits, healthcare, policing, etc.) +- **Phase**: Pre-deployment/Beta/Production/Retired +- **Geographic Region**: England/Scotland/Wales/NI/UK-wide + +7. **Complete TIER 2 - Detailed Information** (for specialists): + +### Section 1: Owner and Responsibility + +- Organization and team +- Senior Responsible Owner (name, role, accountability) +- External suppliers (names, Companies House numbers, roles) +- Procurement procedure type (G-Cloud, DOS, open tender, etc.) +- Data access terms for suppliers + +### Section 2: Description and Rationale + +- Detailed technical description +- Algorithm type (rule-based, ML, generative AI, etc.) 
+- AI model details (if applicable): provider, version, fine-tuning +- Scope and boundaries (intended use and out-of-scope) +- Benefits and impact metrics +- Previous process (how it was done before) +- Alternatives considered (and why rejected) + +### Section 3: Decision-Making Process + +- Process integration (role in workflow) +- Provided information (outputs and format) +- Frequency and scale of usage +- **Human decisions and review**: + - Human-in-the-loop (review every decision) + - Human-on-the-loop (periodic review) + - Human-in-command (can override) + - Fully automated (must justify) +- Required training for staff +- Appeals and contestability (how users can contest decisions) + +### Section 4: Data + +- Data sources (types, origins, fields used) +- Personal data and special category data +- Data sharing arrangements +- Data quality and maintenance +- Data storage location and security (UK/EU/USA, cloud provider) +- Encryption, access controls, audit logging +- Cyber Essentials / ISO 27001 certification + +### Section 5: Impact Assessments + +- **DPIA (Data Protection Impact Assessment)**: Status, date, outcome, risks +- **EqIA (Equality Impact Assessment)**: Protected characteristics, impacts, mitigations +- **Human Rights Assessment**: ECHR articles, safeguards +- **Other assessments**: Environmental, accessibility, security + +### Section 6: Fairness, Bias, and Discrimination + +- Bias testing completed (methodology, date) +- Fairness metrics (demographic parity, equalized odds, etc.) +- Results by protected characteristic (gender, ethnicity, age, disability) +- Known limitations and biases +- Training data bias review +- Ongoing bias monitoring (frequency, metrics, alert thresholds) + +### Section 7: Technical Details + +- Model performance metrics (accuracy, precision, recall, F1) +- Performance by demographic group +- Model explainability approach (SHAP, LIME, etc.) 
+- Model versioning and change management +- Model monitoring and drift detection +- Retraining schedule + +### Section 8: Testing and Assurance + +- Testing approach (unit, integration, UAT, A/B, red teaming) +- Edge cases and failure modes +- Fallback procedures +- Security testing (pen testing, AI-specific threats): + - Prompt injection (for LLMs) + - Data poisoning + - Model inversion + - Adversarial attacks +- Independent assurance and external audit + +### Section 9: Transparency and Explainability + +- Public disclosure (website, GOV.UK, model card, open source) +- User communication (how users are informed) +- Information provided to users (that algorithm is used, how it works, how to contest) +- Model card published + +### Section 10: Governance and Oversight + +- Governance structure (board/committee composition, responsibilities) +- Risk register and top risks +- Incident management (response plan, process, contact) +- Audit trail (logging, retention, review) + +### Section 11: Compliance + +- Legal basis (primary legislation, regulatory compliance) +- Data protection (controller, DPO, ICO registration, legal basis) +- Standards compliance (TCoP, GDS Service Standard, Data Ethics Framework, ISO) +- Procurement compliance (route, value, IR35) + +### Section 12: Performance and Outcomes + +- Success metrics and KPIs +- Benefits realized (with evidence) +- User feedback and satisfaction +- Continuous improvement log + +### Section 13: Review and Updates + +- Review schedule (frequency, next review date) +- Triggers for unscheduled review +- Version history +- Contact for updates + +8. 
**Provide risk-appropriate guidance**: + +**For HIGH-RISK algorithmic tools** (affecting rights, benefits, healthcare): + +- **CRITICAL**: DPIA is MANDATORY before deployment +- **CRITICAL**: EqIA is MANDATORY +- Human-in-the-loop STRONGLY RECOMMENDED +- Bias testing across ALL protected characteristics REQUIRED +- ATRS publication on GOV.UK MANDATORY +- Quarterly reviews RECOMMENDED +- Independent audit STRONGLY RECOMMENDED + +**For MEDIUM-RISK tools**: + +- DPIA likely required +- EqIA recommended +- Human oversight required (human-on-the-loop minimum) +- Bias testing recommended +- ATRS publication MANDATORY +- Annual reviews + +**For LOW-RISK tools**: + +- DPIA assessment (may determine not required) +- Basic fairness checks +- Human oversight recommended +- ATRS publication MANDATORY +- Periodic reviews + +9. **Link to existing ArcKit artifacts**: + - Map to requirements from `ARC-*-REQ-*.md` + - Reference AI Playbook assessment (if exists) + - Reference TCoP assessment (if exists) + - Reference design reviews (HLD/DLD) + +10. 
**Flag missing mandatory items**: + +**BLOCKERS** (must complete before publication): + +- [ ] DPIA completed (for high-risk) +- [ ] EqIA completed (for high-risk) +- [ ] Senior Responsible Owner identified +- [ ] Human oversight model defined +- [ ] Bias testing completed (for ML/AI) +- [ ] Public-facing description written +- [ ] Contact details provided + +**WARNINGS** (should complete): + +- [ ] Alternatives considered documented +- [ ] Training program defined +- [ ] Incident response plan +- [ ] Review schedule set + +--- + +**CRITICAL - Auto-Populate Document Control Fields**: + +Before completing the document, populate ALL document control fields in the header: + +**Construct Document ID**: + +- **Document ID**: `ARC-{PROJECT_ID}-ATRS-v{VERSION}` (e.g., `ARC-001-ATRS-v1.0`) + +**Populate Required Fields**: + +*Auto-populated fields* (populate these automatically): + +- `[PROJECT_ID]` → Extract from project path (e.g., "001" from "projects/001-project-name") +- `[VERSION]` → "1.0" (or increment if previous version exists) +- `[DATE]` / `[YYYY-MM-DD]` → Current date in YYYY-MM-DD format +- `[DOCUMENT_TYPE_NAME]` → "Algorithmic Transparency Record" +- `ARC-[PROJECT_ID]-ATRS-v[VERSION]` → Construct using format above +- `[COMMAND]` → "arckit.atrs" + +*User-provided fields* (extract from project metadata or user input): + +- `[PROJECT_NAME]` → Full project name from project metadata or user input +- `[OWNER_NAME_AND_ROLE]` → Document owner (prompt user if not in metadata) +- `[CLASSIFICATION]` → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user) + +*Calculated fields*: + +- `[YYYY-MM-DD]` for Review Date → Current date + 30 days + +*Pending fields* (leave as [PENDING] until manually updated): + +- `[REVIEWER_NAME]` → [PENDING] +- `[APPROVER_NAME]` → [PENDING] +- `[DISTRIBUTION_LIST]` → Default to "Project Team, Architecture Team" or [PENDING] + +**Populate Revision History**: + +```markdown +| 1.0 | {DATE} | ArcKit AI | Initial creation from 
`/arckit:atrs` command | [PENDING] | [PENDING] | +``` + +**Populate Generation Metadata Footer**: + +The footer should be populated with: + +```markdown +**Generated by**: ArcKit `/arckit:atrs` command +**Generated on**: {DATE} {TIME} GMT +**ArcKit Version**: {ARCKIT_VERSION} +**Project**: {PROJECT_NAME} (Project {PROJECT_ID}) +**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"] +**Generation Context**: [Brief note about source documents used] +``` + +--- + +Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **ATRS** per-type checks pass. Fix any failures before proceeding. + +11. **Generate comprehensive ATRS record**: + +Output location: `projects/{project-dir}/ARC-{PROJECT_ID}-ATRS-v1.0.md` + +Use the template structure from `uk-gov-atrs-template.md` + +**Format**: + +- Tier 1: Clear, simple, jargon-free language +- Tier 2: Technical detail sufficient for specialists +- All mandatory fields completed +- Links to supporting documentation +- Publication checklist at end + +12. **Provide publication guidance**: + +After generating the ATRS record: + +- Summary of completeness (what percentage of fields are complete) +- List of blocking issues (must resolve before publication) +- List of warnings (should address) +- Next steps: + 1. Complete missing mandatory fields + 2. Get SRO approval + 3. Legal/compliance review + 4. DPO review + 5. Publish on GOV.UK ATRS repository + 6. Publish on department website + 7. 
Set review date + +## Example Usage + +User: `/arckit:atrs Generate ATRS record for our benefits eligibility chatbot using GPT-4` + +You should: + +- Identify tool: Benefits eligibility chatbot, Generative AI (LLM) +- Determine risk: **HIGH-RISK** (affects access to benefits - fundamental right) +- Read existing requirements, AI Playbook assessment (if exists) +- Complete Tier 1 (public summary): + - Name: DWP Benefits Eligibility Chatbot + - Description: "An AI-powered chatbot that helps people understand their eligibility for benefits by answering questions about their circumstances in plain English." + - Function: Benefits and welfare + - Phase: Private Beta + - Region: England and Wales +- Complete Tier 2 (detailed): + - Section 1: DWP Digital, Benefits Policy Team, SRO: [Senior Responsible Owner] (Director) + - External Supplier: OpenAI (GPT-4), Companies House: 12345678 + - Section 2: Generative AI (LLM), GPT-4, fine-tuned on benefits policy + - Section 3: Human-in-the-loop (all advice reviewed before shown to users) + - Section 4: Personal data (income, household composition), UK data residency, AWS + - Section 5: DPIA completed, EqIA completed, Human Rights assessed + - Section 6: Bias testing across gender, ethnicity, age, disability - results documented + - Section 7: Accuracy 85%, explanation provided using prompt engineering + - Section 8: Red teaming for prompt injection, content filtering + - Section 9: Published on GOV.UK, users informed in-app + - Section 10: AI Governance Board oversight, monthly reviews + - Section 11: UK GDPR, Data Protection Act 2018, Public Task legal basis + - Section 12: KPI: User satisfaction 78%, reduced call center volume 15% + - Section 13: Quarterly review, next review 2025-07-01 +- Flag completeness: 95% complete +- **BLOCKING**: Need to add fallback procedure for system failures +- **WARNING**: Model card not yet published (recommended) +- Write to `projects/NNN-benefits-chatbot/ARC-NNN-ATRS-v1.0.md` +- Provide next 
steps: "Complete fallback procedures, then ready for SRO approval and GOV.UK publication" + +## Important Notes + +- ATRS publication is **MANDATORY** for central government +- Records must be published on GOV.UK ATRS repository: https://www.gov.uk/algorithmic-transparency-records +- ATRS is PUBLIC - do not include sensitive information (security vulnerabilities, personal data, commercially sensitive details) +- Use plain English in Tier 1 - imagine explaining to a family member +- Tier 2 should be detailed enough for technical scrutiny +- Update ATRS record when significant changes occur (new version, scope change, incidents) +- Regular reviews required (annually minimum, quarterly for high-risk) +- Contact algorithmic-transparency@dsit.gov.uk for guidance + +- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji + +## Related Frameworks + +- **AI Playbook** - responsible AI deployment (use `/arckit:ai-playbook` first for AI systems) +- **Technology Code of Practice** - broader technology governance (use `/arckit:tcop`) +- **Data Ethics Framework** - responsible data use +- **GDS Service Standard** - service design and delivery + +## Resources + +- ATRS Guidance: https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard +- ATRS Template: https://www.gov.uk/government/publications/algorithmic-transparency-template +- ATRS Repository: https://www.gov.uk/algorithmic-transparency-records +- Contact: algorithmic-transparency@dsit.gov.uk diff --git a/arckit-copilot/commands/aws-research.md b/arckit-copilot/commands/aws-research.md new file mode 100644 index 00000000..6dc0a1fa --- /dev/null +++ b/arckit-copilot/commands/aws-research.md @@ -0,0 +1,97 @@ +--- +description: Research AWS services and architecture patterns using AWS 
Knowledge MCP for authoritative guidance +argument-hint: "" +tags: [aws, amazon, cloud, architecture, mcp, research, well-architected, security-hub] +effort: high +handoffs: + - command: diagram + description: Create AWS architecture diagrams + - command: devops + description: Design AWS CodePipeline CI/CD + - command: finops + description: Create AWS cost management strategy + - command: adr + description: Record AWS service selection decisions +--- + +# AWS Technology Research + +## User Input + +```text +$ARGUMENTS +``` + +## Instructions + +This command performs AWS-specific technology research using the AWS Knowledge MCP server to match project requirements to AWS services, architecture patterns, Well-Architected guidance, Security Hub controls, and UK Government compliance. + +**This command delegates to the `arckit-aws-research` agent** which runs as an autonomous subprocess. The agent makes 15-30+ MCP calls (search_documentation, read_documentation, get_regional_availability, recommend) to gather authoritative AWS documentation — running in its own context window to avoid polluting the main conversation with large documentation chunks. + +### What to Do + +1. **Determine the project**: If the user specified a project name/number, note it. Otherwise, identify the most recent project in `projects/`. + +2. **Launch the agent**: Launch the **arckit-aws-research** agent in `acceptEdits` mode with the following prompt: + + ```text + Research AWS services and architecture patterns for the project in projects/{project-dir}/. + + User's additional context: {$ARGUMENTS} + + Follow your full process: read requirements, research AWS services per category, Well-Architected assessment, Security Hub mapping, UK Government compliance, cost estimation, write document, return summary. + ``` + +3. **Report the result**: When the agent completes, relay its summary to the user. 
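For illustration, the "most recent project" resolution in step 1 can be sketched as a small helper. This is a minimal sketch under stated assumptions: it assumes the `projects/NNN-slug` directory convention (with `000-global` reserved for enterprise-wide artifacts), and the function name is hypothetical; the command itself resolves this with file-listing tools rather than a script.

```python
import re
from pathlib import Path

def most_recent_project(projects_root: str = "projects"):
    """Return the highest-numbered NNN-slug project directory, or None.

    Skips 000-global, which holds enterprise-wide artifacts rather
    than an individual project.
    """
    candidates = []
    # Match only directories named like 001-benefits-chatbot
    for entry in Path(projects_root).glob("[0-9][0-9][0-9]-*"):
        match = re.match(r"(\d{3})-", entry.name)
        if entry.is_dir() and match and match.group(1) != "000":
            candidates.append((int(match.group(1)), entry))
    return max(candidates)[1] if candidates else None
```

If the user names a project explicitly, that takes precedence; the helper only covers the fallback case.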
+ +### Alternative: Direct Execution + +If the Task tool is unavailable or the user prefers inline execution, fall back to the full research process: + +1. Check prerequisites (requirements document must exist) +2. **Read the template** (with user override support): + - **First**, check if `.arckit/templates/aws-research-template.md` exists in the project root + - **If found**: Read the user's customized template (user override takes precedence) + - **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/aws-research-template.md` (default) + + - **Tip**: Users can customize templates with `/arckit:customize aws-research` +3. Extract AWS service needs from requirements (compute, data, integration, security, AI/ML) +4. Use MCP tools for each category: service discovery, deep dive, regional availability (eu-west-2), architecture patterns, Well-Architected assessment, Security Hub mapping, code samples. If MCP tools are unavailable, use WebSearch with `site:docs.aws.amazon.com` and WebFetch on result URLs for equivalent research (STANDALONE mode) +5. UK Government: G-Cloud, data residency, NCSC compliance +6. Cost estimation with optimization (Reserved Instances, Savings Plans, Spot, Graviton) +7. Generate Mermaid architecture diagram +Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **AWRS** per-type checks pass. Fix any failures before proceeding. + +8. Write to `projects/{project-dir}/research/ARC-{PROJECT_ID}-AWRS-v1.0.md` using Write tool +9. 
Show summary only (not full document) + +### Output + +The agent writes the full research document to file and returns a summary including: + +- AWS services recommended per category +- Architecture pattern and reference +- Security alignment (Security Hub, Well-Architected) +- UK Government suitability (G-Cloud, eu-west-2, classification) +- Estimated monthly cost +- Next steps (`/arckit:diagram`, `/arckit:secure`, `/arckit:devops`) + +## Integration with Other Commands + +- **Input**: Requires requirements document (`ARC-*-REQ-*.md`) +- **Input**: Uses data model (`ARC-*-DATA-*.md`) for database selection +- **Output**: Feeds into `/arckit:diagram` (AWS-specific diagrams) +- **Output**: Feeds into `/arckit:secure` (validates against Secure by Design) +- **Output**: Feeds into `/arckit:devops` (AWS CodePipeline design) +- **Output**: Feeds into `/arckit:finops` (AWS cost management strategy) + +## Resources + +- **AWS Knowledge MCP**: https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server +- **AWS Architecture Center**: https://aws.amazon.com/architecture/ +- **AWS Well-Architected**: https://aws.amazon.com/architecture/well-architected/ +- **Digital Marketplace (AWS)**: https://www.digitalmarketplace.service.gov.uk/g-cloud/search?q=amazon+web+services + +## Important Notes + +- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji diff --git a/arckit-copilot/commands/azure-research.md b/arckit-copilot/commands/azure-research.md new file mode 100644 index 00000000..4fc1095a --- /dev/null +++ b/arckit-copilot/commands/azure-research.md @@ -0,0 +1,98 @@ +--- +description: Research Azure services and architecture patterns using Microsoft Learn MCP for authoritative guidance +argument-hint: "" +tags: [azure, microsoft, cloud, architecture, mcp, research, well-architected, 
security-benchmark] +effort: high +handoffs: + - command: diagram + description: Create Azure architecture diagrams + - command: devops + description: Design Azure DevOps pipeline + - command: finops + description: Create Azure cost management strategy + - command: adr + description: Record Azure service selection decisions +--- + +# Azure Technology Research + +## User Input + +```text +$ARGUMENTS +``` + +## Instructions + +This command performs Azure-specific technology research using the Microsoft Learn MCP server to match project requirements to Azure services, architecture patterns, Well-Architected guidance, Security Benchmark controls, and UK Government compliance. + +**This command delegates to the `arckit-azure-research` agent** which runs as an autonomous subprocess. The agent makes 15-30+ MCP calls (microsoft_docs_search, microsoft_docs_fetch, microsoft_code_sample_search) to gather authoritative Azure documentation — running in its own context window to avoid polluting the main conversation with large documentation chunks. + +### What to Do + +1. **Determine the project**: If the user specified a project name/number, note it. Otherwise, identify the most recent project in `projects/`. + +2. **Launch the agent**: Launch the **arckit-azure-research** agent in `acceptEdits` mode with the following prompt: + + ```text + Research Azure services and architecture patterns for the project in projects/{project-dir}/. + + User's additional context: {$ARGUMENTS} + + Follow your full process: read requirements, research Azure services per category, Well-Architected assessment, Security Benchmark mapping, UK Government compliance, cost estimation, write document, return summary. + ``` + +3. **Report the result**: When the agent completes, relay its summary to the user. + +### Alternative: Direct Execution + +If the Task tool is unavailable or the user prefers inline execution, fall back to the full research process: + +1. 
Check prerequisites (requirements document must exist) +2. **Read the template** (with user override support): + - **First**, check if `.arckit/templates/azure-research-template.md` exists in the project root + - **If found**: Read the user's customized template (user override takes precedence) + - **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/azure-research-template.md` (default) + + - **Tip**: Users can customize templates with `/arckit:customize azure-research` +3. Extract Azure service needs from requirements (compute, data, integration, security, AI/ML) +4. Use MCP tools for each category: service discovery, deep dive, architecture patterns, Well-Architected assessment, Security Benchmark mapping, code samples. If MCP tools are unavailable, use WebSearch with `site:learn.microsoft.com` and WebFetch on result URLs for equivalent research (STANDALONE mode) +5. UK Government: G-Cloud, UK South/West data residency, NCSC compliance +6. Cost estimation with optimization (Reserved Instances, Azure Hybrid Benefit, Spot VMs) +7. Generate Mermaid architecture diagram +Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **AZRS** per-type checks pass. Fix any failures before proceeding. + +8. Write to `projects/{project-dir}/research/ARC-{PROJECT_ID}-AZRS-v1.0.md` using Write tool +9. 
Show summary only (not full document) + +### Output + +The agent writes the full research document to file and returns a summary including: + +- Azure services recommended per category +- Architecture pattern and reference +- Security alignment (Security Benchmark, Well-Architected) +- UK Government suitability (G-Cloud, UK regions, classification) +- Estimated monthly cost +- Next steps (`/arckit:diagram`, `/arckit:secure`, `/arckit:devops`) + +## Integration with Other Commands + +- **Input**: Requires requirements document (`ARC-*-REQ-*.md`) +- **Input**: Uses data model (`ARC-*-DATA-*.md`) for database selection +- **Output**: Feeds into `/arckit:diagram` (Azure-specific diagrams) +- **Output**: Feeds into `/arckit:secure` (validates against Secure by Design) +- **Output**: Feeds into `/arckit:devops` (Azure DevOps pipeline design) +- **Output**: Feeds into `/arckit:finops` (Azure cost management strategy) + +## Resources + +- **Microsoft Learn MCP**: https://github.com/MicrosoftDocs/mcp +- **Azure Architecture Center**: https://learn.microsoft.com/azure/architecture/ +- **Azure Well-Architected**: https://learn.microsoft.com/azure/well-architected/ +- **Azure Security Benchmark**: https://learn.microsoft.com/security/benchmark/azure/ +- **Digital Marketplace (Azure)**: https://www.digitalmarketplace.service.gov.uk/g-cloud/search?q=azure + +## Important Notes + +- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji diff --git a/arckit-copilot/commands/backlog.md b/arckit-copilot/commands/backlog.md new file mode 100644 index 00000000..3b1fb154 --- /dev/null +++ b/arckit-copilot/commands/backlog.md @@ -0,0 +1,1789 @@ +--- +description: Generate prioritised product backlog from ArcKit artifacts - convert requirements to user stories, organise into sprints +argument-hint: "" +alwaysShow: 
true +effort: high +handoffs: + - command: trello + description: Export backlog to Trello board + - command: traceability + description: Map user stories back to requirements +--- + +# Generate Product Backlog + +You are creating a **prioritised product backlog** for an ArcKit project, converting design artifacts into sprint-ready user stories. + +## User Input + +```text +$ARGUMENTS +``` + +## Arguments + +**SPRINT_LENGTH** (optional): Sprint duration (default: `2w`) + +- Valid: `1w`, `2w`, `3w`, `4w` + +**SPRINTS** (optional): Number of sprints to plan (default: `8`) + +- Generates sprint plan for first N sprints + +**VELOCITY** (optional): Team velocity in story points per sprint (default: `20`) + +- Adjusts sprint capacity planning + +**FORMAT** (optional): Output formats (default: `markdown`) + +- Valid: `markdown`, `csv`, `json`, `all` + +**PRIORITY** (optional): Prioritization approach (default: `multi`) + +- `moscow` - MoSCoW only +- `risk` - Risk-based only +- `value` - Value-based only +- `dependency` - Dependency-based only +- `multi` - Multi-factor (recommended) + +--- + +## What This Command Does + +Scans all ArcKit artifacts and automatically: + +1. **Converts requirements to user stories** + - Business Requirements (BR-xxx) → Epics + - Functional Requirements (FR-xxx) → User Stories (GDS format) + - Non-Functional Requirements (NFR-xxx) → Technical Tasks + - Integration Requirements (INT-xxx) → Integration Stories + - Data Requirements (DR-xxx) → Data Tasks + +2. **Generates GDS-compliant user stories** + + ```text + As a [persona] + I want [capability] + So that [goal] + + Acceptance Criteria: + - It's done when [measurable outcome 1] + - It's done when [measurable outcome 2] + ``` + +3. **Prioritizes using multi-factor scoring** + - MoSCoW priorities (Must/Should/Could/Won't) + - Risk-based (from risk register) + - Value-based (from business case) + - Dependency-based (technical foundation first) + +4. 
**Organizes into sprint plan** + - Respects dependencies (auth before features) + - Balances work types (60% features, 20% technical, 15% testing, 5% buffer) + - Creates realistic sprint backlogs + +5. **Maintains traceability** + - Requirements → Stories → Sprints → Code + - Links to HLD components + - Maps to epics and business goals + +**Output**: `projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.md` (+ optional CSV/JSON) + +**Time Savings**: 75%+ reduction (4-6 weeks → 3-5 days) + +--- + +## Process + +### Step 1: Identify Project Context + +> **Note**: The ArcKit Project Context hook has already detected all projects, artifacts, external documents, and global policies. Use that context below — no need to scan directories manually. + +Use the **ArcKit Project Context** (above) to find the project matching the user's input (by name or number). If no match, create a new project: + +1. Use Glob to list `projects/*/` directories and find the highest existing `NNN-*` number +2. Calculate the next number, zero-padded to 3 digits (e.g., `002`); if no numbered projects exist yet, use `001` +3. Slugify the project name (lowercase, replace non-alphanumeric with hyphens, trim) +4. Use the Write tool to create `projects/{NNN}-{slug}/README.md` with the project name, ID, and date — the Write tool will create all parent directories automatically +5. Also create `projects/{NNN}-{slug}/external/README.md` with a note to place external reference documents here +6.
Set `PROJECT_ID` = the 3-digit number, `PROJECT_PATH` = the new directory path + +Extract project metadata: + +- Project name +- Current phase (from artifacts) +- Team size (if documented) + +### Step 2: Read existing artifacts from the project context + +**MANDATORY** (warn if missing): + +- **REQ** (Requirements) — primary source + - Extract: All BR/FR/NFR/INT/DR requirement IDs, descriptions, priorities, acceptance criteria + - If missing: warn user to run `/arckit:requirements` first — backlog is derived from requirements + +**RECOMMENDED** (read if available, note if missing): + +- **STKE** (Stakeholder Analysis) + - Extract: User personas for "As a..." statements, stakeholder priorities +- **RISK** (Risk Register) + - Extract: Risk priorities for risk-based prioritization, security threats +- **SOBC** (Business Case) + - Extract: Value priorities, ROI targets for value-based prioritization +- **PRIN** (Architecture Principles, in 000-global) + - Extract: Quality standards for Definition of Done +- **HLDR** (High-Level Design Review) or **DLDR** (Detailed Design Review) + - Extract: Component names and responsibilities for story mapping +- HLD/DLD in `projects/{project-dir}/vendors/*/hld-v*.md` or `dld-v*.md` — Vendor designs + - Extract: Component mapping, detailed component info + +**OPTIONAL** (read if available, skip silently if missing): + +- **DPIA** (Data Protection Impact Assessment) + - Extract: Privacy-related tasks and constraints +- `test-strategy.md` — Test requirements (optional external document) + - Extract: Test types and coverage needs + +### Step 2b: Read external documents and policies + +- Read any **external documents** listed in the project context (`external/` files) — extract existing user stories, velocity data, sprint history, team capacity, component architecture from vendor HLD/DLD documents +- Read any **enterprise standards** in `projects/000-global/external/` — extract enterprise backlog standards, Definition of Ready/Done 
templates, cross-project estimation benchmarks +- If no external docs exist but they would improve backlog accuracy, ask: "Do you have any vendor design documents or existing backlog exports? I can read PDFs and images directly. Place them in `projects/{project-dir}/external/` and re-run, or skip." +- **Citation traceability**: When referencing content from external documents, follow the citation instructions in `${CLAUDE_PLUGIN_ROOT}/references/citation-instructions.md`. Place inline citation markers (e.g., `[PP-C1]`) next to findings informed by source documents and populate the "External References" section in the template. + +### Step 2c: Interactive Configuration + +Before generating the backlog, use the **AskUserQuestion** tool to gather user preferences. **Skip any question where the user has already specified their choice via the arguments above** (e.g., if they wrote `PRIORITY=risk`, do not ask about prioritization). + +**Gathering rules** (apply to all questions in this section): + +- Ask the most important question first; fill in secondary details from context or reasonable defaults. +- **Maximum 2 rounds of questions.** After that, pick the best option from available context. +- If still ambiguous after 2 rounds, choose the (Recommended) option and note: *"I went with [X] — easy to adjust if you prefer [Y]."* + +**Question 1** — header: `Priority`, multiSelect: false +> "Which prioritization approach should be used for the backlog?" + +- **Multi-factor (Recommended)**: Combines MoSCoW, risk, value, and dependency scoring for balanced prioritization +- **MoSCoW**: Must/Should/Could/Won't categorization only +- **Value vs Effort**: Prioritize by business value relative to implementation effort +- **Risk-based**: Prioritize highest-risk items first to reduce uncertainty early + +**Question 2** — header: `Format`, multiSelect: false +> "What output format do you need?" 
+ +- **All formats (Recommended)**: Markdown report + CSV (Jira/Azure DevOps import) + JSON (API integration) +- **Markdown only**: Standard report document +- **CSV only**: For direct import into Jira, Azure DevOps, or GitHub Projects +- **JSON only**: For programmatic access and custom integrations + +Apply the user's selections to the corresponding parameters throughout this command. For example, if they chose "MoSCoW", use only MoSCoW prioritization in Step 7 instead of the full multi-factor algorithm. If they chose "CSV only", generate only the CSV output in Step 13. + +### Step 3: Parse Requirements + +For each requirement in the requirements document (`ARC-*-REQ-*.md`), extract: + +**Business Requirements (BR-xxx)**: + +```markdown +**BR-001**: User Management +- Description: [text] +- Priority: Must Have +``` + +→ Becomes an **Epic** + +**Functional Requirements (FR-xxx)**: + +```markdown +**FR-001**: User Registration +- Description: [text] +- Priority: Must Have +- Acceptance Criteria: [list] +``` + +→ Becomes a **User Story** + +**Non-Functional Requirements (NFR-xxx)**: + +```markdown +**NFR-005**: Response time < 2 seconds +- Implementation: Caching layer +- Priority: Should Have +``` + +→ Becomes a **Technical Task** + +**Integration Requirements (INT-xxx)**: + +```markdown +**INT-003**: Integrate with Stripe API +- Priority: Must Have +``` + +→ Becomes an **Integration Story** + +**Data Requirements (DR-xxx)**: + +```markdown +**DR-002**: Store user payment history +- Priority: Should Have +``` + +→ Becomes a **Data Task** + +Create a mapping table: + +```text +Requirement ID → Story Type → Priority → Dependencies +``` + +### Step 4: Generate User Stories from Functional Requirements + +For **each FR-xxx**, create a user story in GDS format: + +#### 4.1: Identify the Actor (User Persona) + +Look in the stakeholder analysis (`ARC-*-STKE-*.md`) for user types: + +- Service users +- Administrators +- System operators +- API consumers +- Third-party 
integrators + +Match the FR to the appropriate persona based on: + +- Who performs this action? +- Who benefits from this capability? + +Examples: + +- FR about "user login" → "new user" or "registered user" +- FR about "admin dashboard" → "system administrator" +- FR about "API endpoint" → "API consumer" + +If no persona matches, use generic: + +- "user" for user-facing features +- "system" for backend/integration features +- "administrator" for admin features + +#### 4.2: Extract the Action (I want...) + +From the FR description, identify the core capability: + +- **Action verbs**: create, view, update, delete, process, integrate, export, import, search, filter, etc. +- **Object**: what is being acted upon + +Examples: + +- FR: "System shall allow users to register" → "create an account" +- FR: "System shall process payments" → "pay with my credit card" +- FR: "System shall export reports to CSV" → "export my data as CSV" + +#### 4.3: Infer the Goal (So that...) + +Why does the user need this capability? Look for: + +1. Explicit goal in FR description +2. Parent BR description +3. Business case benefits +4. User needs from stakeholder analysis + +If goal not explicit, infer from context: + +- Registration → "access the service" +- Payment → "complete my transaction" +- Export → "analyze data offline" +- Notification → "stay informed of updates" + +#### 4.4: Generate Acceptance Criteria + +Convert FR's acceptance criteria to "It's done when..." 
format: + +**Original FR acceptance criteria**: + +```text +- Email verification required +- Password must be 8+ characters +- GDPR consent must be captured +``` + +**Convert to GDS format**: + +```text +Acceptance Criteria: +- It's done when email verification is sent within 1 minute +- It's done when password meets security requirements (8+ chars, special char) +- It's done when GDPR consent is captured and stored +- It's done when confirmation email is received +``` + +**Rules for acceptance criteria**: + +- Start with "It's done when..." +- Make measurable and testable +- Include success cases +- Include key error cases +- Reference NFRs (security, performance, compliance) +- Typically 3-6 criteria per story + +#### 4.5: Estimate Story Points + +Use Fibonacci sequence: **1, 2, 3, 5, 8, 13** + +**Estimation guidelines**: + +- **1 point**: Trivial, < 2 hours + - Config change + - Simple UI text update + - Add logging statement + +- **2 points**: Simple, half day + - Small API endpoint (GET with no logic) + - Basic UI form with validation + - Database query with simple filter + +- **3 points**: Moderate, 1 day + - API endpoint with business logic + - UI component with state management + - Database migration + - Integration with simple external API + +- **5 points**: Complex, 2-3 days + - Multi-step workflow + - Complex business logic + - UI feature with multiple components + - Integration with authentication + - Data migration script + +- **8 points**: Very complex, 1 week + - Major feature spanning multiple components + - Complex integration (payment gateway, SSO) + - Significant refactoring + - Multi-table data model + +- **13 points**: Epic-level, 2 weeks + - Too large - break down into smaller stories + - Example: "Build entire admin dashboard" + +**Factors that increase points**: + +- Multiple components involved (API + UI + database) +- Security requirements (authentication, encryption) +- Third-party integration (external APIs) +- Data migration or 
transformation +- Complex business logic +- Regulatory compliance (GDPR, PCI-DSS) +- Performance optimisation needed + +**Estimation algorithm**: + +```text +Base points = 3 (typical story) + +If FR involves: + + Multiple components: +2 + + Security/auth: +2 + + External integration: +2 + + Data migration: +2 + + Complex validation: +1 + + Performance requirements: +2 + + GDPR/compliance: +1 + +Total = Base + modifiers +Round to nearest Fibonacci number +Cap at 13 (break down if larger) +``` + +#### 4.6: Identify Component (from HLD) + +Map story to HLD component: + +- Read `vendors/{vendor}/hld-v*.md` for component list +- Match FR to component based on: + - Component responsibilities + - Component name (e.g., "User Service", "Payment Service") + - FR description keywords + +Example component mapping: + +```text +FR-001: User Registration → User Service +FR-005: Process Payment → Payment Service +FR-010: Send Email → Notification Service +FR-015: Generate Report → Reporting Service +``` + +If no HLD exists, infer component from FR: + +- Authentication/user features → "User Service" +- Payment features → "Payment Service" +- Data/reporting → "Data Service" +- Integrations → "Integration Service" + +#### 4.7: Create Technical Tasks + +Break down story into implementation tasks: + +**For a typical FR**, create 2-4 tasks: + +```text +Story-001: Create user account (8 points) + +Tasks: +- Task-001-A: Design user table schema (2 points) + - PostgreSQL schema with email, password_hash, created_at + - Add GDPR consent fields + - Create indexes on email + +- Task-001-B: Implement registration API endpoint (3 points) + - POST /api/users/register + - Email validation + - Password hashing (bcrypt) + - Return JWT token + +- Task-001-C: Implement email verification service (3 points) + - Generate verification token + - Send email via SendGrid + - Verify token endpoint + - Mark user as verified +``` + +**Task estimation**: + +- Tasks should sum to story points +- Typical split: 
30% database, 40% API, 30% UI
+- Add testing tasks if needed
+
+#### 4.8: Complete User Story Format
+
+**Final story structure**:
+
+```markdown
+### Story-{FR-ID}: {Short Title}
+
+**As a** {persona}
+**I want** {capability}
+**So that** {goal}
+
+**Acceptance Criteria**:
+- It's done when {measurable outcome 1}
+- It's done when {measurable outcome 2}
+- It's done when {measurable outcome 3}
+- It's done when {measurable outcome 4}
+
+**Technical Tasks**:
+- Task-{ID}-A: {task description} ({points} points)
+- Task-{ID}-B: {task description} ({points} points)
+- Task-{ID}-C: {task description} ({points} points)
+
+**Requirements Traceability**: {FR-xxx, NFR-xxx, etc.}
+**Component**: {from HLD}
+**Story Points**: {1,2,3,5,8,13}
+**Priority**: {Must Have | Should Have | Could Have | Won't Have}
+**Sprint**: {calculated in Step 8}
+**Dependencies**: {other story IDs that must be done first}
+```
+
+**Example - Complete Story**:
+
+```markdown
+### Story-001: Create user account
+
+**As a** new user
+**I want** to create an account with email and password
+**So that** I can access the service and save my preferences
+
+**Acceptance Criteria**:
+- It's done when I can enter email and password on registration form
+- It's done when email verification is sent within 1 minute
+- It's done when account is created after I verify my email
+- It's done when GDPR consent is captured and stored
+- It's done when invalid email shows error message
+- It's done when weak password shows strength requirements
+
+**Technical Tasks**:
+- Task-001-A: Design user table schema with GDPR fields (2 points)
+- Task-001-B: Implement POST /api/users/register endpoint (3 points)
+- Task-001-C: Implement email verification service using SendGrid (3 points)
+
+**Requirements Traceability**: FR-001, NFR-008 (GDPR), NFR-012 (Email)
+**Component**: User Service (from HLD)
+**Story Points**: 8
+**Priority**: Must Have
+**Sprint**: 1 (calculated)
+**Dependencies**: None (foundation story)
+```
+
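The estimation algorithm from Step 4.5 can be sketched in Python. This is a minimal illustration: the modifier flag names are assumptions of our own, not a fixed ArcKit schema.

```python
# Sketch of the Step 4.5 heuristic: base of 3 points plus modifiers,
# rounded to the nearest Fibonacci number and capped at 13.
# The flag names below are illustrative assumptions, not an ArcKit API.
FIBONACCI = [1, 2, 3, 5, 8, 13]

MODIFIERS = {
    "multiple_components": 2,
    "security_auth": 2,
    "external_integration": 2,
    "data_migration": 2,
    "complex_validation": 1,
    "performance": 2,
    "compliance": 1,
}


def estimate_points(flags: set[str]) -> int:
    """Return a Fibonacci story-point estimate for an FR with the given flags."""
    total = 3 + sum(MODIFIERS.get(flag, 0) for flag in flags)
    if total >= 13:
        return 13  # capped: a 13-point story should be broken down
    # Round to the nearest Fibonacci number, rounding up on ties.
    return min(FIBONACCI, key=lambda f: (abs(f - total), -f))
```

For example, an FR flagged with security and an external integration scores 3 + 2 + 2 = 7, which rounds to 8 points; anything reaching 13 is a signal to split the story, per the guidance above.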
+### Step 5: Generate Epics from Business Requirements + +For **each BR-xxx**, create an epic: + +#### 5.1: Epic Structure + +```markdown +## Epic {BR-ID}: {BR Title} + +**Business Requirement**: {BR-ID} +**Priority**: {Must Have | Should Have | Could Have} +**Business Value**: {High | Medium | Low} - {description from business case} +**Risk**: {Critical | High | Medium | Low} - {from risk register} +**Dependencies**: {other epic IDs that must be done first} +**Total Story Points**: {sum of all stories in epic} +**Estimated Duration**: {points / velocity} sprints + +**Description**: +{BR description from ARC-*-REQ-*.md} + +**Success Criteria**: +{BR acceptance criteria} + +**Stories in this Epic**: +{List all FR stories that map to this BR} + +--- +``` + +#### 5.2: Group Stories into Epics + +Use this mapping logic: + +1. **Explicit BR → FR mapping**: + - If FR references a BR (e.g., "Relates to BR-001"), group there + +2. **Semantic grouping**: + - User-related FRs → "User Management" epic + - Payment-related FRs → "Payment Processing" epic + - Integration FRs → "External Integrations" epic + +3. **HLD component grouping**: + - All stories for "User Service" → User Management epic + - All stories for "Payment Service" → Payment Processing epic + +**Example Epic**: + +```markdown +## Epic 1: User Management (BR-001) + +**Business Requirement**: BR-001 +**Priority**: Must Have +**Business Value**: High - Foundation for all user-facing features +**Risk**: Medium - GDPR compliance required +**Dependencies**: None (foundation epic) +**Total Story Points**: 34 +**Estimated Duration**: 2 sprints (at 20 points/sprint) + +**Description**: +System must provide comprehensive user management including registration, +authentication, profile management, and password reset. Must comply with +UK GDPR and provide audit trail for all user data access. 
+
+**Success Criteria**:
+- Users can create accounts with email verification
+- Users can login and logout securely
+- User sessions expire after 30 minutes of inactivity
+- Password reset functionality available
+- GDPR consent captured and audit trail maintained
+
+**Stories in this Epic**:
+1. Story-001: Create user account (8 points) - Sprint 1
+2. Story-002: User login (5 points) - Sprint 1
+3. Story-003: User logout (2 points) - Sprint 2
+4. Story-004: Password reset (5 points) - Sprint 2
+5. Story-005: Update user profile (3 points) - Sprint 2
+6. Story-006: Delete user account (5 points) - Sprint 2
+7. Story-007: View audit log (3 points) - Sprint 2
+8. Story-008: Export user data (GDPR) (3 points) - Sprint 2
+
+**Total**: 34 story points across 8 stories
+
+---
+```
+
+### Step 6: Create Technical Tasks from NFRs
+
+For **each NFR-xxx**, create a technical task:
+
+#### 6.1: Technical Task Structure
+
+```markdown
+### Task-{NFR-ID}: {Short Title}
+
+**Type**: Technical Task (NFR)
+**Requirement**: {NFR-ID}
+**Priority**: {Must Have | Should Have | Could Have}
+**Story Points**: {1,2,3,5,8,13}
+**Sprint**: {calculated in Step 8}
+
+**Description**:
+{What needs to be implemented to satisfy this NFR}
+
+**Acceptance Criteria**:
+- It's done when {measurable outcome 1}
+- It's done when {measurable outcome 2}
+- It's done when {measurable outcome 3}
+
+**Dependencies**: {stories/tasks that must exist first}
+**Component**: {affected component from HLD}
+```
+
+#### 6.2: NFR → Task Examples
+
+**Performance NFR**:
+
+```markdown
+### Task-NFR-005: Implement Redis caching layer
+
+**Type**: Technical Task (NFR)
+**Requirement**: NFR-005 (Response time < 2 seconds P95)
+**Priority**: Should Have
+**Story Points**: 5
+**Sprint**: 2
+
+**Description**:
+Implement Redis caching to meet response time requirements. Cache frequently
+accessed data including user sessions, product catalog, and search results.
+ +**Acceptance Criteria**: +- It's done when Redis is deployed and configured in all environments +- It's done when cache hit rate > 80% for user sessions +- It's done when P95 response time < 2 seconds for cached endpoints +- It's done when cache invalidation strategy is implemented +- It's done when cache monitoring dashboard shows hit/miss rates + +**Dependencies**: Task-001-A (database schema must exist), Story-002 (login creates sessions) +**Component**: Infrastructure, User Service, Product Service +``` + +**Security NFR**: + +```markdown +### Task-NFR-012: Implement rate limiting + +**Type**: Technical Task (NFR) +**Requirement**: NFR-012 (DDoS protection) +**Priority**: Must Have +**Story Points**: 3 +**Sprint**: 1 + +**Description**: +Implement API rate limiting to prevent abuse and DDoS attacks. +Limit: 100 requests per minute per IP, 1000 per hour. + +**Acceptance Criteria**: +- It's done when rate limiter middleware is implemented +- It's done when 429 status code returned when limit exceeded +- It's done when rate limits vary by endpoint (stricter for auth) +- It's done when rate limit headers included in responses +- It's done when rate limit bypass available for known good IPs + +**Dependencies**: Task-001-B (API must exist) +**Component**: API Gateway +``` + +**Compliance NFR**: + +```markdown +### Task-NFR-008: Implement GDPR audit logging + +**Type**: Technical Task (NFR) +**Requirement**: NFR-008 (GDPR compliance) +**Priority**: Must Have +**Story Points**: 5 +**Sprint**: 2 + +**Description**: +Implement comprehensive audit logging for all user data access to comply +with UK GDPR Article 30 (records of processing activities). 
+
+**Acceptance Criteria**:
+- It's done when all user data access is logged (who, what, when, why)
+- It's done when logs stored immutably (append-only)
+- It's done when logs retained for 7 years
+- It's done when logs available for GDPR data subject access requests
+- It's done when logs include IP address, user agent, action type
+
+**Dependencies**: Task-001-A (user table must exist), Story-001 (users must exist)
+**Component**: Audit Service, User Service
+```
+
+### Step 7: Prioritisation
+
+Apply the **multi-factor prioritisation algorithm**:
+
+#### 7.1: Calculate Priority Score
+
+For each story/task, calculate:
+
+```text
+Priority Score = (
+  MoSCoW_Weight * 40% +
+  Risk_Weight * 20% +
+  Value_Weight * 20% +
+  Dependency_Weight * 20%
+)
+```
+
+**MoSCoW Weight**:
+
+- Must Have = 4
+- Should Have = 3
+- Could Have = 2
+- Won't Have = 1
+
+**Risk Weight** (from `ARC-*-RISK-*.md`):
+
+- Critical risk = 4
+- High risk = 3
+- Medium risk = 2
+- Low risk = 1
+
+**Value Weight** (from `ARC-*-SOBC-*.md`):
+
+- High ROI/impact = 4
+- Medium ROI/impact = 3
+- Low ROI/impact = 2
+- No ROI data = 1
+
+**Dependency Weight**:
+
+- Blocks many items (> 5) = 4
+- Blocks some items (3-5) = 3
+- Blocks few items (1-2) = 2
+- Blocks nothing = 1
+
+**Example calculation**:
+
+```text
+Story-001: Create user account
+  MoSCoW: Must Have = 4
+  Risk: Medium (GDPR) = 2
+  Value: High (foundation) = 4
+  Dependency: Blocks many (all user features) = 4
+
+Priority Score = (4 * 0.4) + (2 * 0.2) + (4 * 0.2) + (4 * 0.2)
+               = 1.6 + 0.4 + 0.8 + 0.8
+               = 3.6
+
+Story-025: Export user preferences
+  MoSCoW: Could Have = 2
+  Risk: Low = 1
+  Value: Low = 2
+  Dependency: Blocks nothing = 1
+
+Priority Score = (2 * 0.4) + (1 * 0.2) + (2 * 0.2) + (1 * 0.2)
+               = 0.8 + 0.2 + 0.4 + 0.2
+               = 1.6
+```
+
+#### 7.2: Sort Backlog
+
+Sort all stories/tasks by Priority Score (descending):
+
+```text
+Story-001: Create user account (3.6)
+Story-002: User login (3.4)
+Task-NFR-012: Rate limiting (3.2)
+Story-015:
Connect to Stripe (3.0)
+Story-016: Process payment (3.0)
+...
+Story-025: Export preferences (1.6)
+```
+
+#### 7.3: Dependency Enforcement
+
+After sorting by priority, adjust for **mandatory dependencies**:
+
+**Foundation Stories** (always Sprint 1):
+
+- Authentication (user registration, login)
+- Database setup
+- CI/CD pipeline
+- Testing framework
+
+**Dependency Rules**:
+
+1. **Technical foundation before features**:
+   - Auth system before user-facing features
+   - Database before data operations
+   - API gateway before API endpoints
+
+2. **Integration points before dependent features**:
+   - Stripe API integration before payment UI
+   - Email service before notifications
+   - Search service before search features
+
+3. **Parent stories before child stories**:
+   - "Create user account" before "Update user profile"
+   - "Process payment" before "View payment history"
+
+**Dependency adjustment algorithm**:
+
+```text
+For each story S in sorted backlog:
+  If S has dependencies D1, D2, ..., Dn:
+    For each dependency Di:
+      If Di is not scheduled yet or scheduled after S:
+        Move Di before S
+        Recursively check Di's dependencies
+```
+
+**Example - Before dependency adjustment**:
+
+```text
+Sprint 1:
+  Story-016: Process payment (3.0) - depends on Story-015
+
+Sprint 2:
+  Story-015: Connect to Stripe (3.0)
+```
+
+**After dependency adjustment**:
+
+```text
+Sprint 1:
+  Story-015: Connect to Stripe (3.0) - no dependencies
+
+Sprint 2:
+  Story-016: Process payment (3.0) - depends on Story-015 ✓
+```
+
+### Step 8: Sprint Planning
+
+Organise stories into sprints with capacity planning:
+
+#### 8.1: Sprint Parameters
+
+**Default values** (overridden by arguments):
+
+- Velocity: 20 story points per sprint
+- Sprint length: 2 weeks
+- Number of sprints: 8
+
+**Capacity allocation per sprint**:
+
+- 60% Feature stories (12 points)
+- 20% Technical tasks (4 points)
+- 15% Testing tasks (3 points)
+- 5% Bug buffer (1 point)
+
+#### 8.2: Sprint 1 - Foundation Sprint
+
+**Sprint 1 is special** - always includes:
+
+**Must-have foundation items**:
+
+1. User authentication (registration + login)
+2. Database setup
+3. CI/CD pipeline
+4. Testing framework
+5. Basic security (rate limiting, CORS)
+
+**Example Sprint 1**:
+
+```markdown
+## Sprint 1: Foundation (Weeks 1-2)
+
+**Velocity**: 20 story points
+**Theme**: Technical foundation and core infrastructure
+
+### Must Have Stories (13 points):
+- Story-001: Create user account (8 points) [Epic: User Management]
+- Story-002: User login (5 points) [Epic: User Management]
+  → 1 point over the 12-point feature allocation; Story-003 moved to Sprint 2 to fit capacity
+
+### Technical Tasks (4 points):
+- Task-DB-001: Setup PostgreSQL database (2 points) [Epic: Infrastructure]
+- Task-CI-001: Setup CI/CD pipeline with GitHub Actions (2 points) [Epic: DevOps]
+
+### Testing Tasks (1 point):
+- Task-TEST-001: Setup Jest testing framework (1 point) [Epic: Testing]
+- Test-001: Unit tests for user registration (included in Story-001)
+- Test-002: Integration test for login flow (included in Story-002)
+
+### Security Tasks (1 point):
+- Task-NFR-012: Implement rate limiting (1 point) [Epic: Security]
+
+**Total Allocated**: 19 of 20 points (1 point held as bug buffer)
+
+### Sprint Goals:
+✅ Users can create accounts and login
+✅ Database deployed to dev/staging/prod
+✅ CI/CD pipeline operational (deploy on merge)
+✅ Unit testing framework ready
+✅ Basic security controls in place
+
+### Dependencies Satisfied:
+✅ None (foundation sprint)
+
+### Dependencies Created for Sprint 2:
+→ User authentication (Story-001, Story-002)
+→ Database schema (Task-DB-001)
+→ CI/CD (Task-CI-001)
+→ Testing (Task-TEST-001)
+
+### Risks:
+⚠️ GDPR compliance review needed for Story-001
+⚠️ Email service selection (SendGrid vs AWS SES) for Story-001
+⚠️ Team may be unfamiliar with CI/CD tools
+
+### Definition of Done:
+- [ ] All code reviewed and approved
+- [ ] Unit tests written (80% coverage minimum)
+- [ ] Integration tests written for critical paths
+- [ ] Security scan
passed (no critical/high issues) +- [ ] Deployed to dev environment +- [ ] Demo-able to stakeholders at sprint review +- [ ] Documentation updated (API docs, README) +``` + +#### 8.3: Subsequent Sprints (2-N) + +For each sprint after Sprint 1: + +**Step 1: Calculate available capacity** + +```text +Total capacity = Velocity (default 20 points) +Feature capacity = 60% = 12 points +Technical capacity = 20% = 4 points +Testing capacity = 15% = 3 points +Buffer = 5% = 1 point +``` + +**Step 2: Select stories by priority** + +Starting from top of prioritised backlog: + +```text +For each unscheduled story S (sorted by priority): + If S's dependencies are all scheduled in earlier sprints: + If S's points <= remaining_capacity_for_type: + Add S to current sprint + Reduce remaining capacity + Else: + Try next story (S won't fit) + Else: + Skip S (dependencies not met) + +Continue until sprint is full or no more stories fit +``` + +**Step 3: Balance work types** + +Ensure sprint has mix of: + +- Feature stories (user-facing value) +- Technical tasks (infrastructure, NFRs) +- Testing tasks (quality) + +If sprint has too many of one type, swap with next sprint. 
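The greedy selection loop above can be sketched in Python. This is a minimal illustration, assuming an ad-hoc story shape (`id`, `deps`, `points`, `type`) rather than an ArcKit data model:

```python
# Sketch of the story-selection loop: greedily fill one sprint from the
# prioritised backlog, honouring dependencies and per-type capacity.
# The dict fields ("id", "deps", "points", "type") are assumptions.
def fill_sprint(backlog, scheduled, capacity):
    """backlog: stories sorted by descending priority score;
    scheduled: ids of stories completed in earlier sprints;
    capacity: remaining points per work type, reduced as stories fit."""
    sprint = []
    for story in backlog:
        if story["id"] in scheduled:
            continue  # already planned in an earlier sprint
        if any(dep not in scheduled for dep in story["deps"]):
            continue  # a dependency is not yet scheduled - skip for now
        if story["points"] <= capacity.get(story["type"], 0):
            sprint.append(story["id"])
            capacity[story["type"]] -= story["points"]
    return sprint
```

Run against the Stripe example from Step 7.3, a payment story that depends on an unscheduled integration story is skipped, while the integration story itself fits and consumes feature capacity.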
+
+**Step 4: Validate dependencies**
+
+For each story in sprint:
+
+- Check all dependencies are in earlier sprints
+- If dependency missing, move it to current sprint (adjust capacity)
+
+**Example Sprint 2**:
+
+```markdown
+## Sprint 2: Core Features (Weeks 3-4)
+
+**Velocity**: 20 story points
+**Theme**: Payment integration and core workflows
+
+### Feature Stories (13 points):
+- Story-015: Connect to Stripe API (8 points) [Epic: Payment Processing]
+  - Dependencies: ✅ Story-001 (users must be authenticated)
+- Story-004: Password reset (5 points) [Epic: User Management]
+  - Dependencies: ✅ Story-001, Story-002
+  → 1 point over the 12-point feature allocation
+
+### Technical Tasks (5 points):
+- Task-NFR-005: Implement Redis caching layer (3 points) [Epic: Performance]
+  - Dependencies: ✅ Task-DB-001 (database must exist)
+- Task-NFR-008: GDPR audit logging (2 points) [Epic: Compliance]
+  - Dependencies: ✅ Story-001 (users must exist)
+  → 1 point over the 4-point technical allocation (offset by the lighter testing load)
+
+### Testing Tasks (2 points):
+- Task-TEST-002: Setup integration tests (Supertest) (2 points)
+- Test-015: Stripe integration tests (included in Story-015)
+
+**Total Allocated**: 20 points (13+5+2)
+
+### Sprint Goals:
+✅ Stripe payment integration operational
+✅ Password reset workflow complete
+✅ Caching layer improves performance
+✅ GDPR audit trail in place
+
+### Dependencies Satisfied:
+✅ Sprint 1: User authentication, database, CI/CD
+
+### Dependencies Created for Sprint 3:
+→ Stripe integration (Story-015) - needed for payment workflows
+→ Caching infrastructure (Task-NFR-005) - improves all features
+
+### Risks:
+⚠️ Stripe sandbox environment access needed
+⚠️ PCI-DSS compliance requirements for Story-015
+⚠️ Redis cluster setup for production
+
+### Testing Focus:
+- Integration tests for Stripe API (webhooks, payments)
+- GDPR audit log verification
+- Cache invalidation testing
+```
+
+#### 8.4: Generate All Sprint Plans
+
+Continue for all N sprints (default 8):
+
+```markdown
+## Sprint 3: Feature Build (Weeks 5-6)
+[... sprint details ...]
+
+## Sprint 4: Integration (Weeks 7-8)
+[... sprint details ...]
+
+## Sprint 5: Advanced Features (Weeks 9-10)
+[... sprint details ...]
+
+## Sprint 6: Security Hardening (Weeks 11-12)
+[... sprint details ...]
+
+## Sprint 7: Performance Optimisation (Weeks 13-14)
+[... sprint details ...]
+
+## Sprint 8: UAT Preparation (Weeks 15-16)
+[... sprint details ...]
+
+## Future Sprints (Beyond Week 16)
+
+**Remaining Backlog**: {X} story points
+**Estimated Duration**: {X / velocity} sprints
+
+**High Priority Items for Sprint 9+**:
+- Story-045: Advanced reporting (8 points)
+- Story-052: Mobile app integration (13 points)
+- Task-NFR-025: Multi-region deployment (8 points)
+[... list remaining high-priority items ...]
+```
+
+### Step 9: Generate Traceability Matrix
+
+Create comprehensive traceability table:
+
+```markdown
+## Appendix A: Requirements Traceability Matrix
+
+| Requirement | Type | User Stories | Sprint | Status | Notes |
+|-------------|------|-------------|--------|--------|-------|
+| BR-001 | Business | Story-001, Story-002, Story-003, Story-004, Story-005, Story-006, Story-007, Story-008 | 1-2 | Planned | User Management epic |
+| FR-001 | Functional | Story-001 | 1 | Planned | User registration |
+| FR-002 | Functional | Story-002 | 1 | Planned | User login |
+| FR-004 | Functional | Story-004 | 2 | Planned | Password reset |
+| FR-005 | Functional | Story-016 | 2 | Planned | Process payment |
+| NFR-005 | Non-Functional | Task-NFR-005 | 2 | Planned | Caching for performance |
+| NFR-008 | Non-Functional | Task-NFR-008 | 2 | Planned | GDPR audit logging |
+| NFR-012 | Non-Functional | Task-NFR-012 | 1 | Planned | Rate limiting |
+| INT-003 | Integration | Story-015 | 2 | Planned | Stripe integration |
+| DR-002 | Data | Task-DR-002 | 3 | Planned | Payment history schema |
+[... all requirements mapped ...]
+
+**Coverage Summary**:
+- Total Requirements: {N}
+- Mapped to Stories: {N} (100%)
+- Scheduled in Sprints 1-8: {N} ({X}%)
+- Remaining for Future Sprints: {N} ({X}%)
+```
+
+### Step 9b: Load Mermaid Syntax Reference
+
+Read `${CLAUDE_PLUGIN_ROOT}/skills/mermaid-syntax/references/flowchart.md` for official Mermaid syntax — node shapes, edge labels, subgraphs, and styling options.
+
+### Step 10: Generate Dependency Graph
+
+Create visual dependency representation:
+
+````markdown
+## Appendix B: Dependency Graph
+
+### Sprint 1 → Sprint 2 Dependencies
+
+```mermaid
+flowchart TD
+    subgraph S1[Sprint 1 - Foundation]
+        S001[Story-001: User Registration]
+        S002[Story-002: User Login]
+        TDB[Task-DB-001: Database Setup]
+        TCI[Task-CI-001: CI/CD Pipeline]
+    end
+
+    subgraph S2[Sprint 2]
+        S015[Story-015: Needs authenticated users]
+        S003[Story-003: Needs user accounts]
+        TNFR5[Task-NFR-005: Needs database for caching]
+        TNFR8[Task-NFR-008: Needs database for audit log]
+    end
+
+    subgraph Future[All Future Work]
+        FW[Deploy mechanism required]
+    end
+
+    S001 -->|blocks| S015
+    S001 -->|blocks| S003
+    S002 -->|blocks| S015
+    TDB -->|blocks| TNFR5
+    TDB -->|blocks| TNFR8
+    TCI -->|blocks| FW
+
+    style S1 fill:#E3F2FD
+    style S2 fill:#FFF3E0
+    style Future fill:#E8F5E9
+```
+
+### Sprint 2 → Sprint 3 Dependencies
+
+```mermaid
+flowchart TD
+    subgraph S2[Sprint 2 - Core Features]
+        S015[Story-015: Stripe Integration]
+        NFR5[Task-NFR-005: Redis Caching]
+        NFR8[Task-NFR-008: GDPR Audit Log]
+    end
+
+    subgraph S3[Sprint 3]
+        S016[Story-016: Payment processing needs Stripe]
+    end
+
+    subgraph S4[Sprint 4]
+        S025[Story-025: Payment history needs payments]
+        S030[Story-030: GDPR data export]
+    end
+
+    subgraph S3Plus[Sprint 3+]
+        ALL[All features benefit from caching]
+    end
+
+    S015 -->|blocks| S016
+    S015 -->|blocks| S025
+    NFR5 -->|improves| ALL
+    NFR8 -->|enables| S030
+
+    style S2 fill:#E3F2FD
+    style S3 fill:#FFF3E0
+    style S4 fill:#E8F5E9
+    style S3Plus fill:#F3E5F5
+```
+
+[... continue for all sprints ...]
+
+````
+
+### Step 11: Generate Epic Overview
+
+Create epic summary table:
+
+```markdown
+## Appendix C: Epic Overview
+
+| Epic ID | Epic Name | Priority | Stories | Points | Sprints | Status | Dependencies |
+|---------|-----------|----------|---------|--------|---------|--------|--------------|
+| EPIC-001 | User Management | Must Have | 8 | 34 | 1-2 | Planned | None |
+| EPIC-002 | Payment Processing | Must Have | 12 | 56 | 2-4 | Planned | EPIC-001 |
+| EPIC-003 | Stripe Integration | Must Have | 6 | 28 | 2-3 | Planned | EPIC-001 |
+| EPIC-004 | Reporting | Should Have | 10 | 42 | 5-6 | Planned | EPIC-002 |
+| EPIC-005 | Admin Dashboard | Should Have | 8 | 35 | 4-5 | Planned | EPIC-001 |
+| EPIC-006 | Email Notifications | Should Have | 5 | 18 | 3-4 | Planned | EPIC-001 |
+| EPIC-007 | Mobile API | Could Have | 7 | 29 | 7-8 | Planned | EPIC-002 |
+| EPIC-008 | Advanced Search | Could Have | 6 | 24 | 6-7 | Planned | EPIC-004 |
+[... all epics ...]
+ +**Total**: {N} epics, {N} stories, {N} story points +``` + +### Step 12: Generate Definition of Done + +Extract from `ARC-000-PRIN-*.md` or use defaults: + +```markdown +## Appendix D: Definition of Done + +Every story must meet these criteria before marking "Done": + +### Code Quality +- [ ] Code reviewed by 2+ team members +- [ ] No merge conflicts +- [ ] Follows coding standards (linting passed) +- [ ] No code smells or technical debt introduced + +### Testing +- [ ] Unit tests written (minimum 80% coverage) +- [ ] Integration tests written for API endpoints +- [ ] Manual testing completed +- [ ] Acceptance criteria verified and signed off + +### Security +- [ ] Security scan passed (no critical/high vulnerabilities) +- [ ] OWASP Top 10 checks completed +- [ ] Secrets not hardcoded (use environment variables) +- [ ] Authentication and authorisation tested + +### Performance +- [ ] Performance tested (meets NFR thresholds) +- [ ] No N+1 query issues +- [ ] Caching implemented where appropriate +- [ ] Response times < 2 seconds (P95) + +### Compliance +- [ ] GDPR requirements met (if handling user data) +- [ ] Accessibility tested (WCAG 2.1 AA) +- [ ] Audit logging in place (if required) + +### Documentation +- [ ] API documentation updated (OpenAPI/Swagger) +- [ ] Code comments for complex logic +- [ ] README updated if needed +- [ ] Runbook updated (if operational changes) + +### Deployment +- [ ] Deployed to dev environment +- [ ] Deployed to staging environment +- [ ] Database migrations tested (if applicable) +- [ ] Configuration updated in all environments + +### Stakeholder +- [ ] Demoed to Product Owner at sprint review +- [ ] Acceptance criteria validated by PO +- [ ] User feedback incorporated (if available) + +--- + +**Note**: This DoD applies to all stories. Additional criteria may be added per story based on specific requirements. 
+``` + +### Step 13: Generate Output Files + +#### 13.1: Primary Output - ARC-*-BKLG-*.md + +Create comprehensive markdown file at `projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.md`: + +```markdown +# Product Backlog: {Project Name} + +**Generated**: {date} +**Project**: {project-name} +**Phase**: Beta (Implementation) +**Team Velocity**: {velocity} points/sprint +**Sprint Length**: {sprint_length} +**Total Sprints Planned**: {sprints} + +--- + +## Executive Summary + +**Total Stories**: {N} +**Total Epics**: {N} +**Total Story Points**: {N} +**Estimated Duration**: {N / velocity} sprints ({N} weeks) + +### Priority Breakdown +- Must Have: {N} stories ({N} points) - {X}% +- Should Have: {N} stories ({N} points) - {X}% +- Could Have: {N} stories ({N} points) - {X}% + +### Epic Breakdown +{List all epics with point totals} + +--- + +## How to Use This Backlog + +### For Product Owners: +1. Review epic priorities - adjust based on business needs +2. Refine story acceptance criteria before sprint planning +3. Validate user stories with actual users +4. Adjust sprint sequence based on stakeholder priorities + +### For Development Teams: +1. Review stories in upcoming sprint (Sprint Planning) +2. Break down stories into tasks if needed +3. Estimate effort using team velocity +4. Identify technical blockers early +5. Update story status as work progresses + +### For Scrum Masters: +1. Track velocity after each sprint +2. Adjust future sprint loading based on actual velocity +3. Monitor dependency chains +4. Escalate blockers early +5. 
Facilitate backlog refinement sessions
+
+### Backlog Refinement:
+- **Weekly**: Review and refine next 2 sprints
+- **Bi-weekly**: Groom backlog beyond 2 sprints
+- **Monthly**: Reassess epic priorities
+- **Per sprint**: Update based on completed work and learnings
+
+---
+
+## Epics
+
+{Generate all epic sections from Step 5}
+
+---
+
+## Prioritised Backlog
+
+{Generate all user stories from Step 4, sorted by priority from Step 7}
+
+---
+
+## Sprint Plan
+
+{Generate all sprint plans from Step 8}
+
+---
+
+## Appendices
+
+{Include all appendices from Steps 9-12}
+
+---
+
+**Note**: This backlog was auto-generated from ArcKit artifacts. Review and refine with your team before sprint planning begins. Story points are estimates - re-estimate based on your team's velocity and capacity.
+
+---
+
+**End of Backlog**
+```
+
+#### 13.2: CSV Export (if requested)
+
+Create `backlog.csv` for Jira/Azure DevOps import:
+
+```csv
+Type,Key,Epic,Summary,Description,Acceptance Criteria,Priority,Story Points,Sprint,Status,Component,Requirements
+Epic,EPIC-001,,"User Management","Foundation epic for user management including registration, authentication, profile management",,Must Have,34,1-2,To Do,User Service,BR-001
+Story,STORY-001,EPIC-001,"Create user account","As a new user I want to create an account so that I can access the service","It's done when I can enter email and password; It's done when email verification is sent; It's done when account is created after verification; It's done when GDPR consent is recorded",Must Have,8,1,To Do,User Service,"FR-001, NFR-008, NFR-012"
+Task,TASK-001-A,STORY-001,"Design user table schema","PostgreSQL schema for users table with email, password_hash, GDPR consent fields",,Must Have,2,1,To Do,User Service,FR-001
+Task,TASK-001-B,STORY-001,"Implement registration API","POST /api/users/register endpoint with email validation and password hashing",,Must Have,3,1,To Do,User Service,FR-001
+[... all items ...]
+```
+
+#### 13.3: JSON Export (if requested)
+
+Create `backlog.json` for programmatic access:
+
+```json
+{
+  "project": "{project-name}",
+  "generated": "{ISO date}",
+  "team_velocity": 20,
+  "sprint_length": "2 weeks",
+  "total_sprints": 8,
+  "summary": {
+    "total_stories": 87,
+    "total_epics": 12,
+    "total_points": 342,
+    "must_have_points": 180,
+    "should_have_points": 98,
+    "could_have_points": 64
+  },
+  "epics": [
+    {
+      "id": "EPIC-001",
+      "title": "User Management",
+      "business_requirement": "BR-001",
+      "priority": "Must Have",
+      "points": 34,
+      "sprints": "1-2",
+      "stories": ["STORY-001", "STORY-002", "STORY-003", "..."]
+    }
+  ],
+  "stories": [
+    {
+      "id": "STORY-001",
+      "epic": "EPIC-001",
+      "title": "Create user account",
+      "as_a": "new user",
+      "i_want": "to create an account",
+      "so_that": "I can access the service",
+      "acceptance_criteria": [
+        "It's done when I can enter email and password",
+        "It's done when email verification is sent",
+        "It's done when account is created after verification",
+        "It's done when GDPR consent is recorded"
+      ],
+      "priority": "Must Have",
+      "story_points": 8,
+      "sprint": 1,
+      "status": "To Do",
+      "requirements": ["FR-001", "NFR-008", "NFR-012"],
+      "component": "User Service",
+      "dependencies": [],
+      "tasks": [
+        {
+          "id": "TASK-001-A",
+          "title": "Design user table schema",
+          "points": 2
+        },
+        {
+          "id": "TASK-001-B",
+          "title": "Implement registration API",
+          "points": 3
+        },
+        {
+          "id": "TASK-001-C",
+          "title": "Implement email verification",
+          "points": 3
+        }
+      ]
+    }
+  ],
+  "sprints": [
+    {
+      "number": 1,
+      "duration": "Weeks 1-2",
+      "theme": "Foundation",
+      "velocity": 20,
+      "stories": ["STORY-001", "STORY-002"],
+      "tasks": ["TASK-DB-001", "TASK-CI-001"],
+      "goals": [
+        "Users can create accounts and login",
+        "Database deployed to all environments",
+        "CI/CD pipeline operational",
+        "Unit testing framework ready"
+      ],
+      "dependencies_satisfied": [],
+      "dependencies_created": ["User auth", "Database", "CI/CD"],
+      "risks": ["GDPR compliance review needed", "Email service selection"]
+    }
+  ],
+  "traceability": [
+    {
+      "requirement": "FR-001",
+      "type": "Functional",
+      "stories": ["STORY-001"],
+      "sprint": 1,
+      "status": "Planned"
+    }
+  ]
+}
+```
+
+---
+
+**CRITICAL - Auto-Populate Document Control Fields**:
+
+Before completing the document, populate ALL document control fields in the header:
+
+**Construct Document ID**:
+
+- **Document ID**: `ARC-{PROJECT_ID}-BKLG-v{VERSION}` (e.g., `ARC-001-BKLG-v1.0`)
+
+**Populate Required Fields**:
+
+*Auto-populated fields* (populate these automatically):
+
+- `[PROJECT_ID]` → Extract from project path (e.g., "001" from "projects/001-project-name")
+- `[VERSION]` → "1.0" (or increment if a previous version exists)
+- `[DATE]` / `[YYYY-MM-DD]` → Current date in YYYY-MM-DD format
+- `[DOCUMENT_TYPE_NAME]` → "Product Backlog"
+- `ARC-[PROJECT_ID]-BKLG-v[VERSION]` → Construct using the format above
+- `[COMMAND]` → "arckit.backlog"
+
+*User-provided fields* (extract from project metadata or user input):
+
+- `[PROJECT_NAME]` → Full project name from project metadata or user input
+- `[OWNER_NAME_AND_ROLE]` → Document owner (prompt user if not in metadata)
+- `[CLASSIFICATION]` → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user)
+
+*Calculated fields*:
+
+- `[YYYY-MM-DD]` for Review Date → Current date + 30 days
+
+*Pending fields* (leave as [PENDING] until manually updated):
+
+- `[REVIEWER_NAME]` → [PENDING]
+- `[APPROVER_NAME]` → [PENDING]
+- `[DISTRIBUTION_LIST]` → Default to "Project Team, Architecture Team" or [PENDING]
+
+**Populate Revision History**:
+
+```markdown
+| 1.0 | {DATE} | ArcKit AI | Initial creation from `/arckit:backlog` command | [PENDING] | [PENDING] |
+```
+
+**Populate Generation Metadata Footer**:
+
+The footer should be populated with:
+
+```markdown
+**Generated by**: ArcKit `/arckit:backlog` command
+**Generated on**: {DATE} {TIME} GMT
+**ArcKit Version**: {ARCKIT_VERSION}
+**Project**: {PROJECT_NAME} (Project {PROJECT_ID})
+**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"]
+**Generation Context**: [Brief note about source documents used]
+```
+
+---
+
+Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **BKLG** per-type checks pass. Fix any failures before proceeding.
+
+### Step 14: Final Output
+
+Write all files to `projects/{project-dir}/`:
+
+**Always create**:
+
+- `ARC-{PROJECT_ID}-BKLG-v1.0.md` - Primary output
+
+**Create if FORMAT includes**:
+
+- `ARC-{PROJECT_ID}-BKLG-v1.0.csv` - If FORMAT=csv or FORMAT=all
+- `ARC-{PROJECT_ID}-BKLG-v1.0.json` - If FORMAT=json or FORMAT=all
+
+**CRITICAL - Show Summary Only**:
+After writing the file(s), show ONLY the confirmation message below. Do NOT output the full backlog content in your response. The backlog document can be 1,000+ lines and may exceed token limits.
+
+**Confirmation message**:
+
+```text
+✅ Product backlog generated successfully!
+
+📁 Output files:
+  - projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.md ({N} KB)
+  - projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.csv ({N} KB)
+  - projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.json ({N} KB)
+
+📊 Backlog Summary:
+  - Total stories: {N}
+  - Total epics: {N}
+  - Total story points: {N}
+  - Estimated duration: {N} sprints ({N} weeks at {velocity} points/sprint)
+
+🎯 Next Steps:
+  1. Review backlog with your team
+  2. Refine acceptance criteria and story points
+  3. Validate dependencies and priorities
+  4. Begin sprint planning for Sprint 1
+  5. Track actual velocity and adjust future sprints
+
+⚠️ Important: Story point estimates are AI-generated. Your team should re-estimate based on actual velocity and capacity.
+
+📚 Integration:
+  - Import ARC-{PROJECT_ID}-BKLG-v1.0.csv to Jira, Azure DevOps, or GitHub Projects
+  - Use ARC-{PROJECT_ID}-BKLG-v1.0.json for custom integrations
+  - Link to /arckit:traceability for requirements tracking
+```
+
+---
+
+## Important Notes
+
+### Story Point Accuracy
+
+AI-generated story points are **estimates only**. Teams should:
+
+1. Review and re-estimate based on their velocity
+2. Use planning poker to reach consensus
+3. Track actual vs estimated points across sprints
+4. Adjust future estimates based on learnings
+
+### Velocity Calibration
+
+The initial velocity (default: 20 points) is an assumption. After Sprint 1:
+
+1. Calculate actual velocity: the sum of "Done" story points
+2. Adjust Sprint 2+ capacity accordingly
+3. Track the velocity trend (improving, stable, declining)
+4. Account for team changes (vacation, new members)
+
+### Backlog Grooming
+
+This backlog is a starting point. Teams should:
+
+- **Weekly**: Refine the next 2 sprints (details, estimates)
+- **Bi-weekly**: Groom the backlog beyond 2 sprints (priorities)
+- **Monthly**: Review epic priorities (business changes)
+- **Per sprint**: Update based on completed work
+
+### Dependency Management
+
+Dependencies are identified automatically but may need adjustment:
+
+- Technical dependencies (X must exist before Y)
+- Business dependencies (A delivers value before B)
+- Resource dependencies (the same person is needed for both)
+
+### Risk Management
+
+High-risk items are prioritised early to:
+
+- Prove technical feasibility
+- Identify blockers early
+- Reduce uncertainty
+- Allow time for mitigation
+
+---
+
+- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji
+
+## Error Handling
+
+If artifacts are missing:
+
+**No requirements document**:
+
+```text
+❌ Error: No ARC-*-REQ-*.md file found in projects/{project-dir}/
+
+Cannot generate backlog without requirements. Please run:
+  /arckit:requirements
+
+Then re-run /arckit:backlog
+```
+
+**No stakeholder analysis**:
+
+```text
+⚠️ Warning: No ARC-*-STKE-*.md file found. Using generic personas.
+
+For better user stories, run:
+  /arckit:stakeholders
+
+Then re-run /arckit:backlog
+```
+
+**No HLD**:
+
+```text
+⚠️ Warning: hld-v*.md not found. Stories will not be mapped to components.
+
+For better component mapping, run:
+  /arckit:hld or /arckit:diagram
+
+Then re-run /arckit:backlog
+```
+
+Continue with the available artifacts and note any limitations in the output.
+
+---
+
+## Time Savings
+
+**Manual backlog creation**:
+
+- Convert requirements: 2-3 weeks
+- Prioritise and sequence: 1 week
+- Sprint planning: 1 week
+- **Total: 4-6 weeks (80-120 hours)**
+
+**With /arckit:backlog**:
+
+- Run command: 2-5 minutes
+- Review and refine: 1-2 days
+- Team refinement: 2-3 days
+- **Total: 3-5 days (24-40 hours)**
+
+**Time savings: 75-85%**
+
+---
+
+## Examples
+
+### Example 1: Basic Usage
+
+```bash
+/arckit:backlog
+```
+
+Output:
+
+- Creates `ARC-{PROJECT_ID}-BKLG-v1.0.md` with 8 sprints at 20 points/sprint
+- Uses multi-factor prioritisation
+- Includes all available artifacts
+
+### Example 2: Custom Velocity and Sprints
+
+```bash
+/arckit:backlog VELOCITY=25 SPRINTS=12
+```
+
+Output:
+
+- 12 sprints planned
+- 25 story points per sprint
+- Adjusts capacity allocation (60/20/15/5)
+
+### Example 3: Export All Formats
+
+```bash
+/arckit:backlog FORMAT=all
+```
+
+Output:
+
+- `ARC-{PROJECT_ID}-BKLG-v1.0.md` (markdown)
+- `ARC-{PROJECT_ID}-BKLG-v1.0.csv` (Jira import)
+- `ARC-{PROJECT_ID}-BKLG-v1.0.json` (API integration)
+
+### Example 4: Risk-Based Priority Only
+
+```bash
+/arckit:backlog PRIORITY=risk
+```
+
+Output:
+
+- Prioritises solely by risk level
+- High-risk items first
+- Ignores MoSCoW, value, and dependencies
+
+---
+
+## Integration with Other Commands
+
+### Inputs From
+
+- `/arckit:requirements` → All stories
+- `/arckit:hld` → Component mapping
+- `/arckit:stakeholders` → User personas
+- `/arckit:risk-register` → Risk priorities
+- `/arckit:threat-model` → Security stories
+- `/arckit:business-case` → Value priorities
+- `/arckit:principles` → Definition of Done
+
+### Outputs To
+
+- `/arckit:traceability` → Requirements → Stories → Sprints
+- `/arckit:test-strategy` → Test cases from acceptance criteria
+- `/arckit:analyze` → Backlog completeness check
+
+---
+
+## Success Criteria
+
+Backlog is complete when:
+
+✅ Every requirement (FR/NFR/INT/DR) maps to ≥1 story/task
+✅ User stories follow GDS format
+✅ Acceptance criteria are measurable
+✅ Story points are reasonable (1-13 range)
+✅ Dependencies are identified and respected
+✅ Priorities align with business case
+✅ Sprint plan is realistic
+✅ Traceability is maintained
+✅ Output formats are tool-compatible
+
+---
+
+Now generate the backlog following this comprehensive process.
diff --git a/arckit-copilot/commands/conformance.md b/arckit-copilot/commands/conformance.md
new file mode 100644
index 00000000..b0d85b30
--- /dev/null
+++ b/arckit-copilot/commands/conformance.md
@@ -0,0 +1,439 @@
+---
+description: Assess architecture conformance — ADR decision implementation, cross-decision consistency, design-principles alignment, architecture drift, technical debt, and custom constraint rules
+argument-hint: ""
+effort: high
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+## Goal
+
+Generate a systematic **Architecture Conformance Assessment** that checks whether the *decided* architecture (ADRs, principles, approved designs) matches the *designed/implemented* architecture (HLD, DLD, DevOps artifacts). This command fills the gap between `/arckit:health` (quick metadata scan) and `/arckit:analyze` (deep governance across all dimensions) by focusing specifically on **decided-vs-designed conformance**, architecture drift, and architecture technical debt (ATD).
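The decided-vs-designed comparison at the heart of this command can be pictured as a set comparison. This is an illustrative sketch only — the command itself reasons over markdown artifacts rather than running code, the technology names below are invented, and the real scoring defined in Step 5 uses per-check PASS/FAIL counts rather than this simplified ratio:

```python
# Hypothetical example: technologies decided in accepted ADRs vs those found in HLD/DLD.
decided = {"PostgreSQL", "OAuth 2.0", "event-driven integration"}
designed = {"PostgreSQL", "OAuth 2.0", "synchronous REST integration", "Redis"}

implemented = decided & designed   # decisions reflected in the design
drifted = decided - designed       # decisions the design ignores (ADR-IMPL failures)
undocumented = designed - decided  # technology adopted without an ADR (DRIFT-TECH)

# Simplified conformance ratio: share of decisions that the design implements.
score = 100 * len(implemented) / len(decided)
print(f"Conformance: {score:.0f}%")
print(f"Drifted: {sorted(drifted)}")
print(f"Undocumented: {sorted(undocumented)}")
```

A drifted decision maps to an ADR-IMPL failure below; an undocumented technology maps to a DRIFT-TECH failure.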
+
+**This is a point-in-time assessment** — run at key project gates or after major design changes to track conformance over time.
+
+## Prerequisites
+
+### Architecture Principles (MANDATORY)
+
+a. **PRIN** (Architecture Principles, in `projects/000-global/`) (MUST exist):
+
+- If NOT found: ERROR "Run /arckit:principles first to define governance standards for your organization"
+
+### Architecture Decision Records (MANDATORY)
+
+b. **ADR** (Architecture Decision Records, in `projects/{project-dir}/decisions/`) (MUST exist):
+
+- If NOT found: ERROR "Run /arckit:adr first — conformance assessment requires at least one accepted ADR"
+
+### Project Artifacts (RECOMMENDED)
+
+The more artifacts available, the more accurate the conformance assessment:
+
+- **REQ** (Requirements) in `projects/{project-dir}/` — Requirements to cross-reference
+- `projects/{project-dir}/vendors/{vendor}/hld-v*.md` — High-Level Design
+- `projects/{project-dir}/vendors/{vendor}/dld-v*.md` — Detailed Low-Level Design
+- **HLDR** (HLD Review) in `projects/{project-dir}/reviews/` — Design review findings
+- **DLDR** (DLD Review) in `projects/{project-dir}/reviews/` — Detailed review findings
+- **PRIN-COMP** (Principles Compliance) in `projects/{project-dir}/` — Prior compliance assessment
+- **TRAC** (Traceability Matrix) in `projects/{project-dir}/` — Requirements traceability
+- **RISK** (Risk Register) in `projects/{project-dir}/` — Risk context for exceptions
+- **DEVOPS** (DevOps Strategy) in `projects/{project-dir}/` — CI/CD and deployment patterns
+
+### Custom Constraint Rules (OPTIONAL)
+
+c. `.arckit/conformance-rules.md` in the project root (if it exists):
+
+- Contains user-defined ArchCNL-style constraint rules
+- Format: Natural-language rules with MUST/MUST NOT/SHOULD/SHOULD NOT keywords
+- Example: "All API services MUST use OAuth 2.0 for authentication"
+- Example: "Database connections MUST NOT use plaintext credentials"
+
+**Note**: Assessment is possible with minimal artifacts (principles + ADRs), but accuracy improves significantly with HLD/DLD and review documents.
+
+## Operating Constraints
+
+**Non-Destructive Assessment**: Do NOT modify existing artifacts. Generate a new conformance assessment document only.
+
+**Evidence-Based Assessment**: Every finding must cite specific file:section:line references. Avoid vague statements like "design addresses this" — be specific.
+
+**Honest Assessment**: Do not inflate conformance scores. A FAIL is better than a false PASS. Untracked technical debt should be surfaced, not hidden.
+
+**Architecture Principles Authority**: The architecture principles (`ARC-000-PRIN-*.md` in `projects/000-global/`) are non-negotiable. Any design that contradicts principles is automatically a FAIL.
+
+**ADR Decision Authority**: Accepted ADR decisions are binding. Designs that ignore or contradict accepted decisions are non-conformant.
+
+## Execution Steps
+
+> **Note**: The ArcKit Project Context hook has already detected all projects, artifacts, external documents, and global policies. Use that context below — no need to scan directories manually.
+
+### 0. Read the Template
+
+**Read the template** (with user override support):
+
+- **First**, check if `.arckit/templates/conformance-assessment-template.md` exists in the project root
+- **If found**: Read the user's customized template (user override takes precedence)
+- **If not found**: Read `${CLAUDE_PLUGIN_ROOT}/templates/conformance-assessment-template.md` (default)
+
+> **Tip**: Users can customize templates with `/arckit:customize conformance`
+
+### 1. Validate Prerequisites
+
+**Check Architecture Principles**:
+
+- Look for `ARC-000-PRIN-*.md` in `projects/000-global/`
+- If NOT found: ERROR "Architecture principles not found. Run /arckit:principles first."
+
+**Check ADRs**:
+
+- Look for `ARC-*-ADR-*.md` files in `projects/{project-dir}/decisions/`
+- If NONE found: ERROR "No ADRs found. Run /arckit:adr first — conformance assessment requires at least one accepted ADR."
+
+### 1b. Read external documents and policies
+
+- Read any **external documents** listed in the project context (`external/` files) — extract audit findings, compliance gaps, certification evidence, remediation plans
+- Read any **enterprise standards** in `projects/000-global/external/` — extract enterprise compliance frameworks, cross-project conformance benchmarks
+- If no external docs exist but they would improve the assessment, note this as an assessment limitation
+- **Citation traceability**: When referencing content from external documents, follow the citation instructions in `${CLAUDE_PLUGIN_ROOT}/references/citation-instructions.md`. Place inline citation markers (e.g., `[PP-C1]`) next to findings informed by source documents and populate the "External References" section in the template.
+
+### 2. Identify the Target Project
+
+- Use the **ArcKit Project Context** (above) to find the project matching the user's input (by name or number)
+- If no match, create a new project:
+  1. Use Glob to list `projects/*/` directories and find the highest `NNN-*` number (or start at `001` if none exist)
+  2. Calculate the next number (zero-padded to 3 digits, e.g., `002`)
+  3. Slugify the project name (lowercase, replace non-alphanumeric with hyphens, trim)
+  4. Use the Write tool to create `projects/{NNN}-{slug}/README.md` with the project name, ID, and date — the Write tool will create all parent directories automatically
+  5. Also create `projects/{NNN}-{slug}/external/README.md` with a note to place external reference documents here
+  6. Set `PROJECT_ID` = the 3-digit number, `PROJECT_PATH` = the new directory path
+
+### 3. Load All Relevant Artifacts
+
+Consult the following artifacts. Do NOT read entire files — extract only the sections relevant to each conformance check.
+
+**Architecture Principles** (`projects/000-global/ARC-000-PRIN-*.md`):
+
+- Extract ALL principles dynamically (name, statement, rationale, implications)
+
+**ADRs** (`projects/{project-dir}/decisions/ARC-*-ADR-*.md`):
+
+- For EACH ADR, extract: title, status (Accepted/Superseded/Deprecated/Proposed), decision text, context, consequences (positive and negative), related ADRs
+- Track supersession chains (which ADR supersedes which)
+
+**Design Documents** (if they exist):
+
+- `projects/{project-dir}/vendors/{vendor}/hld-v*.md` — Architecture overview, technology stack, patterns, components
+- `projects/{project-dir}/vendors/{vendor}/dld-v*.md` — Detailed implementation, API specs, infrastructure
+
+**Review Documents** (if they exist):
+
+- `ARC-*-HLDR-*.md` in `reviews/` — HLD review conditions, findings
+- `ARC-*-DLDR-*.md` in `reviews/` — DLD review conditions, findings
+
+**Other Artifacts** (if they exist):
+
+- `ARC-*-REQ-*.md` — Requirements for traceability
+- `ARC-*-PRIN-COMP-*.md` — Prior principles compliance (for trend comparison)
+- `ARC-*-TRAC-*.md` — Traceability matrix
+- `ARC-*-RISK-*.md` — Risk register (for exception context)
+- `ARC-*-DEVOPS-*.md` — DevOps strategy (for technology stack drift check)
+
+**Custom Rules** (if they exist):
+
+- `.arckit/conformance-rules.md` in the project root
+
+### 4. Execute Conformance Checks
+
+Run ALL 12 conformance checks. Each check produces a PASS/FAIL/NOT ASSESSED status with evidence.
+
+---
+
+#### Check ADR-IMPL: ADR Decision Implementation (Severity: HIGH)
+
+For EACH ADR with status "Accepted":
+
+1. Extract the **Decision** section text
+2. Search HLD and DLD for evidence that the decision is implemented
+3. Check that the technology/pattern/approach chosen in the ADR appears in the design
+4. **PASS**: Design documents reference or implement the ADR decision
+5. **FAIL**: Decision is accepted but not reflected in design documents
+6. **NOT ASSESSED**: No HLD/DLD available to check against
+
+**Evidence format**: `ADR "Title" (file:line) → HLD Section X (file:line) — [IMPLEMENTED/NOT FOUND]`
+
+---
+
+#### Check ADR-CONFL: Cross-ADR Consistency (Severity: HIGH)
+
+1. Compare all Accepted ADRs for contradictions:
+   - Technology choices that conflict (e.g., ADR-001 chooses PostgreSQL, ADR-005 chooses MongoDB for the same purpose)
+   - Pattern choices that conflict (e.g., ADR-002 mandates event-driven, ADR-007 mandates synchronous API calls for the same integration)
+   - Scope overlaps where decisions disagree
+2. **PASS**: No contradictions found between accepted ADRs
+3. **FAIL**: Contradictions identified — list conflicting ADR pairs with specific conflicts
+
+**Evidence format**: `ADR-001 (file:line) CONFLICTS WITH ADR-005 (file:line) — [description]`
+
+---
+
+#### Check ADR-SUPER: Superseded ADR Enforcement (Severity: MEDIUM)
+
+1. Identify all Superseded ADRs
+2. Check that HLD/DLD does NOT reference patterns/technologies from superseded decisions
+3. Check that the superseding ADR's decision IS reflected instead
+4. **PASS**: No residue from superseded decisions found in design
+5. **FAIL**: Design still references superseded decision patterns/technologies
+
+**Evidence format**: `Superseded ADR "Title" (file:line) — residue found in HLD Section X (file:line)`
+
+---
+
+#### Check PRIN-DESIGN: Principles-to-Design Alignment (Severity: HIGH)
+
+For EACH architecture principle:
+
+1. Extract the principle statement and implications
+2. Search HLD/DLD for design elements that satisfy or violate the principle
+3. Apply **binary pass/fail** constraint checking (unlike principles-compliance, which uses RAG scoring):
+   - Does the design VIOLATE this principle? → FAIL
+   - Does the design SATISFY this principle? → PASS
+   - Insufficient evidence to determine? → NOT ASSESSED
+4. This is a **hard constraint check**, not a maturity assessment
+
+**Note**: This differs from `/arckit:principles-compliance`, which provides RAG scoring with remediation plans. This check is a binary gate: does the design conform or not?
+
+**Evidence format**: `Principle "Name" — HLD Section X (file:line) [SATISFIES/VIOLATES] — [description]`
+
+---
+
+#### Check COND-RESOLVE: Review Condition Resolution (Severity: HIGH)
+
+1. Read HLD/DLD review documents (HLDR, DLDR)
+2. Look for conditions — typically flagged as "APPROVED WITH CONDITIONS", "CONDITIONAL", "CONDITIONS", or specific condition markers
+3. For each condition found:
+   - Search for evidence of resolution in subsequent artifacts or updated designs
+   - Check if the condition has been addressed in a newer version of the reviewed document
+4. **PASS**: All review conditions have resolution evidence
+5. **FAIL**: Unresolved conditions found — list each with its source and status
+
+**Evidence format**: `Condition "[text]" (file:line) — [RESOLVED in file:line / UNRESOLVED]`
+
+---
+
+#### Check EXCPT-EXPIRY: Exception Register Expiry (Severity: HIGH)
+
+1. Search for exception registers in the principles-compliance assessment, risk register, and review documents
+2. Look for patterns: "Exception", "EXC-", "Approved exception", "Waiver", "Exemption"
+3. For each exception found, check if the expiry date has passed (compare to today's date)
+4. **PASS**: No expired exceptions found (or no exceptions exist)
+5. **FAIL**: Expired exceptions found that haven't been renewed or remediated
+
+**Evidence format**: `Exception "EXC-NNN" (file:line) — expired [DATE], [REMEDIATED/STILL ACTIVE]`
+
+---
+
+#### Check EXCPT-REMEDI: Exception Remediation Progress (Severity: MEDIUM)
+
+1. For each active (non-expired) exception found:
+   - Check if a remediation plan exists
+   - Check if there's evidence of progress toward remediation
+   - Check if the remediation timeline is realistic given the remaining time to expiry
+2. **PASS**: All active exceptions have remediation plans with evidence of progress
+3. **FAIL**: Exceptions missing remediation plans or showing no progress
+
+**Evidence format**: `Exception "EXC-NNN" — remediation plan [EXISTS/MISSING], progress [EVIDENCE/NONE]`
+
+---
+
+#### Check DRIFT-TECH: Technology Stack Drift (Severity: MEDIUM)
+
+1. Extract technology choices from ADRs (databases, frameworks, languages, cloud services, tools)
+2. Extract technology references from HLD, DLD, and DevOps strategy
+3. Compare: do the technologies in design documents match ADR decisions?
+4. Look for technologies appearing in the design that were NOT decided via ADR (undocumented technology adoption)
+5. **PASS**: Technology stack in design matches ADR decisions
+6. **FAIL**: Technologies in design don't match ADR decisions, or undocumented technologies found
+
+**Evidence format**: `Technology "[name]" — ADR (file:line) says [X], Design (file:line) uses [Y]`
+
+---
+
+#### Check DRIFT-PATTERN: Architecture Pattern Drift (Severity: MEDIUM)
+
+1. Extract architecture patterns from ADRs and HLD (microservices, event-driven, REST, CQRS, etc.)
+2. Check DLD for consistent pattern application across all components
+3. Look for components that deviate from the chosen pattern without an ADR justifying the deviation
+4. **PASS**: Patterns consistently applied across all design artifacts
+5. **FAIL**: Inconsistent pattern application found
+
+**Evidence format**: `Pattern "[name]" chosen in ADR/HLD (file:line) — DLD component [X] (file:line) uses [different pattern]`
+
+---
+
+#### Check RULE-CUSTOM: Custom Constraint Rules (Severity: Variable)
+
+1. Read `.arckit/conformance-rules.md` if it exists
+2. For each rule defined:
+   - Parse the rule (natural language with MUST/MUST NOT/SHOULD/SHOULD NOT)
+   - Search design artifacts for evidence of compliance or violation
+   - Assign severity based on keyword: MUST/MUST NOT → HIGH, SHOULD/SHOULD NOT → MEDIUM
+3. **PASS**: Rule satisfied with evidence
+4. **FAIL**: Rule violated — cite the specific violation
+5. **NOT ASSESSED**: Insufficient artifacts to check the rule
+6. If no custom rules file exists: mark as NOT ASSESSED with the note "No custom rules defined"
+
+**Evidence format**: `Rule "[text]" — [SATISFIED/VIOLATED] at (file:line)`
+
+---
+
+#### Check ATD-KNOWN: Known Technical Debt (Severity: LOW)
+
+1. Catalogue acknowledged architecture technical debt from:
+   - **ADR negative consequences**: "Consequences" sections listing accepted downsides
+   - **Risk register accepted risks**: Risks accepted as trade-offs (ACCEPT treatment)
+   - **Review conditions**: Deferred items from HLD/DLD reviews
+   - **Workarounds**: Temporary solutions documented in the design
+   - **Scope reductions**: Quality/features removed for timeline/budget
+2. Classify each debt item into ATD categories:
+   - DEFERRED-FIX: Known deficiency deferred to a later phase
+   - ACCEPTED-RISK: Risk consciously accepted as a trade-off
+   - WORKAROUND: Temporary solution deviating from the intended pattern
+   - DEPRECATED-PATTERN: Superseded pattern not yet migrated
+   - SCOPE-REDUCTION: Quality/feature removed for timeline/budget
+   - EXCEPTION: Approved principle exception with expiry
+3. **PASS**: Known debt is documented and tracked (this check always passes if debt is acknowledged)
+4. **NOT ASSESSED**: No artifacts available to catalogue debt
+
+**Evidence format**: `ATD-NNN: "[description]" — Category: [category], Source: (file:line)`
+
+---
+
+#### Check ATD-UNTRACK: Untracked Technical Debt (Severity: MEDIUM)
+
+1. Look for potential architecture technical debt NOT explicitly acknowledged:
+   - Technologies in design but not in ADR decisions (ad-hoc adoption)
+   - TODO/FIXME/HACK/WORKAROUND markers in design documents
+   - Inconsistencies between HLD and DLD suggesting shortcuts
+   - Design elements contradicting principles without an exception
+   - Review findings not addressed in subsequent versions
+2. **PASS**: No untracked debt detected
+3. **FAIL**: Potential untracked debt identified — list items for team review
+
+**Evidence format**: `Potential ATD: "[description]" found at (file:line) — not documented in any ADR/risk/exception`
+
+---
+
+### 5. Calculate Conformance Score
+
+**Scoring**:
+
+- Count PASS, FAIL, NOT ASSESSED for each check
+- Calculate the overall conformance percentage: `(PASS count / (PASS + FAIL count)) × 100`
+- Exclude NOT ASSESSED from the denominator
+
+**Deviation Tier Assignment** — for each FAIL finding, assign a tier based on result + severity:
+
+- 🔴 RED: FAIL + HIGH severity — escalate immediately, blocks next gate
+- 🟡 YELLOW: FAIL + MEDIUM severity — negotiate remediation within 30 days, include fallback
+- 🟢 GREEN: FAIL + LOW severity — acceptable deviation, document and monitor
+
+**Tier-Specific Response Requirements**:
+
+- For each 🔴 RED finding: explain the architecture risk, propose an alternative approach, recommend escalation to the architecture board/CTO
+- For each 🟡 YELLOW finding: provide specific remediation steps + timeline, include a fallback position if remediation is deferred
+- For each 🟢 GREEN finding: document the deviation rationale, set a review date, no blocking action required
+
+**Overall Recommendation**:
+
+- **CONFORMANT**: All checks PASS (or NOT ASSESSED), no FAIL findings
+- **CONFORMANT WITH CONDITIONS**: No RED findings, YELLOW/GREEN findings have remediation plans, conformance >= 80%
+- **NON-CONFORMANT**: Any RED finding, or conformance < 80%
+
+### 6. Generate Document
+
+Use the document ID `ARC-{PROJECT_ID}-CONF-v{VERSION}` (e.g., `ARC-001-CONF-v1.0`).
+
+Before writing the file, read `${CLAUDE_PLUGIN_ROOT}/references/quality-checklist.md` and verify all **Common Checks** plus the **CONF** per-type checks pass. Fix any failures before proceeding.
+
+**Use the Write tool** to save the document to `projects/{project-dir}/ARC-{PROJECT_ID}-CONF-v{VERSION}.md`.
+
+Populate the template with all conformance check results, following the structure defined in the template.
+
+**IMPORTANT**: Use the Write tool; do not output the document to the user. The document will be 500-2,000 lines depending on the number of ADRs, principles, and findings.
+
+**Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji.
+
+### 7. Show Summary to User
+
+Display a concise summary (NOT the full document):
+
+```text
+✅ Architecture conformance assessment generated
+
+📊 **Conformance Summary**:
+  - Overall Score: [X]% ([CONFORMANT / CONFORMANT WITH CONDITIONS / NON-CONFORMANT])
+  - Checks Passed: [X] / [Y]
+  - Checks Failed: [X]
+  - Not Assessed: [X]
+
+[IF RED findings:]
+🔴 **RED — Escalate** ([N]):
+  - [Check ID]: [Brief description]
+  [List all RED findings]
+
+[IF YELLOW findings:]
+🟡 **YELLOW — Negotiate** ([N]):
+  - [Check ID]: [Brief description]
+  [List all YELLOW findings]
+
+[IF GREEN findings:]
+🟢 **GREEN — Acceptable** ([N]):
+  - [Check ID]: [Brief description]
+  [List all GREEN findings]
+
+[IF ATD items found:]
+📦 **Architecture Technical Debt**: [X] known items, [Y] potential untracked items
+
+📄 **Document**: projects/{project-dir}/ARC-{PROJECT_ID}-CONF-v{VERSION}.md
+
+🔍 **Recommendation**:
+  [CONFORMANT]: ✅ Architecture conforms to decisions and principles
+  [CONFORMANT WITH CONDITIONS]: ⚠️ No critical deviations — [N] YELLOW findings need remediation by [date]
+  [NON-CONFORMANT]: ❌ [N] RED findings require escalation before proceeding
+
+**Next Steps**:
+1. Review detailed findings in the generated document
+2. [IF RED findings:] Escalate critical deviations to the architecture board immediately
+3. [IF YELLOW findings:] Agree remediation plans or fallback positions within 30 days
+4. [IF ATD items:] Review the technical debt register with the architecture board
+5. [IF custom rules missing:] Consider creating `.arckit/conformance-rules.md` for project-specific rules
+6. Schedule the next conformance check at [next gate/phase]
+```
+
+## Post-Generation Actions
+
+After generating the assessment document:
+
+1. **Suggest Follow-up Commands**:
+
+   ```text
+   📋 **Related Commands**:
+   - /arckit:principles-compliance - Detailed RAG scoring of principle compliance
+   - /arckit:analyze - Comprehensive governance gap analysis
+   - /arckit:traceability - Requirements traceability matrix
+   - /arckit:health - Quick metadata health check
+   ```
+
+2. **Track in Project**:
+   Suggest adding remediation actions to project tracking:
+   - Create backlog items for FAIL findings
+   - Schedule an architecture technical debt review
+   - Set the next conformance check date
+
+## Important Notes
+
+- **Markdown escaping**: When writing less-than or greater-than comparisons, always include a space after `<` or `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji
diff --git a/arckit-copilot/commands/customize.md b/arckit-copilot/commands/customize.md
new file mode 100644
index 00000000..485d3333
--- /dev/null
+++ b/arckit-copilot/commands/customize.md
@@ -0,0 +1,230 @@
+---
+description: Copy plugin templates to project for customization
+argument-hint: "