feat(document-review): add documentation review workflow #97

mynhardtburger wants to merge 78 commits into ambient-code:main from
Conversation
Add a new ACP workflow that systematically reviews project documentation for errors, inconsistencies, gaps, and quality issues. Includes 7 commands (/scan, /review, /verify, /test, /report, /fix, /speedrun) with a controller-based architecture following the bugfix workflow pattern.

Key capabilities:
- Auto-discovers and catalogs all documentation files
- Evaluates docs against 7 quality dimensions (accuracy, completeness, consistency, clarity, currency, structure, examples)
- Cross-references documentation claims against source code
- Executes documented instructions to verify they work as written, with environment snapshot/revert for long-lived environments
- Generates inline fix suggestions grouped by file

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
Add the 7 command files and 7 skill files (controller + 6 phase skills) that were excluded from the prior commit by the global .gitignore on .claude/.
Signed-off-by: Mynhardt Burger <mynhardt@gmail.com> rh-pre-commit.version: 2.3.2 rh-pre-commit.check-secrets: ENABLED
The template extraction and compact report format were applied to the wrong workflow (document-review). This reverts document-review to its original state and applies the changes to pr-doc-review instead.
Review and verify are independent after scan, so their output is split into separate findings files (findings-review.md, findings-verify.md) so they can run as parallel sub-agents. Remove test phase from the active workflow (skill retained). Update controller, report, fix, and templates accordingly.
After review/verify sub-agents complete, a validation sub-agent checks their output for coverage (all inventory docs reviewed), structure (required fields present), and evidence quality (actual quotes, not vague claims). On failure, the failing sub-agent is re-dispatched with specific feedback. Max 1 retry per sub-agent.
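A minimal sketch of that dispatch-validate-retry loop in shell (the dispatch/validate functions are hypothetical stand-ins for sub-agent orchestration; the stub validator fails once so the single allowed retry is exercised):

```shell
#!/bin/sh
# Stub dispatcher and validator for one sub-agent ("review").
# The real validator would check coverage, structure, and evidence quality.
attempts=0
dispatch_review() { attempts=$((attempts + 1)); }
validate_review() { [ "$attempts" -ge 2 ]; }   # stub: fails the first attempt

run_with_retry() {
  agent="$1"
  try=1
  while [ "$try" -le 2 ]; do                   # max 1 retry = at most 2 attempts
    "dispatch_$agent"
    if "validate_$agent"; then
      return 0
    fi
    try=$((try + 1))                           # re-dispatch with feedback here
  done
  echo "$agent failed validation after retry" >&2
  return 1
}

run_with_retry review && echo "review accepted after $attempts attempts"
```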
…eper checks

Add four improvements adapted from redhat-docs-agent-tools techniques:
- Incremental file processing to prevent context window exhaustion
- Dual reviewer personas (developer lens / architect lens) by document type
- Failure path coverage checks for procedural documents
- Expanded code example validation with specific sub-checks and severities
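The incremental-processing idea can be sketched as batched iteration over the inventory, so no single pass has to hold every document at once (file names and the batch size of 2 are illustrative):

```shell
#!/bin/sh
# Illustrative inventory of discovered docs.
printf '%s\n' docs/a.md docs/b.md docs/c.md docs/d.md docs/e.md > inventory.txt

batches=0
while IFS= read -r f1; do
  IFS= read -r f2 || f2=""        # odd trailing file gets a batch of its own
  batches=$((batches + 1))
  echo "batch $batches: $f1 $f2"  # each batch would be handed to one pass
done < inventory.txt
```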
After install-test succeeds and before cleanup, a new usage-test agent interacts with the installed project as a user would: executing documented API calls, CLI commands, and workflows against the live cluster. This verifies that usage documentation accurately reflects the actual experience.
The validate skill was a structural quality gate that checked sub-agent output for coverage, filled placeholders, summary table math, required fields, and evidence quality. Modern models reliably follow output format instructions, making this check largely redundant. Removing it simplifies the controller significantly by eliminating the validation loop, retry logic, and related orchestration (~100 lines).
Converts the verify-docs Claude Code plugin into an ACP workflow. The workflow discovers project inputs, outputs, and behaviors from source code, then verifies documentation claims against those discoveries using a four-stage pipeline (reconnaissance, discovery, verification, reporting) with parallel subagents.
Rewrite the verify skill from a flat 5-step process into a 3-stage pipeline (reconnaissance, discovery, verification) that systematically discovers what a project does from source code before cross-referencing against documentation. Add 8 specialized discovery agent reference files for parallel code scanning: env vars, CLI args, config schemas, API endpoints, data models, file I/O, external deps, and build/deploy targets. Add shared inventory fragment format spec. Update findings template with stale references, low-confidence findings, inventory coverage, and code inventory appendix sections.
- Replace mcp-atlassian MCP dependency with direct Jira REST API calls via curl, authenticated with JIRA_URL/JIRA_EMAIL/JIRA_API_TOKEN env vars
- Add pandoc for Markdown-to-Jira wiki markup conversion so descriptions render correctly in Jira (headings, tables, code blocks, etc.)
- Change Epic naming: Epic Name uses report date, Summary includes date and scope (repositories)
- Use full report.md content as Epic description
- Simplify labels to acp:document-review plus user-provided labels (remove per-finding severity/dimension labels)
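As a hedged sketch of the curl-based flow (the env var names and the acp:document-review label come from this commit; the project key "DOCS", summary, and description text are illustrative stand-ins):

```shell
#!/bin/sh
# In the real flow the description comes from pandoc's Jira wiki-markup writer:
#   DESCRIPTION=$(pandoc -f markdown -t jira artifacts/report.md)
DESCRIPTION="h1. Document Review Report"   # stand-in so the sketch runs without pandoc

PAYLOAD=$(cat <<EOF
{
  "fields": {
    "project": {"key": "DOCS"},
    "issuetype": {"name": "Epic"},
    "summary": "Documentation review 2025-01-01 (repo: example)",
    "description": "$DESCRIPTION",
    "labels": ["acp:document-review"]
  }
}
EOF
)
echo "$PAYLOAD"

# Network call, omitted from this dry run:
# curl -s -u "$JIRA_EMAIL:$JIRA_API_TOKEN" -H "Content-Type: application/json" \
#   -X POST "$JIRA_URL/rest/api/2/issue" --data "$PAYLOAD"
```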
…fix version

- All metadata fields (project, component, labels, team, initial status) must be explicitly set or acknowledged as blank before issue creation
- Add initial status field to transition issues after creation (e.g., Backlog, New, To Do)
- Remove fix_version field from skill and command
Compatible with the ubi9/python-311 container image; no brew or dnf needed.
Epic description now contains only the header portion of report.md (before ## Summary). The full report is attached to the epic via the Jira attachments API.
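The split can be sketched as follows (the report content and epic key DOCS-123 are illustrative; the attachments endpoint and the `X-Atlassian-Token: no-check` header are standard Jira REST API usage):

```shell
#!/bin/sh
# Illustrative report with a header section followed by ## Summary.
cat > report.md <<'EOF'
# Document Review Report
Generated: 2025-01-01
## Summary
Full findings live here and travel as an attachment.
EOF

# Keep only the header portion (everything before "## Summary") for the
# epic description.
HEADER=$(sed '/^## Summary/,$d' report.md)
echo "$HEADER"

# Attachment upload; X-Atlassian-Token: no-check bypasses the XSRF guard Jira
# applies to this endpoint:
# curl -s -u "$JIRA_EMAIL:$JIRA_API_TOKEN" -H "X-Atlassian-Token: no-check" \
#   -F "file=@report.md" "$JIRA_URL/rest/api/2/issue/DOCS-123/attachments"
```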
Do not set Jira issue priority by default. Leave it undefined unless the user explicitly instructs otherwise.
The old names were near-synonyms that didn't convey the distinction between doc-only quality analysis and doc-vs-code cross-referencing. Renames commands, skills, templates, artifact paths, and all references across ambient.json, CLAUDE.md, README.md, controller, report, fix, jira, and both renamed skills.
The fix phase generated inline fix suggestions but is no longer needed. Removes the command, skill, and template files, and cleans up all references in ambient.json, CLAUDE.md, README.md, controller, and the report template.
The new name is more descriptive: it immediately conveys that this command runs the entire review pipeline (scan → quality-review + code-check → report) rather than using gaming jargon.
Move Quick Start above Directory Structure and expand it with ACP loading instructions, an env var reference table, and a warning about API token visibility in curl commands. Remove incorrect "optional" label from /code-check and "and fixes" from the summary since the workflow is read-only.
Walkthrough

Adds a Document Review workflow: ambient config, CLI command docs, a controller skill orchestrating scan → parallel quality/code checks → report (and optional jira), multiple discovery-agent skill specs, inventory/findings/report templates, and artifacts paths under

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Controller
    participant Scan
    participant QualityReview
    participant CodeCheck
    participant Report
    participant Jira
    User->>Controller: /full-review
    activate Controller
    Controller->>Scan: run /scan
    activate Scan
    Scan->>Scan: discover docs, write artifacts/inventory.md
    Scan-->>Controller: inventory ready
    deactivate Scan
    par QualityReview & CodeCheck
        Controller->>QualityReview: run /quality-review
        activate QualityReview
        QualityReview->>QualityReview: review docs, write artifacts/findings-quality-review.md
        QualityReview-->>Controller: quality findings ready
        deactivate QualityReview
    and
        Controller->>CodeCheck: run /code-check
        activate CodeCheck
        CodeCheck->>CodeCheck: dispatch discovery agents, verify docs vs code
        CodeCheck->>CodeCheck: write artifacts/findings-code-check.md
        CodeCheck-->>Controller: code findings ready
        deactivate CodeCheck
    end
    Controller->>Report: run /report
    activate Report
    Report->>Report: merge & dedupe findings, compute stats
    Report->>Report: write artifacts/report.md
    Report-->>Controller: report ready
    deactivate Report
    Controller->>User: report complete (suggest /jira)
    alt User runs /jira
        User->>Jira: /jira [args]
        activate Jira
        Jira->>Jira: parse report, create Epic + child issues via REST API
        Jira-->>User: issues created
        deactivate Jira
    end
    deactivate Controller
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
- Remove stale "optional" label from code-check in CLAUDE.md
- Replace nonexistent "fix version" with "initial status" in README
- Fix step numbering skip (3→5) in controller full-review flow
Replace Custom Workflow instructions with standard workflow selection since the workflow is auto-discovered after merge.
Actionable comments posted: 12
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workflows/document-review/.claude/commands/jira.md`:
- Around line 10-13: The prerequisites list for the /jira command is missing the
pandoc dependency required to convert Markdown to Jira wiki markup; update the
prerequisites section in workflows/document-review/.claude/commands/jira.md to
add that pandoc must be installed (e.g., install the pypandoc binary via pip
install pypandoc_binary) and reference that this requirement aligns with
workflows/document-review/.claude/skills/jira/SKILL.md so users have all
dependencies before running the /jira command.
In `@workflows/document-review/.claude/commands/scan.md`:
- Line 3: The controller path in
workflows/document-review/.claude/commands/scan.md currently hardcodes a
relative path (`.claude/skills/controller/SKILL.md`) which breaks when commands
run from repo root; update the path resolution in the `/scan` command to use an
absolute-from-repo path (e.g., resolve against process.cwd() or a repoRoot
constant) or call the dispatcher’s CWD-normalizer before loading the controller;
locate the controller import/require in the scan command and replace the
relative string with a deterministic resolver (or add a small helper to compute
repoRoot and join with `.claude/skills/controller/SKILL.md`) so controller
loading never depends on the current working directory.
In
`@workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md`:
- Line 51: The directive "Workflow is always `usage`" is ambiguous and
inconsistent with surrounding phase semantics; update the discovery-api-schema
reference to either remove this mode token or replace it with an explicit
instruction for the code-check phase (e.g., "During code-check, run discovery
only to collect schema references and emit validated tokens for the code-check
phase, not full execution"), and ensure the new text documents expected behavior
and agent outputs for the code-check phase so the discovery agent has a clear,
defined mode instead of the undefined `usage` token.
- Around line 15-35: The endpoint discovery patterns only match GET/POST and
miss other HTTP verbs and method-agnostic router forms; update the patterns to
include PUT, PATCH, DELETE, OPTIONS, HEAD and generic route/all/route(...) forms
across frameworks — e.g., add Go symbols like `router.Handle`, `r.PUT(`,
`r.PATCH(`, `r.DELETE(`, `r.OPTIONS(`, `r.HEAD(` and method-agnostic
`router.Route`/`router.All`; Python symbols like `@app.put(`, `@app.patch(`,
`@app.delete(`, `@app.options(`, `@app.head(` and Flask `@app.route(...,
methods=[...])`, Django `path(`/`re_path(` with method-dispatching views); and
Node/TypeScript symbols like `app.put(`, `app.patch(`, `app.delete(`,
`app.options(`, `app.head(`, `app.all(`, `router.route(` and NestJS
`@Put(`/`@Patch(`/`@Delete(`/`@All(`) so the discovery matches the “ALL
endpoints” contract in Line 3.
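As a rough illustration, the wider verb coverage could be expressed as a single extended-regex scan over the source tree (the sample file and route symbols below are illustrative, not from the PR):

```shell
#!/bin/sh
# Illustrative source file containing several route-registration forms.
mkdir -p src
cat > src/routes.go <<'EOF'
r.PUT("/items/:id", update)
r.DELETE("/items/:id", remove)
router.Handle("/health", health)
app.get("/skip-me")
EOF

# One pass that matches method-specific verbs plus method-agnostic router forms;
# the lowercase app.get line is deliberately outside this Go-style pattern.
grep -En 'r\.(GET|POST|PUT|PATCH|DELETE|OPTIONS|HEAD)\(|router\.(Handle|Route|All)\(' src/routes.go
```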
In
`@workflows/document-review/.claude/skills/code-check/references/discovery-build-deploy.md`:
- Line 58: The line/comment that pre-biases classification ("Workflow is almost
always `installation`") should be removed or replaced so the classifier defaults
to `both` for ambiguous cases; update the classification rule or constant that
enforces this bias (search for the string "Workflow is almost always
`installation`" or the rule that sets default workflow to `installation`) to
instead return `both` when uncertainty is detected, and ensure any unit tests or
documentation referencing the old bias are updated to reflect the new default.
In
`@workflows/document-review/.claude/skills/code-check/references/discovery-file-io.md`:
- Line 45: Update the wording that currently reads "Workflow: output files are
typically `usage`" to broaden the classification and add clarification: change
the phrase to indicate that output files can be `usage` or `installation`
depending on purpose, and add a short criteria sentence plus 2–3 examples
distinguishing the two (e.g., usage: logs, reports, generated examples;
installation: deployment manifests, installers, setup scripts). Locate and edit
the line containing the exact text "Workflow: output files are typically
`usage`" and adjust surrounding documentation to include the decision rule and
examples so readers can distinguish `usage` vs `installation` outputs.
In `@workflows/document-review/.claude/skills/code-check/SKILL.md`:
- Around line 167-188: The finding classification schema currently lists only
"Accuracy" and "Completeness" as allowed values while the doc text defines a
"Staleness" finding type, causing dropped/misclassified stale findings; update
the schema/enum that defines allowed finding types to include "Staleness" (and
ensure any validation or parsing logic accepts it), map "Staleness" to the
intended severity ("High") consistent with the list, and adjust any code that
branches on "Accuracy"/"Completeness" (e.g., validation routines or serializers)
to handle "Staleness" as a first-class classification.
In `@workflows/document-review/.claude/skills/controller/SKILL.md`:
- Line 38: The document contains conflicting phase-order rules: one sentence
says “Phases can be skipped or reordered” while another requires “Scan must run
first” (Controller rule). Make the rules consistent by updating the wording so
the Scan phase is explicitly required to run before other phases (and cannot be
reordered) or alternatively allow skipping but state the precedence clearly; for
example, change the ambiguous sentence to “Phases may be skipped or reordered
except the Scan phase, which must run first,” and update any duplicate instances
(the same text around the other occurrence) to the same phrasing so the
Controller’s phase-order policy is unambiguous.
In `@workflows/document-review/.claude/skills/report/SKILL.md`:
- Around line 74-77: Update Step 4's reporting to check both optional outputs
instead of only artifacts/findings-code-check.md: detect presence of
artifacts/findings-code-check.md and the corresponding quality-review output
(e.g., artifacts/findings-quality-review.md or the project's actual quality file
name), and emit explicit messages for all three cases—only code-check missing,
only quality-review missing (report partial scope), and both missing (no
optional phases performed). Modify the function or block that currently inspects
artifacts/findings-code-check.md (Step 4 logic) to perform both existence checks
and produce the appropriate reason text in the generated report.
- Around line 21-24: The guidance for missing findings is too narrow: update the
message that currently references only `artifacts/findings-quality-review.md`,
`artifacts/findings-code-check.md` and the `/quality-review` command to instead
mention all valid ways to generate findings (e.g., recommend running
`/quality-review`, `/code-check`, or `/full-review`), and adjust any explanatory
text to say "If neither artifacts/findings-quality-review.md nor
artifacts/findings-code-check.md exists, inform the user and recommend running
`/quality-review`, `/code-check`, or `/full-review` to generate report inputs."
Use the same artifact filenames and command names in the updated guidance so the
change is localized to the SKILL.md text.
In `@workflows/document-review/.claude/skills/scan/SKILL.md`:
- Around line 87-93: The fenced discovery pattern block (the triple-backtick
code fence containing the glob lines **/*.md, **/*.rst, **/*.adoc, docs/**/*,
doc/**/*) is missing a language tag and triggers markdownlint MD040; fix it by
adding a language identifier (e.g., text) to the opening code fence so it
becomes ```text, leaving the inner lines unchanged and keeping the closing
triple backticks as is.
In `@workflows/document-review/README.md`:
- Line 50: Update the README warning about `/jira` and JIRA_API_TOKEN to
recommend a safe default workflow (use a masked/secure prompt or credential
store / pre-configured auth helper) instead of instructing users to paste a raw
token into the session; mention `/jira` and JIRA_API_TOKEN explicitly, show how
to configure a secure option (e.g., masked input or environment-backed secret
manager) as the primary example, and keep raw-token-in-session as a clearly
marked discouraged fallback only.
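One such safe default is a no-echo prompt. A minimal sketch (the variable name comes from the workflow; the token value is illustrative, and stdin is piped here only so the sketch runs non-interactively):

```shell
#!/bin/sh
# Interactively you would run:  IFS= read -rs JIRA_API_TOKEN
# (-s suppresses echo in bash, so the token never appears on screen or in
# shell history, unlike pasting it into a curl command line.)
printf '%s\n' 'example-token' | {
  IFS= read -r JIRA_API_TOKEN
  export JIRA_API_TOKEN
  echo "token loaded (${#JIRA_API_TOKEN} chars)"
}
```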
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 83d66f28-6614-4429-9f6f-ecac4a4ea2b7
📒 Files selected for processing (28)
workflows/document-review/.ambient/ambient.json
workflows/document-review/.claude/commands/code-check.md
workflows/document-review/.claude/commands/full-review.md
workflows/document-review/.claude/commands/jira.md
workflows/document-review/.claude/commands/quality-review.md
workflows/document-review/.claude/commands/report.md
workflows/document-review/.claude/commands/scan.md
workflows/document-review/.claude/skills/code-check/SKILL.md
workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md
workflows/document-review/.claude/skills/code-check/references/discovery-build-deploy.md
workflows/document-review/.claude/skills/code-check/references/discovery-cli-args.md
workflows/document-review/.claude/skills/code-check/references/discovery-config-schema.md
workflows/document-review/.claude/skills/code-check/references/discovery-data-models.md
workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md
workflows/document-review/.claude/skills/code-check/references/discovery-external-deps.md
workflows/document-review/.claude/skills/code-check/references/discovery-file-io.md
workflows/document-review/.claude/skills/code-check/references/inventory-format.md
workflows/document-review/.claude/skills/controller/SKILL.md
workflows/document-review/.claude/skills/jira/SKILL.md
workflows/document-review/.claude/skills/quality-review/SKILL.md
workflows/document-review/.claude/skills/report/SKILL.md
workflows/document-review/.claude/skills/scan/SKILL.md
workflows/document-review/CLAUDE.md
workflows/document-review/README.md
workflows/document-review/templates/findings-code-check.md
workflows/document-review/templates/findings-quality-review.md
workflows/document-review/templates/inventory.md
workflows/document-review/templates/report.md
```diff
@@ -0,0 +1,7 @@
+# /scan
+
+Read `.claude/skills/controller/SKILL.md` and follow it.
```
Make controller path unambiguous to avoid command breakage
Line 3 hardcodes a relative path that only works if the runtime CWD is the workflow root. If commands run from repo root, controller loading fails and /scan cannot execute. Use an absolute-from-repo path (or document/normalize CWD in the dispatcher) consistently across all command files.
Per the WORKFLOW_DEVELOPMENT_GUIDE.md, workflows are cloned into the session workspace: the workflow directory becomes the working directory. The path .claude/skills/controller/SKILL.md is correct because it's relative to the workflow root, which is exactly how Claude Code resolves paths. Changing these to workflows/document-review/.claude/skills/controller/SKILL.md would break the workflow when deployed via ACP, since that repo-root prefix won't exist in the session workspace.
This is applicable to a few of your comments on this PR.
@mynhardtburger, thank you for the clarification — that makes complete sense. Since the workflow directory is cloned as the session workspace root via ACP, relative paths like .claude/skills/controller/SKILL.md are resolved against the workflow root (not the repo root), making them both correct and intentional. Using a repo-root-prefixed path like workflows/document-review/.claude/skills/controller/SKILL.md would indeed break the deployed workflow. I'll dismiss my related comments on this PR.
I've noted this for future reviews:
4. Note API versioning patterns (path prefix like `/v1/`, header-based, etc.)
5. Check for middleware that applies auth, rate limiting, or other cross-cutting concerns
6. Workflow is always `usage`
Clarify or remove the "Workflow is always `usage`" directive
Line 51 introduces a mode token that is undefined in this reference and inconsistent with surrounding phase semantics. This is likely to cause mis-execution or confusion for the discovery agent. Replace it with an explicit, documented instruction tied to the code-check phase behavior.
3. Catalog CI/CD workflows and their triggers
4. List all Kustomize overlays, Helm values, or Terraform variables
5. Find deployment/setup scripts and their arguments
6. Workflow is almost always `installation`
Avoid pre-biasing workflow classification to `installation`.
"Workflow is almost always `installation`" conflicts with the shared heuristic that ambiguous cases should default to `both`. This can systematically mis-tag items and degrade downstream merge/report accuracy.
Suggested adjustment:

```diff
-6. Workflow is almost always `installation`
+6. Classify workflow using the inventory-format heuristics:
+   - `installation` for build/deploy-only usage
+   - `usage` for runtime usage
+   - `both` when present in both contexts
+   - when unclear, default to `both`
```
- **Findings must exist.** If neither `artifacts/findings-quality-review.md` nor `artifacts/findings-code-check.md` exists, inform the user and recommend running `/quality-review` first.
Missing-findings guidance is too narrow.
If both findings files are absent, recommending only /quality-review is incomplete; /code-check or /full-review are equally valid ways to generate report inputs.
Suggested adjustment
-- **Findings must exist.** If neither
- `artifacts/findings-quality-review.md` nor
- `artifacts/findings-code-check.md` exists, inform the user and
- recommend running `/quality-review` first.
+- **Findings must exist.** If neither
+ `artifacts/findings-quality-review.md` nor
+ `artifacts/findings-code-check.md` exists, inform the user and
+  recommend running `/quality-review`, `/code-check`, or `/full-review`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/skills/report/SKILL.md` around lines 21 -
24, The guidance for missing findings is too narrow: update the message that
currently references only `artifacts/findings-quality-review.md`,
`artifacts/findings-code-check.md` and the `/quality-review` command to instead
mention all valid ways to generate findings (e.g., recommend running
`/quality-review`, `/code-check`, or `/full-review`), and adjust any explanatory
text to say "If neither artifacts/findings-quality-review.md nor
artifacts/findings-code-check.md exists, inform the user and recommend running
`/quality-review`, `/code-check`, or `/full-review` to generate report inputs."
Use the same artifact filenames and command names in the updated guidance so the
change is localized to the SKILL.md text.
| `JIRA_TEAM` | No | `team=<name>` | Default team name |
| `JIRA_INITIAL_STATUS` | No | `status=<name>` | Workflow transition after creation (e.g., `Backlog`) |

> **Warning:** `/jira` uses `curl` to call the Jira REST API directly because the Atlassian MCP does not support creating epics or issues. Your `JIRA_API_TOKEN` will be visible in the session history.
Documented token exposure needs a safer default workflow.
The README currently normalizes secret leakage (“token will be visible in session history”). Please make the recommended path non-leaking by default (masked secret input / secure credential store / pre-configured auth helper), and keep raw-token-in-session as a discouraged fallback only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/README.md` at line 50, Update the README warning
about `/jira` and JIRA_API_TOKEN to recommend a safe default workflow (use a
masked/secure prompt or credential store / pre-configured auth helper) instead
of instructing users to paste a raw token into the session; mention `/jira` and
JIRA_API_TOKEN explicitly, show how to configure a secure option (e.g., masked
input or environment-backed secret manager) as the primary example, and keep
raw-token-in-session as a clearly marked discouraged fallback only.
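A non-leaking default along the lines the comment asks for can be sketched in shell; `get_jira_token` and the prompt text are illustrative, and only `JIRA_API_TOKEN` comes from the existing README:

```shell
# Prefer an already-exported secret; fall back to a masked prompt so the raw
# token never appears in the session transcript.
get_jira_token() {
  if [ -n "${JIRA_API_TOKEN:-}" ]; then
    printf '%s' "$JIRA_API_TOKEN"
  else
    # read -s suppresses echo: the token is typed but never displayed
    read -r -s -p 'Jira API token: ' token
    printf '%s' "$token"
  fi
}

export JIRA_API_TOKEN=example-token   # demo value, not a real credential
get_jira_token                        # prints example-token
```

Pasting a raw token into the chat would then remain possible but clearly secondary to the environment-backed path.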
…overage

- Add explicit Severity field to quality-review and code-check finding templates and skill instructions so the report phase can merge findings without reclassifying them.
- Change code-check result-type severity from hard-coded to guidance-based, letting the agent assess each finding individually.
- Pipe Jira credentials via stdin (`curl --config -`) instead of passing them as command-line arguments, preventing exposure in process listings.
- Add Java and Ruby search patterns to all 7 language-specific discovery agents and the recon framework-detection step.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mynhardt Burger <mynhardt@gmail.com>
Actionable comments posted: 3

♻️ Duplicate comments (5)

workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md (2)

66-66: ⚠️ Potential issue | 🟠 Major
Replace the hard-coded workflow mode with explicit phase behavior

Line 66 still uses "Workflow is always `usage`", which is ambiguous for discovery behavior and output expectations. Replace it with explicit code-check phase instructions and expected output semantics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md` at line 66, Replace the hard-coded statement "Workflow is always `usage`" with an explicit description of the code-check phase and expected outputs: update the section that currently contains the line "Workflow is always `usage`" to specify the phase name (e.g., "code-check"), enumerate the behavior for that phase (what inputs are validated, what checks are run), and define expected output semantics (pass/fail results, error formats, and example response shape). Ensure the new wording replaces the ambiguous text and includes concrete expectations for discovery behavior and output to guide implementers and validators.

17-49: ⚠️ Potential issue | 🟠 Major
Endpoint discovery still under-covers non-GET/POST routes

Line 17 through Line 49 still biases patterns toward GET/POST and generic `router.` tokens. This can miss PUT/PATCH/DELETE/OPTIONS/HEAD and method-agnostic forms, which breaks the “ALL endpoints” requirement in Line 3.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md` around lines 17 - 49, Expand the endpoint-discovery patterns to cover non-GET/POST methods and method-agnostic forms: add tokens for PUT/PATCH/DELETE/OPTIONS/HEAD (e.g., r.PUT / r.PATCH / r.DELETE, `@app.put` / `@app.patch` / app.delete, app.options / app.head, Express .put/.patch/.delete/.options/.head, NestJS `@Put`/`@Patch`/`@Delete`/`@Options`/`@Head`, Spring `@PutMapping`/`@PatchMapping`/`@DeleteMapping`/`@RequestMapping(method=...)`, JAX-RS `@PUT`/`@PATCH`/`@DELETE`/`@OPTIONS`/`@HEAD`), and include method-agnostic patterns like app.route(..., methods=[...]), router.add_route, router.Handle/router.HandleFunc with method params, match/any/resources/: and generic `router.`/`route(`/`app.route(` tokens so discovery meets the “ALL endpoints” requirement.

workflows/document-review/templates/findings-code-check.md (1)

34-47: ⚠️ Potential issue | 🟠 Major
Include `Source` in code-check finding records

Line 34–47 should include a `Source` field (e.g., `code-check`) so consolidated reporting can merge and trace findings consistently without inference.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/templates/findings-code-check.md` around lines 34 - 47, The finding entry in the template block (the record that lists Severity/Dimension/Doc location/Code location/Documented claim/Actual behavior/Evidence/Fix) is missing a Source field; update the template in workflows/document-review/templates/findings-code-check.md to add a "Source: code-check" (or equivalent canonical source token) alongside the other metadata fields so automated reporting can merge these findings reliably—ensure the new "Source" line appears for the specific finding that references DEFAULT_MAX_RETRIES = 5 and the README Configuration claim.

workflows/document-review/.claude/skills/code-check/SKILL.md (1)

170-205: ⚠️ Potential issue | 🟠 Major
Finding schema still excludes `Staleness`

Line 204 conflicts with Lines 170-193: you define stale findings but the allowed `Dimension` values omit `Staleness`, which risks dropped or mislabeled findings. As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

Proposed fix

-- **Dimension**: Accuracy or Completeness
+- **Dimension**: Accuracy, Completeness, or Staleness

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/.claude/skills/code-check/SKILL.md` around lines 170 - 205, The docs introduce a "Staleness" finding type but the allowed Dimension values exclude it, causing mismatches; update the findings schema to include "Staleness" as a valid Dimension value (and any related enums/validation logic) and adjust the findings generation/validation code and templates that reference the Dimension list (e.g., the findings schema/enum and the templates/findings-code-check.md usage of "Dimension") so that "Staleness" is accepted and rendered consistently in outputs and tests.

workflows/document-review/README.md (1)

50-50: ⚠️ Potential issue | 🟠 Major
Use a non-leaking default for `/jira` credentials

Line 50 still normalizes exposing `JIRA_API_TOKEN` in session history. The README should recommend a secure default workflow first (masked prompt / secret manager / pre-configured environment), and keep raw token pasting as an explicit discouraged fallback. As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

Suggested README adjustment

-> **Warning:** `/jira` uses `curl` to call the Jira REST API directly because the Atlassian MCP does not support creating epics or issues. Credentials are piped via stdin (`curl --config -`) to avoid exposure in process listings, but your `JIRA_API_TOKEN` will still be visible in the session history.
+> **Security note:** `/jira` uses `curl` to call the Jira REST API directly because the Atlassian MCP does not support creating epics or issues.
+> **Recommended (default):** provide `JIRA_API_TOKEN` via a secure secret source (masked input, environment-backed secret manager, or pre-configured auth helper) so the raw token is never pasted into chat/session text.
+> Credentials are piped via stdin (`curl --config -`) to avoid exposure in process listings.
+> **Discouraged fallback:** pasting a raw token into the session, which can expose it in session history.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/README.md` at line 50, Update the README note for the /jira endpoint to recommend a non-leaking default: instruct users to obtain credentials via a masked prompt (interactive secret entry), a secret manager, or pre-configured environment variables/secrets injection in CI/CD, and present piping raw JIRA_API_TOKEN into curl as an explicit discouraged fallback; keep the warning about session history exposure and mention using secure storage or masked input as the preferred workflow (refer to the README section about /jira).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md`:
- Around line 73-75: Update the test-file exclusion pattern list that currently
reads "Exclude matches in test files (paths containing `test/`, `_test.go`,
`test_`, `.test.`, `__tests__`) UNLESS the variable also appears in non-test
code" to include additional common test patterns—e.g., `tests/`, `spec/`,
`*_test.py`, `*.spec.ts`, `*.spec.js`—so the ruleset filters more test-only env
vars while preserving the existing exception that a variable is included if it
also appears in non-test code; locate and modify the text block containing that
exclusion list and ensure the wording and examples reflect the expanded
patterns.
In `@workflows/document-review/.claude/skills/jira/SKILL.md`:
- Around line 97-103: The curl examples currently inject raw JSON via -d '...'
which breaks if injected report text contains quotes/newlines; replace these
examples to show building the request body with a JSON serializer (e.g., jq -n
or similar) and then pass the serialized output to curl (using --data `@-` or
--data-binary `@-`) instead of inline -d; update the snippets around the curl
invocation (the lines using printf 'user = "%s:%s"' and the curl --config -
call) to demonstrate generating a safe JSON payload with jq and piping or
referencing that payload to curl so quotes/newlines are properly escaped.
In `@workflows/document-review/templates/findings-quality-review.md`:
- Around line 31-38: The findings template is missing a Source field which
downstream merging requires; update the finding block in
workflows/document-review/templates/findings-quality-review.md (the block
containing "**Severity:** Critical", "**Dimension:** Accuracy", "**File:**
`path/to/document.md:42`", "**Issue:** ...", "**Evidence:** ...", "**Fix:**
...") by adding a line "- **Source:** quality-review" so every finding entry
includes the required provenance field for machine merging.
---
Duplicate comments:
In
`@workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md`:
- Line 66: Replace the hard-coded statement "Workflow is always `usage`" with an
explicit description of the code-check phase and expected outputs: update the
section that currently contains the line "Workflow is always `usage`" to specify
the phase name (e.g., "code-check"), enumerate the behavior for that phase (what
inputs are validated, what checks are run), and define expected output semantics
(pass/fail results, error formats, and example response shape). Ensure the new
wording replaces the ambiguous text and includes concrete expectations for
discovery behavior and output to guide implementers and validators.
- Around line 17-49: Expand the endpoint-discovery patterns to cover
non-GET/POST methods and method-agnostic forms: add tokens for
PUT/PATCH/DELETE/OPTIONS/HEAD (e.g., r.PUT / r.PATCH / r.DELETE, `@app.put` /
`@app.patch` / app.delete, app.options / app.head, Express
.put/.patch/.delete/.options/.head, NestJS `@Put`/`@Patch`/`@Delete`/`@Options`/`@Head`,
Spring `@PutMapping`/`@PatchMapping`/`@DeleteMapping`/`@RequestMapping(method=...)`,
JAX-RS `@PUT`/`@PATCH`/`@DELETE`/`@OPTIONS`/`@HEAD`), and include method-agnostic patterns
like app.route(..., methods=[...]), router.add_route,
router.Handle/router.HandleFunc with method params, match/any/resources/: and
generic `router.`/`route(`/`app.route(` tokens so discovery meets the “ALL
endpoints” requirement.
In `@workflows/document-review/.claude/skills/code-check/SKILL.md`:
- Around line 170-205: The docs introduce a "Staleness" finding type but the
allowed Dimension values exclude it, causing mismatches; update the findings
schema to include "Staleness" as a valid Dimension value (and any related
enums/validation logic) and adjust the findings generation/validation code and
templates that reference the Dimension list (e.g., the findings schema/enum and
the templates/findings-code-check.md usage of "Dimension") so that "Staleness"
is accepted and rendered consistently in outputs and tests.
In `@workflows/document-review/README.md`:
- Line 50: Update the README note for the /jira endpoint to recommend a
non-leaking default: instruct users to obtain credentials via a masked prompt
(interactive secret entry), a secret manager, or pre-configured environment
variables/secrets injection in CI/CD, and present piping raw JIRA_API_TOKEN into
curl as an explicit discouraged fallback; keep the warning about session history
exposure and mention using secure storage or masked input as the preferred
workflow (refer to the README section about /jira).
In `@workflows/document-review/templates/findings-code-check.md`:
- Around line 34-47: The finding entry in the template block (the record that
lists Severity/Dimension/Doc location/Code location/Documented claim/Actual
behavior/Evidence/Fix) is missing a Source field; update the template in
workflows/document-review/templates/findings-code-check.md to add a "Source:
code-check" (or equivalent canonical source token) alongside the other metadata
fields so automated reporting can merge these findings reliably—ensure the new
"Source" line appears for the specific finding that references
DEFAULT_MAX_RETRIES = 5 and the README Configuration claim.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 069f7d28-6da8-415c-bfe5-bd6c765baae6
📒 Files selected for processing (13)
- workflows/document-review/.claude/skills/code-check/SKILL.md
- workflows/document-review/.claude/skills/code-check/references/discovery-api-schema.md
- workflows/document-review/.claude/skills/code-check/references/discovery-cli-args.md
- workflows/document-review/.claude/skills/code-check/references/discovery-config-schema.md
- workflows/document-review/.claude/skills/code-check/references/discovery-data-models.md
- workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md
- workflows/document-review/.claude/skills/code-check/references/discovery-external-deps.md
- workflows/document-review/.claude/skills/code-check/references/discovery-file-io.md
- workflows/document-review/.claude/skills/jira/SKILL.md
- workflows/document-review/.claude/skills/quality-review/SKILL.md
- workflows/document-review/README.md
- workflows/document-review/templates/findings-code-check.md
- workflows/document-review/templates/findings-quality-review.md
3. Exclude matches in test files (paths containing `test/`, `_test.go`,
   `test_`, `.test.`, `__tests__`) UNLESS the variable also appears in
   non-test code
Broaden test-file exclusion patterns to reduce false positives
Line 73’s exclusion list is too narrow and will capture many test-only environment variables, which pollutes inventory and report deduplication quality. Add common patterns like `tests/`, `spec/`, `*_test.py`, `*.spec.ts`, and `*.spec.js`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@workflows/document-review/.claude/skills/code-check/references/discovery-env-vars.md`
around lines 73 - 75, Update the test-file exclusion pattern list that currently
reads "Exclude matches in test files (paths containing `test/`, `_test.go`,
`test_`, `.test.`, `__tests__`) UNLESS the variable also appears in non-test
code" to include additional common test patterns—e.g., `tests/`, `spec/`,
`*_test.py`, `*.spec.ts`, `*.spec.js`—so the ruleset filters more test-only env
vars while preserving the existing exception that a variable is included if it
also appears in non-test code; locate and modify the text block containing that
exclusion list and ensure the wording and examples reflect the expanded
patterns.
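The broadened rule can be sketched with shell glob patterns; `is_test_path` is an illustrative helper and the pattern list mirrors the reviewer's suggestion rather than the workflow's current text:

```shell
# Hypothetical matcher for the broadened test-file exclusion list.
is_test_path() {
  case "$1" in
    */test/*|*/tests/*|*/spec/*|*/__tests__/*|*_test.go|*_test.py|test_*|*/test_*|*.test.*|*.spec.ts|*.spec.js)
      return 0 ;;   # looks test-only: candidate for exclusion
    *)
      return 1 ;;   # non-test code: keep the match
  esac
}

is_test_path "src/components/Button.spec.ts" && echo excluded   # prints excluded
```

The existing exception still applies on top of this: a variable matched only in test paths is kept if it also appears in non-test code.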
```bash
printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN" | \
curl --config - \
  -X POST \
  -H "Content-Type: application/json" \
  -d '...' \
  "$JIRA_URL/rest/api/2/issue"
```
Use JSON-safe payload construction for Jira requests
Line 97–103 and Line 182–206 rely on raw `-d '...'` payload examples while injecting rich text from findings. Any quotes/newlines in report content can break JSON and fail issue creation. Build request bodies with a JSON serializer (`jq -n` or equivalent) and pass the generated payload to curl.
Suggested doc update pattern
- curl --config - \
- -X POST \
- -H "Content-Type: application/json" \
- -d '...' \
- "$JIRA_URL/rest/api/2/issue"
+ payload="$(jq -n \
+ --arg project "$PROJECT_KEY" \
+ --arg summary "$SUMMARY" \
+ --arg description "$DESCRIPTION_WIKI" \
+ '{
+ fields: {
+ project: { key: $project },
+ summary: $summary,
+ description: $description
+ }
+ }')"
+ curl --config - \
+ -X POST \
+ -H "Content-Type: application/json" \
+ --data "$payload" \
+ "$JIRA_URL/rest/api/2/issue"

Also applies to: 182-206
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/skills/jira/SKILL.md` around lines 97 -
103, The curl examples currently inject raw JSON via -d '...' which breaks if
injected report text contains quotes/newlines; replace these examples to show
building the request body with a JSON serializer (e.g., jq -n or similar) and
then pass the serialized output to curl (using --data `@-` or --data-binary `@-`)
instead of inline -d; update the snippets around the curl invocation (the lines
using printf 'user = "%s:%s"' and the curl --config - call) to demonstrate
generating a safe JSON payload with jq and piping or referencing that payload to
curl so quotes/newlines are properly escaped.
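A minimal sketch of the serializer-first pattern, assuming `python3` is available as the JSON serializer (the comment's `jq -n` works the same way); the summary text and field shape are placeholders:

```shell
# Build the request body with a real JSON serializer so quotes and newlines
# in finding text cannot break the payload.
SUMMARY='Docs finding: contains "quotes" and
a newline'
payload="$(python3 -c 'import json, sys
print(json.dumps({"fields": {"summary": sys.argv[1]}}))' "$SUMMARY")"

# Pass the serialized payload to curl instead of an inline -d '...' string:
#   printf 'user = "%s:%s"\n' "$JIRA_EMAIL" "$JIRA_API_TOKEN" | \
#     curl --config - -X POST -H "Content-Type: application/json" \
#       --data "$payload" "$JIRA_URL/rest/api/2/issue"
printf '%s\n' "$payload"
```

Because the serializer owns all escaping, arbitrary report text round-trips as valid JSON.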
- **Severity:** Critical
- **Dimension:** Accuracy
- **File:** `path/to/document.md:42`
- **Issue:** The documented command uses a flag that doesn't exist.
- **Evidence:** `pip install --global mypackage`
  (`--global` is not a valid pip flag)
- **Fix:** Change `--global` to `--user` or remove the flag entirely.
Add an explicit Source field to each finding entry
Line 31–38 omits `Source` (quality-review/code-check), but downstream report merging depends on phase provenance. Add `- **Source:** quality-review` to the template’s finding block to keep artifact schema consistent and machine-mergeable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/templates/findings-quality-review.md` around lines
31 - 38, The findings template is missing a Source field which downstream
merging requires; update the finding block in
workflows/document-review/templates/findings-quality-review.md (the block
containing "**Severity:** Critical", "**Dimension:** Accuracy", "**File:**
`path/to/document.md:42`", "**Issue:** ...", "**Evidence:** ...", "**Fix:**
...") by adding a line "- **Source:** quality-review" so every finding entry
includes the required provenance field for machine merging.
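A quick illustration of why the explicit `Source` line helps downstream merging; the scratch files below stand in for the real `artifacts/` findings:

```shell
# Demo scratch files standing in for the two findings artifacts
dir="$(mktemp -d)"
printf -- '- **Source:** quality-review\n' > "$dir/findings-quality-review.md"
printf -- '- **Source:** code-check\n'     > "$dir/findings-code-check.md"

# With the provenance field present, a merge step can group findings by
# source with plain grep, no inference needed.
grep -h '^- \*\*Source:\*\*' "$dir"/findings-*.md | sort
```

Without the field, the report phase would have to infer provenance from filenames alone.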
- Add a title, brief description, output path, and prerequisites or arguments to each command file so users see useful context when invoking commands.
- Move /full-review out of the numbered phase list in CLAUDE.md into a separate Convenience Commands section since it is a composite command, not a distinct phase.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mynhardt Burger <mynhardt@gmail.com>
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workflows/document-review/.claude/commands/code-check.md`:
- Line 9: The command currently uses a relative controller path (the string
".claude/skills/controller/SKILL.md") which is cwd-dependent and can fail;
update the command to reference a repo-root-qualified controller path instead
(i.e., construct the path starting from the repository root rather than a
relative path) so the tool always resolves the controller regardless of working
directory, and ensure any tests or callers that invoke the command use the same
repo-root-qualified form.
In `@workflows/document-review/.claude/commands/full-review.md`:
- Line 6: The hardcoded controller reference
".claude/skills/controller/SKILL.md" depends on the current working directory
and can break the /full-review flow; update the code that builds or reads this
path to compute it relative to the script file (e.g., using __dirname or
import.meta.url with path.resolve/path.join) or resolve the repository root
reliably (git rev-parse or a repo-root helper) and then join "
.claude/skills/controller/SKILL.md", replacing any direct use of the literal
".claude/skills/controller/SKILL.md" so the dispatcher for /full-review always
finds the controller regardless of CWD.
In `@workflows/document-review/.claude/commands/report.md`:
- Line 10: The reference "Read `.claude/skills/controller/SKILL.md` and follow
it." uses a relative path that can fail at runtime; update that string to a
repo-root-qualified path (i.e., prepend the repository-root indicator so the
path resolves reliably at runtime) wherever it appears in the report.md,
ensuring the controller skill is referenced with the full repo-root-qualified
file path instead of the bare relative path.
In `@workflows/document-review/CLAUDE.md`:
- Line 11: The heading "Convenience Commands" currently uses a level-3 heading
(###) which skips an expected level-2 under the document structure; change the
heading markup for "Convenience Commands" from ### to ## so it properly follows
the previous section level and satisfies MD001, ensuring heading hierarchy is
consistent.
- Around line 18-20: The markdown uses relative paths that assume the workflow
CWD; update the references to be repo-root-qualified so they resolve regardless
of working directory—replace `.claude/skills/controller/SKILL.md` and
`.claude/skills/{name}/SKILL.md` with explicit repo-root paths (e.g., prefix
with `/` or the repository root variable you use) throughout this document so
users running commands from the repo root won't get broken links or
instructions; ensure any other similar references in this file follow the same
pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: df9ab888-c0da-4897-92cc-bd912f091aac
📒 Files selected for processing (6)
- workflows/document-review/.claude/commands/code-check.md
- workflows/document-review/.claude/commands/full-review.md
- workflows/document-review/.claude/commands/quality-review.md
- workflows/document-review/.claude/commands/report.md
- workflows/document-review/.claude/commands/scan.md
- workflows/document-review/CLAUDE.md
**Requires:** `/scan` must have been run first.

Read `.claude/skills/controller/SKILL.md` and follow it.
Use a repo-root-qualified controller path to prevent command breakage.
This relative path depends on current working directory and can fail outside workflows/document-review/.
Proposed fix
-Read `.claude/skills/controller/SKILL.md` and follow it.
+Read `workflows/document-review/.claude/skills/controller/SKILL.md` and follow it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/commands/code-check.md` at line 9, The
command currently uses a relative controller path (the string
".claude/skills/controller/SKILL.md") which is cwd-dependent and can fail;
update the command to reference a repo-root-qualified controller path instead
(i.e., construct the path starting from the repository root rather than a
relative path) so the tool always resolves the controller regardless of working
directory, and ensure any tests or callers that invoke the command use the same
repo-root-qualified form.
Runs scan → quality-review + code-check (parallel) → report in one shot,
pausing only for critical decisions. Results are written to `artifacts/`.

Read `.claude/skills/controller/SKILL.md` and follow it.
Controller path should not depend on CWD.
If dispatcher runs from repo root, this reference may fail and block /full-review.
Proposed fix
-Read `.claude/skills/controller/SKILL.md` and follow it.
+Read `workflows/document-review/.claude/skills/controller/SKILL.md` and follow it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/commands/full-review.md` at line 6, The
hardcoded controller reference ".claude/skills/controller/SKILL.md" depends on
the current working directory and can break the /full-review flow; update the
code that builds or reads this path to compute it relative to the script file
(e.g., using __dirname or import.meta.url with path.resolve/path.join) or
resolve the repository root reliably (git rev-parse or a repo-root helper) and
then join " .claude/skills/controller/SKILL.md", replacing any direct use of the
literal ".claude/skills/controller/SKILL.md" so the dispatcher for /full-review
always finds the controller regardless of CWD.
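The git-based resolution the prompt describes can be sketched as follows; the fallback to `pwd` for non-git checkouts is an added assumption:

```shell
# Resolve the repository root once, then build a CWD-independent path to the
# controller skill file.
repo_root="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
controller="$repo_root/workflows/document-review/.claude/skills/controller/SKILL.md"
echo "$controller"
```

Any dispatcher that joins paths from `repo_root` this way resolves the controller identically whether invoked from the repo root or from inside `workflows/document-review/`.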
**Requires:** `/scan` must have been run first.

Read `.claude/skills/controller/SKILL.md` and follow it.
Make controller reference deterministic (repo-root path).
Current path is CWD-sensitive and may fail in common execution contexts.
Proposed fix
-Read `.claude/skills/controller/SKILL.md` and follow it.
+Read `workflows/document-review/.claude/skills/controller/SKILL.md` and follow it.
**Requires:** At least one of `/quality-review` or `/code-check` must have
been run first.

Read `.claude/skills/controller/SKILL.md` and follow it.
Fix controller skill path to avoid runtime resolution failures.
Use a repo-root-qualified path here as well.
Proposed fix
-Read `.claude/skills/controller/SKILL.md` and follow it.
+Read `workflows/document-review/.claude/skills/controller/SKILL.md` and follow it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/commands/report.md` at line 10, The
reference "Read `.claude/skills/controller/SKILL.md` and follow it." uses a
relative path that can fail at runtime; update that string to a
repo-root-qualified path (i.e., prepend the repository-root indicator so the
path resolves reliably at runtime) wherever it appears in the report.md,
ensuring the controller skill is referenced with the full repo-root-qualified
file path instead of the bare relative path.
The workflow controller lives at `.claude/skills/controller/SKILL.md`.
It defines how to execute phases, recommend next steps, and handle transitions.
Phase skills are at `.claude/skills/{name}/SKILL.md`.
Path references are ambiguous outside workflow-root CWD.
These references should be repo-root-qualified to avoid broken instructions when users execute from repository root.
Proposed fix
-The workflow controller lives at `.claude/skills/controller/SKILL.md`.
+The workflow controller lives at `workflows/document-review/.claude/skills/controller/SKILL.md`.
It defines how to execute phases, recommend next steps, and handle transitions.
-Phase skills are at `.claude/skills/{name}/SKILL.md`.
+Phase skills are at `workflows/document-review/.claude/skills/{name}/SKILL.md`.
📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/CLAUDE.md` around lines 18 - 20, The markdown uses
relative paths that assume the workflow CWD; update the references to be
repo-root-qualified so they resolve regardless of working directory—replace
`.claude/skills/controller/SKILL.md` and `.claude/skills/{name}/SKILL.md` with
explicit repo-root paths (e.g., prefix with `/` or the repository root variable
you use) throughout this document so users running commands from the repo root
won't get broken links or instructions; ensure any other similar references in
this file follow the same pattern.
…flow

- Add missing pandoc prerequisite to /jira command
- Add Staleness to allowed Dimension values in code-check findings schema
- Resolve conflicting phase-order rules in controller (dependency-aware)
- Broaden missing-findings guidance in report skill to include all commands
- Add symmetric check for missing quality-review findings in report Step 4
- Fix heading level increment in CLAUDE.md (h3 → h2)
- Add language tag to fenced code block in scan skill

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mynhardt Burger <mynhardt@gmail.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
Actionable comments posted: 3
♻️ Duplicate comments (1)
workflows/document-review/CLAUDE.md (1)
Lines 18-20: ⚠️ Potential issue | 🟠 Major
Use repo-root-qualified paths for controller/skills references.
These workflow-relative paths are ambiguous outside `workflows/document-review` and can misdirect users running from repo root. Use explicit repo-root paths here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/CLAUDE.md` around lines 18 - 20, Update the ambiguous workflow-relative paths to be repo-root-qualified so they resolve correctly from the repository root: replace occurrences of `.claude/skills/controller/SKILL.md` with `workflows/document-review/.claude/skills/controller/SKILL.md` and replace `.claude/skills/{name}/SKILL.md` with `workflows/document-review/.claude/skills/{name}/SKILL.md` throughout the document (look for the lines that currently mention `.claude/skills/controller/SKILL.md` and `.claude/skills/{name}/SKILL.md`) so readers running commands from repo root get correct, unambiguous references.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workflows/document-review/.claude/skills/code-check/SKILL.md`:
- Around line 197-206: Update the SKILL.md instructions so the finding fields
exactly match the template names in templates/findings-code-check.md: replace
the "File" field with "Doc location" and ensure "Code location" is used as in
the template; also confirm the output target remains
artifacts/findings-code-check.md and that every required field (Severity,
Dimension, Doc location, Code location) is listed verbatim to avoid
ingestion/dedup mismatches.
In `@workflows/document-review/.claude/skills/report/SKILL.md`:
- Around line 61-64: The cross-tab step that "Build a dimension × severity
cross-tabulation" currently hardcodes seven dimensions (Accuracy, Completeness,
Consistency, Clarity, Currency, Structure, Examples) and therefore omits the
`Staleness` dimension emitted by code-check; update the logic that builds the
table (the report step that consumes the merged findings from quality-review +
code-check) to derive the set of dimensions dynamically from the merged findings
instead of using a fixed list, or at minimum add `Staleness` to the hardcoded
list so the counts include it; locate the code/step that constructs the
dimension list (the “Build a dimension × severity cross-tabulation” block /
merged findings consumer) and change it to compute unique dimensions from
mergedFindings (or findings) then aggregate counts per severity and include
row/column totals.
In `@workflows/document-review/.claude/skills/scan/SKILL.md`:
- Around line 99-113: The SKILL mentions recording an "Audience" field in Step 3
but also mandates using the exact schema from templates/inventory.md in Step 6,
which currently lacks an Audience column; resolve the conflict by either adding
an Audience column to the templates/inventory.md schema or removing the Audience
requirement from Step 3 so the output shape is consistent—update SKILL.md to
reflect the chosen approach (modify the "Audience" entry or the Step 6
instruction), ensuring all references to "Audience", "Step 3", "Step 6", and
"templates/inventory.md" indicate the new single canonical schema.
---
Duplicate comments:
In `@workflows/document-review/CLAUDE.md`:
- Around line 18-20: Update the ambiguous workflow-relative paths to be
repo-root-qualified so they resolve correctly from the repository root: replace
occurrences of `.claude/skills/controller/SKILL.md` with
`workflows/document-review/.claude/skills/controller/SKILL.md` and replace
`.claude/skills/{name}/SKILL.md` with
`workflows/document-review/.claude/skills/{name}/SKILL.md` throughout the
document (look for the lines that currently mention
`.claude/skills/controller/SKILL.md` and `.claude/skills/{name}/SKILL.md`) so
readers running commands from repo root get correct, unambiguous references.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: f386e554-2934-4258-bf78-b865214fc6ce
📒 Files selected for processing (6)
- workflows/document-review/.claude/commands/jira.md
- workflows/document-review/.claude/skills/code-check/SKILL.md
- workflows/document-review/.claude/skills/controller/SKILL.md
- workflows/document-review/.claude/skills/report/SKILL.md
- workflows/document-review/.claude/skills/scan/SKILL.md
- workflows/document-review/CLAUDE.md
Follow the template at `templates/findings-code-check.md`. Write to
`artifacts/findings-code-check.md`.

Each finding must include:

- **Severity**: Critical, High, Medium, or Low (use the guidance in the
  verification process above, but assess each finding individually)
- **Dimension**: Accuracy, Completeness, or Staleness
- **File**: Doc file path and line (e.g., `README.md:85`)
- **Code location**: Source file and line
Finding field names drift from the required template schema.
This section says to use File, but the referenced findings template uses Doc location + Code location. Keeping mismatched field names will make report ingestion/dedup fragile.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/skills/code-check/SKILL.md` around lines
197 - 206, Update the SKILL.md instructions so the finding fields exactly match
the template names in templates/findings-code-check.md: replace the "File" field
with "Doc location" and ensure "Code location" is used as in the template; also
confirm the output target remains artifacts/findings-code-check.md and that
every required field (Severity, Dimension, Doc location, Code location) is
listed verbatim to avoid ingestion/dedup mismatches.
Build a dimension × severity cross-tabulation from the deduplicated list: for
each of the 7 dimensions (Accuracy, Completeness, Consistency, Clarity,
Currency, Structure, Examples), count findings at each severity level (Critical,
High, Medium, Low). Include row and column totals.
Dimension cross-tab omits Staleness from merged findings.
/report merges quality-review + code-check outputs, but this step hardcodes only 7 dimensions. Since code-check emits Staleness, your statistics/rating table can silently undercount or misclassify findings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/skills/report/SKILL.md` around lines 61 -
64, The cross-tab step that "Build a dimension × severity cross-tabulation"
currently hardcodes seven dimensions (Accuracy, Completeness, Consistency,
Clarity, Currency, Structure, Examples) and therefore omits the `Staleness`
dimension emitted by code-check; update the logic that builds the table (the
report step that consumes the merged findings from quality-review + code-check)
to derive the set of dimensions dynamically from the merged findings instead of
using a fixed list, or at minimum add `Staleness` to the hardcoded list so the
counts include it; locate the code/step that constructs the dimension list (the
“Build a dimension × severity cross-tabulation” block / merged findings
consumer) and change it to compute unique dimensions from mergedFindings (or
findings) then aggregate counts per severity and include row/column totals.
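The dynamic-derivation fix suggested above takes only a few lines. A minimal illustrative sketch, assuming findings are already parsed into dicts with `dimension` and `severity` keys (a hypothetical shape, not this workflow's actual parser):

```python
from collections import Counter

SEVERITIES = ["Critical", "High", "Medium", "Low"]

def cross_tab(findings):
    """Dimension x severity counts with row/column totals.
    Dimensions are derived from the data itself, so a dimension like
    code-check's Staleness is never silently dropped."""
    counts = Counter((f["dimension"], f["severity"]) for f in findings)
    dims = sorted({f["dimension"] for f in findings})
    table = {}
    for dim in dims:
        row = {sev: counts[(dim, sev)] for sev in SEVERITIES}
        row["Total"] = sum(row.values())
        table[dim] = row
    totals = {sev: sum(table[d][sev] for d in dims) for sev in SEVERITIES}
    totals["Total"] = sum(totals.values())
    table["Total"] = totals
    return table
```

Deriving `dims` from the merged findings is what keeps the table complete whichever phases contributed.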
Record:

- **Path**: Relative path from project root
- **Format**: md, rst, txt, adoc, html, etc.
- **Size**: Approximate line count (use `wc -l` or count while reading)
- **Topic**: What the document covers (determined from title, headings, and
  opening content — not inferred from path)
- **Audience**: Who this appears to be written for:
  - End users (installation, usage, configuration)
  - Developers (API reference, architecture, contributing)
  - Operators (deployment, monitoring, troubleshooting)
  - General (README, license, changelog)
- **Has executable instructions**: Whether the doc contains code blocks with
  shell commands, installation steps, or usage examples
Conflicting output contract for Audience vs template schema.
Step 3 requires recording Audience, but Step 6 also requires following templates/inventory.md exactly; that template currently has no Audience column. This makes output shape ambiguous and can break downstream consumers expecting consistent inventory structure.
Also applies to: 137-138
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/.claude/skills/scan/SKILL.md` around lines 99 -
113, The SKILL mentions recording an "Audience" field in Step 3 but also
mandates using the exact schema from templates/inventory.md in Step 6, which
currently lacks an Audience column; resolve the conflict by either adding an
Audience column to the templates/inventory.md schema or removing the Audience
requirement from Step 3 so the output shape is consistent—update SKILL.md to
reflect the chosen approach (modify the "Audience" entry or the Step 6
instruction), ensuring all references to "Audience", "Step 3", "Step 6", and
"templates/inventory.md" indicate the new single canonical schema.
…s-dependencies

Verification of a document-review report against the maas codebase revealed a 17% false-positive rate (8/46 findings). Root causes: quality-review flagging issues without checking surrounding context or linked pages, code-check not confirming mismatches with direct reads, and the report lacking cross-dependency tracking between related findings.

Changes:
- quality-review: add "Before You Record a Finding" verification gates (check context, follow links, verify filesystem, respect component boundaries, calibrate procedural checks for doc snippets)
- code-check: require Read/Grep confirmation before reporting mismatches
- report: add Step 2.5 for cross-dependency detection with Dependencies field on findings
- templates/report.md: add Repository root header and Dependencies field
- CLAUDE.md: add false-positive cost principle

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mynhardt Burger <mynhardt@gmail.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
The previous commit added a "Before You Record a Finding" checklist to prevent false positives. Targeted testing showed 3 of 3 testable false positives survived because agents skipped the passive checklist — it was too far from the point of action.

This commit replaces the checklist approach with structural enforcement: verification evidence is now a required field in the finding format itself. The agent cannot produce a valid finding without documenting what it checked.

Changes:
- quality-review: replace 60-line checklist with brief "why" section + required "Context checked" field (Lines read, Links followed, Files verified) in the finding format
- code-check: replace passive "verification rigor" paragraph with required "Code verified" field (Full definition read, Alternative sources checked)
- report: explicitly list all required header fields (including Repository root) and finding fields (including Dependencies) inline rather than relying on template-only references
- templates: add Context checked / Code verified fields to finding examples

Targeted tests confirmed all 3 testable false positives are now prevented:
- quickstart "no error guidance" → Links followed found troubleshooting
- maas-setup "ambiguous placeholders" → Lines read found UPPERCASE format
- controller-overview "script may not exist" → Files verified ran Glob

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: mynhardtburger <mynhardt@gmail.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
Actionable comments posted: 2
♻️ Duplicate comments (4)
workflows/document-review/CLAUDE.md (1)
Lines 18-20: ⚠️ Potential issue | 🟠 Major
Path references remain ambiguous outside workflow directory.
These relative paths (`.claude/skills/...`) will fail when users read this from the repository root. They should be fully qualified as `workflows/document-review/.claude/skills/...` to resolve correctly regardless of working directory. This repeats a previously flagged concern.
Proposed fix
-The workflow controller lives at `.claude/skills/controller/SKILL.md`.
+The workflow controller lives at `workflows/document-review/.claude/skills/controller/SKILL.md`.
It defines how to execute phases, recommend next steps, and handle transitions.
-Phase skills are at `.claude/skills/{name}/SKILL.md`.
+Phase skills are at `workflows/document-review/.claude/skills/{name}/SKILL.md`.
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/CLAUDE.md` around lines 18 - 20, Update the ambiguous relative paths in the CLAUDE.md text so they resolve from repo root: replace occurrences of ".claude/skills/controller/SKILL.md" and ".claude/skills/{name}/SKILL.md" with fully qualified paths "workflows/document-review/.claude/skills/controller/SKILL.md" and "workflows/document-review/.claude/skills/{name}/SKILL.md" respectively (i.e., edit the strings in the file that reference the workflow controller and phase skill locations).
workflows/document-review/.claude/skills/code-check/SKILL.md (1)
Lines 203-224: ⚠️ Potential issue | 🟠 Major
Field naming mismatch between SKILL and template.
Line 206 specifies `**File**: Doc file path and line`, but the referenced template at `templates/findings-code-check.md:36` uses `**Doc location**`. This inconsistency will cause confusion during finding generation and may break the report merge logic that parses structured fields.
🔧 Proposed fix
Update the SKILL.md to match the template field names:
-Each finding must include all of these fields — if you cannot fill in
-**Code verified**, discard the finding:
+Each finding must include all of these fields — if you cannot fill in
+**Code verified**, discard the finding:
- **Severity**: Critical, High, Medium, or Low (use the guidance in the verification process above, but assess each finding individually)
- **Dimension**: Accuracy, Completeness, or Staleness
-- **File**: Doc file path and line (e.g., `README.md:85`)
+- **Doc location**: Doc file path and line (e.g., `README.md:85`)
- **Code location**: Source file and line
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/.claude/skills/code-check/SKILL.md` around lines 203 - 224, The SKILL.md field name "File" under the findings template is inconsistent with the template field "Doc location" used by templates/findings-code-check.md:36; update the SKILL.md entry (the list item currently reading "**File**: Doc file path and line") to use "**Doc location**: Doc file path and line" (and search SKILL.md for any other "File" labels in that findings block to rename them), ensuring the structured field name matches the template so the report merge/parser expects "Doc location".
workflows/document-review/templates/findings-quality-review.md (1)
Lines 31-41: ⚠️ Potential issue | 🟠 Major
Missing Source field in finding template.
The finding structure lacks a `Source` field to indicate which phase produced it (quality-review vs code-check). While the report skill can infer provenance from the filename, explicitly including `Source: quality-review` in each finding block improves machine-parseability and makes the deduplication logic in report SKILL.md more robust.
🔧 Proposed fix
- **Severity:** Critical
- **Dimension:** Accuracy
- **File:** `path/to/document.md:42`
+- **Source:** quality-review
- **Issue:** The documented command uses a flag that doesn't exist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/templates/findings-quality-review.md` around lines 31 - 41, Add a "Source: quality-review" field to the finding blocks in the findings-quality-review.md template so each finding explicitly declares its provenance; locate the block that contains "Severity:", "Dimension:", "File:", "Issue:", "Evidence:", "Context checked:" and "Fix:" (the finding shown uses the pip flag example) and insert a "Source: quality-review" line (e.g., directly beneath "Severity:" or at the top of the block) for machine-parseability and deduplication compatibility.
workflows/document-review/.claude/skills/report/SKILL.md (1)
Lines 81-84: ⚠️ Potential issue | 🟠 Major
Dimension cross-tabulation excludes Staleness findings from code-check.
The hardcoded 7-dimension list omits "Staleness," which code-check SKILL.md (line 205) emits as a valid dimension. When code-check findings include Staleness, they won't be counted in the summary table, leading to incomplete statistics.
🔧 Proposed fix
Either expand the dimension list to include Staleness, or instruct the report skill to dynamically derive dimensions from the merged findings:
-Build a dimension × severity cross-tabulation from the deduplicated list: for
-each of the 7 dimensions (Accuracy, Completeness, Consistency, Clarity,
-Currency, Structure, Examples), count findings at each severity level (Critical,
-High, Medium, Low). Include row and column totals.
+Build a dimension × severity cross-tabulation from the deduplicated list. Derive
+the unique set of dimensions from the merged findings (commonly: Accuracy,
+Completeness, Consistency, Clarity, Currency, Structure, Examples from
+quality-review; plus Staleness from code-check when present). Count findings at
+each severity level (Critical, High, Medium, Low). Include row and column totals.
Alternatively, if Staleness should be subsumed under Accuracy, document that mapping explicitly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workflows/document-review/.claude/skills/report/SKILL.md` around lines 81 - 84, The cross-tabulation currently uses a hardcoded 7-dimension list (Accuracy, Completeness, Consistency, Clarity, Currency, Structure, Examples) and therefore omits findings labelled "Staleness" emitted by code-check; update the report generation logic in SKILL.md so the dimension × severity table either includes "Staleness" in the static dimension list, or (preferably) derives the set of dimensions dynamically from the merged findings before counting, or explicitly map "Staleness" to an existing dimension (e.g., Accuracy) and document that mapping; ensure the change touches the block that builds the dimension list used for the cross-tabulation so Staleness findings are counted and totals updated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workflows/document-review/CLAUDE.md`:
- Around line 23-32: The "Quality Dimensions" list is ambiguous about which
phase evaluates them; update the documentation in the Quality Dimensions section
so it explicitly states that all seven dimensions (Accuracy, Completeness,
Consistency, Clarity, Currency, Structure, Examples) are evaluated during the
quality-review phase only, while the code-check skill/phase evaluates only
Accuracy, Completeness, and Staleness; edit the paragraph around the existing
list (the "Quality Dimensions" heading and adjacent text) to add a clarifying
sentence referencing "quality-review" and "code-check" by name so readers
understand which dimensions each phase covers.
In `@workflows/document-review/templates/findings-code-check.md`:
- Around line 34-51: The findings entry is missing a Source field; edit the
block that contains "- **Severity:** Critical" and "- **Dimension:** Accuracy"
in workflows/document-review/templates/findings-code-check.md and insert a new
line "- **Source:** code-check" immediately after the "- **Dimension:**
Accuracy" line so the finding matches the SKILL.md/quality-review structure and
can be merged/deduplicated correctly.
---
Duplicate comments:
In `@workflows/document-review/.claude/skills/code-check/SKILL.md`:
- Around line 203-224: The SKILL.md field name "File" under the findings
template is inconsistent with the template field "Doc location" used by
templates/findings-code-check.md:36; update the SKILL.md entry (the list item
currently reading "**File**: Doc file path and line") to use "**Doc location**:
Doc file path and line" (and search SKILL.md for any other "File" labels in that
findings block to rename them), ensuring the structured field name matches the
template so the report merge/parser expects "Doc location".
In `@workflows/document-review/.claude/skills/report/SKILL.md`:
- Around line 81-84: The cross-tabulation currently uses a hardcoded 7-dimension
list (Accuracy, Completeness, Consistency, Clarity, Currency, Structure,
Examples) and therefore omits findings labelled "Staleness" emitted by
code-check; update the report generation logic in SKILL.md so the dimension ×
severity table either includes "Staleness" in the static dimension list, or
(preferably) derives the set of dimensions dynamically from the merged findings
before counting, or explicitly map "Staleness" to an existing dimension (e.g.,
Accuracy) and document that mapping; ensure the change touches the block that
builds the dimension list used for the cross-tabulation so Staleness findings
are counted and totals updated.
In `@workflows/document-review/CLAUDE.md`:
- Around line 18-20: Update the ambiguous relative paths in the CLAUDE.md text
so they resolve from repo root: replace occurrences of
".claude/skills/controller/SKILL.md" and ".claude/skills/{name}/SKILL.md" with
fully qualified paths
"workflows/document-review/.claude/skills/controller/SKILL.md" and
"workflows/document-review/.claude/skills/{name}/SKILL.md" respectively (i.e.,
edit the strings in the file that reference the workflow controller and phase
skill locations).
In `@workflows/document-review/templates/findings-quality-review.md`:
- Around line 31-41: Add a "Source: quality-review" field to the finding blocks
in the findings-quality-review.md template so each finding explicitly declares
its provenance; locate the block that contains "Severity:", "Dimension:",
"File:", "Issue:", "Evidence:", "Context checked:" and "Fix:" (the finding shown
uses the pip flag example) and insert a "Source: quality-review" line (e.g.,
directly beneath "Severity:" or at the top of the block) for
machine-parseability and deduplication compatibility.
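The Source-field convention discussed above is easy to make machine-checkable. A minimal illustrative sketch of parsing one finding block, assuming the `- **Field:** value` bullet format shown in the templates; the regex and helper name are assumptions, not this workflow's actual parser:

```python
import re

# Matches lines of the form "- **Field:** value"
FIELD_RE = re.compile(r"^- \*\*(.+?):\*\*\s*(.+)$", re.MULTILINE)

def parse_finding(block: str) -> dict:
    """Extract field/value pairs from one finding block."""
    return {key: value.strip() for key, value in FIELD_RE.findall(block)}

block = """\
- **Severity:** Critical
- **Dimension:** Accuracy
- **Source:** quality-review
- **File:** `README.md:42`
"""
finding = parse_finding(block)
```

With an explicit `Source` field, a report-phase merger can keep provenance without inferring it from filenames, and a dedup key can be built from the remaining fields.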
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 0ec3ccb0-53ae-4c19-85d4-7a04a7805362
📒 Files selected for processing (7)
- workflows/document-review/.claude/skills/code-check/SKILL.md
- workflows/document-review/.claude/skills/quality-review/SKILL.md
- workflows/document-review/.claude/skills/report/SKILL.md
- workflows/document-review/CLAUDE.md
- workflows/document-review/templates/findings-code-check.md
- workflows/document-review/templates/findings-quality-review.md
- workflows/document-review/templates/report.md
## Quality Dimensions

1. **Accuracy** — Do docs match reality?
2. **Completeness** — Are there gaps or missing docs?
3. **Consistency** — Do docs agree with each other? Is terminology uniform?
4. **Clarity** — Is language clear for the target audience?
5. **Currency** — Dead links, deprecated references, old versions?
6. **Structure** — Logical organization, navigation, headings?
7. **Examples** — Code samples present and correct?
Clarify that the 7 dimensions primarily apply to quality-review phase.
This section lists 7 quality dimensions without specifying that they apply to the quality-review phase. According to the code-check skill specification, code-check evaluates only 3 dimensions: Accuracy, Completeness, and Staleness—it does not assess Consistency, Clarity, Currency, Structure, or Examples.
Users reading this overview may incorrectly expect both phases to evaluate all 7 dimensions, leading to confusion about workflow coverage.
Proposed clarification
## Quality Dimensions
+The quality-review phase evaluates documentation across these 7 dimensions:
+
1. **Accuracy** — Do docs match reality?
2. **Completeness** — Are there gaps or missing docs?
3. **Consistency** — Do docs agree with each other? Is terminology uniform?
4. **Clarity** — Is language clear for the target audience?
5. **Currency** — Dead links, deprecated references, old versions?
6. **Structure** — Logical organization, navigation, headings?
7. **Examples** — Code samples present and correct?
+
+The code-check phase focuses on 3 dimensions (Accuracy, Completeness, Staleness)
+by cross-referencing documentation claims against source code.
Based on learnings, "Quality review analysis must assess documentation against 7 dimensions: Accuracy, Completeness, Consistency, Clarity, Currency, Structure, and Examples."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/CLAUDE.md` around lines 23 - 32, The "Quality
Dimensions" list is ambiguous about which phase evaluates them; update the
documentation in the Quality Dimensions section so it explicitly states that all
seven dimensions (Accuracy, Completeness, Consistency, Clarity, Currency,
Structure, Examples) are evaluated during the quality-review phase only, while
the code-check skill/phase evaluates only Accuracy, Completeness, and Staleness;
edit the paragraph around the existing list (the "Quality Dimensions" heading
and adjacent text) to add a clarifying sentence referencing "quality-review" and
"code-check" by name so readers understand which dimensions each phase covers.
- **Severity:** Critical
- **Dimension:** Accuracy
- **Doc location:** README.md, "Configuration" section, line 85
- **Code location:** src/config.py:42
- **Documented claim:** "Set `MAX_RETRIES` to configure retry count (default: 3)"
- **Actual behavior:** Default is 5, not 3. See `DEFAULT_MAX_RETRIES = 5`
- **Evidence:**

  ```python
  DEFAULT_MAX_RETRIES = 5  # src/config.py:42
  ```

- **Code verified:**
  - *Full definition read*: Read src/config.py:30-60, full Config class
    including all defaults and env var overrides
  - *Alternative sources checked*: Grepped for MAX_RETRIES — found only
    in src/config.py:42. Single source confirmed.
- **Fix:** Change "default: 3" to "default: 5".
Missing Source field in code verification findings.
Similar to the quality-review template, this finding structure lacks a Source field. The report SKILL.md expects to merge findings from both phases and uses the Source field to track provenance. Add - **Source:** code-check after the Dimension field to maintain consistency with the deduplication logic.
🔧 Proposed fix
- **Severity:** Critical
- **Dimension:** Accuracy
+- **Source:** code-check
- **Doc location:** README.md, "Configuration" section, line 85
- **Code location:** src/config.py:42
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workflows/document-review/templates/findings-code-check.md` around lines 34 -
51, The findings entry is missing a Source field; edit the block that contains
"- **Severity:** Critical" and "- **Dimension:** Accuracy" in
workflows/document-review/templates/findings-code-check.md and insert a new line
"- **Source:** code-check" immediately after the "- **Dimension:** Accuracy"
line so the finding matches the SKILL.md/quality-review structure and can be
merged/deduplicated correctly.
Summary

Useful as a "garbage collection" process which can be scheduled to run occasionally (e.g. every 2 weeks) to highlight issues. Use the `/jira` skill after the report is generated to create a Jira epic with child bugs/tasks to track the resolution.

Workflow

`/scan` → `/quality-review` + `/code-check` (parallel) → `/report` → `/jira` (optional)

Or run `/full-review` for the full pipeline in one shot (excluding `/jira`).

Commands

`/scan`, `/quality-review`, `/code-check`, `/report`, `/jira`, `/full-review`