fix: prompt-type Stop hook JSON output format#3193

Open
mabry1985 wants to merge 2 commits into dev from feat/stop-hook-completion-verifier

Conversation

mabry1985 (Contributor) commented Mar 30, 2026

Summary

Context

The initial implementation (#3192) used a natural language response format, but the hook system expects structured JSON ({"ok": true} or {"ok": false, "reason": "..."}).
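The expected shapes can be checked mechanically. Below is a minimal Python sketch (the validate_hook_response helper is illustrative, not part of the PR) that enforces the same schema the hook system applies:

```python
import json

def validate_hook_response(raw: str) -> dict:
    """Check that a Stop-hook reply matches the schema the hook system
    expects: {"ok": true} or {"ok": false, "reason": "..."}."""
    data = json.loads(raw)  # non-JSON output (e.g. prose) fails here, the original bug
    if not isinstance(data.get("ok"), bool):
        raise ValueError('response must contain a boolean "ok" field')
    if data["ok"] is False and not isinstance(data.get("reason"), str):
        raise ValueError('incomplete responses need a string "reason" field')
    return data

validate_hook_response('{"ok": true}')                              # accepted
validate_hook_response('{"ok": false, "reason": "tests not run"}')  # accepted
```

A natural-language reply like "The work looks complete." fails at json.loads, which matches the "JSON validation failed" error this PR fixes.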

Test plan

  • CI passes
  • Stop hook fires without "JSON validation failed" error
  • Normal session endings pass through cleanly

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Chores
    • Improved internal task completion verification to better detect incomplete work, partial implementations, and untested changes before finalization.

Claude AI Agent and others added 2 commits March 29, 2026 21:18
Adds a completion verifier that runs before evaluate-session.js on every
session end. An LLM evaluates the last assistant message against four
failure patterns (silent exit, partial work, error abandonment, untested
changes) and prevents premature stop if work is genuinely incomplete.
Normal conversation endings pass through cleanly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The hook system expects {"ok": true/false} responses from prompt-type
hooks. Updated the prompt to explicitly require JSON output format,
added haiku model selection to minimize latency/cost, and added
stop_hook_active loop prevention.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@mabry1985 mabry1985 enabled auto-merge (squash) March 30, 2026 04:26
coderabbitai bot commented Mar 30, 2026

📝 Walkthrough

Added a new Stop hook to the Claude settings configuration that performs completion verification using a prompt-type completion check with the Haiku model. This hook validates whether the assistant's final message satisfactorily completes the requested work and detects issues like silent exits, partial completion, error abandonment, or untested changes.

Changes

  • Stop Hook Configuration (.claude/settings.json): Added a new prompt-type completion verification hook using the haiku model with a 30-second timeout, positioned before the existing command-based session evaluation hook. The hook outputs JSON to determine whether the assistant's work is complete and flags conditions such as silent exits, partial work, error abandonment, or untested changes.
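Piecing together this walkthrough and the diff excerpt later in the review, the new settings.json entry plausibly looks like the sketch below. The "type", "prompt", and "model" keys appear in the diff; the overall nesting, the "timeout" key, and the evaluate-session.js command path are assumptions inferred from the surrounding description:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "You are a completion verifier. ... Respond with ONLY valid JSON ...",
            "model": "haiku",
            "timeout": 30
          },
          {
            "type": "command",
            "command": "node .claude/hooks/evaluate-session.js"
          }
        ]
      }
    ]
  }
}
```

Entries under "Stop" run on session end; placing the prompt hook before the command hook matches the walkthrough's "positioned before the existing command-based session evaluation hook".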

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 A verifier hook, so keen and bright,
Checks if the work is finished right!
JSON whispers, "Is it done?"
No silent exits—caught mid-run! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: Passed. The PR title 'fix: prompt-type Stop hook JSON output format' accurately describes the main change: fixing JSON output format issues in a Stop hook introduced in a previous PR.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.


coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/settings.json:
- Around line 36-37: The prompt hook must either be registered as a Stop hook or
make the stop check robust: update the prompt text/handler that references
stop_hook_active so it uses an explicit existence-and-true guard (e.g., check
stop_hook_active === true) instead of relying on the field being present, or
document/configure this entry as Stop-only; locate the prompt string under the
"prompt" key in .claude/settings.json (and the associated hook registration) and
modify the condition to safely handle absent fields so the hook is safe for
non-Stop events.

ℹ️ Review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 8c467a1d-e9de-4080-8ad1-f1f08964c0a0

📥 Commits

Reviewing files that changed from the base of the PR and between d10fbd2 and b7e9c84.

📒 Files selected for processing (1)
  • .claude/settings.json

Comment on lines +36 to +37
"prompt": "You are a completion verifier. Review the assistant's last message to determine if the requested work is actually done.\n\nIf stop_hook_active is true in the input, immediately respond: {\"ok\": true}\n\nCheck for these failure patterns:\n1. SILENT EXIT: Stopped without a concrete outcome (no files modified, no answer, no result).\n2. PARTIAL WORK: Mentioned tasks planned but not completed (\"I'll also need to\", \"next step\", TODO, unchecked items).\n3. ERROR ABANDONMENT: Hit an error and gave up instead of investigating.\n4. UNTESTED CHANGES: Modified code without any verification (no test run, build check, or type check).\n\nNormal conversation endings are FINE — questions answered, user said thanks, all work done with clear results.\n\nRespond with ONLY valid JSON, nothing else:\n- If complete: {\"ok\": true}\n- If incomplete: {\"ok\": false, \"reason\": \"what specific work remains\"}",
"model": "haiku",

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

For Claude Code hooks (prompt-type hook in settings.json), is model: "haiku" a valid model value, and is stop_hook_active guaranteed to be present in the prompt input payload?

💡 Result:

Yes, model: "haiku" is a valid model value for Claude Code prompt-type hooks (type: "prompt") in settings.json. Official documentation states that prompt hooks send input to a Claude model, with Haiku as the default ("Haiku by default"), and the model field is optional for specifying alternatives but accepts Claude models like Haiku. No, stop_hook_active is not guaranteed to be present in the prompt input payload for all prompt-type hooks. It is a Stop hook-specific field in the JSON input payload ("In addition to the common input fields, Stop hooks receive stop_hook_active"), present when a prompt hook is configured for the Stop event to prevent infinite loops. For other events (e.g., PreToolUse, UserPromptSubmit), it is absent from the input schema.


Ensure this prompt hook is configured as a Stop hook, or add a safety check for stop_hook_active presence.

stop_hook_active is exclusive to Stop hook events in the input payload. If this prompt hook is used for other events (PreToolUse, UserPromptSubmit, etc.), the field will be absent and the guard becomes a no-op. Either document that this must be Stop-only, or use a safe check like stop_hook_active === true that handles missing fields gracefully.

model: "haiku" is valid and requires no changes.
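In this PR the guard lives in the prompt text (the model is told to reply {"ok": true} immediately when stop_hook_active is true), but the review's suggested explicit existence-and-true check translates directly to a command-type hook that reads the JSON payload. A sketch, where the function name and sample payload keys are illustrative:

```python
def should_short_circuit(payload: dict) -> bool:
    """Explicit existence-and-true guard suggested by the review: True only
    when stop_hook_active is literally true, and safely False when the field
    is absent (e.g. non-Stop events) or holds any other value."""
    return payload.get("stop_hook_active") is True

# Stop-event payload on a re-entrant invocation: short-circuit to avoid a loop.
print(should_short_circuit({"stop_hook_active": True}))         # True
# Non-Stop event: field absent, so the guard evaluates to False, not an error.
print(should_short_circuit({"hook_event_name": "PreToolUse"}))  # False
```

Using `.get(...) is True` rather than truthiness means a missing field, None, or a string like "yes" all fall through to normal verification instead of silently skipping it.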

