fix: prompt-type Stop hook JSON output format #3193
Conversation
Adds a completion verifier that runs before evaluate-session.js on every session end. An LLM evaluates the last assistant message against four failure patterns (silent exit, partial work, error abandonment, untested changes) and prevents premature stop if work is genuinely incomplete. Normal conversation endings pass through cleanly. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The hook system expects {"ok": true/false} responses from prompt-type
hooks. Updated the prompt to explicitly require JSON output format,
added haiku model selection to minimize latency/cost, and added
stop_hook_active loop prevention.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
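Putting the three changes together, the hook entry in .claude/settings.json would look roughly like this. This is a sketch: "type": "prompt", "prompt", and "model": "haiku" come from this PR, but the exact nesting of the Stop hook registration is an assumption about the settings schema, and the prompt text is abbreviated.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "You are a completion verifier. ... Respond with ONLY valid JSON: {\"ok\": true} or {\"ok\": false, \"reason\": \"...\"}",
            "model": "haiku"
          }
        ]
      }
    ]
  }
}
```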
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/settings.json:
- Around lines 36-37: The prompt hook must either be registered as a Stop hook or the stop check must be made robust. Update the prompt text that references stop_hook_active to use an explicit existence-and-true guard (e.g., check stop_hook_active === true) instead of relying on the field being present, or document/configure this entry as Stop-only. Locate the prompt string under the "prompt" key in .claude/settings.json (and the associated hook registration) and modify the condition to safely handle absent fields, so the hook is safe for non-Stop events.
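Why the reviewer prefers an explicit existence-and-true guard can be shown with a small sketch (the function name is hypothetical, not part of Claude Code):

```javascript
// Sketch: a strict `=== true` check is safer than relying on the field
// being present, because non-Stop events simply omit stop_hook_active.
function shouldShortCircuit(hookInput) {
  // Strict equality: absent, null, or non-boolean values all yield false,
  // so events without the field fall through to normal evaluation.
  return hookInput.stop_hook_active === true;
}

console.log(shouldShortCircuit({ stop_hook_active: true }));  // true
console.log(shouldShortCircuit({}));                          // false (field absent)
console.log(shouldShortCircuit({ stop_hook_active: "yes" })); // false (non-boolean)
```

With this guard the same prompt can be attached to other hook events without the loop-prevention clause silently misfiring.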
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8c467a1d-e9de-4080-8ad1-f1f08964c0a0
📒 Files selected for processing (1)
.claude/settings.json
"prompt": "You are a completion verifier. Review the assistant's last message to determine if the requested work is actually done.\n\nIf stop_hook_active is true in the input, immediately respond: {\"ok\": true}\n\nCheck for these failure patterns:\n1. SILENT EXIT: Stopped without a concrete outcome (no files modified, no answer, no result).\n2. PARTIAL WORK: Mentioned tasks planned but not completed (\"I'll also need to\", \"next step\", TODO, unchecked items).\n3. ERROR ABANDONMENT: Hit an error and gave up instead of investigating.\n4. UNTESTED CHANGES: Modified code without any verification (no test run, build check, or type check).\n\nNormal conversation endings are FINE — questions answered, user said thanks, all work done with clear results.\n\nRespond with ONLY valid JSON, nothing else:\n- If complete: {\"ok\": true}\n- If incomplete: {\"ok\": false, \"reason\": \"what specific work remains\"}",
"model": "haiku",
🧩 Analysis chain
🌐 Web query:
For Claude Code hooks (prompt-type hook in settings.json), is model: "haiku" a valid model value, and is stop_hook_active guaranteed to be present in the prompt input payload?
💡 Result:
Yes, model: "haiku" is a valid model value for Claude Code prompt-type hooks (type: "prompt") in settings.json. Official documentation states that prompt hooks send input to a Claude model, with Haiku as the default ("Haiku by default"); the model field is optional and accepts Claude models like Haiku.
No, stop_hook_active is not guaranteed to be present in the prompt input payload for all prompt-type hooks. It is a Stop hook-specific field in the JSON input payload ("In addition to the common input fields, Stop hooks receive stop_hook_active"), present when a prompt hook is configured for the Stop event to prevent infinite loops. For other events (e.g., PreToolUse, UserPromptSubmit), it is absent from the input schema.
Citations:
- 1: https://code.claude.com/docs/en/hooks
- 2: https://code.claude.com/docs/en/hooks-guide
- 3: https://docs.claude.com/en/docs/claude-code/hooks.md
- 4: https://claudefa.st/blog/tools/hooks/stop-hook-task-enforcement
Ensure this prompt hook is configured as a Stop hook, or add a safety check for stop_hook_active presence.
stop_hook_active is exclusive to Stop hook events in the input payload. If this prompt hook is used for other events (PreToolUse, UserPromptSubmit, etc.), the field will be absent and the guard becomes a no-op. Either document that this must be Stop-only, or use a safe check like stop_hook_active === true that handles missing fields gracefully.
model: "haiku" is valid and requires no changes.
Summary
- Return structured JSON {"ok": true/false} instead of free-form text
- Set model: haiku to minimize latency/cost on the evaluation call
- Add a stop_hook_active loop prevention guard

Context
The initial implementation (#3192) used a natural language response format, but the hook system expects structured JSON ({"ok": true} or {"ok": false, "reason": "..."}).

Test plan
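As context for the structured {"ok": ...} response format, a consumer of the verifier's output could validate it along these lines. This is a hypothetical sketch: the function name and the fail-open fallback on malformed output are assumptions, not documented hook-system behavior.

```javascript
// Hypothetical validator for the verifier's JSON response.
function parseVerifierResponse(raw) {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.ok === "boolean") {
      // "reason" is only meaningful when ok is false.
      return { ok: parsed.ok, reason: parsed.reason };
    }
  } catch (_) {
    // Malformed JSON: fall through to the fallback below.
  }
  // Fail open (assumption): treat unusable output as "complete"
  // rather than blocking the stop on a parse error.
  return { ok: true };
}

console.log(parseVerifierResponse('{"ok": false, "reason": "tests not run"}'));
console.log(parseVerifierResponse("looks done to me")); // → { ok: true }
```

Failing open here is a design choice: a broken verifier should degrade to the pre-hook behavior instead of trapping the session.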
🤖 Generated with Claude Code