A small CLI that applies a curated set of env vars to `~/.claude/settings.json` to mitigate the Feb–Mar 2026 Claude Code quality regression. Idempotent, reversible, dry-run-able, and never destroys data without a timestamped backup. For everyone who isn't already fixing this via SuperFlow.
Merges the following into `~/.claude/settings.json`, preserving every unrelated key (hooks, permissions, custom env vars):
- `env.CLAUDE_CODE_EFFORT_LEVEL = "max"`: force maximum reasoning effort
- `env.CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING = "1"`: disable the adaptive thinking heuristic
- `env.MAX_THINKING_TOKENS = "63999"`: raise the per-turn thinking budget
- `env.CLAUDE_CODE_AUTO_COMPACT_WINDOW = "400000"`: extend the auto-compaction window
- `showThinkingSummaries = true`: surface thinking summaries in the UI
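Since the tool is an npm package, the merge is easy to sketch in Node. This is an illustrative sketch of the claimed behavior (unrelated keys survive; only the curated keys are added or overwritten), not the tool's actual implementation:

```javascript
// Sketch of the settings merge (assumed behavior, not the tool's real code).
const RECOMMENDED_ENV = {
  CLAUDE_CODE_EFFORT_LEVEL: "max",
  CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING: "1",
  MAX_THINKING_TOKENS: "63999",
  CLAUDE_CODE_AUTO_COMPACT_WINDOW: "400000",
};

function mergeSettings(settings) {
  return {
    ...settings, // hooks, permissions, etc. pass through untouched
    env: { ...(settings.env ?? {}), ...RECOMMENDED_ENV }, // user env vars preserved
    showThinkingSummaries: true,
  };
}

// Example: a settings file with an unrelated key and one custom env var
const before = { hooks: { PreToolUse: [] }, env: { MY_VAR: "x" } };
const after = mergeSettings(before);
console.log(JSON.stringify(after, null, 2));
```

The spread order matters: a user-set `env` entry that collides with a curated key is overwritten, while everything else is carried over as-is.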
Before any write, a timestamped backup of your original `settings.json` lands in `~/.claude/backups/`. A marker file at `~/.claude/.cc-anti-regression-marker.json` records what was applied so uninstall can restore byte-for-byte.
In Feb–Mar 2026 Anthropic shipped two changes that measurably regressed Claude Code quality:
- Adaptive thinking (Feb 9, with Opus 4.6): the model self-paces reasoning per turn and produces zero-thinking turns even at `effort=high`.
- Default effort lowered (Mar 3): the default dropped from `high` to `medium`.
A senior AI Director at AMD published a log analysis of 6,852 sessions: reads-per-edit dropped 6.6 → 2.0, full-rewrites roughly doubled, a "give up" hook fired 173× in 17 days vs. 0 previously. Anthropic's Boris Cherny acknowledged the regression and recommended env-var workarounds.
- GitHub issue #42796 — AMD report + bcherny reply
- Hacker News discussion
The recommended env vars work, but applying them by hand means editing JSON, remembering the key names, and re-applying whenever you touch settings.json. This tool automates that.
npm (recommended) — requires Node 20+:
```sh
npm install -g claude-code-anti-regression
```

Bash one-liner (uses `npx` if Node 20+ is available, falls back to `jq` otherwise):

```sh
curl -fsSL https://raw.githubusercontent.com/egerev/claude-code-anti-regression/main/install.sh | bash
```

```sh
# See what's missing
cc-anti-regression check

# Apply recommendations (interactive confirmation)
cc-anti-regression install

# Non-interactive (for CI / scripted setups)
cc-anti-regression install --yes
```

After install, restart Claude Code (exit and relaunch) for the env vars to take effect.
Forces Claude Code to request maximum reasoning effort on every turn, overriding the Mar 3 default drop to medium. Trade-off: ~2–4× more thinking tokens per turn. If you pay per token, you'll feel it.
Disables the adaptive thinking heuristic introduced on Feb 9 with Opus 4.6. With adaptive thinking on, the model sometimes skips thinking entirely on turns where it should think, producing shallow answers and more "give up" outcomes. Caveat: per binary analysis, this env var only covers 1 of 5 adaptive code paths in the current CLI; subagent paths via `V9H()` ignore it. So this helps the main loop but not all spawned subagents. See `docs/why-not-patch-binary.md`.
Raises the per-turn thinking budget to the model's maximum. Without this, long-horizon tasks (reviews, architectural planning) get truncated reasoning. Trade-off: latency and tokens.
Extends the context window before auto-compaction kicks in. Auto-compaction aggressively summarises older context, which can lose detail the model needs later. A wider window delays that trade.
UI-only: shows the model's thinking summaries inline in the transcript. Purely diagnostic — helps you see why a turn went shallow, which is how regressions were discovered in the first place.
```sh
cc-anti-regression uninstall
```

This restores `~/.claude/settings.json` byte-for-byte from the backup recorded in the marker file. Backup files are kept so you can re-apply later if you change your mind.
- Token cost: `CLAUDE_CODE_EFFORT_LEVEL=max` plus `MAX_THINKING_TOKENS=63999` can multiply your thinking-token bill by roughly 2–4×. If you're on a metered plan or burning through quotas, reconsider.
- Latency: max-effort turns take noticeably longer. If your workflow is interactive-heavy (many short prompts), the extra latency adds up.
- This is a workaround, not a fix: the root cause is in Anthropic's CLI. Env vars partially compensate, but subagent code paths still ignore some of them (see `docs/why-not-patch-binary.md` for the honest version, and for why this tool deliberately does not patch `cli.js`).
- Future-proofing: when Anthropic ships a real fix, these env vars may become redundant or harmful. Re-check the issue tracker periodically.
If you use SuperFlow, you don't need this tool. SuperFlow ships its own anti-regression onboarding check (PR #64) that applies the same env vars and manages agent effort frontmatter, which this tool does not, by design. Run `superflow update` to pull the latest recommendations. This tool exists for everyone else.
Issues, PRs, and recommendation updates welcome at the issue tracker. Conventional commits (`feat:`, `fix:`, `docs:`, `chore:`). Small PRs preferred.
MIT © 2026 egerev