`burn waste` now hard-filters its input slice against the coverage flags
each detector requires, refuses with a non-zero exit when no turn meets
the prereqs, and annotates partial exclusions both in text and JSON
output.
- `attributeWaste` / `aggregateBy*` need `hasToolCalls` +
`hasToolResultEvents`. The orchestrator filters the slice before
calling them and reports an "analyzed N of M turns; K excluded ..."
notice when some turns survive.
- `--patterns retries|failures` need the same flags; `reverts` needs
`hasToolCalls` + `hasRawContent` (the editPreHash / editPostHash
fields the parser computes from raw content); `compaction` is
unchanged because the compaction sidecar is independent of
`TurnRecord.fidelity`. Per-detector notices name the missing prereq
and the source kinds responsible.
- Refusal: when every input turn fails the prereq, exit 2 with
`burn waste: 142/142 turns lack tool-call/tool-result coverage
required for waste attribution. Sources: codex
(per-session-aggregate, missing tool-call records, tool-result
events). No waste analysis was performed.`
- `--json` carries a `fidelity` block (`{ analyzed, excluded, summary,
refused }`) mirroring `summary --json`. `--patterns` JSON additionally
exposes a `perDetector` array with each detector's `required` flags
and `excludedBySource` breakdown.
- Tests cover full refusal, partial exclusion, per-detector messaging,
and the JSON contract for both the attribution and patterns paths.
Closes #100. Refs #41, #76.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
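The coverage gating described above can be sketched roughly as follows. This is an illustrative sketch, not the actual `waste.ts` implementation: `FidelityFlags`, the exact `TurnRecord` shape, and `filterByCoverage` are assumed names.

```typescript
// Hypothetical shapes; the real TurnRecord.fidelity may carry more flags.
type FidelityFlags = {
  hasToolCalls: boolean;
  hasToolResultEvents: boolean;
  hasRawContent: boolean;
};

type TurnRecord = {
  sessionId: string;
  messageId: string;
  fidelity: FidelityFlags;
};

// Keep only turns that satisfy every flag a detector requires, and report
// how many were dropped so the orchestrator can emit the
// "analyzed N of M turns; K excluded" notice.
function filterByCoverage(
  turns: TurnRecord[],
  required: (keyof FidelityFlags)[],
): { analyzed: TurnRecord[]; excluded: number } {
  const analyzed = turns.filter((t) => required.every((f) => t.fidelity[f]));
  return { analyzed, excluded: turns.length - analyzed.length };
}
```

When `analyzed` comes back empty, the orchestrator would print the refusal notice and exit 2; otherwise it runs the detector on the surviving slice.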
```diff
 const analyzedUnion = new Set<string>();
 for (const d of perDetectorCoverage) {
   if (d.kind === 'compaction') continue;
   const slice = perDetector.get(d.kind)!;
   for (const t of slice) analyzedUnion.add(`${t.sessionId}|${t.messageId}`);
 }
 const analyzedCount = analyzedUnion.size;

 if (args.flags['json'] === true) {
   process.stdout.write(
     JSON.stringify(
       {
-        turnsAnalyzed,
+        turnsAnalyzed: analyzedCount,
         retryLoops,
         failureRuns,
-        compactions,
+        compactions: compactionLosses,
         editReverts,
-        sessionSummaries: patterns.sessionSummaries,
+        sessionSummaries,
         fidelity: {
           analyzed: analyzedCount,
           excluded: total - analyzedCount,
```
🟡 Top-level `turnsAnalyzed` and `fidelity.excluded` are wrong when compaction is the only (or one of the) selected pattern detectors

When `--patterns compaction` is used, `analyzedUnion` (lines 677-682) skips every compaction entry via `if (d.kind === 'compaction') continue`, so `analyzedCount` ends up 0 even though compaction successfully analyzed all `total` turns. This flows into:

- `turnsAnalyzed: 0` in the JSON (line 689) and text output (line 715)
- `fidelity.excluded: total - 0 = total` (line 697)

These contradict the per-detector breakdown, which correctly reports `compaction: { analyzed: total, excluded: 0 }` (waste.ts:560-563). A JSON consumer seeing `fidelity.excluded: N` alongside `fidelity.refused: false` and `turnsAnalyzed: 0` would conclude nothing was analyzed. In the mixed case (`--patterns retries,compaction`), `excluded` also over-counts by not crediting turns that were only analyzed by compaction.
Prompt for agents
In `runPatternsMode`, the `analyzedUnion` set (lines 677-682) deliberately skips compaction because compaction doesn't use fidelity-based coverage filtering. However, the resulting `analyzedCount` (which is 0 in the compaction-only case) is used for both `turnsAnalyzed` in the output and `fidelity.excluded` (`total - analyzedCount`) in the JSON fidelity block. This creates a contradiction: the top level says `excluded = N` but the per-detector detail for compaction says `excluded = 0`.
To fix this, the top-level fidelity counts need to account for compaction's contribution. One approach: when computing `fidelity.analyzed` and `fidelity.excluded`, take the max of `analyzedCount` and the compaction analyzed count (which is always `total` when compaction is selected). Or better: define the top-level `fidelity.analyzed`/`excluded` as a summary of the per-detector entries rather than deriving them from a union that ignores compaction.
Similarly, `turnsAnalyzed` in the headline should probably include turns that were analyzed by compaction when that's the only detector. A simple fix: if compaction is in `selected`, include all turns in the union.
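The per-detector-summary approach suggested above might look like the following sketch; `DetectorCoverage` and `summarizeTopLevel` are hypothetical names, not the actual `waste.ts` API.

```typescript
type DetectorCoverage = { kind: string; analyzed: number; excluded: number };

// Derive top-level fidelity from the per-detector entries so compaction
// (which analyzes every turn whenever it is selected) is credited instead
// of being skipped by the union.
function summarizeTopLevel(
  perDetector: DetectorCoverage[],
  total: number,
): { analyzed: number; excluded: number } {
  // Upper bound: a turn counts as analyzed if the most permissive selected
  // detector analyzed it. Guard the empty case because Math.max() with no
  // arguments returns -Infinity.
  const analyzed =
    perDetector.length === 0 ? 0 : Math.max(...perDetector.map((d) => d.analyzed));
  return { analyzed, excluded: total - analyzed };
}
```

In the compaction-only case this yields `analyzed: total, excluded: 0`, matching the per-detector breakdown; in the mixed case it stops over-counting `excluded` for turns only compaction could see.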
Test plan
- `pnpm run build`
- `pnpm run test:ts` — 533 tests pass (20 new in `packages/cli/src/commands/waste.test.ts`)
- Tests cover full refusal, partial exclusion (`--patterns retries,failures`), the JSON `fidelity` shape (analyzed/excluded/summary/refused), the per-detector `required` + `excludedBySource` JSON, and the orthogonal compaction-only path that must not refuse on aggregate-only input.

Refs
- The linked issue introduced `summarizeFidelity`/`hasMinimumFidelity` and called out this wiring as deferred work.
- Out of scope per the issue: detecting waste patterns on aggregate-only data; modifying `attributeWaste` itself (all gating happens at the CLI layer so `analyze` stays pure).

Generated with Claude Code