feat: verify S207 @Boehner bounty — NO verdict, duplicate of S036 (#529)#340
Open
xliry wants to merge 4 commits into peteromallet:main from
Conversation
Commits:
- … (#451) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- …eld confirmed (#456) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Issue: #204
Submission: #204 (comment)
Author: @Boehner
Problem (in our own words)
S207 claims `_compute_batch_quality` in `batch/core.py` computes `dimension_coverage` as `len(assessments) / max(len(assessments), 1)`, which is a self-division tautology that always yields 1.0 for non-empty dicts. The metric is supposed to measure what fraction of expected dimensions were assessed, but since `expected_dimension_count` is never passed to the function, it divides by itself. This renders the entire dimension-coverage telemetry pipeline meaningless.
Evidence
- `desloppify/app/commands/review/batch/core.py:373-375` — tautological formula `len(assessments) / max(len(assessments), 1)` confirmed at snapshot commit 6eb2065
- `desloppify/app/commands/review/batch/core.py:365-370` — function signature lacks any `expected_dimension_count` parameter
- `desloppify/app/commands/review/batch/core.py:617` — `_accumulate_batch_quality` collects the always-1.0 values
- `desloppify/app/commands/review/batch/merge.py:199-201` — averages the always-1.0 values into the final output
Fix
No fix needed — verdict is NO (duplicate of S036).
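For reference, the tautology can be demonstrated in isolation. This is a minimal sketch, not the actual code in `core.py`; the function names and dimension keys below are illustrative:

```python
def compute_dimension_coverage(assessments: dict) -> float:
    # Mirrors the reported formula: the assessment count divided by itself,
    # so any non-empty dict yields exactly 1.0 (and an empty dict yields 0.0).
    return len(assessments) / max(len(assessments), 1)

def compute_dimension_coverage_fixed(assessments: dict, expected_dimension_count: int) -> float:
    # What the metric presumably intends: assessed dimensions over expected
    # dimensions. Requires the expected count to be passed in explicitly.
    return len(assessments) / max(expected_dimension_count, 1)

# Only 2 of 5 hypothetical dimensions were assessed, yet the buggy
# formula still reports full coverage:
partial = {"clarity": 0.8, "correctness": 0.9}
assert compute_dimension_coverage(partial) == 1.0
assert compute_dimension_coverage_fixed(partial, expected_dimension_count=5) == 0.4
```

Because the buggy formula is constant at 1.0 for all non-empty inputs, every downstream accumulation and average (as traced in the Evidence above) is equally uninformative.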
Verdict
Final verdict: NO — the bug is real but S207 is a duplicate of S036 (@Midwest-AI-Solutions), which was submitted ~24 hours earlier with the same finding and more detailed downstream analysis.
Scores
Summary
The technical claim is fully confirmed: `dimension_coverage` is computed as N/N and is always 1.0, making the metric permanently uninformative. However, S036 by @Midwest-AI-Solutions reported the exact same bug in the same function a day earlier with even more thorough downstream tracing. S207 is a duplicate and receives a NO verdict.
Why Desloppify Missed This
Verdict Files
Generated with Lota