…report generation

- Add .gavel.yaml configuration for pre-commit build step
- Update ci-test target to build binary and remove parallel flag
- Add static dependency to dev target
- Simplify tool installation commands by removing $(LOCALBIN)/ prefix
- Add report/kitchen-sink.json build target for TypeScript report generation
- Expand .gitignore with report artifacts and playwright config

These changes improve build consistency and add support for report generation in the build pipeline.
…pplications and views

Introduce new section types for displaying access control information and access logs in applications. Add AccessItem and AccessLogItem types to represent access data, along with AccessUIFilters and AccessLogsUIFilters for filtering. Implement GetAccessForUIRef and GetAccessLogsForUIRef database functions to query access and audit log data with support for filtering by search, config types, roles, user types, and time ranges. Add PDF rendering options to ViewSpec with FacetRenderOptions abstraction. Update all generated deepcopy functions and CRD schemas to support the new types. Add BuildCommit global variable for build metadata tracking.
Add detection for HTML responses on JSON endpoints to help users identify when their backend URL points to the frontend instead of the /api endpoint.

Changes:
- Introduce ErrHTMLResponse error for when server returns HTML instead of JSON
- Add decodeJSON() helper that checks Content-Type header and response body for HTML indicators
- Add looksLikeHTML() to detect HTML by content-type or body prefix
- Update GetConnection() and TestConnection() to use decodeJSON()
- Add Accept: application/json header to all requests
- Add comprehensive test suite covering HTML detection scenarios

This improves the user experience by providing clear error messages when misconfiguration occurs.
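As a rough illustration of the approach this commit describes, a content-type/body-prefix check might look like the sketch below. The helper name matches the changelog, but the signature and the exact prefixes checked are assumptions, not the repository's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeHTML reports whether a response is probably HTML: either the
// Content-Type header says so, or the (trimmed, lowercased) body starts
// with a typical HTML prefix. Sketch only; signature is assumed.
func looksLikeHTML(contentType, body string) bool {
	if strings.Contains(strings.ToLower(contentType), "text/html") {
		return true
	}
	trimmed := strings.ToLower(strings.TrimSpace(body))
	for _, prefix := range []string{"<!doctype html", "<html"} {
		if strings.HasPrefix(trimmed, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(looksLikeHTML("text/html; charset=utf-8", "{}"))  // true
	fmt.Println(looksLikeHTML("application/json", `{"ok":true}`)) // false
	fmt.Println(looksLikeHTML("", "  <!DOCTYPE html><html>"))     // true
}
```

A backend URL that points at the frontend typically serves the SPA's index page, so both signals (header and body prefix) tend to fire together; checking both covers servers that mislabel the Content-Type.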
…d and OIDC flows with improved error handling

Major refactoring across multiple components:

**artifacts/controllers.go:**
- Refactored ListArtifacts to use switch statement for path-based routing
- Added support for config_change artifact listing
- Simplified DownloadArtifact to use ctx.Blobs() instead of connection-based file access
- Improved error messages and removed unnecessary intermediate variables

**auth/middleware.go & auth/oidc/models.go:**
- Renamed api.FrontendURL to api.PublicURL for consistency
- Replaced custom StringList type with pq.StringArray
- Removed unnecessary database/sql/driver import

**auth/oidcclient/oidcclient.go:**
- Changed RandomBase64 to panic on crypto/rand failure instead of returning error
- Simplified function signature from (string, error) to string

**cmd/connection_browser.go:**
- Added loginWaitResult struct for better result handling
- Implemented interactive token selection with huh library
- Added signal handling for strict mode (required tokens)
- Improved timeout and auto-detection logic
- Added displayCapturedState function for cleaner output
- Enhanced user prompts and feedback messages

**connection/jwt.go:**
- Added ScopeCount() method to JWT
- Fixed audience parsing to handle both string and array formats

**db/rbac.go:**
- Added QueryLogSummary() method to RBACAccessRow
- Refactored GetRBACAccess with query logging and recursive config expansion
- Added GetRBACAccessByConfigIDs convenience function
- Added GetGroupMembersForConfigs for audit reporting
- Removed applyAccessSelectors in favor of inline logic

Refs #123
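The RandomBase64 change above (panic on crypto/rand failure, simplifying the signature from `(string, error)` to `string`) can be sketched as follows. The byte length and the base64 variant are assumptions for illustration, not the repo's actual choices:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// randomBase64 returns n random bytes, base64-encoded. crypto/rand failure
// is treated as unrecoverable process state, so the function panics instead
// of returning (string, error) — mirroring the simplification described above.
// Name lowercased and encoding variant assumed for this sketch.
func randomBase64(n int) string {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		panic(fmt.Errorf("crypto/rand failed: %w", err))
	}
	return base64.RawURLEncoding.EncodeToString(buf)
}

func main() {
	fmt.Println(len(randomBase64(32))) // 32 bytes -> 43 unpadded base64url chars
}
```

The rationale for panicking: if the OS entropy source fails, no caller can meaningfully recover, and threading an error through every call site only obscures that.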
Walkthrough

This pull request introduces comprehensive catalog and findings report features with enhanced access control visibility. Key additions include new data models for access/access logs, catalog and findings report generation with support for PDF export, refactored report rendering infrastructure using a unified CLI approach, and expanded React/TSX components for report visualization. Multiple database queries are added to support filtering and auditing.
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
auth/middleware.go (1)
109-112: ⚠️ Potential issue | 🟠 Major
Basic vs Kratos mode OIDC URL divergence looks unintentional.
Basic mode still passes `api.FrontendURL` to both `oidc.MountRoutes` and the log line, while Kratos mode (lines 131/134) was switched to `api.PublicURL`. Combined with `oidc_validate.go` now validating `iss` strictly against `api.PublicURL`, Basic-mode issued tokens will not validate when `PublicURL != FrontendURL`. Consider unifying both modes on `api.PublicURL` (or a shared issuer constant).

Proposed fix

```diff
-	if err := oidc.MountRoutes(e, ctx, api.FrontendURL, OIDCSigningKeyPath, htpasswdChecker, LookupPersonByUsername); err != nil {
+	if err := oidc.MountRoutes(e, ctx, api.PublicURL, OIDCSigningKeyPath, htpasswdChecker, LookupPersonByUsername); err != nil {
 		return fmt.Errorf("failed to mount OIDC routes: %w", err)
 	}
-	logger.Infof("OIDC provider enabled at %s", api.FrontendURL)
+	logger.Infof("OIDC provider enabled at %s", api.PublicURL)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@auth/middleware.go` around lines 109 - 112, The Basic-mode branch is still passing api.FrontendURL to oidc.MountRoutes and logger.Infof, causing issuer mismatch with the strict iss validation in oidc_validate.go; update the Basic-mode calls to use api.PublicURL (replace api.FrontendURL with api.PublicURL in the oidc.MountRoutes call and the logger.Infof message) so both Basic and Kratos modes use the same issuer URL and stay consistent with the iss validation.
🟡 Minor comments (39)
report/rbac-types.ts-92-106 (1)
92-106: ⚠️ Potential issue | 🟡 Minor
Naming convention inconsistent with the rest of this file.
`RBACResource`/`RBACUserResource` use camelCase (`configClass`, `parentId`, `createdAt`, `updatedAt`), but the new `ConfigItem` uses snake_case (`config_class`, `parent_id`, `created_at`, `updated_at`). Because `RBACReport` now surfaces both shapes side-by-side (`resources[]` camelCase, `subject`/`parents` snake_case), consumers have to remember which casing applies where — easy to get wrong in templates/selectors.
Prefer aligning `ConfigItem` to the file's camelCase convention (and normalizing server-side JSON tags), or — if snake_case is intentional because this comes straight from the DB model — at least document it inline so readers don't assume it's a typo.
Also applies to: 112-113
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/rbac-types.ts` around lines 92 - 106, ConfigItem uses snake_case keys (config_class, parent_id, created_at, updated_at) which is inconsistent with the file's camelCase conventions (e.g., RBACResource/RBACUserResource); change those properties to camelCase (configClass, parentId, createdAt, updatedAt) in the ConfigItem interface and update any serialization/json mapping or server-side tags that produce snake_case so consumers see a consistent shape, or if snake_case is required by the DB layer add an inline comment on ConfigItem explaining it is intentionally DB-shaped and not normalized.

report/components/GitRef.tsx-53-55 (1)
53-55: ⚠️ Potential issue | 🟡 Minor
`gitops.git.*` fields should be optional.
The inline type declares `url`, `branch`, `file`, `dir`, `link` as required strings, yet line 54 optional-chains `gitops?.git?.url` (acknowledging `git` can be absent) and the underlying `GitRef` props treat every field as optional. Callers passing a partial `gitops` (e.g., `{ git: { url } }`) will fail type-check.
Using the real source type is cleanest; otherwise mark every sub-field optional:

♻️ Suggested change

```diff
-export function GitRefFromSource({ gitops, size }: { gitops?: { git: { url: string; branch: string; file: string; dir: string; link: string } }; size?: 'xs' | 'sm' }) {
+export function GitRefFromSource({
+  gitops,
+  size,
+}: {
+  gitops?: { git?: { url?: string; branch?: string; file?: string; dir?: string; link?: string } };
+  size?: 'xs' | 'sm';
+}) {
   if (!gitops?.git?.url) return null;
   return <GitRef url={gitops.git.url} branch={gitops.git.branch} file={gitops.git.file} dir={gitops.git.dir} link={gitops.git.link} size={size} />;
 }
```

Better still, import/reuse the shared `GitOpsSource` type (the backend `query.GitOpsSource` the report JSON is generated from) so the shape stays in lock-step with the producer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/GitRef.tsx` around lines 53 - 55, The prop type for GitRefFromSource declares gitops.git.url/branch/file/dir/link as required but the code and GitRef treat them as optional; update the signature of GitRefFromSource to use the real shared type (e.g., import and use GitOpsSource / query.GitOpsSource) or change the inline type so gitops?: { git?: { url?: string; branch?: string; file?: string; dir?: string; link?: string } } to make every git sub-field optional; ensure the call to GitRef still passes these possibly-undefined values (GitRefFromSource -> GitRef) without forcing non-null assertions so TypeScript type-checks succeed.

report/finding-schema.json-6-8 (1)
6-8: ⚠️ Potential issue | 🟡 Minor
Missing top-level `id` while `relatedFindings` references IDs.
`context.relatedFindings` is described as "IDs of related findings that may form a larger attack chain", but the schema doesn't define an `id` (or `findingId`) on the finding object itself, and `additionalProperties: false` means callers can't tack one on without the validator rejecting the payload. Either add an optional `id` to the top-level properties, or document that `relatedFindings` references IDs that live outside the finding envelope (e.g., in a wrapping collection).
Also applies to: 206-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/finding-schema.json` around lines 6 - 8, The schema omits a top-level identifier while context.relatedFindings expects IDs; update the JSON schema (the top-level "properties"/"required" block in finding-schema.json) to add an optional "id" (or "findingId") property of type string (or string/UUID) and, if you want it required, include it in the "required" array, or alternatively update context.relatedFindings description to state those IDs are external; ensure this new "id"/"findingId" property is added to "properties" and adjust "additionalProperties" handling if needed so validators allow the identifier referenced by "context.relatedFindings".

report/components/ConfigInsightsSection.tsx-64-69 (1)
64-69: ⚠️ Potential issue | 🟡 Minor
Sort puts unknown/missing statuses and severities first.
`indexOf` returns `-1` for values not in `statusOrder`/`SEVERITY_ORDER`, so entries with a missing or unexpected `status`/`severity` sort before `open`/`critical`. You likely want them at the end. Also, severity comparison uses raw `a.severity` but `InsightEntry` falls back to `'info'` — keep these consistent.

🔧 Proposed fix

```diff
 const sorted = [...analyses].sort((a, b) => {
   const statusOrder = ['open', 'silenced', 'resolved'];
-  const statusDiff = statusOrder.indexOf(a.status ?? '') - statusOrder.indexOf(b.status ?? '');
+  const rank = (arr: readonly string[], v: string | undefined) => {
+    const i = arr.indexOf(v ?? '');
+    return i === -1 ? arr.length : i;
+  };
+  const statusDiff = rank(statusOrder, a.status) - rank(statusOrder, b.status);
   if (statusDiff !== 0) return statusDiff;
-  return SEVERITY_ORDER.indexOf(a.severity as ConfigSeverity) - SEVERITY_ORDER.indexOf(b.severity as ConfigSeverity);
+  return rank(SEVERITY_ORDER, a.severity ?? 'info') - rank(SEVERITY_ORDER, b.severity ?? 'info');
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/ConfigInsightsSection.tsx` around lines 64 - 69, The sort currently places unknown/missing statuses/severities first because indexOf returns -1; update the comparator in the sorted computation so missing/unknown values map to the end: for status use statusOrder.indexOf(a.status ?? '') and if that yields -1 treat it as statusOrder.length (same for b), and for severity use SEVERITY_ORDER.indexOf(a.severity ?? 'info') (or map -1 to SEVERITY_ORDER.length) so the severity fallback matches InsightEntry's default of 'info'; update the comparator used in the sort on analyses accordingly (references: sorted, statusOrder, SEVERITY_ORDER, analyses, ConfigSeverity, InsightEntry fallback).

report/icons.ts-90-90 (1)
90-90: ⚠️ Potential issue | 🟡 Minor
Encoding artifact in comment.
The decorative line at the end of "Actor Type Icons" contains a corrupted multi-byte sequence (`───────��`). Cosmetic but worth a quick fix.
Verify each finding against the current code and only fix it if needed. In `@report/icons.ts` at line 90, Replace the corrupted multi-byte sequence at the end of the decorative comment line that contains "Actor Type Icons" with a clean ASCII-only decoration (e.g., use dashes or remove the stray characters) so the comment reads something like "// ── Actor Type Icons ───────────────────────────────────────────────" or simply "// Actor Type Icons"; update the comment in report/icons.ts where that string appears to remove the encoding artifact.

report/components/rbac-visual.tsx-31-39 (1)
31-39: ⚠️ Potential issue | 🟡 Minor
`IdentityInfo.icon` is inconsistent with `IDENTITY_ICON_NAMES`.
`identityType()` sets `icon: type` (e.g. `'service'`, `'bot'`), but the actual icon names registered via `IDENTITY_ICON_NAMES` are `'server'` for service and `'bot'`/`'user'`/`'group'` for the others. Any external consumer that reads `info.icon` to render an icon (instead of going through `IdentityIcon`) will get a wrong/missing icon for `service`. Either drop `icon` from `IdentityInfo` or set it from `IDENTITY_ICON_NAMES[type]`.

Proposed fix

```diff
 const resolve = (type: IdentityType): IdentityInfo => ({
-  type, icon: type, color: IDENTITY_COLOR, label: IDENTITY_LABELS[type],
+  type, icon: IDENTITY_ICON_NAMES[type], color: IDENTITY_COLOR, label: IDENTITY_LABELS[type],
 });
```
Verify each finding against the current code and only fix it if needed. In `@report/components/rbac-visual.tsx` around lines 31 - 39, The identityType function currently assigns icon: type which mismatches registered icons; update identityType to set icon using the canonical mapping (IDENTITY_ICON_NAMES[type]) instead of the raw IdentityType string (or remove the icon field from IdentityInfo if icons are no longer needed). Locate identityType in rbac-visual.tsx and change its resolve helper to use IDENTITY_ICON_NAMES[type] (keep label from IDENTITY_LABELS[type] and color from IDENTITY_COLOR) so consumers reading IdentityInfo.icon get the correct icon name.

api/catalog_report.go-11-16 (1)
11-16: ⚠️ Potential issue | 🟡 Minor
Trailing slash handling in `ConfigPermalink`.
If `FrontendURL` is configured with a trailing `/` (a common pattern), the resulting URL becomes `https://host//catalog/<id>`. Most browsers tolerate this, but it's ugly in reports/emails and breaks strict URL comparisons. Consider trimming once:

Proposed fix

```diff
-	return fmt.Sprintf("%s/catalog/%s", FrontendURL, configID)
+	return fmt.Sprintf("%s/catalog/%s", strings.TrimRight(FrontendURL, "/"), configID)
```
Verify each finding against the current code and only fix it if needed. In `@api/catalog_report.go` around lines 11 - 16, ConfigPermalink currently concatenates FrontendURL and the path which yields a double slash if FrontendURL ends with '/', so update ConfigPermalink to trim any trailing slashes from FrontendURL before building the permalink (e.g., use strings.TrimRight(FrontendURL, "/")) and then format "%s/catalog/%s"; keep the early return when FrontendURL is empty and preserve the function name ConfigPermalink and its signature.

report/icons.ts-95-96 (1)
95-96: ⚠️ Potential issue | 🟡 Minor
`ICON_IDENTITY_MACHINE` uses incorrect Y-offset in robot icon path.
The path starts at `M6 4` but the official tabler:robot icon starts at `M6 6`. This shifts the icon up by 2 units and makes it visually distinct from the correct robot icon (and from `ICON_SERVICE_ACCOUNT_MISUSE` on line 62, which has the correct `M6 6`). Update line 96 to use `M6 6` to match the official tabler:robot specification.
Verify each finding against the current code and only fix it if needed. In `@report/icons.ts` around lines 95 - 96, The robot icon path in ICON_IDENTITY_MACHINE uses an incorrect Y-offset ("M6 4") causing a 2-unit upward shift; update the path string passed to tabler in the ICON_IDENTITY_MACHINE constant so the initial moveto is "M6 6" (matching the official tabler:robot and the ICON_SERVICE_ACCOUNT_MISUSE entry) and keep the rest of the path unchanged.

report/MatrixDemo.tsx-87-149 (1)
87-149: ⚠️ Potential issue | 🟡 Minor
Missing `key` props on `<Cell>` array children in demo code.
The `cells` arrays in this demo file lack `key` props on `<Cell />` elements. While this is demo code, the production component `RBACMatrixSection.tsx` correctly uses `roles.map((role) => <Indicator key={role} ... />)`, demonstrating the proper pattern. Consider aligning the demo with this approach for consistency and to avoid potential React warnings if `MatrixTable` iterates over the array.
Verify each finding against the current code and only fix it if needed. In `@report/MatrixDemo.tsx` around lines 87 - 149, The demo rows in MatrixDemo.tsx pass arrays of JSX <Cell> elements without keys causing potential React warnings; update each cells array to assign stable key props to every <Cell /> (for example based on the column/role index or role id) so they mirror the production pattern used in RBACMatrixSection.tsx (roles.map((role) => <Indicator key={role} ... />)); locate the rows prop and add a unique key to each <Cell> element in every cells array (or build the cells via a map that supplies the key) to eliminate the missing-key warnings.

connection/jwt.go-95-99 (1)
95-99: ⚠️ Potential issue | 🟡 Minor
Multi-audience tokens silently drop extra entries.
Per RFC 7519 §4.1.3, `aud` can legitimately be an array with multiple values. Taking only `arr[0]` is fine if downstream code only needs a representative audience, but it can cause surprises when a token's relevant audience isn't first. If `JWT.Audience` is only used for display/debug (as suggested by `Pretty()`), this is acceptable — otherwise consider storing all audiences or matching the expected one.
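A minimal Go sketch of the string-vs-array `aud` handling this comment asks for — collect every entry instead of keeping only the first. The standalone function name `parseAudiences` is illustrative, not the repo's actual parser:

```go
package main

import "fmt"

// parseAudiences normalizes an RFC 7519 §4.1.3 "aud" claim, which may be
// either a single string or an array of strings, into a []string that
// keeps every audience entry (non-string array members are skipped).
func parseAudiences(claim any) []string {
	switch v := claim.(type) {
	case string:
		return []string{v}
	case []any:
		var out []string
		for _, item := range v {
			if s, ok := item.(string); ok {
				out = append(out, s)
			}
		}
		return out
	}
	return nil
}

func main() {
	fmt.Println(parseAudiences("api"))                 // [api]
	fmt.Println(parseAudiences([]any{"api", "web"}))   // [api web]
}
```

With all audiences retained, a display path can join them, and a validation path can match the expected audience against the full slice rather than just the first element.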
Verify each finding against the current code and only fix it if needed. In `@connection/jwt.go` around lines 95 - 99, The current parsing only takes arr[0] from claims["aud"], which drops extra audiences; modify the JWT model and parsing so you keep all audience entries: add a new field (e.g., Audiences []string) to the JWT struct, when claims["aud"] is []any iterate and collect all string entries into Audiences (optionally set the existing Audience string to Audiences[0] for backward compatibility), and update Pretty() (or any display/debug routines) to use strings.Join(Audiences, ",") or otherwise show the full list; ensure parsing still handles the single-string aud case into Audiences as a single-entry slice.

report/embed.go-22-34 (1)
22-34: ⚠️ Potential issue | 🟡 Minor
Stat error is silently treated as a valid directory override.
When `SourceDir` is set but `os.Stat` fails (e.g., typo, wrong path, permission denied), `ResolveSource` returns `(SourceDir, "")` as if the path were a valid directory. Downstream facet rendering will then fail with an obscure error instead of falling back to embedded assets or surfacing the misconfiguration clearly.

A couple of options:
- Return `("", "")` on stat failure so rendering falls back to the embedded FS, or
- Change the signature to `(dir, entryFile string, err error)` and propagate the error so the CLI flag handler can report it.

♻️ Fallback-to-embedded option

```diff
-	info, err := os.Stat(SourceDir)
-	if err != nil {
-		return SourceDir, ""
-	}
+	info, err := os.Stat(SourceDir)
+	if err != nil {
+		return "", ""
+	}
```
Verify each finding against the current code and only fix it if needed. In `@report/embed.go` around lines 22 - 34, ResolveSource currently treats any os.Stat error as if SourceDir were a valid path and returns (SourceDir, ""), which masks misconfiguration; change the os.Stat error branch in ResolveSource to return ("", "") so callers fall back to the embedded FS instead of using an invalid override: keep the ResolveSource signature, check the err from os.Stat(SourceDir) and on any non-nil err return ("", "") (do not treat SourceDir as a directory), leaving the existing handling for valid file vs directory cases intact.

cmd/context.go-316-318 (1)
316-318: ⚠️ Potential issue | 🟡 Minor
Single `Read` may return partial/zero bytes.
`resp.Body.Read(buf)` is not guaranteed to fill the buffer in one call even when more data is available; for some servers the first read can return 0 bytes without EOF. The downstream `strings.HasPrefix(body, "<")` / `"["` / `"{"` checks would then misclassify the response and skip the upgrade. Prefer `io.ReadAll(io.LimitReader(resp.Body, 512))`.

🛠️ Proposed fix

```diff
-	buf := make([]byte, 512)
-	n, _ := resp.Body.Read(buf)
-	body := strings.TrimLeft(string(buf[:n]), " \t\r\n")
+	raw, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
+	body := strings.TrimLeft(string(raw), " \t\r\n")
```

Add `"io"` to the imports.
Verify each finding against the current code and only fix it if needed. In `@cmd/context.go` around lines 316 - 318, The code uses a single resp.Body.Read(buf) which can return partial/zero bytes; replace that read with io.ReadAll(io.LimitReader(resp.Body, 512)) to reliably read up to 512 bytes, handle and check the returned error, and then set body := strings.TrimLeft(string(p), " \t\r\n"); ensure you add the "io" import and propagate/log the read error as appropriate so the subsequent strings.HasPrefix checks operate on the actual initial bytes.

report/kitchen-sink/ConfigComponentsPage.tsx-43-47 (1)
43-47: ⚠️ Potential issue | 🟡 Minor
All related-config cards will display `configItem`'s timestamps.
`created_at`/`updated_at` are overridden from `data.configItem` for every row in `sampleConfigs`, so related configs get the primary item's timestamps instead of their own. The `id: config.id` override is also redundant with the spread. Consider reading timestamps from each `config` (and only falling back to `data.configItem` for the primary entry) if this page is meant to showcase real card metadata.
Verify each finding against the current code and only fix it if needed. In `@report/kitchen-sink/ConfigComponentsPage.tsx` around lines 43 - 47, The current mapping of sampleConfigs incorrectly forces every related config card to use data.configItem's timestamps and redundantly overrides id; in the sampleConfigs.map block (and the ConfigItemCard props) stop spreading data.configItem timestamps into every config and instead pass each config's own created/updated timestamps (e.g., config.createdAt / config.updatedAt or config.created_at / config.updated_at depending on your model), only falling back to data.configItem.createdAt/updatedAt for the primary item when necessary, and remove the redundant id: config.id override since the spread already includes id.

report/kitchen-sink/ConfigComponentsPage.tsx-14-15 (1)
14-15: ⚠️ Potential issue | 🟡 Minor
Guard `data.relatedConfigs` against undefined.
Line 15 defensively uses `data.scrapers ?? []`, but line 14 will throw `Cannot read properties of undefined (reading 'slice')` when `relatedConfigs` is absent. Please apply the same guard.

🛠️ Proposed fix

```diff
-  const sampleConfigs = [data.configItem, ...data.relatedConfigs.slice(0, 5)];
+  const sampleConfigs = [data.configItem, ...(data.relatedConfigs ?? []).slice(0, 5)];
```
Verify each finding against the current code and only fix it if needed. In `@report/kitchen-sink/ConfigComponentsPage.tsx` around lines 14 - 15, The sampleConfigs construction uses data.relatedConfigs.slice(0, 5) which will throw if relatedConfigs is undefined; change the expression in the sampleConfigs declaration (the symbol sampleConfigs) to guard relatedConfigs (e.g., use data.relatedConfigs ?? [] or data.relatedConfigs?.slice(0,5) ?? []) so you spread a safe array: keep data.configItem first and then append up to five related configs from the guarded array.

report/catalog/default-settings.yaml-15-20 (1)
15-20: ⚠️ Potential issue | 🟡 Minor
Partial redundancy in `backup.success` rules for BACKUP_DB events.
The third `backup.success` rule (line 10-11: `changeType == "BACKUP_DB" && severity != "high"`) is a superset of the first rule's BACKUP_DB clause (line 6-7: includes `changeType == "BACKUP_DB" && severity == "low"`).
The category mapping evaluates rules in order and assigns the category only once (line 119 of change_mappings.go checks `if category == ""`). For BACKUP_DB events with severity "low", the first rule will match and assign the category, preventing the third rule from ever assigning it. The third rule only takes effect for severity values like "medium" or "info" that the first rule doesn't match.
Consider consolidating or clarifying the boundary: either remove the `BACKUP_DB` clause from the first rule's filter, or restrict the third rule to `severity == "medium"` (or similar) to make the intent explicit.
Verify each finding against the current code and only fix it if needed. In `@report/catalog/default-settings.yaml` around lines 15 - 20, The backup.success rules overlap for BACKUP_DB events: the first backup.success filter includes (changeType == "BACKUP_DB" && severity == "low") while a later backup.success rule uses (changeType == "BACKUP_DB" && severity != "high"), causing redundant/ambiguous matching because category assignment stops at first match; resolve by making the intent explicit—either remove the BACKUP_DB clause from the first rule so only the later rule handles BACKUP_DB events, or narrow the later rule to an explicit set (e.g., changeType == "BACKUP_DB" && severity == "medium") or alternatively change the first rule to only match severity == "low" for non-BACKUP_DB kinds; update the filters for the rules named backup.success (and backup.failed if relevant) accordingly so there is no overlap.

cmd/catalog_tree.go-39-41 (1)
39-41: ⚠️ Potential issue | 🟡 Minor
Format-string pitfall: don't pass `err.Error()` as the format argument.
`logger.Errorf(err.Error())` interprets the error message as a printf format string. If the error text contains stray `%` characters (e.g. from a path or templated message), the log will emit `%!(EXTRA ...)` or, worse, read uninitialized varargs. Use a literal format:

🛡️ Proposed fix

```diff
-	if err := properties.LoadFile("mission-control.properties"); err != nil {
-		logger.Errorf(err.Error())
-	}
+	if err := properties.LoadFile("mission-control.properties"); err != nil {
+		logger.Errorf("%s", err)
+	}
```

Also worth confirming the intent of swallowing this error — if `mission-control.properties` is optional, a `Warnf` would be more accurate than `Errorf`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/catalog_tree.go` around lines 39 - 41, Replace the unsafe logger call that passes the error string as a format with a literal format and the error as an argument: in the code around properties.LoadFile("mission-control.properties") and the logger.Errorf call, change the log invocation to use a format string (e.g. "failed to load mission-control.properties: %v") and pass err as the parameter; also consider using logger.Warnf instead of logger.Errorf if the properties file is optional to reflect intent.

report/build-kitchen-sink.ts-21-21 (1)
21-21: ⚠️ Potential issue | 🟡 Minor
Fragile cross-repo path to `duty` schema.
`../../duty/schema/openapi/change-types.schema.json` assumes `duty` is checked out as a sibling repository at a specific relative location. CI/contributor workflows that don't replicate that layout will fail with an opaque `ENOENT` from `readFileSync` at line 151. Consider either resolving the path through `node_modules` (if duty is a JS dep), copying the schema into the repo as a generated artifact, or at least surfacing a clearer error and allowing the path to be overridden via env var (e.g. `CHANGE_TYPES_SCHEMA`).
Verify each finding against the current code and only fix it if needed. In `@report/build-kitchen-sink.ts` at line 21, The hardcoded relative path assigned to schemaPath is fragile; change the resolution to first check an environment override (CHANGE_TYPES_SCHEMA) and if not present attempt a robust lookup via require.resolve or node_modules (e.g., resolve('duty/schema/openapi/change-types.schema.json')) before falling back to the current relative path, and wrap the subsequent readFileSync usage in a try/catch that throws a clear, actionable error mentioning the attempted paths and the env var name; locate and update the schemaPath declaration and the code that reads the file (the schemaPath constant and the readFileSync usage) to implement this fallback and improved error handling.

report/CatalogReport.tsx-197-201 (1)
197-201: ⚠️ Potential issue | 🟡 Minor
De-duplicate changes by ID before passing to ArtifactAppendix.
When `groupBy === 'config'`, `data.changes`, `entry.changes`, and `group.changes` may contain overlapping entries (same changes surfaced via different aggregations). The appendix will render these duplicates since it groups by config but doesn't de-duplicate by change ID. Consider deduplicating the array before passing it to `ArtifactAppendix`:

```tsx
const uniqueChanges = Array.from(
  new Map((changes || []).map(c => [c.id, c])).values()
)

<ArtifactAppendix changes={uniqueChanges} />
```
Verify each finding against the current code and only fix it if needed. In `@report/CatalogReport.tsx` around lines 197 - 201, The ArtifactAppendix is receiving concatenated change arrays (from data.changes, data.entries.flatMap(e=>e.changes), and data.configGroups.flatMap(g=>g.changes)) which can contain duplicate change objects when groupBy === 'config'; deduplicate by change id before passing to ArtifactAppendix by building a map keyed by change.id (or the unique identifier used on change objects) and using its values as the prop; replace the inline array spread with a single deduplicated array and pass that to ArtifactAppendix (refer to ArtifactAppendix, data.changes, data.entries, data.configGroups, and change.id).

report/components/CatalogList.tsx-20-22 (1)
20-22: ⚠️ Potential issue | 🟡 Minor
Minor: pluralization for singular counts.
When `changeCount`, `insightCount`, or `accessCount` equals 1, the rendered strings read as "1 changes", "1 insights", "1 access". Consider handling the singular form (e.g., "1 change", "1 insight", "1 access") for cleaner output, and pluralize `access` → `accesses` when > 1.
Verify each finding against the current code and only fix it if needed. In `@report/components/CatalogList.tsx` around lines 20 - 22, The three assignments producing "1 changes"/"1 insights"/"1 access" should handle singular vs plural; update the logic around entry.changeCount, entry.insightCount and entry.accessCount so row.changes = `${entry.changeCount} ${entry.changeCount === 1 ? 'change' : 'changes'}`, row.insights = `${entry.insightCount} ${entry.insightCount === 1 ? 'insight' : 'insights'}`, and row.access = `${entry.accessCount} ${entry.accessCount === 1 ? 'access' : 'accesses'}` (or extract a small pluralize helper used for these three fields) so singular counts use the correct singular word and access pluralizes to "accesses" when > 1.

report/components/CatalogList.tsx-12-17 (1)
12-17: ⚠️ Potential issue | 🟡 Minor

Inconsistent null-safety for `entry.configItem`.

`entryToRow` dereferences `entry.configItem` directly (`ci.name`, `ci.type`), but the key computation on Line 61 uses optional chaining `entry.configItem?.id`. If `configItem` is ever absent, the map callback will throw before the key fallback helps. Either tighten the type (non-optional `configItem`) or guard it consistently in `entryToRow`.

Also applies to: 60-62
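One way to make the guard consistent, as a sketch: the `CatalogReportEntry` and row shapes below are simplified stand-ins, not the real types.

```typescript
// Simplified shapes for illustration only; the real types live in the report package.
interface ConfigItem { id?: string; name?: string; type?: string }
interface CatalogReportEntry { configItem?: ConfigItem }

// Guarded version: never dereferences a possibly-undefined configItem.
function entryToRow(entry: CatalogReportEntry): { name: string; type: string } {
  const ci = entry.configItem ?? {};
  return { name: ci.name ?? 'unknown', type: ci.type ?? 'unknown' };
}

// Key computation uses the same guard, so both code paths agree.
function entryKey(entry: CatalogReportEntry, index: number): string {
  return entry.configItem?.id ?? `row-${index}`;
}
```

Centralizing the fallback this way means a missing `configItem` degrades to placeholder cells instead of a render crash.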
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/CatalogList.tsx` around lines 12 - 17: entryToRow dereferences entry.configItem without guarding it, while elsewhere the code uses optional chaining. Update entryToRow (and the related map callback that computes the key) to handle a possibly undefined configItem consistently: either tighten the type so CatalogReportEntry.configItem is non-optional, or add a null-safe guard in entryToRow (e.g., return safe fallbacks for name/type when entry.configItem is undefined) and ensure the map key computation uses the same guard pattern so no runtime dereference occurs; refer to the entryToRow function and the map callback that uses entry.configItem?.id.

report/kitchen-sink/LayoutComponentsPage.tsx (1)
14-14: ⚠️ Potential issue | 🟡 Minor

Potential NPE if `data.configItem` is undefined.

`config` is dereferenced without guarding (`config.type`, `config.name`, `config.status`, etc.). If `ConfigReportData.configItem` is optional, this page will crash during render. Either guard with an early `return null` when `config` is missing, or ensure the type is non-optional.

Also applies to: 26-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/kitchen-sink/LayoutComponentsPage.tsx` at line 14: LayoutComponentsPage currently dereferences data.configItem into const config = data.configItem and then uses config.type, config.name, config.status (and other fields in the 26-38 block), which will throw if config is undefined. Update LayoutComponentsPage to guard against a missing config, either by returning null/a fallback UI early when data.configItem is falsy or by providing a safe default object before rendering (e.g., const config = data.configItem || /* fallback */), and update usages (config.type, config.name, config.status) accordingly so no property access occurs on undefined.

report/sample-findings.json (1)
144-144: ⚠️ Potential issue | 🟡 Minor

Inconsistent/likely-typo identifiers in sample data.

A few entries deviate from the rest of the fixture in ways that look unintentional:

- Bucket name: most findings use `s3://acme-app-audit/...`, but Lines 144, 207, 271 use `s3://acmer-app-audit/...`.
- Line 249 scope `OMA_ZIM_app_PROD` and Line 311 resource `OMA_KEN_app_PROD` don't follow the `acme_*_app_PROD` convention used elsewhere.
- Line 516 `"connection": "omar-prod"` — all other findings use the `connection://...` URI scheme, and `omar` is otherwise unused. Looks like a stray/placeholder value.

Since this fixture drives the findings renderer (and PDF samples), aligning these avoids confusion when reviewing rendered output.

Also applies to: 207-207, 249-249, 311-311, 516-516
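Drift like this is easy to catch mechanically. A sketch of a fixture lint, assuming the field names and conventions described above (the real fixture shape may differ):

```typescript
// Hypothetical fixture lint: flags identifiers that deviate from the
// conventions described in the finding above.
interface Finding {
  path?: string;
  scope?: string;
  resource?: string;
  connection?: string;
}

function lintFinding(f: Finding): string[] {
  const problems: string[] = [];
  if (f.path?.startsWith('s3://') && !f.path.startsWith('s3://acme-app-audit/')) {
    problems.push(`unexpected bucket in path: ${f.path}`);
  }
  for (const v of [f.scope, f.resource]) {
    if (v?.endsWith('_app_PROD') && !v.startsWith('acme_')) {
      problems.push(`identifier does not follow acme_*_app_PROD: ${v}`);
    }
  }
  if (f.connection && !f.connection.startsWith('connection://')) {
    problems.push(`connection is not a connection:// URI: ${f.connection}`);
  }
  return problems;
}
```

Running such a check over the parsed JSON in CI would prevent stray placeholder values from reaching rendered PDF samples.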
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/sample-findings.json` at line 144: fix inconsistent sample identifiers. Change the path value "s3://acmer-app-audit/..." to "s3://acme-app-audit/..." where it appears (the path string shown in the diff), rename scope "OMA_ZIM_app_PROD" to follow the convention (e.g., "acme_ZIM_app_PROD") and rename resource "OMA_KEN_app_PROD" to "acme_KEN_app_PROD" to match other acme_*_app_PROD entries, and normalize the connection field "omar-prod" to the URI scheme used elsewhere (e.g., "connection://omar-prod"). Ensure each replacement matches the exact keys shown: the "path" string, the "scope" value, the "resource" value, and the "connection" value.

report/components/ConfigRelationshipGraph.tsx (1)
56-62: ⚠️ Potential issue | 🟡 Minor

Early-return suppresses the whole section when `relatedConfigs` is empty.

If a caller passes `centralConfig` and `relationships` but no `relatedConfigs`, the entire "Config Relationships" section is hidden rather than showing the central config alone. Also, when `relatedConfigs` is populated but `relationships` is not, both `RelationshipGroup`s render with count 0 — `RelationshipGroup` already returns null for empty arrays, so the section header will show with no body. Consider whether "nothing to show when there are no relationships" should be the guard instead of "no relatedConfigs".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/ConfigRelationshipGraph.tsx` around lines 56 - 62: remove the early-return that hides the whole section when relatedConfigs is empty; instead return null only when there are no relationships AND no relatedConfigs. Specifically, in ConfigRelationshipGraph replace the guard "if (!relatedConfigs?.length) return null" with a check like "if (!(relationships?.length || relatedConfigs?.length)) return null", ensure configLookup is built from (relatedConfigs || []) so it works when relatedConfigs is undefined, and keep computing rels/incoming/outgoing as you already do so RelationshipGroup can decide to render or return null for empty groups.

rbac_report/report.go (1)
195-216: ⚠️ Potential issue | 🟡 Minor

Subject selection via `configItems[0]` is order-dependent and non-deterministic.

Since `query.GetConfigsByIDs` provides no ordering guarantees, the "subject" can vary across runs for the same input, affecting report reproducibility. The function should either validate that exactly one item is provided, or pick deterministically (e.g., by ID or by selecting the item without a ParentID). Document the intended behavior to clarify whether this should be a single-subject report.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rbac_report/report.go` around lines 195 - 216: the subject selection in resolveSubjectAndParents currently uses configItems[0], which is order-dependent and non-deterministic. Change resolveSubjectAndParents to deterministically select the subject (or validate single-item input): e.g., if len(configItems) == 1 use that item, otherwise pick the item with nil ParentID if present, else pick the item with the smallest UUID string (or explicitly return an error), and update callers/docs to reflect whether reports are single-subject only; reference resolveSubjectAndParents, configItems, ParentID, and query.GetConfigsByIDs when making the change.

report/components/BackupActivityCalendar.tsx (1)
17-21: ⚠️ Potential issue | 🟡 Minor

Legend label mismatch for `warning` status.

`LABEL_CLASSES`/`CELL_CLASSES` key this state as `warning` (amber), but the legend renders it as "In Progress". If `warning` is meant to cover both "warning" and "in progress" semantics that's fine, but the label discrepancy will be confusing when a backup is actually a warning state (e.g., partial failure). Consider either renaming the status key or adjusting the legend text to match the domain meaning.

Also applies to: 137-140
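One way to keep the cells and legend from diverging is to derive both from a single table. This is a sketch with an assumed status union and illustrative class names; the real `BackupCalendarStatus` and styling may differ.

```typescript
// Assumed status union for illustration.
type BackupCalendarStatus = 'success' | 'failed' | 'warning' | 'empty';

// Single source of truth: label and cell class live side by side, so the
// legend can never show a different name than the cell state it describes.
const STATUS_META: Record<BackupCalendarStatus, { label: string; cellClass: string }> = {
  success: { label: 'Success', cellClass: 'bg-green-500' },
  failed: { label: 'Failed', cellClass: 'bg-red-500' },
  warning: { label: 'Warning', cellClass: 'bg-amber-400' },
  empty: { label: 'No backups', cellClass: 'bg-gray-200' },
};

// Legend entries are derived, not hand-written.
const legend = Object.entries(STATUS_META).map(([status, meta]) => ({
  status,
  label: meta.label,
}));
```

With this shape, renaming `warning` (or relabeling it) is a one-line change that updates both the cells and the legend.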
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/BackupActivityCalendar.tsx` around lines 17 - 21: the legend text for the "In Progress" state doesn't match the status key 'warning' used in LABEL_CLASSES and CELL_CLASSES (and BackupCalendarStatus), causing confusion. Update either the domain model or the legend so they match: either rename the status key 'warning' to 'in-progress' across BackupCalendarStatus, LABEL_CLASSES, and CELL_CLASSES (and any usages in BackupActivityCalendar), or change the legend text that renders the label to "Warning" (or "Warning / In Progress") to reflect the existing 'warning' key; ensure CSS class mapping and any conditional logic using LABEL_CLASSES/CELL_CLASSES remain consistent after the change.

db/applications.go (1)
591-620: ⚠️ Potential issue | 🟡 Minor

`IsStale` is always `false` when `filters.Stale` is empty.

When the caller doesn't supply a `Stale` filter, `staleCutoff` is zero, `isStale` evaluates to `false` for every row, and `AccessItem.IsStale` is populated as `false` even for users who have never signed in or who signed in years ago. This makes `IsStale` unreliable for UI consumers that expect it to always reflect staleness — they'd have to reintroduce their own cutoff.

If `IsStale` is intended to be authoritative in the response, compute it against a sensible default (e.g., the thresholds from settings, or 90 days) even when no filter is supplied. Otherwise consider making it a pointer so an unset value is distinguishable from `false`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/applications.go` around lines 591 - 620: the IsStale boolean is computed only when filters.Stale is provided, causing AccessItem.IsStale to be false for all rows when filters.Stale is empty. Change the logic so staleCutoff is always set (use a sensible default like the app settings' stale threshold or 90 days when filters.Stale == "") and then compute isStale using that cutoff inside the rows loop (symbols to update: filters.Stale, staleCutoff, isStale, and the AccessItem.IsStale assignment), or alternatively change AccessItem.IsStale to a *bool and populate it as nil when no cutoff is intended; pick one approach and update the loop and any callers accordingly so IsStale reliably represents staleness.

report/components/BackupActivityCalendar.tsx (1)
71-94: ⚠️ Potential issue | 🟡 Minor

Entries outside the reference month are silently dropped.

`referenceKey` picks the latest aggregated day and the grid only renders cells for that single month. Any entries from prior months still land in `dateMap` but have no corresponding cell, so they disappear without any visible indication. If the caller passes a range that spans multiple months (which `BackupsSection` can easily produce), users will see an incomplete view.

Consider either rendering one calendar per distinct month present in `aggregated`, or documenting/asserting that callers must pre-slice entries to a single month.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/BackupActivityCalendar.tsx` around lines 71 - 94: the current logic uses referenceKey and a single month grid (monthLabel, cells, dateMap), so any aggregated entries outside that reference month are omitted. Change it to group aggregated entries by their YYYY-MM key (use the aggregateEntries output, mapped by a month key derived from entry.date), then iterate each distinct month key and for each compute year/month, daysInMonth, firstDow, build a per-month dateMap and cells exactly like the existing block (reusing the aggregated -> dateMap, monthLabel, cells logic) and render a calendar for each group; alternatively add an explicit assertion that callers must pass a single-month slice if you prefer not to support multi-month rendering.

report/components/CatalogAccessSection.tsx (1)
27-27: ⚠️ Potential issue | 🟡 Minor

`hasMultipleConfigs` actually checks "any entry has a configName".

The name implies multiple distinct configs, but `.some((a) => a.configName)` is true even when every row shares a single config. If the intent is to only show the extra "Config" column when results span more than one config, this should compare distinct names:

```diff
-  const hasMultipleConfigs = access.some((a) => a.configName);
+  const hasMultipleConfigs =
+    new Set(access.map((a) => a.configName).filter(Boolean)).size > 1;
```

Otherwise consider renaming to `hasConfigColumn`/`showConfigColumn` so the behavior matches the name.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/CatalogAccessSection.tsx` at line 27: the variable hasMultipleConfigs currently uses access.some((a) => a.configName), which only detects whether any row has a configName. Change it to detect multiple distinct config names (e.g., compute new Set(access.map(a => a.configName).filter(Boolean)).size > 1) so the "Config" column is shown only when results span more than one config, or alternatively rename hasMultipleConfigs to hasConfigColumn / showConfigColumn if the intended behavior is to show the column whenever any row has a config; update references in CatalogAccessSection.tsx accordingly (hasMultipleConfigs and any conditional rendering logic that decides whether to show the Config column).

cmd/catalog_report.go (1)
92-96: ⚠️ Potential issue | 🟡 Minor

`ShutdownAndExit` makes the following `return err` unreachable and bypasses Cobra's error plumbing.

`shutdown.ShutdownAndExit(1, err.Error())` calls `os.Exit(1)` internally, so `return err` never runs and any deferred cleanup in callers is skipped. Either let Cobra handle the failure via the returned error, or drop the return — but not both. Also consider using `ctx.Oops().Wrap(err)` for consistency with the rest of the codebase.

🛠 Suggested change

```diff
 result, err := catalog.Export(ctx, configs, opts, catalogReportFormat)
 if err != nil {
-	shutdown.ShutdownAndExit(1, err.Error())
-	return err
+	return ctx.Oops().Wrap(err)
 }
```

As per coding guidelines: "For wrapping internal errors, use `ctx.Oops().Wrap(err)` or `ctx.Oops().Wrapf(err, "context")`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/catalog_report.go` around lines 92 - 96: the code calls shutdown.ShutdownAndExit(1, err.Error()) and then returns err, which is unreachable. Replace the os.Exit path with returning a wrapped error so Cobra and deferred cleanup run: in the error branch after calling catalog.Export(ctx, configs, opts, catalogReportFormat), remove shutdown.ShutdownAndExit and instead return ctx.Oops().Wrap(err) (or ctx.Oops().Wrapf(err, "exporting catalog") if extra context is desired) so the error plumbing is consistent with the rest of the codebase.

report/components/AuditPage.tsx (1)
101-160: ⚠️ Potential issue | 🟡 Minor

Guard against missing `queries`/`scrapers`/`groups` arrays.

`audit.queries.length`, `audit.scrapers.length`, and `audit.groups.length` assume these arrays are always present on the payload. If the Go encoder omits them (e.g. `omitempty` with a nil slice), these accesses will throw `Cannot read properties of undefined`. Safer to default with `?? []` or make the optionality explicit in `CatalogReportAudit`.

🛠 Suggested guard

```diff
- {audit.queries.length > 0 && (
+ {(audit.queries?.length ?? 0) > 0 && (
  ...
- {audit.scrapers.length > 0 && (
+ {(audit.scrapers?.length ?? 0) > 0 && (
  ...
- {audit.groups.length > 0 && (
+ {(audit.groups?.length ?? 0) > 0 && (
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/AuditPage.tsx` around lines 101 - 160: guard against missing audit arrays by defaulting undefined audit.queries, audit.scrapers, and audit.groups to empty arrays before using .length or mapping. Update the render checks and map calls that reference audit.queries, audit.scrapers, and audit.groups (used in the Section/ListTable/ScraperCard blocks) to use a safe fallback (e.g., audit.queries ?? [], audit.scrapers ?? [], audit.groups ?? []), or make these properties non-optional on CatalogReportAudit so the components always receive arrays.

report/components/BackupsSection.tsx (1)
33-40: ⚠️ Potential issue | 🟡 Minor

Inconsistent casing between success and failed checks can miscategorize backups.

`isFailed` uses a case-insensitive substring match, but `successCount` and the calendar's success branch use strict `b.status === 'success'`. A status like `"Success"`, `"SUCCEEDED"`, or `"Completed"` will fall through to neither bucket, inflating `warning` entries and undercounting `successCount`.

🛠 Suggested fix

```diff
 const isFailed = (status: string) => String(status).toLowerCase().includes('fail');
-const successCount = backups.filter((b) => b.status === 'success').length;
+const isSuccess = (status: string) => String(status).toLowerCase().includes('success');
+const successCount = backups.filter((b) => isSuccess(b.status)).length;
 const failedCount = backups.filter((b) => isFailed(b.status)).length;
 const calendarEntries = backups.map((backup) => ({
   date: backup.date,
-  status: (backup.status === 'success' ? 'success' : isFailed(backup.status) ? 'failed' : 'warning') as BackupCalendarStatus,
+  status: (isSuccess(backup.status) ? 'success' : isFailed(backup.status) ? 'failed' : 'warning') as BackupCalendarStatus,
   label: backup.size || undefined,
 }));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/BackupsSection.tsx` around lines 33 - 40: status checks are inconsistent: isFailed uses a case-insensitive substring but successCount and calendarEntries use strict equality, causing mixed-case or synonym statuses to be misclassified. Normalize the status once (e.g., const norm = String(b.status).toLowerCase()) and use that normalized value in isFailed, in the success check used by successCount, and when computing calendarEntries' status (or define explicit helpers like isSuccess and isFailed that operate on the normalized status and handle synonyms such as 'success', 'succeeded', 'completed'); update references in successCount, failedCount, calendarEntries and the isFailed helper to use these normalized checks.

cmd/catalog_report.go (1)
64-66: ⚠️ Potential issue | 🟡 Minor

Avoid using error messages / interpolated strings as format strings.

`logger.Errorf(err.Error())` (Line 65) and `logger.Infof(details)` (Line 116) pass already-rendered strings as the format string. A `%` in a file path, query, or error message becomes an unintended verb and produces `%!(EXTRA …)` noise. Use an explicit `%s`:

🛠 Suggested change

```diff
-	logger.Errorf(err.Error())
+	logger.Errorf("%s", err)
 ...
-	logger.Infof(details)
+	logger.Infof("%s", details)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/catalog_report.go` around lines 64 - 66: the log calls pass rendered strings as format strings, which can misinterpret % characters. Update the usages around properties.LoadFile so that logger.Errorf and logger.Infof use an explicit format specifier (e.g., "%s" or "%v") and pass the error or details as an argument instead of calling err.Error() or passing details directly; specifically change the logger.Errorf(...) call that follows properties.LoadFile and the logger.Infof(details) call to use logger.Errorf("%s", err) / logger.Infof("%s", details) (or equivalent) to avoid format-string issues.

report/FindingsReport.tsx (1)
521-525: ⚠️ Potential issue | 🟡 Minor

Runtime lookups assume valid enum values.

`FindingsReport` accepts untyped `Record<string, unknown>`, but downstream code like `SEVERITY_STYLES[f.severity]`, `OUTCOME_STYLES[f.outcome].label`, and `PLATFORM_LABELS[f.platform]` will throw `Cannot read properties of undefined` if any incoming finding carries a value outside the declared union (e.g., an unknown severity like `"unknown"` or a typo). Since this report is typically rendered from JSON data (e.g., `sample-findings.json`), a single bad record will crash the whole report.

Consider adding a normalization step on the input that clamps unknown values to `"info"`/`"informational"`/default platform, or `??` fallbacks at each lookup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/FindingsReport.tsx` around lines 521 - 525: FindingsReport currently trusts untyped props and performs runtime lookups like SEVERITY_STYLES[f.severity], OUTCOME_STYLES[f.outcome], and PLATFORM_LABELS[f.platform], which will throw if a finding has an unexpected value. Modify FindingsReport to normalize input before sorting/lookup by iterating over the raw findings (the findings array used to build sorted) and, for each finding, validating severity/outcome/platform against the known keys, clamping unknown values to safe defaults (e.g., "info"/"informational"/default platform) or replacing them with canonical enum values, then use those sanitized fields for the SEVERITY_STYLES/OUTCOME_STYLES/PLATFORM_LABELS accesses so lookups cannot return undefined.

cmd/catalog_get.go (1)
129-132: ⚠️ Potential issue | 🟡 Minor

Silent swallow of non-NotFound DB errors.

`First(&lastScraped).Error == nil` treats every error (connection loss, permission denied, context cancelled, etc.) the same as "no row". Consider checking for `gorm.ErrRecordNotFound` specifically and logging/returning other errors.

🛡️ Proposed fix

```diff
 var lastScraped models.ConfigItemLastScrapedTime
 if err := ctx.DB().Where("config_id = ?", id).First(&lastScraped).Error; err == nil {
 	result.LastScrapedTime = lastScraped.LastScrapedTime
+} else if !errors.Is(err, gorm.ErrRecordNotFound) {
+	ctx.Logger.V(2).Infof("failed to load last scraped time for %s: %v", id, err)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/catalog_get.go` around lines 129 - 132: the current block that calls ctx.DB().Where("config_id = ?", id).First(&lastScraped).Error silently treats all DB errors as "no row". Change it to capture the error into a variable, then check specifically using errors.Is(err, gorm.ErrRecordNotFound) to handle the missing-row case (set result.LastScrapedTime only when found) and, for any other non-nil error (connection/permission/context cancel), log it via the same logger or return it from the surrounding function so it isn't swallowed; reference the lastScraped variable and the ctx.DB().Where(...).First(...) call when making the change.

report/catalog/change_mappings.go (1)
172-189: ⚠️ Potential issue | 🟡 Minor

Silent drop when transform returns a non-map or empty-kind value.

`evalMappingTransform` returns `(nil, nil)` in two distinct failure modes:

- Transform eval succeeded but returned a non-map (likely a user expression bug).
- Transform returned a map without a `kind` field.

In both cases `Apply` silently continues without any log or diagnostic, making it hard for authors to debug why their `transform` isn't taking effect. Consider emitting at least a `ctx.Logger.V(2).Infof(...)` on case (1), since that is almost certainly a misconfigured expression.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/catalog/change_mappings.go` around lines 172 - 189: the function evalMappingTransform currently returns (nil, nil) for two failure modes (a non-map result from celValueToNative and a map missing "kind"), which silently drops transforms. Update evalMappingTransform to emit a debug/info log in those cases (use the caller's logger, e.g. ctx.Logger.V(2).Infof), including the evaluated output and a short message so authors can diagnose misconfigured expressions: specifically, when celValueToNative(out) is not a map, log the raw out and note "transform returned non-map", and when the map has an empty or missing "kind", log the map value and note "transform returned map without kind". Keep the existing return values, but ensure the logs reference evalMappingTransform, program (or its string/identifier), and the problematic value for debugging.

report/components/change-section-utils.ts (1)
415-445: ⚠️ Potential issue | 🟡 Minor

Invalid dates can break `latestDate` ordering.

`new Date(change.date).getTime()` returns `NaN` for unparseable strings, and any NaN comparison yields `false`. That means:

- Line 418: the very first row's invalid date becomes the group's `latestDate` and is never superseded even by valid later dates.
- Lines 442/444: invalid dates sort unpredictably.

If upstream guarantees `change.date` is always a valid ISO string, this is moot. Otherwise consider `Number.isFinite(...)` checks or defaulting invalid dates to `0`/`-Infinity` for sorting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/components/change-section-utils.ts` around lines 415 - 445: group.latestDate and row sorting rely on new Date(change.date).getTime(), which can be NaN for invalid dates. Update the logic in the block that sets group.latestDate (referencing group.latestDate and change.date) to parse the date once, check Number.isFinite(parsedTime), and only assign if parsedTime is finite (or use a fallback like 0 or -Infinity), and update both sorts (the rows sort comparing a.date/b.date and the outer groups sort comparing group.latestDate) to use parsed numeric timestamps with the same finite fallback so invalid dates don't break ordering.

report/facet.go (1)
201-244: ⚠️ Potential issue | 🟡 Minor

`SrcDir` cache is never invalidated across binary upgrades.

`sync.OnceValues` only caches for the life of the current process, so this isn't a problem within one run. But `ExtractFiles` writes into a stable `~/.cache/incident-commander/facet-report` path without ever removing files that were present in an old embed FS but not the new one. After upgrades, the cache dir can accumulate stale TSX files that might still be picked up by the facet CLI via relative imports.

Consider either:

- writing a version marker (e.g., binary build ID) and wiping the dir on mismatch, or
- clearing the dir at the start of `ExtractFiles` before re-extracting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@report/facet.go` around lines 201 - 244: the cache directory can accumulate stale embedded files across upgrades. Change ExtractFiles to clear the destination directory before writing new files: at the start of ExtractFiles(destDir string), remove or empty destDir (e.g., call os.RemoveAll(destDir) then os.MkdirAll(destDir, 0750)) so stale TSX files from previous embeds are removed, then proceed with fs.WalkDir(FS, ...) and writing files; update any callers (SrcDir using sync.OnceValues) to rely on this cleaned extraction behavior.

cmd/connection_browser.go (1)
810-812: ⚠️ Potential issue | 🟡 Minor

Inverted `strings.Contains` semantics.

`strings.Contains(flags.RequireBearerAud, "graph.microsoft.com")` treats the configured audience as the haystack and the literal as the needle, which is opposite to the usual convention. It happens to work because the Azure preRun sets `RequireBearerAud = "graph.microsoft.com"` exactly, but any user override like `RequireBearerAud = "graph"` would silently stop redirecting to the Graph `/me` URL.

🛠️ Proposed fix

```diff
-	if props["bearer"] != "" && strings.Contains(flags.RequireBearerAud, "graph.microsoft.com") {
+	if props["bearer"] != "" && strings.Contains("graph.microsoft.com", flags.RequireBearerAud) && flags.RequireBearerAud != "" {
 		connURL = "https://graph.microsoft.com/v1.0/me"
 	}
```

Or, more clearly, check the selected token's decoded audience instead of the flag.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/connection_browser.go` around lines 810 - 812: the current check uses strings.Contains(flags.RequireBearerAud, "graph.microsoft.com"), which inverts needle/haystack and can fail for partial overrides. Update the condition in the block that sets connURL (where props["bearer"], flags.RequireBearerAud and connURL are used) to either flip the arguments to strings.Contains("graph.microsoft.com", flags.RequireBearerAud) or, preferably, decode the selected bearer token's audience and check that the token audience contains or equals "graph.microsoft.com" before setting connURL = "https://graph.microsoft.com/v1.0/me".

cmd/connection_browser.go (1)
566-588: ⚠️ Potential issue | 🟡 Minor

Potential race between stdin press and bearer auto-detection.

When both the bearer-detect goroutine and the stdin reader finish around the same time, `doneCh` fires and the inner `select` may non-deterministically pick `<-stdinCh` (setting `Interrupted = true`) even though the bearer token was actually detected. The user then falls through to manual token selection despite auto-detection having succeeded.

Consider prioritizing `bearerCh` by polling it first (non-blocking) before falling through to the multi-case select:

🛠️ Proposed fix

```diff
 case <-doneCh:
+	select {
+	case tokens := <-bearerCh:
+		result.AutoDetected = true
+		result.Tokens = tokens
+		return result
+	default:
+	}
 	select {
-	case tokens := <-bearerCh:
-		result.AutoDetected = true
-		result.Tokens = tokens
 	case <-stdinCh:
 		result.Interrupted = true
 	default:
 		result.AutoDetected = true
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/connection_browser.go` around lines 566 - 588: when doneCh fires, avoid the race by polling bearerCh first: do a non-blocking receive from bearerCh and, if you get tokens, set result.AutoDetected = true and result.Tokens = tokens; only if bearerCh is empty then check stdinCh (non-blocking) and set result.Interrupted = true if stdin was pressed; do not set result.AutoDetected in the default/fallback path—leave it false so manual selection proceeds when no token was actually received. Reference the existing channels doneCh, bearerCh, stdinCh and the result fields AutoDetected, Tokens, Interrupted when making this change.
```diff
 func DownloadArtifact(c echo.Context) error {
 	ctx := c.Request().Context().(context.Context)
 
-	_id := c.Param("id")
-	artifactID, err := uuid.Parse(_id)
+	artifactID, err := uuid.Parse(c.Param("id"))
 	if err != nil {
-		return api.WriteError(c, api.Errorf(api.EINVALID, "invalid id(%s). must be a uuid. %v", _id, err))
+		return api.WriteError(c, api.Errorf(api.EINVALID, "invalid id: %v", err))
 	}
 
-	artifact, err := db.FindArtifact(ctx, artifactID)
+	blobs, err := ctx.Blobs()
 	if err != nil {
 		return api.WriteError(c, err)
-	} else if artifact == nil {
-		return api.WriteError(c, api.Errorf(api.ENOTFOUND, "artifact(%s) was not found", artifactID))
 	}
+	defer blobs.Close()
 
-	conn, err := pkgConnection.Get(ctx, artifact.ConnectionID.String())
+	data, err := blobs.Read(artifactID)
 	if err != nil {
 		return api.WriteError(c, err)
-	} else if conn == nil {
-		return api.WriteError(c, api.Errorf(api.ENOTFOUND, "artifact's connection was not found"))
 	}
+	defer data.Content.Close()
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Look for any per-artifact access checks elsewhere and the previous FindArtifact/GetFSForConnection path on main.
rg -nP -C3 '\bFindArtifact\b|\bGetFSForConnection\b' --type=go
rg -nP -C3 'ObjectArtifact|ActionRead' --type=go
rg -nP -C3 'artifact.*(team|playbook|tenant|agent|scope)' --type=go -i

Repository: flanksource/mission-control
Length of output: 50383
🏁 Script executed:
# Check artifact model definition and relationships
fd -e go -type f | xargs rg -l 'type Artifact struct' | head -5

Repository: flanksource/mission-control
Length of output: 241
🏁 Script executed:
# Look at artifact model to understand scoping
rg -A 20 'type Artifact struct' --type=go

Repository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Check the full DownloadArtifact implementation to see current state
rg -B 5 -A 30 'func DownloadArtifact' artifacts/controllers.go

Repository: flanksource/mission-control
Length of output: 982
🏁 Script executed:
# Look for any artifact authorization checks (HasPermission calls with artifacts)
rg -C 5 'artifact.*HasPermission|HasPermission.*artifact' --type=go -i

Repository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Find artifact model definition
find . -name "*.go" | xargs grep -l 'type Artifact struct' 2>/dev/null

Repository: flanksource/mission-control
Length of output: 55
🏁 Script executed:
# Search for Artifact in models directory
fd Artifact models/ --type f

Repository: flanksource/mission-control
Length of output: 165
🏁 Script executed:
# Look for artifact relationships (fields that reference parent entities)
rg 'PlaybookRunActionID|ConfigChangeID|CheckID' --type go -A 2 -B 2 | head -50Repository: flanksource/mission-control
Length of output: 2245
🏁 Script executed:
# Check if artifacts table has any scoping fields
rg -A 30 'type Artifact' models/ --type goRepository: flanksource/mission-control
Length of output: 118
🏁 Script executed:
# Find all Go files with Artifact struct definition
rg 'type Artifact struct' --type goRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Look at artifacts/controllers.go for context around the handler
cat -n artifacts/controllers.go | head -50Repository: flanksource/mission-control
Length of output: 1787
🏁 Script executed:
# Check if artifacts have parent entity relationships by looking at ListArtifacts
rg -A 20 'func ListArtifacts' artifacts/controllers.goRepository: flanksource/mission-control
Length of output: 773
🏁 Script executed:
# Look for the Artifact model definition - check duty package imports
rg 'models.Artifact' artifacts/controllers.go -B 3 -A 3Repository: flanksource/mission-control
Length of output: 306
🏁 Script executed:
# Check query.ArtifactsByCheck to see if it filters by user permissions
find . -name "*.go" -type f | xargs grep -l 'ArtifactsByCheck' 2>/dev/nullRepository: flanksource/mission-control
Length of output: 97
🏁 Script executed:
# Look for duty/models or check what models are available locally
git ls-files | grep -i artifact | grep -i modelRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Check if there are any per-artifact permission checks in the playbook/mcp code
rg -B 3 -A 3 'FindArtifact.*ctx|artifact.*permission' --type go -i | head -80Repository: flanksource/mission-control
Length of output: 1144
🏁 Script executed:
# Check the Artifact model fields to understand scoping
rg -A 15 'type Artifact struct' --type go | head -50Repository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Look at the query functions to see how they filter artifacts
grep -n 'ArtifactsByCheck\|ArtifactsByPlaybookRun\|ArtifactsByConfigChange' artifacts/controllers.goRepository: flanksource/mission-control
Length of output: 256
🏁 Script executed:
# Check if there's a git history or diff showing what changed
git log --oneline -20 -- artifacts/controllers.go 2>/dev/null || echo "No git history available"Repository: flanksource/mission-control
Length of output: 239
🏁 Script executed:
# Check how MCP uses FindArtifact and what context it provides
cat mcp/artifacts.goRepository: flanksource/mission-control
Length of output: 4491
🏁 Script executed:
# Search for ctx.Blobs() implementation to see if it has authorization
rg 'func.*Blobs' --type goRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Look at the refactoring commit to understand the change
git show 8d61cff --stat 2>/dev/null | head -30Repository: flanksource/mission-control
Length of output: 1533
🏁 Script executed:
# Search for any per-artifact authorization or scoping in the codebase
rg 'artifact.*scope|artifact.*playbook|artifact.*check|artifact.*config' --type go -i | grep -v test | head -40Repository: flanksource/mission-control
Length of output: 1480
🏁 Script executed:
# Check if artifacts have PlaybookRunID, CheckID, ConfigChangeID fields
rg 'PlaybookRunID|PlaybookRunActionID|CheckID|ConfigChangeID|ConfigID' --type go | grep -i artifact | head -20Repository: flanksource/mission-control
Length of output: 453
🏁 Script executed:
# Look at the actual refactor diff for DownloadArtifact
git show 8d61cff -- artifacts/controllers.go 2>/dev/null | head -100Repository: flanksource/mission-control
Length of output: 3632
🏁 Script executed:
# Check if ctx.Blobs() has any authorization or scoping
rg -B 5 -A 15 'func.*Blobs' --type goRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Look at the complete ListArtifacts function
rg -A 40 'func ListArtifacts' artifacts/controllers.goRepository: flanksource/mission-control
Length of output: 1272
🏁 Script executed:
# Check the database schema or model for Artifact to see foreign keys
rg 'type Artifact' -A 25 --type go | grep -E 'ID|PlaybookRun|Check|Config|Action'Repository: flanksource/mission-control
Length of output: 368
🏁 Script executed:
# Look for the Blobs() implementation in duty or incident-commander
find . -name "*.go" -type f | xargs grep -l 'func.*Blobs()' 2>/dev/nullRepository: flanksource/mission-control
Length of output: 55
🏁 Script executed:
# Check if there's an Artifact model definition in duty imports
rg 'duty.*Artifact|models.Artifact' artifacts/controllers.go -B 2 -A 2Repository: flanksource/mission-control
Length of output: 167
🏁 Script executed:
# Look for any middleware or RLS that might filter artifacts by parent scope
rg 'RLSMiddleware|row.*level.*security|artifact.*scope' --type go -i | head -30Repository: flanksource/mission-control
Length of output: 1230
🏁 Script executed:
# Check if the query functions filter by user scope
rg -A 10 'ArtifactsByCheck|ArtifactsByPlaybookRun|ArtifactsByConfigChange' --type go | head -50Repository: flanksource/mission-control
Length of output: 1147
🏁 Script executed:
# Check if artifacts have PlaybookRunID, CheckID, ConfigID or similar fields
rg 'PlaybookRunActionID|CheckID|ConfigChangeID' --type go | grep -i artifact | head -20Repository: flanksource/mission-control
Length of output: 214
🏁 Script executed:
# Look at the previous version of DownloadArtifact before the refactor
git show 8d61cff~1:artifacts/controllers.go 2>/dev/null || echo "Previous version not available"Repository: flanksource/mission-control
Length of output: 101
🏁 Script executed:
# Check the RLS middleware implementation
rg -A 20 'func RLSMiddleware' --type goRepository: flanksource/mission-control
Length of output: 1156
🏁 Script executed:
# Look for any authorization checks in ListArtifacts that might be inherited
rg -B 5 'ArtifactsByCheck\|ArtifactsByPlaybookRun\|ArtifactsByConfigChange' query/ --type go 2>/dev/null | head -80Repository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Check if artifacts route has RLSMiddleware
rg -B 5 -A 5 'ObjectArtifact.*ActionRead' artifacts/controllers.goRepository: flanksource/mission-control
Length of output: 459
🏁 Script executed:
# Look at auth.WithRLS to understand how it works
rg -A 20 'func WithRLS' --type goRepository: flanksource/mission-control
Length of output: 902
🏁 Script executed:
# Check if there are any tests showing authorization model for artifacts
rg -B 5 -A 10 'artifact.*permission|artifact.*rbac|artifact.*scope' tests/ --type go -i | head -100Repository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Look for artifact scoping in db/rbac or auth
rg 'artifact' db/rbac.go auth/rls.go -iRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Check if artifacts route should have RLSMiddleware by looking at similar patterns
rg -B 10 'RLSMiddleware' echo/serve.go | grep -A 10 'catalog\|resources/search'Repository: flanksource/mission-control
Length of output: 651
🏁 Script executed:
# Look for any artifact-specific authorization checks elsewhere
rg 'artifact.*ctx.Subject|ctx.Subject.*artifact' --type go -iRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Check if DownloadArtifact is called in any test to understand expected behavior
rg 'DownloadArtifact|/download/' tests/ --type go -iRepository: flanksource/mission-control
Length of output: 53
🏁 Script executed:
# Look at the git log to see if there are any related issues/PRs
git log --oneline --all --grep='artifact' -i 2>/dev/null | head -20Repository: flanksource/mission-control
Length of output: 1318
🏁 Script executed:
# Look for how artifacts are related to parent entities in the database or models
rg -B 5 -A 15 'ArtifactsByPlaybookRun|ArtifactsByCheck|ArtifactsByConfigChange' --type go | head -100Repository: flanksource/mission-control
Length of output: 1769
🏁 Script executed:
# Check if RLS is applied to artifacts queries
rg -A 10 'GetRLSPayload' auth/rls.goRepository: flanksource/mission-control
Length of output: 712
🏁 Script executed:
# Look at what the RLS payload filters on
rg 'postgres.*rls|SetPostgresSessionRLS' --type go -A 5 | head -50Repository: flanksource/mission-control
Length of output: 3649
Confirm per-artifact authorization is still enforced after dropping the DB lookup.
The refactoring changed from `db.FindArtifact` plus a scope-aware connection lookup to direct blob-storage access by UUID. Since `ListArtifacts` explicitly requires parent entity IDs (check, playbook_run, config_change) to filter results, artifacts are clearly scoped to parent entities. The new `DownloadArtifact` eliminates the database check that previously validated artifact scope, replacing it with only the route-level `rbac.Authorization(policy.ObjectArtifact, policy.ActionRead)`.
Any user with `Artifact:Read` can now download any artifact by UUID, bypassing per-artifact scope restrictions. Either confirm this coarse access model is intended, or restore per-artifact authorization by re-introducing the artifact lookup to verify scope visibility before streaming.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@artifacts/controllers.go` around lines 73 - 91, DownloadArtifact now reads
blobs directly by UUID which bypasses per-artifact scope checks; restore the DB
lookup and scope/permission validation before streaming. After parsing
artifactID in DownloadArtifact, call the existing db.FindArtifact (or the
repository method that returns the artifact and its parent IDs) and verify the
caller can see that artifact (re-use the same scope/visibility logic
ListArtifacts relies on or invoke the rbac.Authorization(policy.ObjectArtifact,
policy.ActionRead) with the artifact's parent context) and only then proceed to
call ctx.Blobs().Read(artifactID) and stream; ensure
blobs.Close()/data.Content.Close() remain in place.
```diff
 	if filename := c.QueryParam("filename"); filename != "" {
 		c.Response().Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", filename))
 	}
-	defer file.Close()
-	return c.Stream(http.StatusOK, artifact.ContentType, file)
+	return c.Stream(http.StatusOK, data.ContentType, data.Content)
```
Content-Disposition default filename regression confirmed.
The `models.Artifact` struct includes a `Filename` field, which was previously available when the endpoint performed a direct DB lookup via `db.FindArtifact()`. The refactored code now retrieves artifacts via `ctx.Blobs().Read(artifactID)`, which returns only `ContentType`, `ContentLength`, and `Content`; the `Filename` field is no longer exposed. As a result, when no `?filename=` query parameter is provided, browsers will fall back to the artifact UUID or a generic default, which is a user-visible regression.
This represents an architectural shift: the new `Blobs` interface abstracts away artifact metadata. If `ctx.Blobs().Read()` cannot be extended to expose the filename, you will need to restore the DB lookup to retrieve the artifact's stored filename and use it as the fallback default.
Additionally, verify whether the removal of the DB lookup eliminates any per-artifact authorization or tenancy scope validation that was previously enforced.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@artifacts/controllers.go` around lines 97 - 100, The refactor removed the
filename by switching from db.FindArtifact() to ctx.Blobs().Read(artifactID);
restore a fallback so responses include a sensible Content-Disposition filename:
either extend the Blobs interface/Read return value to include Filename (and
update its implementations) or re-add a DB lookup using models.Artifact via
db.FindArtifact(artifactID) to read Artifact.Filename and use it when
c.QueryParam("filename") is empty; also ensure you preserve any per-artifact
authorization/tenancy checks that existed with the DB path (re-run the same
auth/tenant validations before returning the blob).
| "/revoke", | ||
| "/device_authorization", | ||
| "/endsession", | ||
| "/end_session", |
Endpoint path mismatch: `/endsession` registered but `/end_session` whitelisted.
The OIDC provider in `auth/oidc/routes.go:44` registers the endpoint at `/endsession` (no underscore), but the skip-auth middleware in `auth/middleware.go:75` lists `/end_session` (with underscore). This mismatch will cause the actual OIDC endpoint to be blocked by authentication, breaking the end-session flow. Update the skip-auth list to `/endsession`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@auth/middleware.go` at line 75, Update the skip-auth whitelist entry in the
middleware where the list of paths is defined (in auth/middleware.go) to match
the OIDC route registration: replace the incorrect "/end_session" entry with
"/endsession" so the skipAuth middleware (used by the auth middleware function
that checks the whitelist) allows the OIDC provider's end session endpoint
registered in auth/oidc/routes.go.
```diff
-	issuer := strings.TrimRight(api.FrontendURL, "/")
+	issuer := strings.TrimRight(api.PublicURL, "/")
```
Issuer now hard-coded to `PublicURL`, breaking Basic-mode OIDC.
In `auth/middleware.go`, Basic mode still mounts the OIDC provider at `api.FrontendURL` (lines 109 and 112), while only Kratos mode was switched to `api.PublicURL`. Tokens issued by the Basic-mode provider will therefore carry `iss == FrontendURL` and fail the `iss != issuer` check here whenever `PublicURL != FrontendURL`, silently rejecting all OIDC tokens in that mode.
Either mount Basic-mode OIDC on `PublicURL` too, or derive the issuer from the same source used to mount the routes (e.g. a shared `api.OIDCIssuer`).
```diff
 func openBrowser(url string) error {
 	switch runtime.GOOS {
 	case "darwin":
-		cmd = "open"
-		args = []string{url}
+		return exec.Command("open", url).Start()
 	case "windows":
-		cmd = "cmd"
-		args = []string{"/c", "start", url}
+		return exec.Command("cmd", "/c", "start", "", url).Start()
 	default:
-		cmd = "xdg-open"
-		args = []string{url}
+		return exec.Command("xdg-open", url).Start()
 	}
```
Windows `cmd /c start` breaks on URLs with ampersands in query parameters.
`authURL` built at lines 98–105 contains multiple `&` query separators (state, nonce, code_challenge). On Windows, `exec.Command("cmd", "/c", "start", "", url)` passes the URL through `cmd.exe`, which interprets `&` as a command separator. The browser receives only the URL up to the first `&`, breaking the OIDC flow (the missing state/nonce/code_challenge causes callback validation to fail) and introducing a command-injection risk if `url` ever becomes attacker-influenced.
Use `exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()` instead, which bypasses `cmd.exe` parsing and handles `&` correctly. Alternatively, `exec.Command("explorer", url).Start()` is simpler and also avoids the issue.
🛠️ Suggested fix

```diff
 func openBrowser(url string) error {
 	switch runtime.GOOS {
 	case "darwin":
 		return exec.Command("open", url).Start()
 	case "windows":
-		return exec.Command("cmd", "/c", "start", "", url).Start()
+		return exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
 	default:
 		return exec.Command("xdg-open", url).Start()
 	}
 }
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func openBrowser(url string) error {
	switch runtime.GOOS {
	case "darwin":
		return exec.Command("open", url).Start()
	case "windows":
		return exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
	default:
		return exec.Command("xdg-open", url).Start()
	}
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/auth_login.go` around lines 162 - 170, The openBrowser function uses
exec.Command("cmd", "/c", "start", "", url) on Windows which causes cmd.exe to
parse ampersands and breaks OIDC URLs; replace that branch in openBrowser to
call a command that bypasses cmd.exe parsing (e.g., use exec.Command("rundll32",
"url.dll,FileProtocolHandler", url).Start() or exec.Command("explorer",
url).Start()) so URLs with & are passed intact and avoid command-injection
risks; update the Windows case in openBrowser accordingly and ensure Start() is
invoked on the returned *Cmd.
```go
func GetRBACAccessByConfigIDs(ctx context.Context, configIDs []uuid.UUID) ([]RBACAccessRow, error) {
	return GetRBACAccess(ctx, nil, false, configIDs...)
}
```
Critical: `GetRBACAccessByConfigIDs` returns all rows when `configIDs` is empty.
When `configIDs` is empty, the call becomes `GetRBACAccess(ctx, nil, false)` where both selectors and configIDs are empty; the WHERE clause at line 82 is skipped and the query runs with no `config_id` filter, returning the entire `config_access_summary` table. The actual call site at `report/catalog/report.go:278` can pass empty `targetIDs` when `tree.OutgoingIDs()` finds no outgoing edges, making this a real data-scope hazard.
Note: `GetGroupMembersForConfigs` already guards this correctly (lines 127–129); mirror that here.
🔒 Proposed fix

```diff
 func GetRBACAccessByConfigIDs(ctx context.Context, configIDs []uuid.UUID) ([]RBACAccessRow, error) {
+	if len(configIDs) == 0 {
+		return nil, nil
+	}
 	return GetRBACAccess(ctx, nil, false, configIDs...)
 }
```

Optionally also reject empty inputs inside `GetRBACAccess` itself so that `rbac_report.Report` with no selectors and no IDs cannot accidentally dump the whole table:

```diff
 func GetRBACAccess(ctx context.Context, selectors []types.ResourceSelector, recursive bool, configIDs ...uuid.UUID) (results []RBACAccessRow, err error) {
 	timer := dutyQuery.NewQueryLogger(ctx).Start("RBACAccess").Arg("configIDs", len(configIDs)).Arg("selectors", len(selectors))
 	defer timer.End(&err)
+
+	if len(selectors) == 0 && len(configIDs) == 0 {
+		return nil, ctx.Oops().Code(dutyAPI.EINVALID).Errorf("at least one selector or config id is required")
+	}
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func GetRBACAccessByConfigIDs(ctx context.Context, configIDs []uuid.UUID) ([]RBACAccessRow, error) {
	if len(configIDs) == 0 {
		return nil, nil
	}
	return GetRBACAccess(ctx, nil, false, configIDs...)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/rbac.go` around lines 41 - 43, GetRBACAccessByConfigIDs currently calls
GetRBACAccess with no selectors and can return the entire table when configIDs
is empty; change GetRBACAccessByConfigIDs to explicitly guard for
len(configIDs)==0 and return an empty []RBACAccessRow (nil, nil) immediately
(mirror the behavior in GetGroupMembersForConfigs), and optionally add a
defensive check inside GetRBACAccess to return an empty result or error when
both selectors and configIDs are empty to prevent accidental full-table scans.
```go
func resolveSubjectAndParents(ctx context.Context, configItems []models.ConfigItem, configMap map[string]models.ConfigItem) (*models.ConfigItem, []models.ConfigItem) {
	if len(configItems) == 0 {
		return nil, nil
	}

	first := configItems[0]

	var parents []models.ConfigItem
	current := first
	for current.ParentID != nil {
		parent, ok := configMap[current.ParentID.String()]
		if !ok {
			loaded, err := query.GetConfigsByIDs(ctx, []uuid.UUID{*current.ParentID})
			if err != nil || len(loaded) == 0 {
				break
			}
			parent = loaded[0]
			configMap[parent.ID.String()] = parent
		}
		parents = append(parents, parent)
		current = parent
	}

	// Reverse so root is first
	for i, j := 0, len(parents)-1; i < j; i, j = i+1, j-1 {
		parents[i], parents[j] = parents[j], parents[i]
	}

	return &first, parents
}
```
Add cycle protection when walking the `ParentID` chain.
`for current.ParentID != nil` will loop forever if the parent chain ever contains a cycle (e.g., self-parent or A→B→A from bad data). Track visited IDs and break when a repeat is seen.
🛡️ Proposed fix

```diff
 func resolveSubjectAndParents(ctx context.Context, configItems []models.ConfigItem, configMap map[string]models.ConfigItem) (*models.ConfigItem, []models.ConfigItem) {
 	if len(configItems) == 0 {
 		return nil, nil
 	}
 	first := configItems[0]
 	var parents []models.ConfigItem
 	current := first
+	visited := map[uuid.UUID]struct{}{current.ID: {}}
 	for current.ParentID != nil {
+		if _, seen := visited[*current.ParentID]; seen {
+			break
+		}
 		parent, ok := configMap[current.ParentID.String()]
 		if !ok {
 			loaded, err := query.GetConfigsByIDs(ctx, []uuid.UUID{*current.ParentID})
 			if err != nil || len(loaded) == 0 {
 				break
 			}
 			parent = loaded[0]
 			configMap[parent.ID.String()] = parent
 		}
+		visited[parent.ID] = struct{}{}
 		parents = append(parents, parent)
 		current = parent
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rbac_report/report.go` around lines 195 - 224, The loop in
resolveSubjectAndParents can infinite-loop on cyclic parent chains (self-parent
or A→B→A); add cycle protection by tracking visited IDs (e.g., a
map[string]bool) keyed by current.ID or *current.ParentID and check before
following a parent: if the parent ID is already visited, break the loop (or
return error/stop traversal). Update resolveSubjectAndParents to mark the
starting item's ID as visited, check visited before loading/using parent from
configMap or query.GetConfigsByIDs, and stop appending/traversing when a repeat
is detected so the parents slice cannot grow indefinitely.
```typescript
const grouped = new Map<string, { configName: string; configType?: string; changes: ConfigChange[] }>();
for (const c of withArtifacts) {
  const key = c.configName || c.configID || 'unknown';
  if (!grouped.has(key)) {
    grouped.set(key, { configName: c.configName || 'Unknown', configType: c.configType, changes: [] });
  }
  grouped.get(key)!.changes.push(c);
```
Group appendix entries by stable config identity, not `configName`.
Line 17 uses `configName` as the primary grouping key, so two different configs with the same display name collapse into one section. In multi-namespace or same-name environments, that will mix screenshots and change history from unrelated resources. Use `configID` as the grouping key and keep the name only for display.
Suggested fix

```diff
 const grouped = new Map<string, { configName: string; configType?: string; changes: ConfigChange[] }>();
 for (const c of withArtifacts) {
-  const key = c.configName || c.configID || 'unknown';
+  const key = c.configID || `${c.configType || ''}:${c.configName || 'unknown'}`;
   if (!grouped.has(key)) {
-    grouped.set(key, { configName: c.configName || 'Unknown', configType: c.configType, changes: [] });
+    grouped.set(key, {
+      configName: c.configName || c.configID || 'Unknown',
+      configType: c.configType,
+      changes: [],
+    });
   }
   grouped.get(key)!.changes.push(c);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const grouped = new Map<string, { configName: string; configType?: string; changes: ConfigChange[] }>();
for (const c of withArtifacts) {
  const key = c.configID || `${c.configType || ''}:${c.configName || 'unknown'}`;
  if (!grouped.has(key)) {
    grouped.set(key, {
      configName: c.configName || c.configID || 'Unknown',
      configType: c.configType,
      changes: [],
    });
  }
  grouped.get(key)!.changes.push(c);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@report/components/ArtifactAppendix.tsx` around lines 15 - 21, The current
grouping uses configName as the key which collapses different configs with the
same display name; change the grouping key to use configID (falling back to
configName or 'unknown' only for display) so entries are grouped by stable
identity: update the logic that builds grouped (the Map named grouped over
withArtifacts) to compute key = c.configID || c.configName || 'unknown' and keep
c.configName/c.configType only in the stored value (changes: ConfigChange[]),
ensuring grouped.get(key)!.changes.push(c) remains unchanged.
```typescript
if (diffDays <= 30) {
  const weekStart = new Date(d);
  weekStart.setDate(d.getDate() - d.getDay() + 1);
  const weekEnd = new Date(weekStart);
  weekEnd.setDate(weekStart.getDate() + 4);
  const fmt = (dt: Date) => dt.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
  return { key: `week-${fmt(weekStart)}`, label: `${fmt(weekStart)} – ${fmt(weekEnd)}`, dateFormat: 'monthDay' };
}
```
`getTimeBucket` puts Sundays in the wrong week.
`weekStart.setDate(d.getDate() - d.getDay() + 1)` assumes `getDay()` runs from Monday = 1 through Saturday = 6. For Sunday (`getDay() === 0`) this becomes `d.getDate() + 1`, i.e. the next Monday, not the previous one. A Sunday timestamp ends up in a different week bucket than the Saturday that immediately preceded it.
Additionally, `weekEnd = weekStart + 4` produces a Mon–Fri range, so any Saturday in the 7–30 day window gets a bucket label that excludes the day itself (e.g. a Saturday shows as "Oct 13 – Oct 17"). If weekend grouping is intentional, consider extending the range to Sunday.
🐛 Proposed fix

```diff
 if (diffDays <= 30) {
   const weekStart = new Date(d);
-  weekStart.setDate(d.getDate() - d.getDay() + 1);
+  // Shift so Monday is day 0; Sunday (0) maps to 6.
+  weekStart.setDate(d.getDate() - ((d.getDay() + 6) % 7));
   const weekEnd = new Date(weekStart);
-  weekEnd.setDate(weekStart.getDate() + 4);
+  weekEnd.setDate(weekStart.getDate() + 6);
   const fmt = (dt: Date) => dt.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
   return { key: `week-${fmt(weekStart)}`, label: `${fmt(weekStart)} – ${fmt(weekEnd)}`, dateFormat: 'monthDay' };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@report/components/utils.ts` around lines 51 - 58, The week-bucketing logic in
getTimeBucket mishandles Sundays and creates a Mon–Fri label; adjust weekStart
and weekEnd calculation to compute the previous Monday correctly and include the
full week: compute an offset like (d.getDay()+6)%7 (days since Monday) and set
weekStart by subtracting that offset from d (use weekStart.setDate(d.getDate() -
offset)); set weekEnd to weekStart + 6 days to cover Monday–Sunday (instead of
+4), keep using fmt for label and key (e.g., `${fmt(weekStart)} –
${fmt(weekEnd)}`) so Sunday timestamps fall into the same week as the preceding
Saturday.
```go
response, err := client.R(ctx).
	Header("Content-Type", mw.FormDataContentType()).
	Post("/render", &body)
if err != nil {
	return nil, fmt.Errorf("facet render request failed: %w", err)
}
if !response.IsOK() {
	errBody, _ := response.AsString()
	return nil, fmt.Errorf("facet render failed (status %d): %s", response.StatusCode, errBody)
}

if format == "html" {
	return io.ReadAll(response.Body)
}

renderResult, err := response.AsJSON()
if err != nil {
	return nil, fmt.Errorf("failed to parse render response: %w", err)
}
resultURL, _ := renderResult["url"].(string)
if resultURL == "" {
	return nil, fmt.Errorf("render response missing 'url' field")
}

pdfResponse, err := client.R(ctx).Get(resultURL)
if err != nil {
	return nil, fmt.Errorf("failed to fetch rendered result: %w", err)
}
if !pdfResponse.IsOK() {
	errBody, _ := pdfResponse.AsString()
	return nil, fmt.Errorf("result fetch failed (status %d): %s", pdfResponse.StatusCode, errBody)
}

return io.ReadAll(pdfResponse.Body)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Inspect how commons/http handles Body close after AsJSON/AsString/raw ReadAll access.
fd -t f -e go . | xargs rg -l 'commons/http' 2>/dev/null | head -20
rg -nP --type=go -C3 '\bR\(ctx\)\.' -g '!**/vendor/**' | head -80
rg -nP --type=go -C5 'response\.Body' -g '!**/vendor/**' | head -60
# Look at other consumers of commons/http to see the prevailing pattern
rg -nP --type=go -C3 'AsString\(\)|AsJSON\(\)' -g '!**/vendor/**' | head -60
```

Repository: flanksource/mission-control

Length of output: 10621
🌐 Web query:
flanksource commons http Response AsJSON AsString body close behavior
💡 Result:
flanksource/commons is a Go library of common utilities (https://github.com/flanksource/commons, latest v1.50.2 as of 2026-04-09). It likely includes an HTTP client wrapper with a Response type featuring methods like AsJSON and AsString for parsing the response body into JSON objects or strings. These methods read the response body (io.ReadCloser). In standard Go HTTP practice (net/http), the response.Body must be fully read and closed to allow connection reuse and prevent resource leaks. Closing without full read may prevent reuse but still closes the body; neither alone is sufficient if not reading fully. For flanksource/commons, follow the same: call AsJSON or AsString (reads body), then explicitly call Close on the Response or body if provided, e.g., resp := client.Get(url); json := resp.AsJSON; defer resp.Close or resp.Body.Close. No official docs found detailing exact API, but behavior aligns with Go conventions (sources 1,2,4,5,8). Check pkg.go.dev/github.com/flanksource/commons/http or repo source for precise signature.
Citations:
- 1: https://github.com/flanksource/commons
- 2: https://pkg.go.dev/github.com/flanksource/commons
- 3: https://groups.google.com/g/golang-nuts/c/148riho42sU
- 4: https://andrii-kushch.medium.com/is-it-necessary-to-close-the-body-in-the-http-response-object-in-golang-171c44c9394d
- 5: https://go.dev/src/net/http/client.go
Add `defer response.Body.Close()` and `defer pdfResponse.Body.Close()` to prevent connection leaks.
Lines 177 and 198 call `io.ReadAll(response.Body)` and `io.ReadAll(pdfResponse.Body)` without closing the bodies. `io.ReadAll()` reads but does not close the underlying `io.ReadCloser`. Without an explicit `Close()`, the response bodies remain open, preventing HTTP connection reuse and leaking TCP connections.
Other code in the repository (e.g., connection/check.go) explicitly uses `defer response.Body.Close()` after similar response operations, confirming the pattern. Even where `response.AsJSON()` or `response.AsString()` is called (lines 180, 172, and 194), those methods do not auto-close the body.
Add defer statements immediately after creating each response:
- After line 167 (`response, err := client.R(ctx)...`): `defer response.Body.Close()`
- After line 189 (`pdfResponse, err := client.R(ctx)...`): `defer pdfResponse.Body.Close()`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@report/facet.go` around lines 165 - 199, The HTTP response bodies are not
closed causing connection leaks; after creating the first response (the variable
response from client.R(...).Post("/render", &body)) add a defer
response.Body.Close() immediately after the error check, and after creating
pdfResponse (the variable from client.R(...).Get(resultURL)) add a defer
pdfResponse.Body.Close() immediately after its error check so both
io.ReadAll(response.Body) and io.ReadAll(pdfResponse.Body) run with the bodies
properly closed.
Summary by CodeRabbit

New Features
- `catalog get`, `catalog report`, and `catalog tree` commands for detailed configuration exploration.

Improvements