
refactor(artifacts,auth,connection,db,cmd): refactor artifact download and OIDC flows with improved error handling #2985

Open
moshloop wants to merge 5 commits into main from pr/reports-squash

Conversation

moshloop (Member) commented Apr 17, 2026

Summary by CodeRabbit

  • New Features

    • Access control sections display user access and login audit logs within applications.
    • Catalog report generation with configuration relationships, changes, insights, and access summaries.
    • New catalog get, catalog report, and catalog tree commands for detailed configuration exploration.
    • PDF export option for views.
    • Enhanced RBAC report with improved access matrix visualization and user grouping.
  • Improvements

    • Better artifact download and handling.
    • Refined authentication and OIDC token validation.

…report generation

- Add .gavel.yaml configuration for pre-commit build step
- Update ci-test target to build binary and remove parallel flag
- Add static dependency to dev target
- Simplify tool installation commands by removing $(LOCALBIN)/ prefix
- Add report/kitchen-sink.json build target for TypeScript report generation
- Expand .gitignore with report artifacts and playwright config

These changes improve build consistency and add support for report generation in the build pipeline.
…pplications and views

Introduce new section types for displaying access control information and access logs in applications.

- Add AccessItem and AccessLogItem types to represent access data, along with AccessUIFilters and AccessLogsUIFilters for filtering
- Implement GetAccessForUIRef and GetAccessLogsForUIRef database functions to query access and audit log data, with filtering by search, config types, roles, user types, and time ranges
- Add PDF rendering options to ViewSpec with a FacetRenderOptions abstraction
- Update all generated deepcopy functions and CRD schemas to support the new types
- Add BuildCommit global variable for build metadata tracking
Add detection for HTML responses on JSON endpoints to help users identify when their backend URL points to the frontend instead of the /api endpoint.

Changes:
- Introduce ErrHTMLResponse error for when server returns HTML instead of JSON
- Add decodeJSON() helper that checks Content-Type header and response body for HTML indicators
- Add looksLikeHTML() to detect HTML by content-type or body prefix
- Update GetConnection() and TestConnection() to use decodeJSON()
- Add Accept: application/json header to all requests
- Add comprehensive test suite covering HTML detection scenarios

This improves the user experience by providing clear error messages when misconfiguration occurs.
…d and OIDC flows with improved error handling

Major refactoring across multiple components:

**artifacts/controllers.go:**
- Refactored ListArtifacts to use switch statement for path-based routing
- Added support for config_change artifact listing
- Simplified DownloadArtifact to use ctx.Blobs() instead of connection-based file access
- Improved error messages and removed unnecessary intermediate variables

**auth/middleware.go & auth/oidc/models.go:**
- Renamed api.FrontendURL to api.PublicURL for consistency
- Replaced custom StringList type with pq.StringArray
- Removed unnecessary database/sql/driver import

**auth/oidcclient/oidcclient.go:**
- Changed RandomBase64 to panic on crypto/rand failure instead of returning error
- Simplified function signature from (string, error) to string

**cmd/connection_browser.go:**
- Added loginWaitResult struct for better result handling
- Implemented interactive token selection with huh library
- Added signal handling for strict mode (required tokens)
- Improved timeout and auto-detection logic
- Added displayCapturedState function for cleaner output
- Enhanced user prompts and feedback messages

**connection/jwt.go:**
- Added ScopeCount() method to JWT
- Fixed audience parsing to handle both string and array formats

**db/rbac.go:**
- Added QueryLogSummary() method to RBACAccessRow
- Refactored GetRBACAccess with query logging and recursive config expansion
- Added GetRBACAccessByConfigIDs convenience function
- Added GetGroupMembersForConfigs for audit reporting
- Removed applyAccessSelectors in favor of inline logic

Refs #123
coderabbitai bot (Contributor) commented Apr 17, 2026

Walkthrough

This pull request introduces comprehensive catalog and findings report features with enhanced access control visibility. Key additions include new data models for access/access logs, catalog and findings report generation with support for PDF export, refactored report rendering infrastructure using a unified CLI approach, and expanded React/TSX components for report visualization. Multiple database queries are added to support filtering and auditing.

Changes

Cohort / File(s) Summary
Build & Configuration
.gavel.yaml, Makefile, .gitignore
Added pre-build hook, extended build target dependencies, ignore additional artifact/report outputs and Playwright assets.
API Core Models
api/application.go, api/global.go, api/catalog_report.go, api/scraper_report.go
Introduced SectionTypeAccess/SectionTypeAccessLogs and corresponding structs; added build metadata field BuildCommit; created comprehensive catalog report data structures with constructors; defined scraper metadata model ScraperInfo.
API View & RBAC Models
api/rbac_report.go, api/view.go
Extended RBACReport with subject/parents context; added ConfigClass/Path/ParentID to resources; introduced AccessUIFilters and AccessLogsUIFilters for view filtering.
Deepcopy Methods
api/zz_generated.deepcopy.go, api/v1/zz_generated.deepcopy.go
Generated deepcopy implementations for new types including AccessUIFilters, AccessLogsUIFilters, FacetRenderOptions, and ViewSpec.PDF.
Playbook Actions & Views
api/v1/playbook_actions.go, api/v1/view_types.go
Refactored FacetOptions to use embedded FacetRenderOptions; added optional PDF field to ViewSpec.
OIDC/Auth Updates
auth/middleware.go, auth/oidc/*, auth/oidcclient/oidcclient.go, auth/oidc_validate.go
Changed auth endpoint paths, replaced StringList with pq.StringArray, switched to PublicURL for OIDC config, simplified RandomBase64 signature to panic on error.
Database Queries
db/applications.go, db/rbac.go, db/rbac_test.go
Added GetAccessForUIRef and GetAccessLogsForUIRef for access filtering; refactored GetRBACAccess to accept recursive and configIDs; introduced group membership querying via GetGroupMembersForConfigs.
CLI Catalog Commands
cmd/catalog.go, cmd/catalog_get.go, cmd/catalog_get_test.go, cmd/catalog_report.go, cmd/catalog_tree.go
New get/report/tree subcommands for querying configs with relationships, changes, insights, and access summaries; refactored query parsing and output formatting.
CLI Auth & Context
cmd/auth_login.go, cmd/context.go, cmd/connection_browser.go, cmd/connection_test_cmd.go
Removed token printing, simplified PKCE input, added context management (ensureAPIBase), enhanced browser login with auto-detection and bearer token selection, added upgrade flow for API base discovery.
RBAC Report Refactoring
rbac_report/export.go, rbac_report/render_facet.go, rbac_report/report.go
Changed ByUser boolean to View string; added recursive option; enriched resources with config metadata; implemented resolveSubjectAndParents; unified rendering via report.RenderCLI.
Application/Artifact Layer
application/application.go, application/render_facet.go, artifacts/controllers.go
Added buildSection support for access/access-logs UIRef types; simplified facet rendering by delegating to report.RenderCLI; refactored artifact download to use blob storage; added config-change artifacts endpoint.
CRD & JSON Schemas
config/crds/mission-control.flanksource.com_*.yaml, config/schemas/*.schema.json
Extended CRDs with access/accessLogs filters in UIRef; added PDF export config to views; flattened FacetOptions PDF fields; added application and view schema definitions for new UI filter types.
Fixtures
fixtures/views/ui-ref-examples.yaml
Added example UI reference sections demonstrating access filters with staleness and MFA conditions.
Main Entry & Connection Utilities
main.go, connection/jwt.go, cmd/rbac.go, cmd/view.go
Propagated build commit metadata; added JWT scope counting; added report-source flag support to CLI commands.
Catalog Report Infrastructure (Go)
report/catalog/report.go, report/catalog/export.go, report/catalog/render_facet.go, report/catalog/settings.go, report/catalog/change_mappings.go
Implemented complete catalog report generation with CEL-based change categorization, settings management with thresholds, audit trail construction, and facet rendering entrypoints.
Catalog Report Tests & Config
report/catalog/report_test.go, report/catalog/settings_test.go, report/catalog/change_mappings_test.go, report/catalog/default-settings.yaml
Added comprehensive test coverage for report building, settings loading/overlay, and change mapping application; defined default category mappings for RBAC, backup, and deployment changes.
Report Core Infrastructure
report/facet.go, report/embed.go, report/config-types.ts, report/build-kitchen-sink.ts
Added unified RenderCLI/RenderHTTP rendering; embedded source asset extraction; defined TypeScript types for config/change/analysis models; created kitchen sink test data builder combining YAML and schema examples.
Report Type Definitions
report/catalog-report-types.ts, report/rbac-types.ts, report/scraper-types.ts, report/kitchen-sink/KitchenSinkTypes.ts
Defined TypeScript interfaces for catalog report data, RBAC metadata, scraper info, and composite kitchen sink data.
Report Display Components (React)
report/Application.tsx, report/CatalogReport.tsx, report/FindingsReport.tsx, report/RBACByUserReport.tsx, report/RBACMatrixReport.tsx, report/ViewReport.tsx, report/KitchenSink.tsx
Updated to use unified Document/Header/Footer layout; added new CatalogReport and FindingsReport pages; refactored to use imported header/footer/cover components; removed per-section page breaks.
Report Utility Components
report/components/PageHeader.tsx, report/components/PageFooter.tsx, report/components/CoverPage.tsx, report/components/AuditPage.tsx
Created reusable page layout components for headers, footers, cover pages with metadata/stats, and comprehensive audit pages with group membership listings.
Catalog Report Components
report/components/CatalogAccessSection.tsx, report/components/CatalogAccessLogsSection.tsx, report/components/CatalogList.tsx, report/components/ConfigTreeSection.tsx, report/components/ConfigRelationshipGraph.tsx
New components for rendering access records, access logs, config listings with RBAC matrices, relationship trees, and config relationship graphs.
Change Rendering Components
report/components/ConfigChangesSection.tsx, report/components/ConfigChangesExamples.tsx, report/components/change-section-utils.ts, report/components/RBACChanges.tsx, report/components/BackupChanges.tsx, report/components/DeploymentChanges.tsx
Implemented categorized change rendering with support for single-line, typed diff, and visual state variants; added specialized rendering for RBAC, backup, and deployment change types; centralized change classification and formatting utilities.
Insights & Detail Components
report/components/ConfigInsightsSection.tsx, report/components/ConfigItemCard.tsx, report/components/GitRef.tsx, report/components/ScraperCard.tsx, report/components/ArtifactAppendix.tsx
Added components for insights by severity/type, config metadata cards, Git reference display, scraper metadata cards, and artifact appendix with embedded images.
RBAC & Finding Components
report/components/RBACSummarySection.tsx, report/components/RBACUserSection.tsx, report/components/RBACMatrixSection.tsx, report/components/RBACChangelogSection.tsx, report/components/RBACCoverContent.tsx, report/components/rbac-visual.tsx, report/components/FindingsSection.tsx
Enhanced RBAC visualization with matrix tables, user details, group-based roles, and changelog; added RBAC visual primitives for identity icons and staleness indicators; added findings severity/category filtering.
Additional Report Components
report/components/IncidentsSection.tsx, report/components/FindingsSection.tsx, report/components/ApplicationDetails.tsx, report/components/BackupsSection.tsx, report/components/BackupActivityCalendar.tsx, report/components/BackupChanges.tsx, report/components/DynamicSection.tsx, report/components/ViewResultSection.tsx, report/components/utils.ts
Updated page-break avoidance styling; refactored backup calendar rendering; enhanced dynamic section variant selection; added date/time formatting utilities with time-bucketing support.
Kitchen Sink Pages
report/kitchen-sink/*.tsx
Created comprehensive demo/test pages showcasing all report components: layout components, config details, changes, insights, RBAC matrix, catalog, findings, application details, and view results.
Report Schemas & Sample Data
report/finding-schema.json, report/sample-findings.json, report/icons.ts, report/mission-control.ts, report/kitchen-sink-data.ts
Added JSON Schema for findings with severity/platform/category constraints; provided sample security findings data; defined icon sets for outcomes/categories/kill-chain/identity/endpoint types; added kitchen sink data loading.
Report Package & Build
report/package.json, report/.gitignore, report/build-kitchen-sink.ts
Updated @flanksource/facet to v0.1.38; added icon dependencies (@iconify-json/*, @iconify/react); created kitchen sink data builder combining YAML and OpenAPI schema examples.
Scraper Report Support
report/scraper/scraper.go
Added BuildScraperInfo to construct scraper metadata with Git ops source info and spec-derived type lists.

@socket-security

Warning

Review the following alerts detected in dependencies.

According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.

Action: Warn | Severity: High
Obfuscated code: npm pdf-lib is 90.0% likely obfuscated

Confidence: 0.90

Location: Package overview

From: report/package-lock.json → npm/@flanksource/facet@0.1.38 → npm/pdf-lib@1.17.1


Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: Packages should not obfuscate their code. Consider not using packages with obfuscated code.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore npm/pdf-lib@1.17.1. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 14

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
auth/middleware.go (1)

109-112: ⚠️ Potential issue | 🟠 Major

Basic vs Kratos mode OIDC URL divergence looks unintentional.

Basic mode still passes api.FrontendURL to both oidc.MountRoutes and the log line, while Kratos mode (lines 131/134) was switched to api.PublicURL. Combined with oidc_validate.go now validating iss strictly against api.PublicURL, Basic-mode issued tokens will not validate when PublicURL != FrontendURL. Consider unifying both modes on api.PublicURL (or a shared issuer constant).

Proposed fix
-				if err := oidc.MountRoutes(e, ctx, api.FrontendURL, OIDCSigningKeyPath, htpasswdChecker, LookupPersonByUsername); err != nil {
+				if err := oidc.MountRoutes(e, ctx, api.PublicURL, OIDCSigningKeyPath, htpasswdChecker, LookupPersonByUsername); err != nil {
 					return fmt.Errorf("failed to mount OIDC routes: %w", err)
 				}
-				logger.Infof("OIDC provider enabled at %s", api.FrontendURL)
+				logger.Infof("OIDC provider enabled at %s", api.PublicURL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@auth/middleware.go` around lines 109 - 112, The Basic-mode branch is still
passing api.FrontendURL to oidc.MountRoutes and logger.Infof, causing issuer
mismatch with the strict iss validation in oidc_validate.go; update the
Basic-mode calls to use api.PublicURL (replace api.FrontendURL with
api.PublicURL in the oidc.MountRoutes call and the logger.Infof message) so both
Basic and Kratos modes use the same issuer URL and stay consistent with the iss
validation.
🟡 Minor comments (39)
report/rbac-types.ts-92-106 (1)

92-106: ⚠️ Potential issue | 🟡 Minor

Naming convention inconsistent with the rest of this file.

RBACResource/RBACUserResource use camelCase (configClass, parentId, createdAt, updatedAt), but the new ConfigItem uses snake_case (config_class, parent_id, created_at, updated_at). Because RBACReport now surfaces both shapes side-by-side (resources[] camelCase, subject/parents snake_case), consumers have to remember which casing applies where — easy to get wrong in templates/selectors.

Prefer aligning ConfigItem to the file’s camelCase convention (and normalizing server-side JSON tags), or — if snake_case is intentional because this comes straight from the DB model — at least document it inline so readers don't assume it's a typo.

Also applies to: 112-113

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/rbac-types.ts` around lines 92 - 106, ConfigItem uses snake_case keys
(config_class, parent_id, created_at, updated_at) which is inconsistent with the
file's camelCase conventions (e.g., RBACResource/RBACUserResource); change those
properties to camelCase (configClass, parentId, createdAt, updatedAt) in the
ConfigItem interface and update any serialization/json mapping or server-side
tags that produce snake_case so consumers see a consistent shape, or if
snake_case is required by the DB layer add an inline comment on ConfigItem
explaining it is intentionally DB-shaped and not normalized.
report/components/GitRef.tsx-53-55 (1)

53-55: ⚠️ Potential issue | 🟡 Minor

gitops.git.* fields should be optional.

The inline type declares url, branch, file, dir, link as required strings, yet line 54 optional-chains gitops?.git?.url (acknowledging git can be absent) and the underlying GitRef props treat every field as optional. Callers passing a partial gitops (e.g., { git: { url } }) will fail type-check.

Using the real source type is cleanest; otherwise mark every sub-field optional:

♻️ Suggested change
-export function GitRefFromSource({ gitops, size }: { gitops?: { git: { url: string; branch: string; file: string; dir: string; link: string } }; size?: 'xs' | 'sm' }) {
+export function GitRefFromSource({
+  gitops,
+  size,
+}: {
+  gitops?: { git?: { url?: string; branch?: string; file?: string; dir?: string; link?: string } };
+  size?: 'xs' | 'sm';
+}) {
   if (!gitops?.git?.url) return null;
   return <GitRef url={gitops.git.url} branch={gitops.git.branch} file={gitops.git.file} dir={gitops.git.dir} link={gitops.git.link} size={size} />;
 }

Better still, import/reuse the shared GitOpsSource type (the backend query.GitOpsSource the report JSON is generated from) so the shape stays in lock-step with the producer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/GitRef.tsx` around lines 53 - 55, The prop type for
GitRefFromSource declares gitops.git.url/branch/file/dir/link as required but
the code and GitRef treat them as optional; update the signature of
GitRefFromSource to use the real shared type (e.g., import and use GitOpsSource
/ query.GitOpsSource) or change the inline type so gitops?: { git?: { url?:
string; branch?: string; file?: string; dir?: string; link?: string } } to make
every git sub-field optional; ensure the call to GitRef still passes these
possibly-undefined values (GitRefFromSource -> GitRef) without forcing non-null
assertions so TypeScript type-checks succeed.
report/finding-schema.json-6-8 (1)

6-8: ⚠️ Potential issue | 🟡 Minor

Missing top-level id while relatedFindings references IDs.

context.relatedFindings is described as "IDs of related findings that may form a larger attack chain", but the schema doesn't define an id (or findingId) on the finding object itself, and additionalProperties: false means callers can't tack one on without the validator rejecting the payload. Either add an optional id to the top-level properties, or document that relatedFindings references IDs that live outside the finding envelope (e.g., in a wrapping collection).

Also applies to: 206-210

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/finding-schema.json` around lines 6 - 8, The schema omits a top-level
identifier while context.relatedFindings expects IDs; update the JSON schema
(the top-level "properties"/"required" block in finding-schema.json) to add an
optional "id" (or "findingId") property of type string (or string/UUID) and, if
you want it required, include it in the "required" array, or alternatively
update context.relatedFindings description to state those IDs are external;
ensure this new "id"/"findingId" property is added to "properties" and adjust
"additionalProperties" handling if needed so validators allow the identifier
referenced by "context.relatedFindings".
report/components/ConfigInsightsSection.tsx-64-69 (1)

64-69: ⚠️ Potential issue | 🟡 Minor

Sort puts unknown/missing statuses and severities first.

indexOf returns -1 for values not in statusOrder/SEVERITY_ORDER, so entries with a missing or unexpected status/severity sort before open/critical. You likely want them at the end. Also, severity comparison uses raw a.severity but InsightEntry falls back to 'info' — keep these consistent.

🔧 Proposed fix
   const sorted = [...analyses].sort((a, b) => {
     const statusOrder = ['open', 'silenced', 'resolved'];
-    const statusDiff = statusOrder.indexOf(a.status ?? '') - statusOrder.indexOf(b.status ?? '');
+    const rank = (arr: readonly string[], v: string | undefined) => {
+      const i = arr.indexOf(v ?? '');
+      return i === -1 ? arr.length : i;
+    };
+    const statusDiff = rank(statusOrder, a.status) - rank(statusOrder, b.status);
     if (statusDiff !== 0) return statusDiff;
-    return SEVERITY_ORDER.indexOf(a.severity as ConfigSeverity) - SEVERITY_ORDER.indexOf(b.severity as ConfigSeverity);
+    return rank(SEVERITY_ORDER, a.severity ?? 'info') - rank(SEVERITY_ORDER, b.severity ?? 'info');
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigInsightsSection.tsx` around lines 64 - 69, The sort
currently places unknown/missing statuses/severities first because indexOf
returns -1; update the comparator in the sorted computation so missing/unknown
values map to the end: for status use statusOrder.indexOf(a.status ?? '') and if
that yields -1 treat it as statusOrder.length (same for b), and for severity use
SEVERITY_ORDER.indexOf(a.severity ?? 'info') (or map -1 to
SEVERITY_ORDER.length) so the severity fallback matches InsightEntry's default
of 'info'; update the comparator used in the sort on analyses accordingly
(references: sorted, statusOrder, SEVERITY_ORDER, analyses, ConfigSeverity,
InsightEntry fallback).
report/icons.ts-90-90 (1)

90-90: ⚠️ Potential issue | 🟡 Minor

Encoding artifact in comment.

The decorative line at the end of "Actor Type Icons" contains a corrupted multi-byte sequence (───────��). Cosmetic but worth a quick fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/icons.ts` at line 90, Replace the corrupted multi-byte sequence at the
end of the decorative comment line that contains "Actor Type Icons" with a clean
ASCII-only decoration (e.g., use dashes or remove the stray characters) so the
comment reads something like "// ── Actor Type Icons
───────────────────────────────────────────────" or simply "// Actor Type
Icons"; update the comment in report/icons.ts where that string appears to
remove the encoding artifact.
report/components/rbac-visual.tsx-31-39 (1)

31-39: ⚠️ Potential issue | 🟡 Minor

IdentityInfo.icon is inconsistent with IDENTITY_ICON_NAMES.

identityType() sets icon: type (e.g. 'service', 'bot'), but the actual icon names registered via IDENTITY_ICON_NAMES are 'server' for service and 'bot'/'user'/'group' for the others. Any external consumer that reads info.icon to render an icon (instead of going through IdentityIcon) will get a wrong/missing icon for service. Either drop icon from IdentityInfo or set it from IDENTITY_ICON_NAMES[type].

Proposed fix
   const resolve = (type: IdentityType): IdentityInfo => ({
-    type, icon: type, color: IDENTITY_COLOR, label: IDENTITY_LABELS[type],
+    type, icon: IDENTITY_ICON_NAMES[type], color: IDENTITY_COLOR, label: IDENTITY_LABELS[type],
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/rbac-visual.tsx` around lines 31 - 39, The identityType
function currently assigns icon: type which mismatches registered icons; update
identityType to set icon using the canonical mapping (IDENTITY_ICON_NAMES[type])
instead of the raw IdentityType string (or remove the icon field from
IdentityInfo if icons are no longer needed). Locate identityType in
rbac-visual.tsx and change its resolve helper to use IDENTITY_ICON_NAMES[type]
(keep label from IDENTITY_LABELS[type] and color from IDENTITY_COLOR) so
consumers reading IdentityInfo.icon get the correct icon name.
api/catalog_report.go-11-16 (1)

11-16: ⚠️ Potential issue | 🟡 Minor

Trailing slash handling in ConfigPermalink.

If FrontendURL is configured with a trailing / (a common pattern), the resulting URL becomes https://host//catalog/<id>. Most browsers tolerate this, but it’s ugly in reports/emails and breaks strict URL comparisons. Consider trimming once:

Proposed fix
-	return fmt.Sprintf("%s/catalog/%s", FrontendURL, configID)
+	return fmt.Sprintf("%s/catalog/%s", strings.TrimRight(FrontendURL, "/"), configID)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/catalog_report.go` around lines 11 - 16, ConfigPermalink currently
concatenates FrontendURL and the path which yields a double slash if FrontendURL
ends with '/', so update ConfigPermalink to trim any trailing slashes from
FrontendURL before building the permalink (e.g., use
strings.TrimRight(FrontendURL, "/")) and then format "%s/catalog/%s"; keep the
early return when FrontendURL is empty and preserve the function name
ConfigPermalink and its signature.
report/icons.ts-95-96 (1)

95-96: ⚠️ Potential issue | 🟡 Minor

ICON_IDENTITY_MACHINE uses incorrect Y-offset in robot icon path.

The path starts at M6 4 but the official tabler:robot icon starts at M6 6. This shifts the icon up by 2 units and makes it visually distinct from the correct robot icon (and from ICON_SERVICE_ACCOUNT_MISUSE on line 62, which has the correct M6 6). Update line 96 to use M6 6 to match the official tabler:robot specification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/icons.ts` around lines 95 - 96, The robot icon path in
ICON_IDENTITY_MACHINE uses an incorrect Y-offset ("M6 4") causing a 2-unit
upward shift; update the path string passed to tabler in the
ICON_IDENTITY_MACHINE constant so the initial moveto is "M6 6" (matching the
official tabler:robot and the ICON_SERVICE_ACCOUNT_MISUSE entry) and keep the
rest of the path unchanged.
report/MatrixDemo.tsx-87-149 (1)

87-149: ⚠️ Potential issue | 🟡 Minor

Missing key props on <Cell> array children in demo code.

The cells arrays in this demo file lack key props on <Cell /> elements. While this is demo code, the production component RBACMatrixSection.tsx correctly uses roles.map((role) => <Indicator key={role} ... />), demonstrating the proper pattern. Consider aligning the demo with this approach for consistency and to avoid potential React warnings if MatrixTable iterates over the array.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/MatrixDemo.tsx` around lines 87 - 149, The demo rows in MatrixDemo.tsx
pass arrays of JSX <Cell> elements without keys causing potential React
warnings; update each cells array to assign stable key props to every <Cell />
(for example based on the column/role index or role id) so they mirror the
production pattern used in RBACMatrixSection.tsx (roles.map((role) => <Indicator
key={role} ... />)); locate the rows prop and add a unique key to each <Cell>
element in every cells array (or build the cells via a map that supplies the
key) to eliminate the missing-key warnings.
connection/jwt.go-95-99 (1)

95-99: ⚠️ Potential issue | 🟡 Minor

Multi-audience tokens silently drop extra entries.

Per RFC 7519 §4.1.3, aud can legitimately be an array with multiple values. Taking only arr[0] is fine if downstream code only needs a representative audience, but it can cause surprises when a token's relevant audience isn't first. If JWT.Audience is only used for display/debug (as suggested by Pretty()), this is acceptable — otherwise consider storing all audiences or matching the expected one.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@connection/jwt.go` around lines 95 - 99, The current parsing only takes
arr[0] from claims["aud"], which drops extra audiences; modify the JWT model and
parsing so you keep all audience entries: add a new field (e.g., Audiences
[]string) to the JWT struct, when claims["aud"] is []any iterate and collect all
string entries into Audiences (optionally set the existing Audience string to
Audiences[0] for backward compatibility), and update Pretty() (or any
display/debug routines) to use strings.Join(Audiences, ",") or otherwise show
the full list; ensure parsing still handles the single-string aud case into
Audiences as a single-entry slice.
report/embed.go-22-34 (1)

22-34: ⚠️ Potential issue | 🟡 Minor

Stat error is silently treated as a valid directory override.

When SourceDir is set but os.Stat fails (e.g., typo, wrong path, permission denied), ResolveSource returns (SourceDir, "") as if the path were a valid directory. Downstream facet rendering will then fail with an obscure error instead of falling back to embedded assets or surfacing the misconfiguration clearly.

A couple of options:

  • Return ("", "") on stat failure so rendering falls back to the embedded FS, or
  • Change the signature to (dir, entryFile string, err error) and propagate the error so the CLI flag handler can report it.
♻️ Fallback-to-embedded option
-	info, err := os.Stat(SourceDir)
-	if err != nil {
-		return SourceDir, ""
-	}
+	info, err := os.Stat(SourceDir)
+	if err != nil {
+		return "", ""
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/embed.go` around lines 22 - 34, ResolveSource currently treats any
os.Stat error as if SourceDir were a valid path and returns (SourceDir, ""),
which masks misconfiguration; change the os.Stat error branch in ResolveSource
to return ("", "") so callers fall back to the embedded FS instead of using an
invalid override: keep the ResolveSource signature, check the err from
os.Stat(SourceDir) and on any non-nil err return ("", "") (do not treat
SourceDir as a directory), leaving the existing handling for valid file vs
directory cases intact.
cmd/context.go-316-318 (1)

316-318: ⚠️ Potential issue | 🟡 Minor

Single Read may return partial/zero bytes.

resp.Body.Read(buf) is not guaranteed to fill the buffer in one call even when more data is available; for some servers the first read can return 0 bytes without EOF. The downstream strings.HasPrefix(body, "<") / "[" / "{" checks would then misclassify the response and skip the upgrade. Prefer io.ReadAll(io.LimitReader(resp.Body, 512)).

🛠️ Proposed fix
-	buf := make([]byte, 512)
-	n, _ := resp.Body.Read(buf)
-	body := strings.TrimLeft(string(buf[:n]), " \t\r\n")
+	raw, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
+	body := strings.TrimLeft(string(raw), " \t\r\n")

Add "io" to the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/context.go` around lines 316 - 318, The code uses a single
resp.Body.Read(buf) which can return partial/zero bytes; replace that read with
io.ReadAll(io.LimitReader(resp.Body, 512)) to reliably read up to 512 bytes,
handle and check the returned error, and then set body :=
strings.TrimLeft(string(raw), " \t\r\n"); ensure you add the "io" import and
propagate/log the read error as appropriate so the subsequent strings.HasPrefix
checks operate on the actual initial bytes.
report/kitchen-sink/ConfigComponentsPage.tsx-43-47 (1)

43-47: ⚠️ Potential issue | 🟡 Minor

All related-config cards will display configItem's timestamps.

created_at/updated_at are overridden from data.configItem for every row in sampleConfigs, so related configs get the primary item's timestamps instead of their own. The id: config.id override is also redundant with the spread. Consider reading timestamps from each config (and only falling back to data.configItem for the primary entry) if this page is meant to showcase real card metadata.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/kitchen-sink/ConfigComponentsPage.tsx` around lines 43 - 47, The
current mapping of sampleConfigs incorrectly forces every related config card to
use data.configItem's timestamps and redundantly overrides id; in the
sampleConfigs.map block (and the ConfigItemCard props) stop spreading
data.configItem timestamps into every config and instead pass each config's own
created/updated timestamps (e.g., config.createdAt / config.updatedAt or
config.created_at / config.updated_at depending on your model), only falling
back to data.configItem.createdAt/updatedAt for the primary item when necessary,
and remove the redundant id: config.id override since the spread already
includes id.
report/kitchen-sink/ConfigComponentsPage.tsx-14-15 (1)

14-15: ⚠️ Potential issue | 🟡 Minor

Guard data.relatedConfigs against undefined.

Line 15 defensively uses data.scrapers ?? [], but line 14 will throw Cannot read properties of undefined (reading 'slice') when relatedConfigs is absent. Please apply the same guard.

🛠️ Proposed fix
-  const sampleConfigs = [data.configItem, ...data.relatedConfigs.slice(0, 5)];
+  const sampleConfigs = [data.configItem, ...(data.relatedConfigs ?? []).slice(0, 5)];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/kitchen-sink/ConfigComponentsPage.tsx` around lines 14 - 15, The
sampleConfigs construction uses data.relatedConfigs.slice(0, 5) which will throw
if relatedConfigs is undefined; change the expression in the sampleConfigs
declaration (the symbol sampleConfigs) to guard relatedConfigs (e.g., use
data.relatedConfigs ?? [] or data.relatedConfigs?.slice(0,5) ?? []) so you
spread a safe array: keep data.configItem first and then append up to five
related configs from the guarded array.
report/catalog/default-settings.yaml-15-20 (1)

15-20: ⚠️ Potential issue | 🟡 Minor

Partial redundancy in backup.success rules for BACKUP_DB events.

The third backup.success rule (lines 10-11: changeType == "BACKUP_DB" && severity != "high") is a superset of the first rule's BACKUP_DB clause (lines 6-7, which include changeType == "BACKUP_DB" && severity == "low").

The category mapping evaluates rules in order and assigns the category only once (line 119 of change_mappings.go checks if category == ""). For BACKUP_DB events with severity "low", the first rule will match and assign the category, preventing the third rule from ever assigning it. The third rule only takes effect for severity values like "medium" or "info" that the first rule doesn't match.

Consider consolidating or clarifying the boundary: either remove the BACKUP_DB clause from the first rule's filter, or restrict the third rule to severity == "medium" (or similar) to make the intent explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/default-settings.yaml` around lines 15 - 20, The
backup.success rules overlap for BACKUP_DB events: the first backup.success
filter includes (changeType == "BACKUP_DB" && severity == "low") while a later
backup.success rule uses (changeType == "BACKUP_DB" && severity != "high"),
causing redundant/ambiguous matching because category assignment stops at first
match; resolve by making the intent explicit—either remove the BACKUP_DB clause
from the first rule so only the later rule handles BACKUP_DB events, or narrow
the later rule to an explicit set (e.g., changeType == "BACKUP_DB" && severity
== "medium") or alternatively change the first rule to only match severity ==
"low" for non-BACKUP_DB kinds; update the filters for the rules named
backup.success (and backup.failed if relevant) accordingly so there is no
overlap.
cmd/catalog_tree.go-39-41 (1)

39-41: ⚠️ Potential issue | 🟡 Minor

Format-string pitfall: don't pass err.Error() as the format argument.

logger.Errorf(err.Error()) interprets the error message as a printf format string. If the error text contains stray % characters (e.g. from a path or templated message), the log will emit %!(EXTRA ...) or, worse, read uninitialized varargs. Use a literal format:

🛡️ Proposed fix
-		if err := properties.LoadFile("mission-control.properties"); err != nil {
-			logger.Errorf(err.Error())
-		}
+		if err := properties.LoadFile("mission-control.properties"); err != nil {
+			logger.Errorf("%s", err)
+		}

Also worth confirming the intent of swallowing this error — if mission-control.properties is optional, a Warnf would be more accurate than Errorf.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_tree.go` around lines 39 - 41, Replace the unsafe logger call
that passes the error string as a format with a literal format and the error as
an argument: in the code around
properties.LoadFile("mission-control.properties") and the logger.Errorf call,
change the log invocation to use a format string (e.g. "failed to load
mission-control.properties: %v") and pass err as the parameter; also consider
using logger.Warnf instead of logger.Errorf if the properties file is optional
to reflect intent.
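The pitfall is easy to reproduce with the standard library. A minimal sketch (the `sprintf` alias and helper names are illustrative; the alias only exists so vet's printf check doesn't reject the deliberately-wrong call):

```go
package main

import "fmt"

// sprintf alias lets us demonstrate the bug without tripping go vet.
var sprintf = fmt.Sprintf

// renderUnsafe mimics logger.Errorf(err.Error()): the message itself
// becomes the format string, so any '%' in it is parsed as a verb.
func renderUnsafe(msg string) string { return sprintf(msg) }

// renderSafe mimics logger.Errorf("%s", err): printed verbatim.
func renderSafe(msg string) string { return fmt.Sprintf("%s", msg) }

func main() {
	msg := "load /tmp/100%_backup.properties: not found"
	fmt.Println(renderUnsafe(msg)) // '%_' is not a valid verb: output is corrupted
	fmt.Println(renderSafe(msg))   // printed verbatim
}
```

Any error message carrying a path, URL, or percentage can trigger this, which is why the explicit `"%s"` form is the safe default.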
report/build-kitchen-sink.ts-21-21 (1)

21-21: ⚠️ Potential issue | 🟡 Minor

Fragile cross-repo path to duty schema.

../../duty/schema/openapi/change-types.schema.json assumes duty is checked out as a sibling repository at a specific relative location. CI/contributor workflows that don't replicate that layout will fail with an opaque ENOENT from readFileSync at line 151. Consider either resolving the path through node_modules (if duty is a JS dep), copying the schema into the repo as a generated artifact, or at least surfacing a clearer error and allowing the path to be overridden via env var (e.g. CHANGE_TYPES_SCHEMA).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/build-kitchen-sink.ts` at line 21, The hardcoded relative path
assigned to schemaPath is fragile; change the resolution to first check an
environment override (CHANGE_TYPES_SCHEMA) and if not present attempt a robust
lookup via require.resolve or node_modules (e.g.,
resolve('duty/schema/openapi/change-types.schema.json')) before falling back to
the current relative path, and wrap the subsequent readFileSync usage in a
try/catch that throws a clear, actionable error mentioning the attempted paths
and the env var name; locate and update the schemaPath declaration and the code
that reads the file (the schemaPath constant and the readFileSync usage) to
implement this fallback and improved error handling.
report/CatalogReport.tsx-197-201 (1)

197-201: ⚠️ Potential issue | 🟡 Minor

De-duplicate changes by ID before passing to ArtifactAppendix.

When groupBy === 'config', data.changes, entry.changes, and group.changes may contain overlapping entries (same changes surfaced via different aggregations). The appendix will render these duplicates since it groups by config but doesn't de-duplicate by change ID. Consider deduplicating the array before passing it to ArtifactAppendix:

const uniqueChanges = Array.from(
  new Map((changes || []).map(c => [c.id, c])).values()
)
<ArtifactAppendix changes={uniqueChanges} />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/CatalogReport.tsx` around lines 197 - 201, The ArtifactAppendix is
receiving concatenated change arrays (from data.changes,
data.entries.flatMap(e=>e.changes), and data.configGroups.flatMap(g=>g.changes))
which can contain duplicate change objects when groupBy === 'config';
deduplicate by change id before passing to ArtifactAppendix by building a map
keyed by change.id (or the unique identifier used on change objects) and using
its values as the prop; replace the inline array spread with a single
deduplicated array and pass that to ArtifactAppendix (refer to ArtifactAppendix,
data.changes, data.entries, data.configGroups, and change.id).
report/components/CatalogList.tsx-20-22 (1)

20-22: ⚠️ Potential issue | 🟡 Minor

Minor: pluralization for singular counts.

When changeCount, insightCount, or accessCount equals 1, the rendered strings read as "1 changes", "1 insights", "1 access". Consider handling the singular form (e.g., "1 change", "1 insight", "1 access") for cleaner output, and pluralizing access to accesses when > 1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/CatalogList.tsx` around lines 20 - 22, The three
assignments producing "1 changes"/"1 insights"/"1 access" should handle singular
vs plural; update the logic around entry.changeCount, entry.insightCount and
entry.accessCount so row.changes = `${entry.changeCount} ${entry.changeCount ===
1 ? 'change' : 'changes'}`, row.insights = `${entry.insightCount}
${entry.insightCount === 1 ? 'insight' : 'insights'}`, and row.access =
`${entry.accessCount} ${entry.accessCount === 1 ? 'access' : 'accesses'}` (or
extract a small pluralize helper used for these three fields) so singular counts
use the correct singular word and access pluralizes to "accesses" when > 1.
report/components/CatalogList.tsx-12-17 (1)

12-17: ⚠️ Potential issue | 🟡 Minor

Inconsistent null-safety for entry.configItem.

entryToRow dereferences entry.configItem directly (ci.name, ci.type), but the key computation on Line 61 uses optional chaining entry.configItem?.id. If configItem is ever absent, the map callback will throw before the key fallback helps. Either tighten the type (non-optional configItem) or guard it consistently in entryToRow.

Also applies to: 60-62

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/CatalogList.tsx` around lines 12 - 17, entryToRow
dereferences entry.configItem without guarding it while elsewhere the code uses
optional chaining; update entryToRow (and the related map callback that computes
the key) to handle a possibly undefined configItem consistently: either tighten
the type so CatalogReportEntry.configItem is non-optional, or add a null-safe
guard in entryToRow (e.g., return safe fallbacks for name/type when
entry.configItem is undefined) and ensure the map key computation uses the same
guard pattern so no runtime dereference occurs; refer to the entryToRow function
and the map callback that uses entry.configItem?.id to make the changes.
report/kitchen-sink/LayoutComponentsPage.tsx-14-14 (1)

14-14: ⚠️ Potential issue | 🟡 Minor

Potential NPE if data.configItem is undefined.

config is dereferenced without guarding (config.type, config.name, config.status, etc.). If ConfigReportData.configItem is optional, this page will crash during render. Either guard with an early return null when config is missing, or ensure the type is non-optional.

Also applies to: 26-38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/kitchen-sink/LayoutComponentsPage.tsx` at line 14,
LayoutComponentsPage currently dereferences data.configItem into const config =
data.configItem and then uses config.type, config.name, config.status (and other
fields in the 26-38 block) which will throw if config is undefined; update
LayoutComponentsPage to guard against a missing config by either returning
null/a fallback UI early when data.configItem is falsy or by providing a safe
default object before rendering (e.g., const config = data.configItem || /*
fallback */), and update usages (config.type, config.name, config.status)
accordingly so no property access occurs on undefined.
report/sample-findings.json-144-144 (1)

144-144: ⚠️ Potential issue | 🟡 Minor

Inconsistent/likely-typo identifiers in sample data.

A few entries deviate from the rest of the fixture in ways that look unintentional:

  • Bucket name: most findings use s3://acme-app-audit/... but Lines 144, 207, 271 use s3://acmer-app-audit/....
  • Line 249 scope OMA_ZIM_app_PROD and Line 311 resource OMA_KEN_app_PROD don't follow the acme_*_app_PROD convention used elsewhere.
  • Line 516 "connection": "omar-prod" — all other findings use the connection://... URI scheme, and omar is otherwise unused. Looks like a stray/placeholder value.

Since this fixture drives the findings renderer (and PDF samples), aligning these avoids confusion when reviewing rendered output.

Also applies to: 207-207, 249-249, 311-311, 516-516

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/sample-findings.json` at line 144, Fix inconsistent sample
identifiers: change the path value "s3://acmer-app-audit/..." to
"s3://acme-app-audit/..." where it appears (the path string shown in the diff),
rename scope "OMA_ZIM_app_PROD" to follow the convention (e.g.,
"acme_ZIM_app_PROD") and rename resource "OMA_KEN_app_PROD" to
"acme_KEN_app_PROD" to match other acme_*_app_PROD entries, and normalize the
connection field "omar-prod" to the URI scheme used elsewhere (e.g.,
"connection://omar-prod"). Ensure each replacement matches the exact keys shown:
the "path" string, the "scope" value, the "resource" value, and the "connection"
value.
report/components/ConfigRelationshipGraph.tsx-56-62 (1)

56-62: ⚠️ Potential issue | 🟡 Minor

Early-return suppresses the whole section when relatedConfigs is empty.

If a caller passes centralConfig and relationships but no relatedConfigs, the entire "Config Relationships" section is hidden rather than showing the central config alone. Also, when relatedConfigs is populated but relationships is not, both RelationshipGroups render with count 0 — RelationshipGroup already returns null for empty arrays, so the section header will show with no body. Consider whether "nothing to show when there are no relationships" should be the guard instead of "no relatedConfigs".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigRelationshipGraph.tsx` around lines 56 - 62, Remove
the early-return that hides the whole section when relatedConfigs is empty;
instead return null only when there are no relationships AND no relatedConfigs.
Specifically, in ConfigRelationshipGraph replace the guard "if
(!relatedConfigs?.length) return null" with a check like "if
(!(relationships?.length || relatedConfigs?.length)) return null", ensure
configLookup is built from (relatedConfigs || []) so it works when
relatedConfigs is undefined, and keep computing rels/incoming/outgoing as you
already do so RelationshipGroup can decide to render or return null for empty
groups.
rbac_report/report.go-195-216 (1)

195-216: ⚠️ Potential issue | 🟡 Minor

Subject selection via configItems[0] is order-dependent and non-deterministic.

Since query.GetConfigsByIDs provides no ordering guarantees, the "subject" can vary across runs for the same input, affecting report reproducibility. The function should either validate that exactly one item is provided, or pick deterministically (e.g., by ID or by selecting the item without a ParentID). Document the intended behavior to clarify whether this should be a single-subject report.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rbac_report/report.go` around lines 195 - 216, The subject selection in
resolveSubjectAndParents currently uses configItems[0], which is order-dependent
and non-deterministic; change resolveSubjectAndParents to deterministically
select the subject (or validate single-item input): e.g., if len(configItems)==1
use that item, otherwise pick the item with nil ParentID if present, else pick
the item with the smallest UUID string (or explicitly return an error), and
update callers/docs to reflect whether reports are single-subject only;
reference resolveSubjectAndParents, configItems, ParentID, and
query.GetConfigsByIDs when making the change.
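One deterministic selection strategy can be sketched as follows. `ConfigItem` and `pickSubject` are simplified stand-ins for the real model and function; the idea is to sort by ID first so the nil-ParentID preference and the fallback are both order-independent:

```go
package main

import (
	"fmt"
	"sort"
)

// ConfigItem is a trimmed-down stand-in for the real model.
type ConfigItem struct {
	ID       string
	ParentID *string
}

// pickSubject chooses a subject deterministically: sort by ID, prefer
// an item without a ParentID, else fall back to the smallest ID.
func pickSubject(items []ConfigItem) (ConfigItem, bool) {
	if len(items) == 0 {
		return ConfigItem{}, false
	}
	sorted := append([]ConfigItem(nil), items...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].ID < sorted[j].ID })
	for _, it := range sorted {
		if it.ParentID == nil {
			return it, true
		}
	}
	return sorted[0], true
}

func main() {
	parent := "root"
	a := ConfigItem{ID: "a", ParentID: &parent}
	b := ConfigItem{ID: "b"} // no parent: preferred as subject
	s1, _ := pickSubject([]ConfigItem{a, b})
	s2, _ := pickSubject([]ConfigItem{b, a})
	fmt.Println(s1.ID == s2.ID) // same subject regardless of input order
}
```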
report/components/BackupActivityCalendar.tsx-17-21 (1)

17-21: ⚠️ Potential issue | 🟡 Minor

Legend label mismatch for warning status.

LABEL_CLASSES / CELL_CLASSES key this state as warning (amber), but the legend renders it as "In Progress". If warning is meant to cover both "warning" and "in progress" semantics that's fine, but the label discrepancy will be confusing when a backup is actually a warning state (e.g., partial failure). Consider either renaming the status key or adjusting the legend text to match the domain meaning.

Also applies to: 137-140

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupActivityCalendar.tsx` around lines 17 - 21, The
legend text for the "In Progress" state doesn't match the status key 'warning'
used in LABEL_CLASSES and CELL_CLASSES (and BackupCalendarStatus), causing
confusion; update either the domain model or the legend so they match — either
rename the status key 'warning' to 'in-progress' across BackupCalendarStatus,
LABEL_CLASSES, and CELL_CLASSES (and any usages in BackupActivityCalendar) or
change the legend text that renders the label to "Warning" (or "Warning / In
Progress") to reflect the existing 'warning' key; ensure CSS class mapping and
any conditional logic using LABEL_CLASSES/CELL_CLASSES remain consistent after
the change.
db/applications.go-591-620 (1)

591-620: ⚠️ Potential issue | 🟡 Minor

IsStale is always false when filters.Stale is empty.

When the caller doesn't supply a Stale filter, staleCutoff is zero, isStale evaluates to false for every row, and AccessItem.IsStale is populated as false even for users who have never signed in or who signed in years ago. This makes IsStale unreliable for UI consumers that expect it to always reflect staleness — they'd have to reintroduce their own cutoff.

If IsStale is intended to be authoritative in the response, compute it against a sensible default (e.g., the thresholds from settings, or 90 days) even when no filter is supplied. Otherwise consider making it a pointer so an unset value is distinguishable from false.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/applications.go` around lines 591 - 620, The IsStale boolean is being
computed only when filters.Stale is provided, causing AccessItem.IsStale to be
false for all rows when filters.Stale is empty; change the logic so staleCutoff
is always set (use a sensible default like the app settings' stale threshold or
90 days when filters.Stale == "") and then compute isStale using that cutoff
inside the rows loop (symbols to update: filters.Stale, staleCutoff, isStale,
and the AccessItem.IsStale assignment), or alternatively change
AccessItem.IsStale to a *bool and populate it as nil when no cutoff is intended;
pick one approach and update the loop and any callers accordingly so IsStale
reliably represents staleness.
report/components/BackupActivityCalendar.tsx-71-94 (1)

71-94: ⚠️ Potential issue | 🟡 Minor

Entries outside the reference month are silently dropped.

referenceKey picks the latest aggregated day and the grid only renders cells for that single month. Any entries from prior months still land in dateMap but have no corresponding cell, so they disappear without any visible indication. If the caller passes a range that spans multiple months (which BackupsSection can easily produce), users will see an incomplete view.

Consider either rendering one calendar per distinct month present in aggregated, or documenting/asserting that callers must pre-slice entries to a single month.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupActivityCalendar.tsx` around lines 71 - 94, The
current logic uses referenceKey and a single month grid (monthLabel, cells,
dateMap) so any aggregated entries outside that reference month are omitted;
change to group aggregated entries by their YYYY-MM key (use aggregateEntries
output -> map by entry.date.slice(0, 7), i.e. the YYYY-MM prefix), then iterate
each distinct month-key and for each compute year/month, daysInMonth, firstDow,
build a per-month dateMap and cells exactly like the existing block (reusing
aggregated -> dateMap, monthLabel, cells logic) and render a calendar for each
group; alternatively add an explicit assertion that callers must pass a
single-month slice if you prefer not to support multi-month rendering.
report/components/CatalogAccessSection.tsx-27-27 (1)

27-27: ⚠️ Potential issue | 🟡 Minor

hasMultipleConfigs actually checks "any entry has a configName".

The name implies multiple distinct configs, but .some((a) => a.configName) is true even when every row shares a single config. If the intent is to only show the extra "Config" column when results span more than one config, this should compare distinct names:

-  const hasMultipleConfigs = access.some((a) => a.configName);
+  const hasMultipleConfigs =
+    new Set(access.map((a) => a.configName).filter(Boolean)).size > 1;

Otherwise consider renaming to hasConfigColumn / showConfigColumn so the behavior matches the name.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/CatalogAccessSection.tsx` at line 27, The variable
hasMultipleConfigs currently uses access.some((a) => a.configName) which only
detects if any row has a configName; change it to detect multiple distinct
config names (e.g., compute new Set(access.map(a =>
a.configName).filter(Boolean)).size > 1) so the "Config" column is shown only
when results span more than one config, or alternatively rename
hasMultipleConfigs to hasConfigColumn / showConfigColumn if the intended
behavior is to show the column whenever any row has a config; update references
in CatalogAccessSection.tsx accordingly (hasMultipleConfigs and any conditional
rendering logic that decides whether to show the Config column).
cmd/catalog_report.go-92-96 (1)

92-96: ⚠️ Potential issue | 🟡 Minor

ShutdownAndExit makes the following return err unreachable and bypasses Cobra’s error plumbing.

shutdown.ShutdownAndExit(1, err.Error()) calls os.Exit(1) internally, so return err never runs and any deferred cleanup in callers is skipped. Either let Cobra handle the failure via the returned error (and remove the exit), or keep the exit and drop the unreachable return. Also consider using ctx.Oops().Wrap(err) for consistency with the rest of the codebase.

🛠 Suggested change
 		result, err := catalog.Export(ctx, configs, opts, catalogReportFormat)
 		if err != nil {
-			shutdown.ShutdownAndExit(1, err.Error())
-			return err
+			return ctx.Oops().Wrap(err)
 		}

As per coding guidelines: "For wrapping internal errors, use ctx.Oops().Wrap(err) or ctx.Oops().Wrapf(err, "context")."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_report.go` around lines 92 - 96, The code calls
shutdown.ShutdownAndExit(1, err.Error()) then returns err which is unreachable;
replace the os.Exit path with returning a wrapped error so Cobra and deferred
cleanup run: in the error branch after calling catalog.Export(ctx, configs,
opts, catalogReportFormat) remove Shutdown.ShutdownAndExit and instead return
ctx.Oops().Wrap(err) (or ctx.Oops().Wrapf(err, "exporting catalog") if extra
context desired) so the error plumbing is consistent with the rest of the
codebase.
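The deferred-cleanup argument can be shown with the standard library alone. `runExport` is illustrative (the real handler would return ctx.Oops().Wrap(err) to Cobra); the point is that returning the error lets defers run and leaves the exit decision to one outer layer:

```go
package main

import (
	"fmt"
	"os"
)

// runExport returns its error to the caller, so deferred cleanup runs
// and the process exit code is decided in exactly one place.
func runExport() error {
	defer fmt.Println("cleanup ran")
	return fmt.Errorf("export failed")
	// Calling os.Exit(1) here instead would skip the defer above and
	// make any code after it unreachable.
}

func main() {
	if err := runExport(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		// If a non-zero exit code is required, os.Exit belongs here,
		// at the outermost layer, after all cleanup has run.
	}
}
```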
report/components/AuditPage.tsx-101-160 (1)

101-160: ⚠️ Potential issue | 🟡 Minor

Guard against missing queries/scrapers/groups arrays.

audit.queries.length, audit.scrapers.length, and audit.groups.length assume these arrays are always present on the payload. If the Go encoder omits them (e.g. omitempty with a nil slice), these accesses will throw Cannot read properties of undefined. Safer to default with ?? [] or make the optionality explicit in CatalogReportAudit.

🛠 Suggested guard
-      {audit.queries.length > 0 && (
+      {(audit.queries?.length ?? 0) > 0 && (
...
-      {audit.scrapers.length > 0 && (
+      {(audit.scrapers?.length ?? 0) > 0 && (
...
-      {audit.groups.length > 0 && (
+      {(audit.groups?.length ?? 0) > 0 && (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/AuditPage.tsx` around lines 101 - 160, Guard against
missing audit arrays by defaulting undefined audit.queries, audit.scrapers, and
audit.groups to empty arrays before using .length or mapping; update the render
checks and map calls that reference audit.queries, audit.scrapers, and
audit.groups (used in the Section/ListTable/ScraperCard blocks) to use a safe
fallback (e.g., audit.queries ?? [], audit.scrapers ?? [], audit.groups ?? [])
or make these properties non-optional on CatalogReportAudit so the components
always receive arrays.
report/components/BackupsSection.tsx-33-40 (1)

33-40: ⚠️ Potential issue | 🟡 Minor

Inconsistent casing between success and failed checks can miscategorize backups.

isFailed uses a case-insensitive substring match, but successCount and the calendar’s success branch use strict b.status === 'success'. A status like "Success", "SUCCEEDED", or "Completed" will fall through to neither bucket, inflating warning entries and undercounting successCount.

🛠 Suggested fix
-  const isFailed = (status: string) => String(status).toLowerCase().includes('fail');
-  const successCount = backups.filter((b) => b.status === 'success').length;
+  const isFailed = (status: string) => String(status).toLowerCase().includes('fail');
+  const isSuccess = (status: string) => String(status).toLowerCase().includes('success');
+  const successCount = backups.filter((b) => isSuccess(b.status)).length;
   const failedCount = backups.filter((b) => isFailed(b.status)).length;
   const calendarEntries = backups.map((backup) => ({
     date: backup.date,
-    status: (backup.status === 'success' ? 'success' : isFailed(backup.status) ? 'failed' : 'warning') as BackupCalendarStatus,
+    status: (isSuccess(backup.status) ? 'success' : isFailed(backup.status) ? 'failed' : 'warning') as BackupCalendarStatus,
     label: backup.size || undefined,
   }));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupsSection.tsx` around lines 33 - 40, Status checks are
inconsistent: isFailed uses a case-insensitive substring but successCount and
calendarEntries use strict equality, causing mixed-case or synonym statuses to
be misclassified; normalize the status once (e.g., const norm =
String(b.status).toLowerCase()) and use that normalized value in isFailed, in
the success check used by successCount, and when computing calendarEntries'
status (or define explicit helpers like isSuccess and isFailed that operate on
the normalized status and handle synonyms such as 'success', 'succeeded',
'completed'); update references in successCount, failedCount, calendarEntries
and the isFailed helper to use these normalized checks.
cmd/catalog_report.go-64-66 (1)

64-66: ⚠️ Potential issue | 🟡 Minor

Avoid using error messages / interpolated strings as format strings.

logger.Errorf(err.Error()) (Line 65) and logger.Infof(details) (Line 116) pass already-rendered strings as the format string. A % in a file path, query, or error message becomes an unintended verb and produces %!(EXTRA …) noise. Use an explicit %s:

🛠 Suggested change
-			logger.Errorf(err.Error())
+			logger.Errorf("%s", err)
...
-		logger.Infof(details)
+		logger.Infof("%s", details)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_report.go` around lines 64 - 66, The log calls are passing
rendered strings as format strings which can misinterpret % characters; update
the usages around properties.LoadFile so that logger.Errorf and logger.Infof use
an explicit format specifier (e.g., "%s" or "%v") and pass the error or details
as an argument instead of calling err.Error() or passing details directly;
specifically change the logger.Errorf(...) call that follows properties.LoadFile
and the logger.Infof(details) call to use logger.Errorf("%s", err) /
logger.Infof("%s", details) (or equivalent) to avoid format-string issues.
report/FindingsReport.tsx-521-525 (1)

521-525: ⚠️ Potential issue | 🟡 Minor

Runtime lookups assume valid enum values.

FindingsReport accepts untyped Record<string, unknown>, but downstream code like SEVERITY_STYLES[f.severity], OUTCOME_STYLES[f.outcome].label, and PLATFORM_LABELS[f.platform] will throw Cannot read properties of undefined if any incoming finding carries a value outside the declared union (e.g., an unknown severity like "unknown" or typo). Since this report is typically rendered from JSON data (e.g., sample-findings.json), a single bad record will crash the whole report.

Consider adding a normalization step on the input that clamps unknown values to "info" / "informational" / default platform, or ?? fallbacks at each lookup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/FindingsReport.tsx` around lines 521 - 525, FindingsReport currently
trusts untyped props and performs runtime lookups like
SEVERITY_STYLES[f.severity], OUTCOME_STYLES[f.outcome], and
PLATFORM_LABELS[f.platform] which will throw if a finding has an unexpected
value; modify FindingsReport to normalize input before sorting/lookup by
iterating over the raw findings (the findings array used to build sorted) and
for each finding validate severity/outcome/platform against the known keys,
clamping unknown values to safe defaults (e.g., "info"/"informational"/default
platform) or replacing with canonical enum values, and then use those sanitized
fields for the SEVERITY_STYLES/OUTCOME_STYLES/PLATFORM_LABELS accesses so
lookups cannot return undefined.
cmd/catalog_get.go-129-132 (1)

129-132: ⚠️ Potential issue | 🟡 Minor

Silent swallow of non-NotFound DB errors.

First(&lastScraped).Error == nil treats every error (connection loss, permission denied, context cancelled, etc.) the same as "no row". Consider checking for gorm.ErrRecordNotFound specifically and logging/returning other errors.

🛡️ Proposed fix
-   var lastScraped models.ConfigItemLastScrapedTime
-   if err := ctx.DB().Where("config_id = ?", id).First(&lastScraped).Error; err == nil {
-       result.LastScrapedTime = lastScraped.LastScrapedTime
-   }
+   var lastScraped models.ConfigItemLastScrapedTime
+   if err := ctx.DB().Where("config_id = ?", id).First(&lastScraped).Error; err == nil {
+       result.LastScrapedTime = lastScraped.LastScrapedTime
+   } else if !errors.Is(err, gorm.ErrRecordNotFound) {
+       ctx.Logger.V(2).Infof("failed to load last scraped time for %s: %v", id, err)
+   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_get.go` around lines 129 - 132, The current block that calls
ctx.DB().Where("config_id = ?", id).First(&lastScraped).Error in
cmd/catalog_get.go silently treats all DB errors as "no row"; change it to
capture the error into a variable, then check specifically using errors.Is(err,
gorm.ErrRecordNotFound) to handle the missing-row case (set
result.LastScrapedTime only when found) and for any other non-nil error
(connection/permission/context cancel) log it via the same logger or return it
from the surrounding function so it isn’t swallowed; reference the lastScraped
variable and the ctx.DB().Where(...).First(...) call when making the change.
report/catalog/change_mappings.go-172-189 (1)

172-189: ⚠️ Potential issue | 🟡 Minor

Silent drop when transform returns a non-map or empty-kind value.

evalMappingTransform returns (nil, nil) in two distinct failure modes:

  1. Transform eval succeeded but returned a non-map (likely a user expression bug).
  2. Transform returned a map without a kind field.

In both cases Apply silently continues without any log or diagnostic, making it hard for authors to debug why their transform isn't taking effect. Consider emitting at least a ctx.Logger.V(2).Infof(...) on case (1), since that is almost certainly a misconfigured expression.
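The two failure modes can carry a debug log without changing the return contract. A minimal sketch of the shape, with illustrative names — classifyTransformResult is hypothetical, standing in for the post-celValueToNative checks inside evalMappingTransform:

```go
package main

import "log"

// classifyTransformResult mirrors the two failure modes described above.
// The names and shapes here are illustrative, not the actual
// evalMappingTransform signature.
func classifyTransformResult(out any) (map[string]any, bool) {
	m, ok := out.(map[string]any)
	if !ok {
		// Case 1: transform evaluated but returned a non-map —
		// almost certainly a misconfigured expression, so log it.
		log.Printf("transform returned non-map value %T: %v", out, out)
		return nil, false
	}
	kind, _ := m["kind"].(string)
	if kind == "" {
		// Case 2: transform returned a map without a usable kind field.
		log.Printf("transform returned map without kind: %v", m)
		return nil, false
	}
	return m, true
}

func main() {
	if _, ok := classifyTransformResult("not-a-map"); ok {
		panic("expected non-map to be rejected")
	}
	if _, ok := classifyTransformResult(map[string]any{"kind": "Deployment"}); !ok {
		panic("expected valid map to be accepted")
	}
}
```

In the real code, log.Printf would be ctx.Logger.V(2).Infof so the diagnostics only appear at elevated verbosity.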

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/change_mappings.go` around lines 172 - 189, The function
evalMappingTransform currently returns (nil, nil) for two failure modes (non-map
result from celValueToNative and map missing "kind") which silently drops
transforms; update evalMappingTransform to emit a debug/info log in those cases
(use the caller's logger, e.g. ctx.Logger.V(2).Infof) including the evaluated
output and a short message so authors can diagnose misconfigured expressions —
specifically, when celValueToNative(out) is not a map log the raw out and note
"transform returned non-map", and when the map has empty or missing "kind" log
the map value and note "transform returned map without kind"; keep the existing
return values but ensure the logs reference evalMappingTransform, program (or
its string/identifier) and the problematic value for debugging.
report/components/change-section-utils.ts-415-445 (1)

415-445: ⚠️ Potential issue | 🟡 Minor

Invalid dates can break latestDate ordering.

new Date(change.date).getTime() returns NaN for unparseable strings, and any NaN comparison yields false. That means:

  • Line 418: the very first row's invalid date becomes the group's latestDate and is never superseded even by valid later dates.
  • Line 442/444: invalid dates sort unpredictably.

If upstream guarantees change.date is always a valid ISO string, this is moot. Otherwise consider Number.isFinite(...) checks or defaulting invalid dates to 0/-Infinity for sorting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/change-section-utils.ts` around lines 415 - 445, The
group.latestDate and row sorting rely on new Date(change.date).getTime() which
can be NaN for invalid dates; update the logic in the block that sets
group.latestDate (referencing group.latestDate and change.date) to parse the
date once, check Number.isFinite(parsedTime) and only assign if parsedTime is
finite (or use a fallback like 0 or -Infinity), and update both sorts (the rows
sort comparing a.date/b.date and the outer groups sort comparing
group.latestDate) to use parsed numeric timestamps with the same finite fallback
so invalid dates don't break ordering.
report/facet.go-201-244 (1)

201-244: ⚠️ Potential issue | 🟡 Minor

SrcDir cache is never invalidated across binary upgrades.

sync.OnceValues only caches for the life of the current process, so this isn't a problem within one run. But ExtractFiles writes into a stable ~/.cache/incident-commander/facet-report path without ever removing files that were present in an old embed FS but not the new one. After upgrades, the cache dir can accumulate stale TSX files that might still be picked up by the facet CLI via relative imports.

Consider either:

  • writing a version marker (e.g., binary build ID) and wiping the dir on mismatch, or
  • clearing the dir at the start of ExtractFiles before re-extracting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/facet.go` around lines 201 - 244, The cache directory can accumulate
stale embedded files across upgrades; change ExtractFiles to clear the
destination directory before writing new files: at the start of
ExtractFiles(destDir string) remove or empty destDir (e.g., call
os.RemoveAll(destDir) then os.MkdirAll(destDir, 0750)) so stale TSX files from
previous embeds are removed, then proceed with fs.WalkDir(FS, ...) and writing
files; update any callers (SrcDir using sync.OnceValues) to rely on this cleaned
extraction behavior.
cmd/connection_browser.go-810-812 (1)

810-812: ⚠️ Potential issue | 🟡 Minor

Inverted strings.Contains semantics.

strings.Contains(flags.RequireBearerAud, "graph.microsoft.com") treats the configured audience as the haystack and the literal as the needle, which is opposite to the usual convention. It happens to work because the Azure preRun sets RequireBearerAud = "graph.microsoft.com" exactly, but any user override like RequireBearerAud = "graph" would silently stop redirecting to the Graph /me URL.

🛠️ Proposed fix
-	if props["bearer"] != "" && strings.Contains(flags.RequireBearerAud, "graph.microsoft.com") {
+	if props["bearer"] != "" && strings.Contains("graph.microsoft.com", flags.RequireBearerAud) && flags.RequireBearerAud != "" {
 		connURL = "https://graph.microsoft.com/v1.0/me"
 	}

Or, more clearly, check the selected token's decoded audience instead of the flag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/connection_browser.go` around lines 810 - 812, The current check uses
strings.Contains(flags.RequireBearerAud, "graph.microsoft.com") which inverts
needle/haystack and can fail for partial overrides; update the condition in the
block that sets connURL (where props["bearer"], flags.RequireBearerAud and
connURL are used) to either flip the arguments to
strings.Contains("graph.microsoft.com", flags.RequireBearerAud) or, preferably,
decode the selected bearer token's audience and check that the token audience
contains or equals "graph.microsoft.com" before setting connURL =
"https://graph.microsoft.com/v1.0/me".
cmd/connection_browser.go-566-588 (1)

566-588: ⚠️ Potential issue | 🟡 Minor

Potential race between stdin press and bearer auto-detection.

When both the bearer-detect goroutine and the stdin reader finish around the same time, doneCh fires and the inner select may non-deterministically pick <-stdinCh (setting Interrupted = true) even though the bearer token was actually detected. The user then falls through to manual token selection despite auto-detection having succeeded.

Consider prioritizing bearerCh by polling it first (non-blocking) before falling through to the multi-case select:

🛠️ Proposed fix
 	case <-doneCh:
+		select {
+		case tokens := <-bearerCh:
+			result.AutoDetected = true
+			result.Tokens = tokens
+			return result
+		default:
+		}
 		select {
-		case tokens := <-bearerCh:
-			result.AutoDetected = true
-			result.Tokens = tokens
 		case <-stdinCh:
 			result.Interrupted = true
 		default:
 			result.AutoDetected = true
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/connection_browser.go` around lines 566 - 588, When doneCh fires, avoid
the race by polling bearerCh first: do a non-blocking receive from bearerCh and
if you get tokens set result.AutoDetected = true and result.Tokens = tokens;
only if bearerCh is empty then check stdinCh (non-blocking) and set
result.Interrupted = true if stdin was pressed; do not set result.AutoDetected
in the default/fallback path—leave it false so manual selection proceeds when no
token was actually received. Reference the existing channels doneCh, bearerCh,
stdinCh and the result fields AutoDetected, Tokens, Interrupted when making this
change.

Comment thread artifacts/controllers.go
Comment on lines 73 to +91

 func DownloadArtifact(c echo.Context) error {
 	ctx := c.Request().Context().(context.Context)

-	_id := c.Param("id")
-	artifactID, err := uuid.Parse(_id)
+	artifactID, err := uuid.Parse(c.Param("id"))
 	if err != nil {
-		return api.WriteError(c, api.Errorf(api.EINVALID, "invalid id(%s). must be a uuid. %v", _id, err))
+		return api.WriteError(c, api.Errorf(api.EINVALID, "invalid id: %v", err))
 	}

-	artifact, err := db.FindArtifact(ctx, artifactID)
+	blobs, err := ctx.Blobs()
 	if err != nil {
 		return api.WriteError(c, err)
-	} else if artifact == nil {
-		return api.WriteError(c, api.Errorf(api.ENOTFOUND, "artifact(%s) was not found", artifactID))
 	}
+	defer blobs.Close()

-	conn, err := pkgConnection.Get(ctx, artifact.ConnectionID.String())
+	data, err := blobs.Read(artifactID)
 	if err != nil {
 		return api.WriteError(c, err)
-	} else if conn == nil {
-		return api.WriteError(c, api.Errorf(api.ENOTFOUND, "artifact's connection was not found"))
 	}
+	defer data.Content.Close()

⚠️ Potential issue | 🟠 Major


Confirm per-artifact authorization is still enforced after dropping the DB lookup.

The refactoring changed from db.FindArtifact + scope-aware connection lookup to direct blob storage access by UUID. Since ListArtifacts explicitly requires parent entity IDs (check, playbook_run, config_change) to filter results, artifacts are clearly scoped to parent entities. The new DownloadArtifact eliminates the database check that previously validated artifact scope, replacing it with only route-level rbac.Authorization(policy.ObjectArtifact, policy.ActionRead).

Any user with Artifact:Read can now download any artifact by UUID, bypassing per-artifact scope restrictions. Either confirm this coarse access model is intended, or restore per-artifact authorization by re-introducing the artifact lookup to verify scope visibility before streaming.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@artifacts/controllers.go` around lines 73 - 91, DownloadArtifact now reads
blobs directly by UUID which bypasses per-artifact scope checks; restore the DB
lookup and scope/permission validation before streaming. After parsing
artifactID in DownloadArtifact, call the existing db.FindArtifact (or the
repository method that returns the artifact and its parent IDs) and verify the
caller can see that artifact (re-use the same scope/visibility logic
ListArtifacts relies on or invoke the rbac.Authorization(policy.ObjectArtifact,
policy.ActionRead) with the artifact's parent context) and only then proceed to
call ctx.Blobs().Read(artifactID) and stream; ensure
blobs.Close()/data.Content.Close() remain in place.

Comment thread artifacts/controllers.go
Comment on lines +97 to +100

-	defer file.Close()
-
-	return c.Stream(http.StatusOK, artifact.ContentType, file)
+	if filename := c.QueryParam("filename"); filename != "" {
+		c.Response().Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", filename))
+	}
+	return c.Stream(http.StatusOK, data.ContentType, data.Content)

⚠️ Potential issue | 🟠 Major



Content-Disposition default filename regression confirmed.

The models.Artifact struct includes a Filename field, which was previously available when the endpoint performed a direct DB lookup via db.FindArtifact(). The refactored code now retrieves artifacts via ctx.Blobs().Read(artifactID), which returns only ContentType, ContentLength, and Content — the Filename field is no longer exposed. As a result, when no ?filename= query parameter is provided, browsers will use the artifact UUID or a generic default, which is a user-visible regression.

This represents an architectural shift: the new Blobs interface abstracts away artifact metadata. If ctx.Blobs().Read() cannot be extended to expose filename, you will need to restore the DB lookup to retrieve the artifact's stored filename and use it as a fallback default.

Additionally, verify whether the removal of the DB lookup eliminates any per-artifact authorization or tenancy scope validation that was previously enforced.
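The fallback itself is small. A sketch assuming the artifact row's stored filename is available again — dispositionFilename and its parameters are illustrative, not existing APIs:

```go
package main

// dispositionFilename picks the explicit ?filename= override when present,
// otherwise falls back to the filename stored on the artifact row
// (models.Artifact.Filename in the pre-refactor path). If both are empty
// the header is simply not set, matching current behavior.
func dispositionFilename(queryParam, stored string) string {
	if queryParam != "" {
		return queryParam
	}
	return stored
}

func main() {
	if got := dispositionFilename("", "report.pdf"); got != "report.pdf" {
		panic("expected stored filename fallback")
	}
	if got := dispositionFilename("custom.pdf", "report.pdf"); got != "custom.pdf" {
		panic("expected query param to win")
	}
}
```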

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@artifacts/controllers.go` around lines 97 - 100, The refactor removed the
filename by switching from db.FindArtifact() to ctx.Blobs().Read(artifactID);
restore a fallback so responses include a sensible Content-Disposition filename:
either extend the Blobs interface/Read return value to include Filename (and
update its implementations) or re-add a DB lookup using models.Artifact via
db.FindArtifact(artifactID) to read Artifact.Filename and use it when
c.QueryParam("filename") is empty; also ensure you preserve any per-artifact
authorization/tenancy checks that existed with the DB path (re-run the same
auth/tenant validations before returning the blob).

Comment thread auth/middleware.go
"/revoke",
"/device_authorization",
"/endsession",
"/end_session",

⚠️ Potential issue | 🔴 Critical



Endpoint path mismatch: /endsession registered but /end_session whitelisted.

The OIDC provider in auth/oidc/routes.go:44 registers the endpoint at /endsession (no underscore), but the skip-auth middleware in auth/middleware.go:75 has /end_session (with underscore). This mismatch will cause the actual OIDC endpoint to be blocked by authentication, breaking the end_session flow. Update the skip-auth list to /endsession.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@auth/middleware.go` at line 75, Update the skip-auth whitelist entry in the
middleware where the list of paths is defined (in auth/middleware.go) to match
the OIDC route registration: replace the incorrect "/end_session" entry with
"/endsession" so the skipAuth middleware (used by the auth middleware function
that checks the whitelist) allows the OIDC provider's end session endpoint
registered in auth/oidc/routes.go.

Comment thread auth/oidc_validate.go
}

-	issuer := strings.TrimRight(api.FrontendURL, "/")
+	issuer := strings.TrimRight(api.PublicURL, "/")

⚠️ Potential issue | 🟠 Major

Issuer now hard-coded to PublicURL — breaks Basic-mode OIDC.

In auth/middleware.go, Basic mode still mounts the OIDC provider at api.FrontendURL (lines 109 and 112), while only Kratos mode was switched to api.PublicURL. Tokens issued by the Basic-mode provider will therefore carry iss == FrontendURL and fail the iss != issuer check here whenever PublicURL != FrontendURL, silently rejecting all OIDC tokens in that mode.

Either mount Basic-mode OIDC on PublicURL too, or derive issuer from the same source used to mount the routes (e.g. a shared api.OIDCIssuer).

Comment thread cmd/auth_login.go
Comment on lines +162 to 170
 func openBrowser(url string) error {
 	switch runtime.GOOS {
 	case "darwin":
-		cmd = "open"
-		args = []string{url}
+		return exec.Command("open", url).Start()
 	case "windows":
-		cmd = "cmd"
-		args = []string{"/c", "start", url}
+		return exec.Command("cmd", "/c", "start", "", url).Start()
 	default:
-		cmd = "xdg-open"
-		args = []string{url}
+		return exec.Command("xdg-open", url).Start()
 	}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Go openBrowser cmd /c start URL with ampersand Windows escape

💡 Result:

To open a URL containing ampersands (&) on Windows via exec.Command("cmd", "/c", "start", url), cmd.exe interprets & as a command separator, causing failure. Fixes from sources:

1. Replace each & with ^& before passing: strings.ReplaceAll(url, "&", "^&"), then exec.Command("cmd", "/c", "start", escapedURL). This escapes the URL for cmd parsing while preserving it for the browser. Example:

	if runtime.GOOS == "windows" {
		url = strings.ReplaceAll(url, "&", "^&")
		return exec.Command("cmd", "/c", "start", url).Start()
	}

2. Use explorer instead: exec.Command("explorer", url).Start(). Handles & without escaping (quotes the URL automatically).
3. Use rundll32: exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start(). No escaping needed for &.

The standard cmd/internal/browser helper uses cmd /c start without escaping, so it fails on URLs with &. Confirmed across Go gists, Stack Overflow, and Windows cmd docs as of 2026.


Windows cmd /c start breaks on URLs with ampersands in query parameters.

authURL built at lines 98–105 contains multiple & query separators (state, nonce, code_challenge). On Windows, exec.Command("cmd", "/c", "start", "", url) passes the URL through cmd.exe, which interprets & as a command separator. The browser receives only the URL up to the first &, breaking the OIDC flow (missing state/nonce/code_challenge → callback validation fails) and introducing a command-injection risk if url becomes attacker-influenced.

Use exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start() instead, which bypasses cmd.exe parsing and handles & correctly. Alternatively, exec.Command("explorer", url).Start() is simpler and also avoids the issue.

🛠️ Suggested fix
 func openBrowser(url string) error {
 	switch runtime.GOOS {
 	case "darwin":
 		return exec.Command("open", url).Start()
 	case "windows":
-		return exec.Command("cmd", "/c", "start", "", url).Start()
+		return exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
 	default:
 		return exec.Command("xdg-open", url).Start()
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/auth_login.go` around lines 162 - 170, The openBrowser function uses
exec.Command("cmd", "/c", "start", "", url) on Windows which causes cmd.exe to
parse ampersands and breaks OIDC URLs; replace that branch in openBrowser to
call a command that bypasses cmd.exe parsing (e.g., use exec.Command("rundll32",
"url.dll,FileProtocolHandler", url).Start() or exec.Command("explorer",
url).Start()) so URLs with & are passed intact and avoid command-injection
risks; update the Windows case in openBrowser accordingly and ensure Start() is
invoked on the returned *Cmd.

Comment thread db/rbac.go
Comment on lines +41 to +43
func GetRBACAccessByConfigIDs(ctx context.Context, configIDs []uuid.UUID) ([]RBACAccessRow, error) {
return GetRBACAccess(ctx, nil, false, configIDs...)
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm callers and whether any rely on empty-input-returns-all behaviour.
rg -n -C3 'GetRBACAccessByConfigIDs|GetRBACAccess\b'

Repository: flanksource/mission-control

Length of output: 1866


🏁 Script executed:

#!/bin/bash

# Check the context of targetIDs in report/catalog/report.go
echo "=== Checking report/catalog/report.go around line 278 ==="
sed -n '260,290p' report/catalog/report.go

echo -e "\n=== Checking db/rbac.go for GetGroupMembersForConfigs and GetRBACAccess logic ==="
sed -n '41,150p' db/rbac.go

echo -e "\n=== Checking if targetIDs comes from a potentially empty operation ==="
rg -B 20 'GetRBACAccessByConfigIDs' report/catalog/report.go | head -60

Repository: flanksource/mission-control

Length of output: 6355


🏁 Script executed:

#!/bin/bash

# Find where targetIDs is defined in report/catalog/report.go
echo "=== Looking for targetIDs definition in report/catalog/report.go ==="
rg -n 'targetIDs' report/catalog/report.go | head -20

# Check function signature and early returns
echo -e "\n=== Broader context around the report function ==="
sed -n '1,100p' report/catalog/report.go | head -80

Repository: flanksource/mission-control

Length of output: 2709


🏁 Script executed:

#!/bin/bash

# Look at the full BuildReport function to understand tree initialization
echo "=== Full BuildReport function ==="
sed -n '100,250p' report/catalog/report.go

echo -e "\n=== Check what OutgoingIDs returns ==="
rg -n 'OutgoingIDs' report/catalog/report.go -A 2 -B 2

Repository: flanksource/mission-control

Length of output: 4934


🏁 Script executed:

#!/bin/bash

# Verify the ConfigTree and OutgoingIDs behavior - check if it can be empty
echo "=== Searching for ConfigTree and OutgoingIDs definitions ==="
rg -n 'func.*OutgoingIDs|ConfigTree' --type go

echo -e "\n=== Check if there's any guard between targetIDs assignment and usage ==="
sed -n '195,210p' report/catalog/report.go

Repository: flanksource/mission-control

Length of output: 2535


🏁 Script executed:

#!/bin/bash

# Final confirmation: check the exact query construction in GetRBACAccess
echo "=== GetRBACAccess query construction ==="
sed -n '45,90p' db/rbac.go

echo -e "\n=== Confirm the behavior: no WHERE clause when both conditions are empty ==="
sed -n '75,92p' db/rbac.go

Repository: flanksource/mission-control

Length of output: 2352


Critical: GetRBACAccessByConfigIDs returns all rows when configIDs is empty.

When configIDs is empty, the call becomes GetRBACAccess(ctx, nil, false) where both selectors and configIDs are empty — the WHERE clause at line 82 is skipped and the query runs with no config_id filter, returning the entire config_access_summary table. The actual call site at report/catalog/report.go:278 can pass empty targetIDs when tree.OutgoingIDs() finds no outgoing edges, making this a real data-scope hazard.

Note: GetGroupMembersForConfigs already guards this correctly (line 127–129); mirror that here.

🔒 Proposed fix
 func GetRBACAccessByConfigIDs(ctx context.Context, configIDs []uuid.UUID) ([]RBACAccessRow, error) {
+	if len(configIDs) == 0 {
+		return nil, nil
+	}
 	return GetRBACAccess(ctx, nil, false, configIDs...)
 }

Optionally also reject empty inputs inside GetRBACAccess itself so that rbac_report.Report with no selectors and no IDs cannot accidentally dump the whole table:

 func GetRBACAccess(ctx context.Context, selectors []types.ResourceSelector, recursive bool, configIDs ...uuid.UUID) (results []RBACAccessRow, err error) {
 	timer := dutyQuery.NewQueryLogger(ctx).Start("RBACAccess").Arg("configIDs", len(configIDs)).Arg("selectors", len(selectors))
 	defer timer.End(&err)
+
+	if len(selectors) == 0 && len(configIDs) == 0 {
+		return nil, ctx.Oops().Code(dutyAPI.EINVALID).Errorf("at least one selector or config id is required")
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/rbac.go` around lines 41 - 43, GetRBACAccessByConfigIDs currently calls
GetRBACAccess with no selectors and can return the entire table when configIDs
is empty; change GetRBACAccessByConfigIDs to explicitly guard for
len(configIDs)==0 and return an empty []RBACAccessRow (nil, nil) immediately
(mirror the behavior in GetGroupMembersForConfigs), and optionally add a
defensive check inside GetRBACAccess to return an empty result or error when
both selectors and configIDs are empty to prevent accidental full-table scans.

Comment thread rbac_report/report.go
Comment on lines +195 to +224
func resolveSubjectAndParents(ctx context.Context, configItems []models.ConfigItem, configMap map[string]models.ConfigItem) (*models.ConfigItem, []models.ConfigItem) {
if len(configItems) == 0 {
return nil, nil
}

first := configItems[0]

var parents []models.ConfigItem
current := first
for current.ParentID != nil {
parent, ok := configMap[current.ParentID.String()]
if !ok {
loaded, err := query.GetConfigsByIDs(ctx, []uuid.UUID{*current.ParentID})
if err != nil || len(loaded) == 0 {
break
}
parent = loaded[0]
configMap[parent.ID.String()] = parent
}
parents = append(parents, parent)
current = parent
}

// Reverse so root is first
for i, j := 0, len(parents)-1; i < j; i, j = i+1, j-1 {
parents[i], parents[j] = parents[j], parents[i]
}

return &first, parents
}

⚠️ Potential issue | 🟠 Major

Add cycle protection when walking ParentID chain.

for current.ParentID != nil will loop forever if the parent chain ever contains a cycle (e.g., self-parent or A→B→A from bad data). Track visited IDs and break when a repeat is seen.

🛡️ Proposed fix
 func resolveSubjectAndParents(ctx context.Context, configItems []models.ConfigItem, configMap map[string]models.ConfigItem) (*models.ConfigItem, []models.ConfigItem) {
 	if len(configItems) == 0 {
 		return nil, nil
 	}

 	first := configItems[0]

 	var parents []models.ConfigItem
 	current := first
+	visited := map[uuid.UUID]struct{}{current.ID: {}}
 	for current.ParentID != nil {
+		if _, seen := visited[*current.ParentID]; seen {
+			break
+		}
 		parent, ok := configMap[current.ParentID.String()]
 		if !ok {
 			loaded, err := query.GetConfigsByIDs(ctx, []uuid.UUID{*current.ParentID})
 			if err != nil || len(loaded) == 0 {
 				break
 			}
 			parent = loaded[0]
 			configMap[parent.ID.String()] = parent
 		}
+		visited[parent.ID] = struct{}{}
 		parents = append(parents, parent)
 		current = parent
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rbac_report/report.go` around lines 195 - 224, The loop in
resolveSubjectAndParents can infinite-loop on cyclic parent chains (self-parent
or A→B→A); add cycle protection by tracking visited IDs (e.g., a
map[string]bool) keyed by current.ID or *current.ParentID and check before
following a parent: if the parent ID is already visited, break the loop (or
return error/stop traversal). Update resolveSubjectAndParents to mark the
starting item's ID as visited, check visited before loading/using parent from
configMap or query.GetConfigsByIDs, and stop appending/traversing when a repeat
is detected so the parents slice cannot grow indefinitely.

Comment on lines +15 to +21
const grouped = new Map<string, { configName: string; configType?: string; changes: ConfigChange[] }>();
for (const c of withArtifacts) {
const key = c.configName || c.configID || 'unknown';
if (!grouped.has(key)) {
grouped.set(key, { configName: c.configName || 'Unknown', configType: c.configType, changes: [] });
}
grouped.get(key)!.changes.push(c);

⚠️ Potential issue | 🟠 Major

Group appendix entries by stable config identity, not configName.

Line 17 uses configName as the primary grouping key, so two different configs with the same display name collapse into one section. In multi-namespace or same-name environments that will mix screenshots and change history from unrelated resources. Use configID as the grouping key and keep the name only for display.

Suggested fix
-  const grouped = new Map<string, { configName: string; configType?: string; changes: ConfigChange[] }>();
+  const grouped = new Map<string, { configName: string; configType?: string; changes: ConfigChange[] }>();
   for (const c of withArtifacts) {
-    const key = c.configName || c.configID || 'unknown';
+    const key = c.configID || `${c.configType || ''}:${c.configName || 'unknown'}`;
     if (!grouped.has(key)) {
-      grouped.set(key, { configName: c.configName || 'Unknown', configType: c.configType, changes: [] });
+      grouped.set(key, {
+        configName: c.configName || c.configID || 'Unknown',
+        configType: c.configType,
+        changes: [],
+      });
     }
     grouped.get(key)!.changes.push(c);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ArtifactAppendix.tsx` around lines 15 - 21, The current
grouping uses configName as the key which collapses different configs with the
same display name; change the grouping key to use configID (falling back to
configName or 'unknown' only for display) so entries are grouped by stable
identity: update the logic that builds grouped (the Map named grouped over
withArtifacts) to compute key = c.configID || c.configName || 'unknown' and keep
c.configName/c.configType only in the stored value (changes: ConfigChange[]),
ensuring grouped.get(key)!.changes.push(c) remains unchanged.

Comment on lines +51 to +58
if (diffDays <= 30) {
const weekStart = new Date(d);
weekStart.setDate(d.getDate() - d.getDay() + 1);
const weekEnd = new Date(weekStart);
weekEnd.setDate(weekStart.getDate() + 4);
const fmt = (dt: Date) => dt.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
return { key: `week-${fmt(weekStart)}`, label: `${fmt(weekStart)} – ${fmt(weekEnd)}`, dateFormat: 'monthDay' };
}

⚠️ Potential issue | 🟠 Major

getTimeBucket puts Sundays in the wrong week.

weekStart.setDate(d.getDate() - d.getDay() + 1) is only correct when getDay() returns 1 (Monday) through 6 (Saturday). For Sunday (getDay() === 0) it becomes d.getDate() + 1, i.e. the next Monday, not the previous one. A Sunday timestamp ends up in a different week bucket than the Saturday that immediately preceded it.

Additionally, weekEnd = weekStart + 4 produces a Mon–Fri range, so any Saturday in the 7–30 day window gets a bucket label that excludes the day itself (e.g. a Saturday shows as “Oct 13 – Oct 17”). If weekend grouping is intentional, consider extending the range to Sunday.

🐛 Proposed fix
-  if (diffDays <= 30) {
-    const weekStart = new Date(d);
-    weekStart.setDate(d.getDate() - d.getDay() + 1);
-    const weekEnd = new Date(weekStart);
-    weekEnd.setDate(weekStart.getDate() + 4);
+  if (diffDays <= 30) {
+    const weekStart = new Date(d);
+    // Shift so Monday is day 0; Sunday (0) maps to 6.
+    weekStart.setDate(d.getDate() - ((d.getDay() + 6) % 7));
+    const weekEnd = new Date(weekStart);
+    weekEnd.setDate(weekStart.getDate() + 6);
     const fmt = (dt: Date) => dt.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
     return { key: `week-${fmt(weekStart)}`, label: `${fmt(weekStart)} – ${fmt(weekEnd)}`, dateFormat: 'monthDay' };
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/utils.ts` around lines 51 - 58, The week-bucketing logic in
getTimeBucket mishandles Sundays and creates a Mon–Fri label; adjust weekStart
and weekEnd calculation to compute the previous Monday correctly and include the
full week: compute an offset like (d.getDay()+6)%7 (days since Monday) and set
weekStart by subtracting that offset from d (use weekStart.setDate(d.getDate() -
offset)); set weekEnd to weekStart + 6 days to cover Monday–Sunday (instead of
+4), keep using fmt for label and key (e.g., `${fmt(weekStart)} –
${fmt(weekEnd)}`) so Sunday timestamps fall into the same week as the preceding
Saturday.

Comment thread report/facet.go
Comment on lines +165 to +199
response, err := client.R(ctx).
Header("Content-Type", mw.FormDataContentType()).
Post("/render", &body)
if err != nil {
return nil, fmt.Errorf("facet render request failed: %w", err)
}
if !response.IsOK() {
errBody, _ := response.AsString()
return nil, fmt.Errorf("facet render failed (status %d): %s", response.StatusCode, errBody)
}

if format == "html" {
return io.ReadAll(response.Body)
}

renderResult, err := response.AsJSON()
if err != nil {
return nil, fmt.Errorf("failed to parse render response: %w", err)
}
resultURL, _ := renderResult["url"].(string)
if resultURL == "" {
return nil, fmt.Errorf("render response missing 'url' field")
}

pdfResponse, err := client.R(ctx).Get(resultURL)
if err != nil {
return nil, fmt.Errorf("failed to fetch rendered result: %w", err)
}
if !pdfResponse.IsOK() {
errBody, _ := pdfResponse.AsString()
return nil, fmt.Errorf("result fetch failed (status %d): %s", pdfResponse.StatusCode, errBody)
}

return io.ReadAll(pdfResponse.Body)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect how commons/http handles Body close after AsJSON/AsString/raw ReadAll access.
fd -t f -e go . | xargs rg -l 'commons/http' 2>/dev/null | head -20
rg -nP --type=go -C3 '\bR\(ctx\)\.' -g '!**/vendor/**' | head -80
rg -nP --type=go -C5 'response\.Body' -g '!**/vendor/**' | head -60
# Look at other consumers of commons/http to see the prevailing pattern
rg -nP --type=go -C3 'AsString\(\)|AsJSON\(\)' -g '!**/vendor/**' | head -60

Repository: flanksource/mission-control

Length of output: 10621


🌐 Web query:

flanksource commons http Response AsJSON AsString body close behavior

💡 Result:

flanksource/commons is a Go library of common utilities (https://github.com/flanksource/commons, latest v1.50.2 as of 2026-04-09). Its HTTP client wrapper exposes a Response type with methods such as AsJSON and AsString that read the response body (an io.ReadCloser). As with standard net/http usage, the body must be fully read and then closed to allow connection reuse and prevent resource leaks: reading without closing leaks the connection, and closing without fully reading prevents reuse. No official docs were found detailing the exact API, but the behavior aligns with Go conventions — call AsJSON/AsString to consume the body, then explicitly close it. Check pkg.go.dev/github.com/flanksource/commons/http or the repo source for the precise signatures.


Add defer response.Body.Close() and defer pdfResponse.Body.Close() to prevent connection leaks.

Lines 177 and 198 call io.ReadAll(response.Body) and io.ReadAll(pdfResponse.Body) without closing the bodies. The io.ReadAll() function reads but does not close the underlying io.ReadCloser. Without explicit Close(), the response bodies remain open, preventing HTTP connection reuse and leaking TCP connections.

Other code in the repository (e.g., connection/check.go) explicitly uses defer response.Body.Close() after similar response operations, confirming the pattern. Even if response.AsJSON() or response.AsString() (used at line 180 and line 172/194 respectively) were called, they do not auto-close the body.

Add defer statements immediately after creating each response:

  • After line 167 (response, err := client.R(ctx)...): defer response.Body.Close()
  • After line 189 (pdfResponse, err := client.R(ctx)...): defer pdfResponse.Body.Close()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/facet.go` around lines 165 - 199, The HTTP response bodies are not
closed causing connection leaks; after creating the first response (the variable
response from client.R(...).Post("/render", &body)) add a defer
response.Body.Close() immediately after the error check, and after creating
pdfResponse (the variable from client.R(...).Get(resultURL)) add a defer
pdfResponse.Body.Close() immediately after its error check so both
io.ReadAll(response.Body) and io.ReadAll(pdfResponse.Body) run with the bodies
properly closed.
