
chore(build): simplify makefile tool paths and regenerate kitchen-sink test data #2958

Open: moshloop wants to merge 48 commits into main from pr/reports

Conversation


moshloop (Member) commented Apr 14, 2026

Summary by CodeRabbit

  • New Features

    • Applications/Views: Access and Access Logs sections; UI filters for access/search/date/MFA.
    • Catalog reporting: rich catalog reports with relationships, grouped configs, access/access‑logs, audit, and new export formats; new report pages and components (Catalog, Findings, Kitchen Sink, RBAC matrix).
    • CLI: new catalog commands (get, tree, report) and --report-source to override embedded reports.
  • Bug Fixes

    • OIDC/JWT issuer validation now uses the public endpoint.
    • API client better detects HTML error responses.
  • Chores

    • Report rendering/tooling and icon libraries updated.


gitguardian bot commented Apr 14, 2026

✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we strongly recommend revoking them.
While these secrets were previously flagged, we no longer have a reference to the
specific commits where they were detected. Once a secret has been leaked into a git
repository, you should consider it compromised, even if it was deleted immediately.
You can find more information about the risks here.


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.


coderabbitai bot commented Apr 14, 2026

Walkthrough

Adds a full catalog reporting stack (types, builders, CEL mappings, settings, CLI), integrates facet rendering (CLI/HTTP) with source overrides, expands access/audit types and DB queries, large React report frontend, RBAC enhancements, CLI/auth/SDK adjustments, and numerous tests and fixtures.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Build & tooling**<br>`Makefile`, `.gitignore`, `.gavel.yaml`, `report/package.json` | Adjust Makefile targets/prereqs and tool install commands; add report build target; expand .gitignore; add CI pre-step; update facet/icon JS deps. |
| **API surface**<br>`api/application.go`, `api/catalog_report.go`, `api/rbac_report.go`, `api/scraper_report.go`, `api/global.go`, `api/view.go`, `api/zz_generated.deepcopy.go` | Add CatalogReport types, constructors, AccessItem/AccessLogItem, extend ApplicationSection/UIRef/RBAC shapes, add BuildCommit global, and update deepcopy logic. |
| **Database & queries**<br>`db/applications.go`, `db/rbac.go`, `db/rbac_test.go` | Add GetAccessForUIRef/GetAccessLogsForUIRef; refactor GetRBACAccess signature (support recursion/configIDs); add GroupMemberRow and GetGroupMembersForConfigs; add query-logging. |
| **Catalog report backend**<br>`report/catalog/report.go`, `report/catalog/export.go`, `report/catalog/change_mappings.go`, `report/catalog/settings.go`, `report/catalog/default-settings.yaml`, tests... | Implement report Options, BuildReport, export pipeline, CEL-driven change-mapping, settings with embedded defaults+overrides, audit enrichment, and tests. |
| **Facet rendering infra**<br>`report/facet.go`, `report/embed.go` | Add RenderCLI/RenderHTTP, embedded asset extraction/archive builder, SourceDir override, RenderResult metadata. |
| **Catalog CLI & commands**<br>`cmd/catalog.go`, `cmd/catalog_get.go`, `cmd/catalog_report.go`, `cmd/catalog_tree.go`, `cmd/application.go`, `cmd/view.go`, `cmd/rbac.go` | Add catalog get/report/tree commands; wire --report-source flag to report.SourceDir; refactor catalog query/get/report flows. |
| **Frontend types & React reports**<br>`report/*.tsx`, `report/components/*`, `report/kitchen-sink/*` | Large addition/refactor of React report pages/components (CatalogReport, Findings, RBAC matrix, KitchenSink) and many UI components; reorganize PDF/Document layout and remove legacy RBACReport page. |
| **Report assets & fixtures**<br>`report/testdata/kitchen-sink.yaml`, `report/kitchen-sink-data.ts`, `report/build-kitchen-sink.ts`, `report/sample-findings.json`, `report/finding-schema.json` | Add comprehensive kitchen-sink fixture, generator script, sample findings JSON and schema; expose mission-control data as exportable JSON. |
| **Consumers & rendering callers**<br>`application/render_facet.go`, `rbac_report/render_facet.go`, `views/render_facet.go`, `views/render_facet_test.go` | Replace subprocess-based facet invocation with report.RenderCLI/RenderHTTP and update callers/tests to use shared rendering API. |
| **Artifacts & blobs**<br>`artifacts/controllers.go` | Switch artifact download to blobs API streaming; add /list/config_change route handling. |
| **Auth / OIDC / storage / middleware**<br>`auth/*`, `tests/e2e/oidc/*`, `auth/oidc/models.go`, `auth/oidc/storage.go`, `auth/oidcclient/oidcclient.go` | Remove certain Kratos redirect/callback interfaces; switch issuer checks to PublicURL; simplify RandomBase64; replace custom StringList with pq.StringArray for scopes and update storage; adjust middleware exclusions and tests. |
| **SDK & HTTP client**<br>`sdk/client.go`, `sdk/client_test.go` | Add Accept header, HTML-response detection with ErrHTMLResponse, centralized decodeJSON; add tests for HTML detection. |
| **CLI helpers & UX**<br>`cmd/context.go`, `cmd/connection_browser.go`, `cmd/connection_test_cmd.go`, `cmd/auth_login.go` | Add context add and ensureAPIBase probe; enhance browser login bearer handling/auto-select; encapsulate API connection tests; simplify auth_login output/flow. |
| **Report components utilities**<br>`report/components/*`, `report/components/change-section-utils.ts` | Add many helper utilities for change classification, typed-change rendering, calendar/backup utilities, RBAC visuals, and shared formatting helpers. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI (catalog report)
    participant Report as Catalog Report Builder
    participant DB as Database
    participant Facet as Facet Renderer
    participant Files as Embedded/Source files

    CLI->>Report: Export(configs, options)
    Report->>DB: Query configs, changes, analyses, access, accessLogs, relationships
    DB-->>Report: Rows (changes/analyses/access/relationships)
    Report->>Report: Build api.CatalogReport (apply CEL mappings, attach artifacts, audit)
    Report->>Files: Resolve SourceDir / embedded assets
    Report->>Facet: RenderCLI or RenderHTTP(reportData, entryFile)
    Facet-->>Report: RenderResult (Data, SrcDir, Entry, DataFile)
    Report-->>CLI: Return bytes / write file
```

```mermaid
sequenceDiagram
    participant User as Browser
    participant Headless as Headless Browser
    participant CLI as CLI (connection/browser login)
    participant TokenStore as Captured Tokens

    CLI->>Headless: Open auth endpoint
    Headless->>Headless: Auto-select account tile (Azure) (optional)
    Headless->>TokenStore: Capture bearer tokens (audiences, scopes)
    TokenStore->>TokenStore: Pick best token by audience & scope (ScopeCount)
    Headless-->>CLI: Return selected token/session
    CLI->>CLI: Save connection with selected bearer token and props
```

Possibly related PRs

Suggested labels

ready, feature, large, api, database, ui, auth, report, cli

Suggested reviewers

  • adityathebe


coderabbitai bot left a comment


Actionable comments posted: 4

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
cmd/catalog.go (1)

53-59: ⚠️ Potential issue | 🟡 Minor

Transaction not rolled back on error paths.

The code calls ctx.DB().Begin() but on the error path (lines 56-58), it exits without calling Rollback(). While Commit() is called on success, the transaction is left open on failure. Consider using a deferred rollback pattern or removing the explicit transaction if not needed.

🔧 Suggested pattern

```diff
-		ctx.DB().Begin()
+		tx := ctx.DB().Begin()
+		defer func() {
+			if r := recover(); r != nil || err != nil {
+				tx.Rollback()
+			}
+		}()
 		response, err = query.SearchResources(ctx, req)
 		if err != nil {
 			logger.Fatalf(err.Error())
 			os.Exit(1)
 		}
-		ctx.DB().Commit()
+		tx.Commit()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog.go` around lines 53 - 59, The transaction started with
ctx.DB().Begin() isn't rolled back on error—if query.SearchResources returns an
error you call logger.Fatalf/os.Exit without calling ctx.DB().Rollback();
replace this with the standard pattern: after Begin() defer a rollback (e.g.,
defer ctx.DB().Rollback()) and only call ctx.DB().Commit() on success, or ensure
you explicitly call ctx.DB().Rollback() in the error branch before exiting;
update the block around ctx.DB().Begin(), query.SearchResources,
ctx.DB().Commit(), logger.Fatalf and os.Exit accordingly.
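The deferred-rollback pattern the review recommends can be sketched in isolation. Note that `Tx`, `search`, and `runInTx` below are hypothetical stand-ins for the project's transaction handle and `query.SearchResources`, not its actual API; the point is only that a deferred `Rollback` becomes a no-op after a successful `Commit`:

```go
package main

import (
	"errors"
	"fmt"
)

// Tx is a hypothetical stand-in for a gorm/sql transaction handle.
type Tx struct{ committed, rolledBack bool }

func (t *Tx) Commit() error { t.committed = true; return nil }

// Rollback is safe to defer unconditionally: it only takes effect
// if Commit has not already succeeded.
func (t *Tx) Rollback() error {
	if !t.committed {
		t.rolledBack = true
	}
	return nil
}

// search simulates query.SearchResources; flip fail to exercise the error path.
func search(fail bool) error {
	if fail {
		return errors.New("search failed")
	}
	return nil
}

// runInTx shows the pattern: defer Rollback right after Begin,
// and call Commit only on the success path.
func runInTx(tx *Tx, fail bool) error {
	defer tx.Rollback()
	if err := search(fail); err != nil {
		return err // the deferred Rollback cleans up
	}
	return tx.Commit()
}

func main() {
	ok, bad := &Tx{}, &Tx{}
	_ = runInTx(ok, false)
	_ = runInTx(bad, true)
	fmt.Println(ok.committed, ok.rolledBack)   // true false
	fmt.Println(bad.committed, bad.rolledBack) // false true
}
```

This also avoids the `os.Exit` hazard in the original snippet: `logger.Fatalf` exits before any deferred cleanup can run, so returning the error instead lets the defer fire.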
cmd/context.go (1)

12-12: ⚠️ Potential issue | 🔴 Critical

Pipeline failure: missing go.sum entry.

The CI build is failing because github.com/charmbracelet/bubbles is missing from go.sum. Run go mod tidy to update the module checksums.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/context.go` at line 12, CI is failing due to a missing go.sum entry for
github.com/charmbracelet/bubbles; run `go mod tidy` in the module root to
refresh go.mod/go.sum so the checksum for github.com/charmbracelet/bubbles is
added, then commit the updated go.mod and go.sum (verify imports like
"github.com/charmbracelet/huh" still compile after tidying).
cmd/connection_browser.go (1)

530-563: ⚠️ Potential issue | 🟡 Minor

Duplicate sessionStorage marshaling logic.

The sessionStorage is marshaled twice: once at lines 530-535 as raw JSON, and again at lines 553-563 with origin wrapping. The second block (553-563) overwrites the first, making the first assignment dead code.

🐛 Proposed fix to remove dead code

```diff
-	if len(data.SessionStorage) > 0 {
-		sessionJSON, err := json.Marshal(data.SessionStorage)
-		if err == nil {
-			props["sessionStorage"] = string(sessionJSON)
-		}
-	}
-
 	// Also store cookies as headers for HTTP connection compatibility
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/connection_browser.go` around lines 530 - 563, The first marshaling block
that sets props["sessionStorage"] from data.SessionStorage is redundant and is
overwritten later by the origin-wrapped payload; remove the initial block (the
if len(data.SessionStorage) > 0 { sessionJSON, err :=
json.Marshal(data.SessionStorage) ... props["sessionStorage"] =
string(sessionJSON) }) so only the origin-wrapped payload (which uses
url.Parse(flags.URL), builds payload := map[string]any{"origin": u.Scheme +
"://" + u.Host, "items": data.SessionStorage} and marshals to
props["sessionStorage"]) remains; keep error handling around url.Parse and the
subsequent json.Marshal as-is.
🟠 Major comments (25)
auth/oidc/models.go-18-39 (1)

18-39: ⚠️ Potential issue | 🟠 Major

Use pq.StringArray instead of manual array encoding.

Lines 18-39 implement a simplified PostgreSQL array codec that breaks with real data: a value like foo,bar will round-trip as two separate elements because the implementation does not handle required quoting/escaping per PostgreSQL's array format spec. Additionally, Scan() only accepts string types, but drivers may provide []byte, causing failures. Both lib/pq (already a dependency) and pgx provide proper array codecs—use pq.StringArray or the equivalent from your PostgreSQL driver instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@auth/oidc/models.go` around lines 18 - 39, The custom PostgreSQL array codec
in StringList.Value and (*StringList).Scan is fragile (doesn't handle PG
quoting/escaping and only accepts string types)—replace it with the
driver-provided codec by switching usages of StringList to pq.StringArray (or
the project's pgx equivalent) and remove/stop exporting the manual Value() and
Scan() implementations; where StringList is used in structs or DB bindings, use
pq.StringArray type or convert to/from pq.StringArray when scanning/valuing to
ensure proper encoding and to accept []byte inputs from drivers.
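The corruption the reviewer describes is easy to reproduce with a stdlib-only sketch. `naiveEncode`/`naiveDecode` below are a hypothetical reduction of the flagged codec (not the code in `auth/oidc/models.go`): without quoting or escaping, an element that itself contains a comma is indistinguishable from two elements after the round trip:

```go
package main

import (
	"fmt"
	"strings"
)

// naiveEncode mimics a hand-rolled PostgreSQL array literal:
// it joins elements with commas and adds braces, with no quoting.
func naiveEncode(items []string) string {
	return "{" + strings.Join(items, ",") + "}"
}

// naiveDecode strips the braces and splits on commas — which cannot
// tell an embedded comma apart from an element separator.
func naiveDecode(s string) []string {
	s = strings.TrimPrefix(strings.TrimSuffix(s, "}"), "{")
	if s == "" {
		return nil
	}
	return strings.Split(s, ",")
}

func main() {
	in := []string{"foo,bar"} // ONE element containing a comma
	out := naiveDecode(naiveEncode(in))
	fmt.Println(len(in), len(out)) // 1 2 — the round trip corrupts the data
}
```

A proper codec such as `pq.StringArray` quotes such elements per PostgreSQL's array format, so the round trip preserves them; the sketch only demonstrates why the manual version cannot.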
artifacts/controllers.go-97-99 (1)

97-99: ⚠️ Potential issue | 🟠 Major

Always force Content-Disposition: attachment from this download endpoint.

Right now the response is only marked as an attachment when filename is passed. For HTML/SVG/text artifacts, browsers can otherwise render the blob inline on the app origin instead of downloading it.

🔐 Proposed fix

```diff
 	c.Response().Header().Set("Content-Type", data.ContentType)
 	if data.ContentLength > 0 {
 		c.Response().Header().Set("Content-Length", fmt.Sprintf("%d", data.ContentLength))
 	}
-	if filename := c.QueryParam("filename"); filename != "" {
-		c.Response().Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", filename))
-	}
+	disposition := "attachment"
+	if filename := c.QueryParam("filename"); filename != "" {
+		disposition = fmt.Sprintf("attachment; filename=%q", filename)
+	}
+	c.Response().Header().Set("Content-Disposition", disposition)
 	return c.Stream(http.StatusOK, data.ContentType, data.Content)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@artifacts/controllers.go` around lines 97 - 99, The download handler
currently only sets Content-Disposition to attachment when filename != "", which
lets browsers render HTML/SVG/text inline; change the logic in the controller
(where c.QueryParam("filename") and c.Response().Header().Set are used) to
always set Content-Disposition to "attachment" and, if filename is non-empty,
include the filename parameter (e.g., Content-Disposition: attachment;
filename="..."), otherwise set plain "attachment" so all responses are forced to
download.
.gitignore-1-1 (1)

1-1: ⚠️ Potential issue | 🟠 Major

Revert 7.bin/ to .bin/ to keep built binaries untracked.

Line 1 currently ignores 7.bin/, but the repo still uses .bin/... paths (for example, Line 60 in tests/e2e/oidc/suite_test.go). This can accidentally commit generated binaries.

Suggested fix

```diff
-7.bin/
+.bin/
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore at line 1, The .gitignore currently ignores the wrong directory
name "7.bin/"; update the ignore entry to ".bin/" so built binaries under .bin/
(referenced by tests like tests/e2e/oidc/suite_test.go) remain untracked—replace
the "7.bin/" pattern with ".bin/" in the .gitignore file.
auth/oidcclient/oidcclient.go-55-59 (1)

55-59: ⚠️ Potential issue | 🟠 Major

Silently ignoring rand.Read errors compromises OIDC security.

If rand.Read fails (e.g., entropy exhaustion), the byte slice remains zero-initialized, producing a predictable base64 string. Since state prevents CSRF and nonce prevents token replay, using predictable values creates exploitable vulnerabilities.

Consider panicking on failure (since this is catastrophic) or restoring the error return:

🛡️ Option A: Panic on failure (recommended for CLI)

```diff
 func RandomBase64(n int) string {
 	b := make([]byte, n)
-	_, _ = rand.Read(b)
+	if _, err := rand.Read(b); err != nil {
+		panic("crypto/rand failed: " + err.Error())
+	}
 	return base64.RawURLEncoding.EncodeToString(b)
 }
```

🛡️ Option B: Restore error return

```diff
-func RandomBase64(n int) string {
+func RandomBase64(n int) (string, error) {
 	b := make([]byte, n)
-	_, _ = rand.Read(b)
-	return base64.RawURLEncoding.EncodeToString(b)
+	if _, err := rand.Read(b); err != nil {
+		return "", err
+	}
+	return base64.RawURLEncoding.EncodeToString(b), nil
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@auth/oidcclient/oidcclient.go` around lines 55 - 59, RandomBase64 currently
ignores errors from rand.Read which can yield predictable zeroed output; change
RandomBase64 to handle the error instead of discarding it: either (A) for
CLI/critical paths, detect the error from rand.Read and panic with a clear
message inside RandomBase64, or (B) make RandomBase64 return (string, error),
propagate the error to callers that generate OIDC state/nonce (update their
signatures/usages), and ensure any caller handles or fails fast on the error;
update all references to RandomBase64 accordingly (search for RandomBase64
usages) so state/nonce generation never proceeds on a random-source failure.
rbac_report/report.go-195-223 (1)

195-223: ⚠️ Potential issue | 🟠 Major

Don't derive the report subject from configItems[0].

This makes Subject and Parents depend on whichever config ends up in configItems[0], which is wrong for recursive or multi-resource reports. Please derive the subject from a deterministic root/common ancestor, and return an error instead of silently breaking when parent loading fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rbac_report/report.go` around lines 195 - 223, The function
resolveSubjectAndParents currently uses configItems[0] as the subject which is
non-deterministic; change its API to return (*models.ConfigItem,
[]models.ConfigItem, error) and implement deterministic subject selection by
computing the common ancestor (root) for all entries in configItems: for each
config in configItems, walk its parent chain (using configMap and
query.GetConfigsByIDs as needed), returning an error if any parent load fails
(do not break). After collecting ancestor chains for every config, find the
deepest common ancestor across those chains and set that as the subject (and
build its parent list from root down to the parent of the subject); use
resolveSubjectAndParents, configItems, configMap, and query.GetConfigsByIDs
identifiers to locate code to modify.
report/MatrixDemo.tsx-38-38 (1)

38-38: ⚠️ Potential issue | 🟠 Major

Use Document component for pageSize prop instead of Page.

The pageSize prop should be set on the Document component (the page root), not on individual Page components. Based on the pattern used in report files like RBACMatrixReport.tsx, the correct structure is:

```tsx
<Document pageSize="a4" margins={...}>
  <Page margins={...}>
    {content}
  </Page>
</Document>
```

Currently, MatrixDemo.tsx uses <Page pageSize="a4" margins={...}> directly, which doesn't follow the established pattern in the codebase. Add a Document wrapper and move pageSize to it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/MatrixDemo.tsx` at line 38, The Page component in MatrixDemo.tsx is
receiving the pageSize prop incorrectly; wrap the existing Page in a Document
component and move the pageSize and global margins there (e.g., change from
<Page pageSize="a4" margins={{...}}> to <Document pageSize="a4"
margins={{...}}><Page margins={{...}}>{...}</Page></Document>), keeping the
inner Page for content and only passing margins to it; update imports if needed
to ensure Document is imported and remove pageSize from the Page component.
report/KitchenSink.tsx-24-24 (1)

24-24: ⚠️ Potential issue | 🟠 Major

Avoid wall-clock timestamps in the kitchen-sink artifact.

Lines 24 and 45 bake the current time into the rendered document. That makes regenerated fixtures/golden data churn on every run, and the cover/footer timestamps can even differ within the same render. Please thread a single deterministic generatedAt value in here once and reuse it everywhere on the page.

Also applies to: 45-45

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/KitchenSink.tsx` at line 24, The footer is currently created with a
fresh wall-clock timestamp inline (const footer = <PageFooterComponent
generatedAt={new Date().toISOString()} />) and the same pattern is repeated
elsewhere, causing nondeterministic churn; instead compute a single
deterministic generatedAt once at the top of the KitchenSink component (or
accept it from props) and pass that same generatedAt value to every
PageFooterComponent/cover/footer instance (referenced by the footer variable and
PageFooterComponent) so the page reuses one stable timestamp for the whole
render.
report/components/DeploymentChanges.tsx-50-54 (1)

50-54: ⚠️ Potential issue | 🟠 Major

Keep the stat cards on the same classification path as the table.

Line 106 already falls back to spec, but Lines 50-54 only count explicit 'spec'. Any uncategorized deployment change will therefore render as Spec in the table while being omitted from the spec summary card.

♻️ Suggested fix

```diff
-  const counts = {
-    scale: relevant.filter((change) => classifyDeploymentChange(change) === 'scale').length,
-    policy: relevant.filter((change) => classifyDeploymentChange(change) === 'policy').length,
-    spec: relevant.filter((change) => classifyDeploymentChange(change) === 'spec').length,
-  };
+  const categorized = relevant.map((change) => ({
+    change,
+    category: classifyDeploymentChange(change) ?? 'spec',
+  }));
+
+  const counts = categorized.reduce(
+    (acc, { category }) => {
+      acc[category] += 1;
+      return acc;
+    },
+    { scale: 0, policy: 0, spec: 0 },
+  );
```

Then build the table rows from categorized so both views share the same category value.

Also applies to: 106-112

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/DeploymentChanges.tsx` around lines 50 - 54, The summary
cards currently count only explicit 'spec' results by re-filtering relevant with
classifyDeploymentChange, which skips uncategorized entries that later default
to 'spec' in the table; instead derive a categorized array once (as in the
existing categorized variable) and compute counts from that normalized category
values (using classifyDeploymentChange's result or its default-to-'spec'
mapping) so the counts object (scale, policy, spec) and the table rows are built
from the same source; update the logic around counts, the categorize usage, and
any places that call classifyDeploymentChange twice (e.g., where counts and
table rows are computed) to read from the single categorized mapping so both
views share the same category values.
report/scraper/scraper.go-34-35 (1)

34-35: ⚠️ Potential issue | 🟠 Major

Differentiate DB errors: "not found" only for missing records.

First(...).Error returns gorm.ErrRecordNotFound when no row matches, but also returns connection, query, and other failures. Treating all as "scraper not found" hides real DB incidents and misleads debugging during outages. Check for the specific error type and wrap other failures as a load error.

🩹 Suggested fix

```diff
+import (
+	"errors"
+	...
+	"gorm.io/gorm"
+)
+
 func BuildScraperInfo(ctx context.Context, scraperID uuid.UUID) (*api.ScraperInfo, error) {
 	var scraper models.ConfigScraper
 	if err := ctx.DB().Where("id = ?", scraperID).First(&scraper).Error; err != nil {
-		return nil, ctx.Oops().Wrapf(err, "scraper %s not found", scraperID)
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			return nil, ctx.Oops().Wrapf(err, "scraper %s not found", scraperID)
+		}
+		return nil, ctx.Oops().Wrapf(err, "load scraper %s", scraperID)
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/scraper/scraper.go` around lines 34 - 35, The DB call using
ctx.DB().Where(...).First(&scraper).Error must distinguish
gorm.ErrRecordNotFound from other DB errors: after capturing err, use
errors.Is(err, gorm.ErrRecordNotFound) to return ctx.Oops().Wrapf(err, "scraper
%s not found") only for missing rows; for any other err return a different
wrapped error like ctx.Oops().Wrapf(err, "failed loading scraper %s") so
connection/query failures are not misreported as "not found". Ensure the code
references the existing call to First(&scraper).Error and import/use errors and
gorm.ErrRecordNotFound.
report/components/BackupActivityCalendar.tsx-40-45 (1)

40-45: ⚠️ Potential issue | 🟠 Major

Fix date normalization and remove locale/timezone dependencies.

Line 40 buckets entries using raw YYYY-MM-DD slice, but lines 72–79 and 104 convert to Date objects using the browser's local timezone, then reconstruct the calendar key. For UTC-offset timestamps (e.g., 2024-12-26T03:00:00Z), this mismatch causes lookups to fail: an entry bucketed as 2024-12-26 may render under the wrong calendar date if timezone conversion shifts the date. Additionally, line 86 renders the month label with host locale, breaking fixture determinism.

Normalize all date operations to YYYY-MM-DD string comparisons and use fixed en-US with UTC timezone for rendering:

Suggested fix

```diff
+function dayKey(value: string) {
+  return value.slice(0, 10);
+}
+
 function aggregateEntries(entries: BackupCalendarEntry[]): AggregatedEntry[] {
   const byDay = new Map<string, AggregatedEntry>();

   for (const entry of entries) {
-    const key = entry.date.slice(0, 10);
+    const key = dayKey(entry.date);
     const current = byDay.get(key);
     ...
   }

   return [...byDay.values()];
 }

 export default function BackupActivityCalendar({ entries }: Props) {
   if (!entries.length) {
     return null;
   }

   const aggregated = aggregateEntries(entries);
-  const referenceDate = new Date(aggregated.reduce((latest, entry) => (
-    new Date(entry.date).getTime() > new Date(latest.date).getTime() ? entry : latest
-  )).date);
-
-  const year = referenceDate.getFullYear();
-  const month = referenceDate.getMonth();
+  const referenceKey = aggregated.reduce((latest, entry) => {
+    const key = dayKey(entry.date);
+    return key > latest ? key : latest;
+  }, dayKey(aggregated[0].date));
+
+  const [yearString, monthString] = referenceKey.split('-');
+  const year = Number(yearString);
+  const month = Number(monthString) - 1;
   const daysInMonth = new Date(year, month + 1, 0).getDate();
   const firstDow = new Date(year, month, 1).getDay();
   ...
-  const monthLabel = referenceDate.toLocaleString('default', { month: 'long', year: 'numeric' });
+  const monthLabel = new Date(Date.UTC(year, month, 1)).toLocaleString('en-US', {
+    month: 'long',
+    year: 'numeric',
+    timeZone: 'UTC',
+  });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupActivityCalendar.tsx` around lines 40 - 45, The code
currently mixes raw string slices and local-time Date conversions causing
mismatched keys; update BackupActivityCalendar to normalize all dates to a
canonical "YYYY-MM-DD" UTC string and use that everywhere: when creating the
bucket key (replace entry.date.slice(0,10) with a UTC-normalized YYYY-MM-DD
formatter), when building lookup keys later (ensure the same formatter is used
instead of new Date(...).toISOString().slice(0,10) that may be affected by local
timezone), and when generating the month label use Intl.DateTimeFormat('en-US',
{ timeZone: 'UTC', month: 'long', year: 'numeric' }) or equivalent UTC formatter
so rendering is deterministic; ensure all references to key, byDay, and any
date-to-key conversions call the single normalization helper to keep lookups
consistent.
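The single-normalization-helper idea translates directly. The real component is TypeScript, but the same invariant can be sketched in Go with a hypothetical `dayKey` helper: parse the timestamp, convert to UTC, and render the bucket key in a fixed zone so the key is identical regardless of the host timezone:

```go
package main

import (
	"fmt"
	"time"
)

// dayKey normalizes any RFC 3339 timestamp to a UTC YYYY-MM-DD bucket
// key. Because the key is derived in UTC, bucketing and later lookups
// agree even when the input carries a non-zero offset.
func dayKey(value string) (string, error) {
	t, err := time.Parse(time.RFC3339, value)
	if err != nil {
		return "", err
	}
	return t.UTC().Format("2006-01-02"), nil
}

func main() {
	// In a UTC-5 browser, new Date("2024-12-26T03:00:00Z") renders as
	// Dec 25 local time; the UTC key below stays Dec 26 everywhere.
	key, _ := dayKey("2024-12-26T03:00:00Z")
	fmt.Println(key) // 2024-12-26
	key, _ = dayKey("2024-12-26T03:00:00-05:00") // 08:00 UTC
	fmt.Println(key) // 2024-12-26
}
```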
report/scraper/scraper.go-48-58 (1)

48-58: ⚠️ Potential issue | 🟠 Major

Use time.RFC3339 for timestamp formatting to include timezone offset.

The custom format string "2006-01-02T15:04:05Z" appends a literal Z and omits the timezone offset. While database timestamps are typically UTC, using time.RFC3339 is the standard practice in this codebase (see report/catalog/report.go, rbac_report/export.go, and db/middleware_test.go). This ensures proper RFC 3339 compliance with offset information.

Suggested fix

```diff
+import "time"
+
 	info := &api.ScraperInfo{
 		ID:        scraper.ID.String(),
 		Name:      scraper.Name,
 		Namespace: scraper.Namespace,
 		Source:    scraper.Source,
-		CreatedAt: scraper.CreatedAt.Format("2006-01-02T15:04:05Z"),
+		CreatedAt: scraper.CreatedAt.Format(time.RFC3339),
 		SpecHash:  specSHA256(scraper.Spec),
 		Types:     types,
 	}
 	...
 	if scraper.UpdatedAt != nil {
-		info.UpdatedAt = scraper.UpdatedAt.Format("2006-01-02T15:04:05Z")
+		info.UpdatedAt = scraper.UpdatedAt.Format(time.RFC3339)
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/scraper/scraper.go` around lines 48 - 58, Replace the hard-coded
timestamp format "2006-01-02T15:04:05Z" with the standard time.RFC3339 when
formatting scraper.CreatedAt and scraper.UpdatedAt (the places setting
CreatedAt: scraper.CreatedAt.Format(...) and info.UpdatedAt =
scraper.UpdatedAt.Format(...)); ensure the time package is imported if not
already and update both occurrences so timestamps include proper timezone offset
and follow RFC3339 like the rest of the codebase.
report/components/ConfigInsightsSection.tsx-38-43 (1)

38-43: ⚠️ Potential issue | 🟠 Major

Don't drop analyses outside the hard-coded type list.

InsightEntry already handles a missing analysisType, but the section only groups and renders the fixed ANALYSIS_TYPES. Any analysis with no type, or with a newly added type, will still be counted in the severity cards and then silently disappear from the detailed report. Please add an other bucket or derive the groups from the input.

Also applies to: 89-108

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigInsightsSection.tsx` around lines 38 - 43, The report
currently groups and renders only the fixed ANALYSIS_TYPES so any analysis with
a missing or new analysisType gets counted but not shown; update the grouping
logic (where ANALYSIS_TYPES is used to build sections and where InsightEntry is
used) to include a fallback bucket (e.g., 'other') or, better, derive groups
from the input analyses array by using analysis.analysisType ?? 'other' as the
group key so all analyses are included in the rendered sections and the severity
cards remain consistent with the detailed report; make the same change where
sections are built (lines ~89-108 area) to ensure unknown/missing types are
displayed.
report/ViewReport.tsx-34-47 (1)

34-47: ⚠️ Potential issue | 🟠 Major

Guard the empty multi-view case before rendering the cover.

If data.views is [], firstView is undefined and ViewCoverPage will throw on Line 10. Return null or a fallback page before reading viewsList[0].

Proposed fix

```diff
 export default function ViewReportPage({ data }: ViewReportProps) {
   const viewsList = isMultiView(data) ? data.views : [data];
+  if (viewsList.length === 0) {
+    return null;
+  }
   const firstView = viewsList[0];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/ViewReport.tsx` around lines 34 - 47, The component accesses
viewsList[0] (stored in firstView) and passes it to ViewCoverPage without
guarding an empty multi-view array; if isMultiView(data) and data.views is []
this yields undefined and crashes. Fix by checking the resolved viewsList length
before reading viewsList[0] — e.g., if viewsList.length === 0 return null or
render a fallback Page/placeholder — and only compute/pass firstView to
ViewCoverPage when viewsList[0] is defined; update the logic around isMultiView,
viewsList, firstView and the JSX that renders ViewCoverPage accordingly.
views/render_facet.go-109-122 (1)

109-122: ⚠️ Potential issue | 🟠 Major

timestampURL is resolved and then discarded.

resolveFacetConnection still pulls opts.TimestampURL / connection timestampUrl, but renderFacetWithData binds it to _ and never forwards it to either renderer. That turns the new timestamping option into a no-op.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@views/render_facet.go` around lines 109 - 122, resolveFacetConnection is
returning a timestampURL but the code currently binds it to _ so the
timestamping option is ignored; update the call to resolveFacetConnection to
capture the timestampURL (e.g., baseURL, token, timestampURL, err :=
resolveFacetConnection(...)) and forward that timestampURL into the renderers:
pass it into report.RenderHTTP and report.RenderCLI (or their appropriate
parameter positions / overloads) from renderFacetWithData so the resolved
timestamp URL is used by both rendering paths.
report/components/BackupsSection.tsx-14-29 (1)

14-29: ⚠️ Potential issue | 🟠 Major

Align the failed filter with the new status taxonomy.

BACKUP_TAG_MAPPING now distinguishes running/queued states from actual failures, but failedRows still uses status !== 'success'. That makes in-progress backups show up under Failed Backups, and it also inflates the failed stat above.

Proposed fix
+const isFailedStatus = (status?: string) => String(status).toLowerCase().includes('fail');
+
 export default function BackupsSection({ backups, restores }: Props) {
   const successCount = backups.filter((b) => b.status === 'success').length;
-  const failedCount = backups.filter((b) => b.status !== 'success').length;
+  const failedCount = backups.filter((b) => isFailedStatus(b.status)).length;
   const calendarEntries = backups.map((backup) => ({
     date: backup.date,
     status: (backup.status === 'success' ? 'success' : backup.status === 'failed' ? 'failed' : 'warning') as BackupCalendarStatus,
     label: backup.size || undefined,
   }));
 
-  const failedRows = backups.filter((b) => b.status !== 'success');
+  const failedRows = backups.filter((b) => isFailedStatus(b.status));

Also applies to: 41-41

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupsSection.tsx` around lines 14 - 29, The failed-count
logic is using a broad status !== 'success' which pulls running/queued items
into "Failed Backups"; update the filter used by failedRows to match the failure
criteria in BACKUP_TAG_MAPPING (i.e., normalize the status string and treat a
row as failed only if the normalized status includes 'fail' or otherwise matches
the mapping's failure conditions), or refactor failedRows to reuse the same
normalization/matching logic as BACKUP_TAG_MAPPING to ensure running/queued
states are excluded from failures.
report/catalog/report_test.go-33-71 (1)

33-71: ⚠️ Potential issue | 🟠 Major

These date-range specs don't hit the production path.

Each example builds api.CatalogReport{From: ...} inline and then round-trips it through time.Parse, so the tests will still pass if the real report-construction code stops setting From correctly. Please drive the actual builder/helper under test instead of asserting on a hand-built struct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/report_test.go` around lines 33 - 71, Tests currently create
api.CatalogReport inline and then parse report.From, so they don't exercise the
real report builder; update each spec to call the production builder/path that
generates a CatalogReport from Options (use Options.WithDefaults() as the source
of truth) instead of constructing api.CatalogReport{From: ...} manually, then
assert on the returned report.From (and parsed time) so the tests validate the
actual builder logic for From; apply this change to the three tests that
reference Options, Options.WithDefaults(), and api.CatalogReport.From.
rbac_report/render_facet.go-34-36 (1)

34-36: 🛠️ Refactor suggestion | 🟠 Major

Wrap renderer failures with ctx.Oops().

Returning the raw RenderCLI error here drops the structured context the rest of this Go path expects.

♻️ Proposed fix
 	result, err := report.RenderCLI(initSlices(r), format, entryFile)
 	if err != nil {
-		return nil, err
+		return nil, ctx.Oops().Wrapf(err, "failed to render RBAC report as %s", format)
 	}

As per coding guidelines, For wrapping internal errors, use ctx.Oops().Wrap(err) or ctx.Oops().Wrapf(err, "context").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rbac_report/render_facet.go` around lines 34 - 36, The call to
report.RenderCLI in render_facet.go currently returns the raw error which loses
structured context; change the error return to wrap the failure with the request
context by calling ctx.Oops().Wrap(err) or ctx.Oops().Wrapf(err, "rendering
facet with format %s", format) so that failures from
report.RenderCLI(initSlices(r), format, entryFile) are wrapped before being
returned; locate the code around the RenderCLI call and replace the plain
"return nil, err" with the appropriate ctx.Oops().Wrap/Wrapf usage.
report/components/RBACMatrixSection.tsx-42-46 (1)

42-46: ⚠️ Potential issue | 🟠 Major

Don't classify missing login history as >30d stale.

lastSignedInAt == null currently maps to stale30d, so accounts with no recorded login get the same visual treatment as the stalest bucket.

🐛 Proposed fix
 function staleColor(lastSignedInAt?: string | null): string | null {
   const days = loginAgeDays(lastSignedInAt);
-  if (days === null || days > 30) return STALE_COLORS.stale30d;
+  if (days === null) return null;
+  if (days > 30) return STALE_COLORS.stale30d;
   if (days > 7) return STALE_COLORS.stale7d;
   return null;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/RBACMatrixSection.tsx` around lines 42 - 46, The function
staleColor currently treats a missing login (loginAgeDays returning null) as
stale30d due to the condition order; update staleColor to handle days === null
first and return null for no login history, then check thresholds (days > 30 =>
STALE_COLORS.stale30d, days > 7 => STALE_COLORS.stale7d). Modify the staleColor
function (and keep using loginAgeDays and STALE_COLORS) so that null means "no
color" instead of being classified into the >30d bucket.
cmd/catalog_report.go-68-75 (1)

68-75: ⚠️ Potential issue | 🟠 Major

Return option-loading errors instead of fataling or silently continuing.

Invalid --since currently disables time filtering, while bad settings call logger.Fatalf, which bypasses normal RunE cleanup/error handling. Have buildCatalogReportOptions() return (catalog.Options, error) and fail before duty.Start(...).

🛠️ Proposed fix
-		ctx, stop, err := duty.Start("mission-control", duty.ClientOnly)
+		opts, err := buildCatalogReportOptions()
+		if err != nil {
+			return err
+		}
+
+		ctx, stop, err := duty.Start("mission-control", duty.ClientOnly)
 		if err != nil {
 			return err
 		}
 		shutdown.AddHookWithPriority("database", shutdown.PriorityCritical, stop)
 		shutdown.WaitForSignal()
-
-		opts := buildCatalogReportOptions()
-func buildCatalogReportOptions() catalog.Options {
+func buildCatalogReportOptions() (catalog.Options, error) {
 	opts := catalog.Options{
 		Title:           catalogReportTitle,
 		Recursive:       catalogReportRecursive,
 		GroupBy:         catalogReportGroupBy,
@@
 	}
 
 	if catalogReportSince != "" {
-		if d, err := duration.ParseDuration(catalogReportSince); err == nil {
-			opts.Since = time.Duration(d)
-		}
+		d, err := duration.ParseDuration(catalogReportSince)
+		if err != nil {
+			return catalog.Options{}, fmt.Errorf("invalid --since: %w", err)
+		}
+		opts.Since = time.Duration(d)
 	}
 
 	settings, settingsSource, err := catalog.ResolveSettings(catalogReportSettings)
 	if err != nil {
-		logger.Fatalf("failed to load settings: %v", err)
+		return catalog.Options{}, fmt.Errorf("failed to load settings: %w", err)
 	}
 	opts.Settings = settings
 	opts.SettingsPath = settingsSource
 
-	return opts
+	return opts, nil
 }

Also applies to: 127-160

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_report.go` around lines 68 - 75, buildCatalogReportOptions
currently handles bad CLI/flag input by calling logger.Fatalf (and time-parsing
errors silently disable filtering) and is invoked after duty.Start; change
buildCatalogReportOptions to return (catalog.Options, error), validate/parse
flags (including --since) there and return a descriptive error on bad input
instead of fataling, then call it before duty.Start in cmd/catalog_report.go so
you can return the error from RunE; update call sites and error handling to
propagate the error rather than exiting the process (references:
buildCatalogReportOptions(), duty.Start()).
db/applications.go-590-595 (1)

590-595: ⚠️ Potential issue | 🟠 Major

Use the repo duration parser for these filters instead of the standard library.

time.ParseDuration rejects 30d, 2w, 1y style values, so stale, from, and to silently no-op for the formats this repo already exposes elsewhere. Replace with github.com/flanksource/commons/duration.ParseDuration, return an invalid-filter error on parse failure, and base the cutoff on duty.Now() per coding guidelines.

🛠️ Proposed fix
 	var staleCutoff time.Time
 	if filters.Stale != "" {
-		if d, err := time.ParseDuration(filters.Stale); err == nil {
-			staleCutoff = time.Now().Add(-d)
-		}
+		d, err := durationutils.ParseDuration(filters.Stale)
+		if err != nil {
+			return nil, ctx.Oops().Wrapf(err, "invalid stale filter %q", filters.Stale)
+		}
+		staleCutoff = duty.Now().Add(-time.Duration(d))
 	}
 	if filters.From != "" {
-		if d, err := time.ParseDuration(filters.From); err == nil {
-			q = q.Where("config_access_logs.created_at >= ?", time.Now().Add(-d))
-		}
+		d, err := durationutils.ParseDuration(filters.From)
+		if err != nil {
+			return nil, ctx.Oops().Wrapf(err, "invalid from filter %q", filters.From)
+		}
+		q = q.Where("config_access_logs.created_at >= ?", duty.Now().Add(-time.Duration(d)))
 	}
 
 	if filters.To != "" {
-		if d, err := time.ParseDuration(filters.To); err == nil {
-			q = q.Where("config_access_logs.created_at <= ?", time.Now().Add(-d))
-		}
+		d, err := durationutils.ParseDuration(filters.To)
+		if err != nil {
+			return nil, ctx.Oops().Wrapf(err, "invalid to filter %q", filters.To)
+		}
+		q = q.Where("config_access_logs.created_at <= ?", duty.Now().Add(-time.Duration(d)))
 	}

Import: durationutils "github.com/flanksource/commons/duration"

Also applies to: 663-672

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/applications.go` around lines 590 - 595, Replace uses of
time.ParseDuration for filter parsing with the repo's duration parser: import
durationutils "github.com/flanksource/commons/duration" and use
durationutils.ParseDuration to parse filters.Stale (and similarly for
filters.From and filters.To in the other block at lines ~663-672), returning an
invalid-filter error when parse fails instead of silently no-op; compute the
cutoff as duty.Now().Add(-d) (use duty.Now() instead of time.Now()) and assign
to staleCutoff (and the corresponding cutoff variables) so the code accepts
"30d", "2w", "1y" formats and fails fast on bad input.
report/CatalogReport.tsx-165-180 (1)

165-180: ⚠️ Potential issue | 🟠 Major

Honor data.sections when deciding what to render.

The cover page already uses these flags for counts, but the body still renders changes, insights, relationships, access/access logs, and config JSON whenever the data exists. If a caller disables one of those sections, the PDF still includes it and the cover metadata no longer matches the body.

Also applies to: 184-194

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/CatalogReport.tsx` around lines 165 - 180, The body rendering in
CatalogReport.tsx ignores the data.sections flags and always renders changes,
insights, relationships, access, and access logs; update the conditional
rendering around CategorizedChangesSection, ConfigInsightsSection,
ConfigTreeSection/ConfigRelationshipGraph, CatalogAccessSection, and
CatalogAccessLogsSection to check the corresponding flags in data.sections
(e.g., only render CategorizedChangesSection when data.sections.changes is true,
ConfigInsightsSection when data.sections.insights is true, the tree/graph when
data.sections.relationships is true, and access/access logs when
data.sections.access and data.sections.accessLogs are enabled) so the PDF body
matches the cover metadata.
Makefile-67-69 (1)

67-69: ⚠️ Potential issue | 🟠 Major

Create ./.bin before ci-test builds into it.

Line 68 writes the binary to ./.bin/$(NAME), but ci-test doesn't depend on the existing .bin target. On a clean checkout this target can fail before Ginkgo starts.

📦 Minimal fix
-ci-test: $(TAILWIND_JS)
+ci-test: $(TAILWIND_JS) .bin
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 67 - 69, The ci-test Makefile target writes the binary
to ./.bin/$(NAME) but doesn't ensure that directory exists; update the ci-test
target (or add a dependency) so ./.bin is created before building the binary —
e.g., make ci-test depend on a .bin target or run mkdir -p ./.bin as a
prerequisite step before the go build that produces ./.bin/$(NAME), referencing
the ci-test target and the $(NAME) output location.
report/CatalogReport.tsx-196-196 (1)

196-196: ⚠️ Potential issue | 🟠 Major

Feed the appendix from the same change collections the report body uses.

Line 196 only flattens data.entries[].changes. For non-grouped reports the rendered changes come from data.changes, and grouped reports can also populate data.configGroups[].changes, so artifact attachments disappear from the appendix even though their parent changes were rendered above. The kitchen-sink fixture added in this PR already hits this case: artifacts live under catalogReport.changes while entries[].changes are empty.

📎 Suggested aggregation
+ const appendixChanges = [
+   ...(data.changes ?? []),
+   ...(data.entries ?? []).flatMap((entry) => entry.changes ?? []),
+   ...(data.configGroups ?? []).flatMap((group) => group.changes ?? []),
+ ];
...
-        <ArtifactAppendix changes={(data.entries || []).flatMap((e) => e.changes || [])} />
+        <ArtifactAppendix changes={appendixChanges} />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/CatalogReport.tsx` at line 196, The appendix is only fed from
entries[].changes but the report body may render changes from data.changes and
data.configGroups[].changes as well; update the prop passed to ArtifactAppendix
so it aggregates all change collections used by the report (include
data.changes, (data.entries || []).flatMap(e => e.changes || []), and
(data.configGroups || []).flatMap(g => g.changes || [])) into one flattened
array before passing to ArtifactAppendix so artifacts attached to changes in any
of those locations appear in the appendix.
report/catalog/change_mappings.go-53-76 (1)

53-76: ⚠️ Potential issue | 🟠 Major

Return structured duty errors from mapper setup.

These branches currently collapse both invalid category mappings and internal CEL setup failures into plain fmt.Errorf / raw errors, so callers lose EINVALID classification for bad config and consistent wrapping for internal failures.

🛠️ Suggested error handling
+ dutyAPI "github.com/flanksource/duty/api"
...
-		return nil, err
+		return nil, ctx.Oops().Wrap(err)
...
-			return nil, fmt.Errorf("categoryMappings[%d] filter is required", i)
+			return nil, dutyAPI.Errorf(dutyAPI.EINVALID, "categoryMappings[%d] filter is required", i)
...
-			return nil, fmt.Errorf("categoryMappings[%d] must define category or transform", i)
+			return nil, dutyAPI.Errorf(dutyAPI.EINVALID, "categoryMappings[%d] must define category or transform", i)
...
-			return nil, fmt.Errorf("failed to compile categoryMappings[%d] filter: %w", i, err)
+			return nil, ctx.Oops().Wrapf(err, "failed to compile categoryMappings[%d] filter", i)
...
-				return nil, fmt.Errorf("failed to compile categoryMappings[%d] transform: %w", i, err)
+				return nil, ctx.Oops().Wrapf(err, "failed to compile categoryMappings[%d] transform", i)
As per coding guidelines: "For validation errors with no underlying error, use `dutyAPI.Errorf(dutyAPI.EINVALID, "message")`." and "For wrapping internal errors, use `ctx.Oops().Wrap(err)` or `ctx.Oops().Wrapf(err, "context")`."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/change_mappings.go` around lines 53 - 76, Validation and
internal errors in change mapping setup are being returned as plain fmt.Errorf;
update the branches in the loop that validate mapping.Filter and
mapping.Category/Transform to return dutyAPI.Errorf(dutyAPI.EINVALID, "...") for
missing/invalid mapping data, and change CEL compilation failures (calls to
compileChangeMappingProgram when creating filter and transform) to be wrapped
using ctx.Oops().Wrapf(err, "failed to compile categoryMappings[%d] filter") or
ctx.Oops().Wrapf(err, "failed to compile categoryMappings[%d] transform")
respectively so callers can distinguish EINVALID validation errors from internal
CEL setup failures; locate these changes around env, compiledCategoryMapping,
mapping, and the compileChangeMappingProgram calls.
report/catalog/report.go-250-253 (1)

250-253: ⚠️ Potential issue | 🟠 Major

mapper.Apply called without nil check when mapper can be nil.

When buildEntry (lines 183-185) calls buildEntryWithMapper with a nil mapper, the code at line 250 will panic on mapper.Apply(&r). The nil check should guard this call.

🐛 Proposed fix
 			r := newCatalogReportChangeFromRow(c, name, typ, detailsByID[c.ID])
-			if err := mapper.Apply(&r); err != nil {
-				return nil, nil, fmt.Errorf("failed to apply change mappings for %s: %w", c.ID, err)
-			}
+			if mapper != nil {
+				if err := mapper.Apply(&r); err != nil {
+					return nil, nil, fmt.Errorf("failed to apply change mappings for %s: %w", c.ID, err)
+				}
+			}
 			entry.Changes = append(entry.Changes, r)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/report.go` around lines 250 - 253, The code calls
mapper.Apply(&r) without checking for a nil mapper which can panic when
buildEntry passes nil; update buildEntryWithMapper to guard the call by checking
if mapper != nil before invoking mapper.Apply(&r) and only return the mapped
error if Apply runs (otherwise skip mapping and still append r to
entry.Changes); reference symbols: mapper.Apply, buildEntryWithMapper,
buildEntry, entry.Changes, r, and c.ID in the error message.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 15064690-8eaa-4421-8aa2-d02f0221efb7

📥 Commits

Reviewing files that changed from the base of the PR and between 7c339b6 and f6e4a4c.

⛔ Files ignored due to path filters (3)
  • go.sum is excluded by !**/*.sum
  • report/package-lock.json is excluded by !**/package-lock.json
  • tests/oidc_e2e/oidc-signing-key.pem is excluded by !**/*.pem
📒 Files selected for processing (141)
  • .gitignore
  • Makefile
  • api/application.go
  • api/catalog_report.go
  • api/global.go
  • api/rbac_report.go
  • api/scraper_report.go
  • api/v1/playbook_actions.go
  • api/v1/view_types.go
  • api/v1/zz_generated.deepcopy.go
  • api/view.go
  • api/zz_generated.deepcopy.go
  • application/application.go
  • application/render_facet.go
  • artifacts/controllers.go
  • auth/basic.go
  • auth/kratos.go
  • auth/middleware.go
  • auth/oidc/login.go
  • auth/oidc/models.go
  • auth/oidc/routes.go
  • auth/oidc_test.go
  • auth/oidc_validate.go
  • auth/oidcclient/oidcclient.go
  • cmd/application.go
  • cmd/auth_login.go
  • cmd/catalog.go
  • cmd/catalog_get.go
  • cmd/catalog_get_test.go
  • cmd/catalog_report.go
  • cmd/catalog_tree.go
  • cmd/connection_browser.go
  • cmd/connection_test_cmd.go
  • cmd/context.go
  • cmd/rbac.go
  • cmd/view.go
  • config/crds/mission-control.flanksource.com_applications.yaml
  • config/crds/mission-control.flanksource.com_playbooks.yaml
  • config/crds/mission-control.flanksource.com_views.yaml
  • config/schemas/application.schema.json
  • config/schemas/playbook-spec.schema.json
  • config/schemas/playbook.schema.json
  • config/schemas/view.schema.json
  • connection/jwt.go
  • db/applications.go
  • db/rbac.go
  • db/rbac_test.go
  • fixtures/views/ui-ref-examples.yaml
  • go.mod
  • main.go
  • mcp/access.go
  • rbac_report/export.go
  • rbac_report/render_facet.go
  • rbac_report/report.go
  • report/.gitignore
  • report/Application.tsx
  • report/CatalogReport.tsx
  • report/FindingsReport.tsx
  • report/KitchenSink.tsx
  • report/MatrixDemo.tsx
  • report/RBACByUserReport.tsx
  • report/RBACMatrixReport.tsx
  • report/RBACReport.tsx
  • report/ViewReport.tsx
  • report/build-kitchen-sink.ts
  • report/catalog-report-types.ts
  • report/catalog/change_mappings.go
  • report/catalog/change_mappings_test.go
  • report/catalog/default-settings.yaml
  • report/catalog/export.go
  • report/catalog/render_facet.go
  • report/catalog/report.go
  • report/catalog/report_test.go
  • report/catalog/settings.go
  • report/catalog/settings_test.go
  • report/components/ApplicationDetails.tsx
  • report/components/ArtifactAppendix.tsx
  • report/components/AuditPage.tsx
  • report/components/BackupActivityCalendar.tsx
  • report/components/BackupChanges.tsx
  • report/components/BackupsSection.tsx
  • report/components/CatalogAccessLogsSection.tsx
  • report/components/CatalogAccessSection.tsx
  • report/components/CatalogList.tsx
  • report/components/ConfigChangesExamples.tsx
  • report/components/ConfigChangesSection.tsx
  • report/components/ConfigInsightsSection.tsx
  • report/components/ConfigItemCard.tsx
  • report/components/ConfigLink.tsx
  • report/components/ConfigRelationshipGraph.tsx
  • report/components/ConfigTreeSection.tsx
  • report/components/CoverPage.tsx
  • report/components/DeploymentChanges.tsx
  • report/components/DynamicSection.tsx
  • report/components/FindingsSection.tsx
  • report/components/GitRef.tsx
  • report/components/IncidentsSection.tsx
  • report/components/PageFooter.tsx
  • report/components/PageHeader.tsx
  • report/components/RBACChangelogSection.tsx
  • report/components/RBACChanges.tsx
  • report/components/RBACCoverContent.tsx
  • report/components/RBACMatrixSection.tsx
  • report/components/RBACResourceSection.tsx
  • report/components/RBACSummarySection.tsx
  • report/components/RBACUserSection.tsx
  • report/components/ScraperCard.tsx
  • report/components/ViewResultSection.tsx
  • report/components/change-section-utils.ts
  • report/components/rbac-visual.tsx
  • report/components/utils.ts
  • report/config-types.ts
  • report/embed.go
  • report/facet.go
  • report/icons.ts
  • report/kitchen-sink-data.ts
  • report/kitchen-sink.json
  • report/kitchen-sink/ApplicationPage.tsx
  • report/kitchen-sink/CatalogPage.tsx
  • report/kitchen-sink/ChangesPage.tsx
  • report/kitchen-sink/ConfigComponentsPage.tsx
  • report/kitchen-sink/DynamicSectionsPage.tsx
  • report/kitchen-sink/InsightsAndGraphPage.tsx
  • report/kitchen-sink/KitchenSinkTypes.ts
  • report/kitchen-sink/LayoutComponentsPage.tsx
  • report/kitchen-sink/RBACPage.tsx
  • report/kitchen-sink/ViewPage.tsx
  • report/mission-control.ts
  • report/package.json
  • report/rbac-types.ts
  • report/scraper-types.ts
  • report/scraper/scraper.go
  • report/testdata/kitchen-sink.yaml
  • report/types.ts
  • report/view-types.ts
  • sdk/client.go
  • sdk/client_test.go
  • tests/e2e/oidc/oidc_login_test.go
  • tests/e2e/oidc/suite_test.go
  • views/render_facet.go
  • views/render_facet_test.go
💤 Files with no reviewable changes (6)
  • auth/basic.go
  • auth/oidc/routes.go
  • report/components/RBACResourceSection.tsx
  • auth/oidc/login.go
  • auth/kratos.go
  • report/RBACReport.tsx

Comment thread artifacts/controllers.go
Comment on lines +76 to 88
 artifactID, err := uuid.Parse(c.Param("id"))
 if err != nil {
-	return api.WriteError(c, api.Errorf(api.EINVALID, "invalid id(%s). must be a uuid. %v", _id, err))
+	return api.WriteError(c, api.Errorf(api.EINVALID, "invalid id: %v", err))
 }

-artifact, err := db.FindArtifact(ctx, artifactID)
+blobs, err := ctx.Blobs()
 if err != nil {
 	return api.WriteError(c, err)
-} else if artifact == nil {
-	return api.WriteError(c, api.Errorf(api.ENOTFOUND, "artifact(%s) was not found", artifactID))
 }
+defer blobs.Close()

-conn, err := pkgConnection.Get(ctx, artifact.ConnectionID.String())
+data, err := blobs.Read(artifactID)
 if err != nil {

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

printf '\n== Artifact struct ==\n'
ast-grep --lang go --pattern $'type Artifact struct { $$$ }'

printf '\n== Artifact storage-related fields ==\n'
rg -n -C2 --type go '\b(BlobID|Connection|Path|Filename)\b'

printf '\n== Blob store entry points ==\n'
rg -n -C3 --type go '\bBlobs\s*\(\)|\bblobs\.Read\s*\('

Repository: flanksource/mission-control

Length of output: 93


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Finding artifacts/controllers.go ==="
fd -type f "controllers\.go" | grep -i artifact

echo -e "\n=== Finding Artifact model definition ==="
fd -type f -name "*.go" | xargs rg -l "type Artifact struct" | head -5

echo -e "\n=== Checking artifacts/controllers.go exists and its size ==="
if [ -f "artifacts/controllers.go" ]; then
  wc -l artifacts/controllers.go
  echo "File found."
else
  echo "File not found at artifacts/controllers.go. Searching for it..."
  find . -name "controllers.go" -path "*/artifact*" 2>/dev/null || echo "No artifact controllers found"
fi

Repository: flanksource/mission-control

Length of output: 299


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Using git ls-files to find artifact files ==="
git ls-files | grep -i artifact

echo -e "\n=== Looking for models with 'Artifact' struct ==="
git ls-files "*.go" | xargs grep -l "type Artifact struct" 2>/dev/null | head -5

echo -e "\n=== Checking artifacts directory structure ==="
find . -type d -name artifacts 2>/dev/null

echo -e "\n=== Looking for controllers.go in artifacts ==="
find . -path "*/artifacts/controllers.go" 2>/dev/null

Repository: flanksource/mission-control

Length of output: 668


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Reading artifacts/controllers.go (first 100 lines to check structure) ==="
head -100 artifacts/controllers.go

echo -e "\n=== Checking lines 76-88 specifically ==="
sed -n '76,88p' artifacts/controllers.go

echo -e "\n=== Searching for Artifact type definition (broader search) ==="
rg -n "type Artifact" --type go | head -10

Repository: flanksource/mission-control

Length of output: 3492


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Reading artifacts/artifacts.go ==="
cat artifacts/artifacts.go

echo -e "\n=== Searching for Artifact model in duty models ==="
rg -n "type Artifact" . --type go -A 15 | head -50

echo -e "\n=== Searching for blobs.Read usage and implementation ==="
rg -n "\.Read\(" . --type go -B 2 -A 2 | grep -A 4 "blobs"

Repository: flanksource/mission-control

Length of output: 6744


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Searching for ctx.Blobs() definition ==="
rg -n "func.*Blobs\(" . --type go -A 5

echo -e "\n=== Searching for blobs.Read interface/implementation ==="
rg -n "\.Read\(" . --type go -B 3 -A 3 | head -100

echo -e "\n=== Looking for git history of DownloadArtifact ==="
git log -p --follow -S "DownloadArtifact" -- artifacts/controllers.go | head -150

echo -e "\n=== Checking if models.Artifact has Path field ==="
rg -n "PlaybookRunActionID\|ConnectionID\|Path" . --type go | grep -i artifact

Repository: flanksource/mission-control

Length of output: 115


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Searching for models.Artifact definition ==="
rg -n "models\.Artifact" artifacts/ --type go -B 2 -A 10 | head -100

echo -e "\n=== Looking at db/artifacts.go which likely has DB operations ==="
head -150 db/artifacts.go

echo -e "\n=== Checking imports in artifacts.go for clues about Blobs ==="
head -30 artifacts/artifacts.go | grep -E "import|Blobs"

echo -e "\n=== Git log to see what DownloadArtifact looked like before ==="
git log --oneline -20 -- artifacts/controllers.go

Repository: flanksource/mission-control

Length of output: 5040


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Looking for duty/context.Context definition or Blobs method ==="
rg -n "Blobs.*\(\)" . --type go -B 2 -A 8 | head -100

echo -e "\n=== Checking if there's a different blob store abstraction ==="
rg -n "interface.*Blob|type.*Blob" . --type go -B 1 -A 5 | head -80

echo -e "\n=== Recent git commits to understand the change ==="
git log --oneline -10

echo -e "\n=== Check if there's a recent change to DownloadArtifact ==="
git log -p -- artifacts/controllers.go | head -200

Repository: flanksource/mission-control

Length of output: 723


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Check git show for specific commit or recent changes ==="
git show HEAD:artifacts/controllers.go | tail -50

echo -e "\n=== Look for any TODOs or comments about blob storage in controllers ==="
rg -n "TODO|FIXME|NOTE|blob|Blob" artifacts/controllers.go

echo -e "\n=== Search for models.Artifact fields by looking at SaveArtifact ==="
rg -n "SaveArtifact" . --type go -B 2 -A 15 | head -100

echo -e "\n=== Check what duty package provides ==="
grep -r "github.com/flanksource/duty" artifacts/ --include="*.go" | head -20

Repository: flanksource/mission-control

Length of output: 5649


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Check if artifacts.Path is an exported field in models.Artifact ==="
rg -n "Path.*string" . --type go | grep -i artifact | head -20

echo -e "\n=== Look for any adapter/wrapper that might map UUID to Path ==="
rg -n "UUID.*Path|Path.*UUID|artifact.*blob|blob.*artifact" . --type go -i | head -30

echo -e "\n=== Check if there are any tests that show how DownloadArtifact should work ==="
find . -name "*test*.go" -o -name "*_test.go" | xargs rg -l "DownloadArtifact" 2>/dev/null

echo -e "\n=== Check mcp/artifacts.go for any alternative patterns ==="
cat mcp/artifacts.go | head -100

Repository: flanksource/mission-control

Length of output: 1097


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Search external duty package for Artifact model ==="
find ~/.go/pkg -name "*.a" 2>/dev/null | head -5

echo -e "\n=== Look at db/artifacts.go to see FindArtifact which shows the query pattern ==="
cat db/artifacts.go

echo -e "\n=== Compare UploadArtifact pattern (uses DB query + artifact.Path) ==="
sed -n '120,160p' artifacts/artifacts.go

echo -e "\n=== Extract the DownloadArtifact full implementation ==="
sed -n '76,110p' artifacts/controllers.go

Repository: flanksource/mission-control

Length of output: 126


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Full UploadArtifact implementation to understand the pattern ==="
sed -n '127,155p' artifacts/artifacts.go

echo -e "\n=== Full DownloadArtifact implementation ==="
sed -n '73,105p' artifacts/controllers.go

echo -e "\n=== Check if artifact.Path is actually set/stored ==="
rg -n "artifact\.Path\s*=" artifacts/ --type go -B 2 -A 2

Repository: flanksource/mission-control

Length of output: 1942


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Check SaveArtifact from duty package to understand blob storage ==="
rg -n "SaveArtifact|artifacts\.SaveArtifact" artifacts/artifacts.go -B 2 -A 2

echo -e "\n=== Check all uses of fs.Read to understand the key pattern ==="
rg -n "fs\.Read\(" artifacts/ --type go -B 2 -A 2

echo -e "\n=== Check if there's any mapping or bridge between UUID and Path in the codebase ==="
rg -n "blobs\." . --type go -B 1 -A 1

echo -e "\n=== Look for previous version of DownloadArtifact in git history ==="
git log --all --full-history -- artifacts/controllers.go | head -20

Repository: flanksource/mission-control

Length of output: 2386


Fetch the artifact record from the database before reading from blob storage.

The DownloadArtifact handler passes the artifact UUID directly to blobs.Read(), but artifacts are stored by their Path field. Compare this to UploadArtifact (lines 127-145 in artifacts/artifacts.go), which queries the database first to retrieve the artifact record and then uses artifact.Path when accessing the filesystem. This handler will fail to retrieve the correct artifact data.

Add a database lookup:

var artifact models.Artifact
if err := ctx.DB().Where("id = ?", artifactID).First(&artifact).Error; err != nil {
    return api.WriteError(c, err)
}

Then use artifact.Path or adjust the blob storage call accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@artifacts/controllers.go` around lines 76 - 88, The DownloadArtifact handler
currently calls blobs.Read(artifactID) directly but must first fetch the
artifact record from the DB to use its stored path; query the DB via ctx.DB()
into a models.Artifact (e.g., var artifact models.Artifact; ctx.DB().Where("id =
?", artifactID).First(&artifact).Error) and return api.WriteError on failure,
then call blobs.Read(artifact.Path) (and keep the existing blobs.Close defer) so
the blob lookup uses artifact.Path rather than the UUID.

Comment thread report/build-kitchen-sink.ts
Comment thread report/catalog/change_mappings.go
Comment thread report/package.json Outdated
moshloop and others added 27 commits April 15, 2026 15:44
Cover storage CRUD (auth requests, refresh tokens, revocation, cleanup),
JWT validation (valid, wrong key, expired, wrong audience, unknown subject),
and login handler form rendering/credential rejection.
- Export CredentialChecker and PersonLookup interfaces from oidc package
- Decouple MountRoutes from htpasswd: accept CredentialChecker directly
- Add KratosCredentialChecker that validates credentials via Kratos API
- Add LookupKratosPersonByUsername for Kratos identity-to-person mapping
- Add OIDC Bearer token validation to Kratos Session middleware
- Fix unique constraint violation: use *string for AuthRequest.Code
- Clean up OIDC test data in AfterEach to prevent constraint conflicts
- Escape auth_request_id in login form HTML to prevent XSS
- Remove duplicate AuthCmd, add login subcommand to existing Auth
- Use SHA-256 for AES key derivation instead of raw byte copy
- Add 30s timeout to HTTP client in CLI auth login
- Use safe type assertion for JWT audience array
- Fix "notificatino" typo in jobs error message
Set done=true in SetAuthRequestSubject when persisting the authenticated subject.

 Zitadel's /authorize/callback requires AuthRequest.Done() to be true before
 issuing an authorization code. Without this, the flow fails with
 interaction_required even after valid credentials.
Return identity scope-filter functions for the CLI OIDC client instead of nil
from RestrictAdditionalIdTokenScopes and RestrictAdditionalAccessTokenScopes.

Zitadel calls these hooks during token creation. Returning nil caused a nil
function dereference in pkg/op/token.go (CreateJWT/CreateIDToken), leading to
panic during code exchange.
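The fix described above amounts to returning an identity function rather than nil from the scope-restriction hooks. A hedged sketch of the pattern (not Zitadel's exact interface signatures):

```go
package main

import "fmt"

// identityScopeFilter returns the scopes unchanged. Hooks like
// RestrictAdditionalIdTokenScopes must return a non-nil function: the
// provider invokes the returned filter during token creation, so returning
// nil turns into a nil function call and panics.
func identityScopeFilter() func(scopes []string) []string {
	return func(scopes []string) []string { return scopes }
}

func main() {
	filter := identityScopeFilter()
	fmt.Println(filter([]string{"openid", "profile"}))
}
```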
… hashing

Implements a complete OIDC provider for CLI authentication with token security enhancements. Introduces cryptographically secure token storage using SHA-256 hashing for short-lived auth codes and refresh tokens, and separate AES-256 key generation for OIDC state encryption. Adds professionally styled login and callback success pages with Tailwind CSS. Includes CLI tools for PKCE-based OAuth flows and comprehensive security documentation.
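The token-hashing approach described above can be sketched as follows — a minimal illustration of the pattern, not the PR's exact storage code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashToken returns the hex-encoded SHA-256 digest of a token. Storing only
// the digest means a leaked database row cannot be replayed as a bearer
// credential; lookups hash the presented token and compare digests.
func hashToken(token string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(hashToken("refresh-token-123"))
}
```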
Remove /oidc suffix from issuer URL construction so that the provider uses the base public URL (e.g., http://localhost:8080) instead of appending /oidc. Add static asset serving for embedded resources at /oidc/static/.
Adds Tailwind-styled login and success pages with embedded assets, proper token hashing in storage, crypto key management, and refresh token grant support. Extracts OIDC client logic to reusable package and adds browser-based e2e tests.
…uild targets

golangci-lint v2.6.0 was built with Go 1.25 and cannot target Go 1.26.

Add tailwind.min.js download as dependency for ci-test, e2e, and linux
targets so the go:embed in auth/oidc/static/embed.go resolves in CI.
The TAILWIND_JS variable and download rule were defined after the e2e
and ci-test targets. GNU Make 3.81 doesn't resolve forward-referenced
file target prerequisites, causing the curl download to be skipped.
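The ordering fix can be sketched as a Makefile fragment: the variable and file-target rule are declared before any target that lists the file as a prerequisite, so older GNU Make versions resolve it (paths are the ones already used in this PR):

```makefile
# Declare the downloadable asset before the targets that depend on it.
TAILWIND_VERSION ?= 3.4.17
TAILWIND_JS = auth/oidc/static/tailwind.min.js

$(TAILWIND_JS):
	curl -sL "https://cdn.tailwindcss.com/$(TAILWIND_VERSION)" -o $(TAILWIND_JS)

ci-test: $(TAILWIND_JS)
	go test ./...
```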
Move duplicated facetSrcDir, extractReportFiles, and CLI/HTTP rendering
logic from application/, rbac_report/, and views/ into report/facet.go.
Each package now delegates to report.RenderCLI or report.RenderHTTP.
…rendering

Adjust page margins, reduce header/footer sizes, switch RBAC reports to landscape orientation, replace date formatting for consistency, refactor changelog and role source into custom badge components with color coding, remove health color references, add legend section, and update facet dependency to support pdf-lib for improved rendering
Add GetAccessForUIRef and GetAccessLogsForUIRef functions to query RBAC access data with filtering. Refactor GetRBACAccess to use ResourceSelector directly with optional recursive config expansion, replacing inline selector application.
Introduces new `catalog get` command that displays comprehensive config item details including relationships, insights, changes, access logs, and playbook runs. Also adds --report-source flag across export commands to support local TSX report overrides.
Introduces AccessItem, AccessLogItem, AccessUIFilters, and AccessLogsUIFilters types to support rendering access control and audit log sections in application UI. Refactors FacetOptions to use embedded FacetRenderOptions struct and adds PDF rendering support to ViewSpec.
Introduces CatalogReport, CatalogReportPage, KitchenSink, MatrixDemo, and RBACMatrixReport components for rendering comprehensive PDF reports. Includes new shared components for config changes, insights, relationships, access control, and RBAC matrix visualization with identity types, access patterns, and staleness indicators. Renames internal CoverPage functions to CoverContent to avoid confusion with Page component.
Replace defer stop() with shutdown hook registration to ensure proper cleanup ordering. Add non-blocking sends to avoid goroutine hangs. Support array-type JWT audience claims and persist session storage.
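The non-blocking-send pattern mentioned above is the standard `select`-with-`default` idiom: if no receiver or buffer slot is ready, the event is dropped instead of parking the goroutine forever.

```go
package main

import "fmt"

// notify performs a non-blocking send: it returns false (dropping the event)
// when the channel would block, so the caller's goroutine never hangs.
func notify(ch chan<- string, event string) bool {
	select {
	case ch <- event:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan string, 1)
	fmt.Println(notify(ch, "first"))  // buffered slot free: true
	fmt.Println(notify(ch, "second")) // buffer full, dropped: false
}
```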
Add from/to fields to CatalogReport struct, populated from the resolved
sinceTime in BuildReport. Display the date range on the cover page.
moshloop added 14 commits April 15, 2026 15:44
- Create CoverPage component with unified props for title, subtitle,
  icon, breadcrumbs, subjects, tags, stats, dateRange, and children slot
- Create CatalogList component extracting ListTable + EntryDetail from
  CatalogReport
- Update all four report types (Application, ViewReport, CatalogReport,
  RBAC) to use shared CoverPage
Adds `catalog report` and `catalog tree` commands with configurable export formats (JSON, HTML, PDF), time-range filtering, and recursive config item support. Enhances RBAC and catalog reports with additional metadata fields (config class, path, subject, parents). Refactors report rendering to use Document/Header/Footer model for cleaner page layout.
Remove unused imports, consolidate artifact download to use blob storage abstraction, add config_change artifact listing endpoint, switch check/playbook conditional to case statement, and update query builder calls with BaseCatalogSearch wrapper.
Split monolithic KitchenSink.tsx into specialized page modules (Layout, Config, Changes, Dynamic, Insights, Application, RBAC, Catalog, View) with shared type definitions. Introduces smart change section routing that auto-selects RBAC/backup/deployment renderers based on title and change patterns.
…ation to catalog reports

Introduces configurable report settings (filters, thresholds, category mappings) loaded from YAML, an audit page showing build metadata and query performance, and automatic categorization of config changes for specialized rendering of RBAC, backup, and deployment events. The Export API now returns detailed render metadata and supports build commit tracking.
…terface

Changed categoryMappings from Record<string, string[]> to CatalogReportCategoryMapping[] array. This provides a more explicit schema with category, filter, and transform fields instead of implicit type-based routing rules. Updated categorizeChanges to use change.category field directly.
…ation

Introduces FindingsReport component for rendering security audit findings with severity-based styling, evidence tables, and data source tracking. Adds group membership audit section showing user assignments and timeline. Includes comprehensive icon set for attack categories, kill chain phases, and identity types.
…h enhanced change mappings

Reorganizes catalog_report package into report/catalog namespace and redesigns category mappings to use CEL-based filters and transforms instead of simple string lists. Adds scraper report extraction, group membership auditing, change details deserialization, limit/pagination controls, and cycle protection for config tree traversal.

BREAKING CHANGE: move catalog_report package to report/catalog with enhanced change mappings
Adds detection for HTML responses from JSON endpoints (indicating frontend at wrong URL), auto-healing by appending /api and retrying. Includes new context add command and requires specific bearer audience for azure login flow.
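The HTML-response detection above can be illustrated with a small heuristic — a sketch only; the real client may inspect more headers before retrying with `/api` appended:

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeHTML reports whether a response that should have been JSON is
// actually an HTML page — a common symptom of pointing the client at the
// frontend URL instead of the API endpoint.
func looksLikeHTML(contentType string, body []byte) bool {
	if strings.Contains(contentType, "text/html") {
		return true
	}
	trimmed := strings.TrimSpace(string(body))
	return strings.HasPrefix(trimmed, "<!DOCTYPE") || strings.HasPrefix(trimmed, "<html")
}

func main() {
	fmt.Println(looksLikeHTML("text/html; charset=utf-8", nil))
	fmt.Println(looksLikeHTML("application/json", []byte(`{"ok":true}`)))
}
```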
Support filtering bearer tokens by scope in addition to audience. Automatically selects account in azure portal, displays token counts, and validates tokens match both audience and scope requirements before saving connections.
…k test data

Removes $(LOCALBIN)/ prefix from deps tool invocations since deps is now directly in PATH. Adds build target for kitchen-sink.json compiled from schema examples and test fixtures. Updates gitignore with additional test artifacts and generated files.
Upgrade flanksource/duty to v1.0.1260, add charmbracelet TUI components and chromedp CDP protocol, clean up gitignore for test artifacts and tools
Add MCP field to ViewSpec to enable tool registration with LLM clients (Claude, Gemini, Codex). This allows views to be properly exposed and configured when used as tools in language model integrations.

Updates include:
- New MCP field in ViewSpec with kubebuilder validation marker
- Generated deepcopy implementation for MCP field
- Updated CRD schema with MCP metadata description
- Updated JSON schema with MCPMetadata reference
Add .gavel.yaml configuration file to define pre-build steps that run `make build` to ensure dependencies are installed before test execution.
…ed functions

Add defensive nil check before calling issues.Err() in CEL program compilation to prevent potential nil pointer dereference. Remove unused buildEntry wrapper function and buildConfigGroups/sortedConfigIDs helper functions that are no longer needed. Update @flanksource/facet dependency from local file path to published version ^0.1.32 and add iconify icon set dependencies for enhanced icon support.
…tes before OIDC catch-all

Replace if-else chain with switch statement for MFA filter comparison in GetAccessLogsForUIRef for improved readability and maintainability.

Mount OAuth 2.0 Protected Resource Metadata endpoints (RFC 9728) before the OIDC provider catch-all route to ensure they take precedence over /.well-known/* routes.

Update go.sum with additional Google Cloud and dependency entries.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cmd/connection_browser.go (1)

444-468: ⚠️ Potential issue | 🟠 Major

Don't let stale tokens displace the usable token for an audience.

extractBearerTokens keeps only the highest-scope token per audience. MSAL caches often hold multiple tokens for the same audience, and an expired broader token can replace the fresh one here, which then makes hasRequiredToken miss a valid login or causes a dead bearer to be saved later.

Proposed fix
 		jwt := connection.DecodeJWT(secret)
 		if jwt == nil || jwt.Audience == "" {
 			continue
 		}
+		if !jwt.ExpiresAt.IsZero() && time.Until(jwt.ExpiresAt) <= 0 {
+			continue
+		}
 		if jwt.ScopeCount() > scopeCounts[jwt.Audience] {
 			tokens[jwt.Audience] = secret
 			scopeCounts[jwt.Audience] = jwt.ScopeCount()
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/connection_browser.go` around lines 444 - 468, extractBearerTokens
currently picks the token with the largest scope per audience but can let an
expired high-scope token override a fresh lower-scope token; update
extractBearerTokens to check JWT expiration (use the JWT expiry/exp claim via
the decoded jwt object) and prefer non-expired tokens: when encountering a token
for jwt.Audience, if the existing tokens[jwt.Audience] is expired and the new
token is not, replace it regardless of scope; if both are non-expired, keep the
one with greater jwt.ScopeCount(); if both expired, you may keep the one with
greater scope as before. Use the jwt variable and
tokens/jwt.Audience/scopeCounts to implement these rules so stale tokens can't
displace usable ones.
♻️ Duplicate comments (1)
report/build-kitchen-sink.ts (1)

21-21: ⚠️ Potential issue | 🔴 Critical

Fix schemaPath so it resolves inside this repository.

Line 21 currently resolves to ../duty/... from repo root, which points outside the repo and can break schema loading.

Proposed fix
-const schemaPath = resolve(__dirname, '../../duty/schema/openapi/change-types.schema.json');
+const schemaPath = resolve(__dirname, '../duty/schema/openapi/change-types.schema.json');
#!/bin/bash
set -euo pipefail

python - <<'PY'
from pathlib import Path
repo = Path(".").resolve()
script_dir = (repo / "report").resolve()

current = (script_dir / "../../duty/schema/openapi/change-types.schema.json").resolve()
expected = (script_dir / "../duty/schema/openapi/change-types.schema.json").resolve()

def within_repo(p: Path, r: Path) -> bool:
    return p == r or r in p.parents

print(f"repo: {repo}")
print(f"current:  {current}")
print(f"expected: {expected}")
print(f"current_within_repo={within_repo(current, repo)}")
print(f"expected_within_repo={within_repo(expected, repo)}")
print(f"current_exists={current.exists()}")
print(f"expected_exists={expected.exists()}")
PY
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/build-kitchen-sink.ts` at line 21, The schemaPath constant currently
resolves '../../duty/schema/openapi/change-types.schema.json' relative to
__dirname which points outside the repository; update the path so it resolves
inside this repo (use '../duty/schema/openapi/change-types.schema.json' or
otherwise compute from the report directory) by changing the schemaPath
assignment in report/build-kitchen-sink.ts (the schemaPath variable that uses
resolve(__dirname, ...)) to the corrected relative path so the file is located
within the repository and schema loading no longer breaks.
🟡 Minor comments (9)
report/components/ConfigChangesExamples.tsx-54-54 (1)

54-54: ⚠️ Potential issue | 🟡 Minor

Use a user-facing section title instead of a component identifier.

At Line 54, title="ConfigChangesExamples" reads like an internal symbol and is inconsistent with the human-readable titles below.

✏️ Suggested fix
-<Section variant="hero" title="ConfigChangesExamples" size="md">
+<Section variant="hero" title="Examples" size="md">
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigChangesExamples.tsx` at line 54, The Section
component is using an internal identifier as its title
(title="ConfigChangesExamples"); change the title prop on Section to a
human-facing string (for example "Configuration Changes" or "Config Changes —
Examples") so the section header is user-friendly and consistent with other
headings; locate the Section invocation (Section variant="hero" title=...) and
replace the title value accordingly.
report/components/GitRef.tsx-34-43 (1)

34-43: ⚠️ Potential issue | 🟡 Minor

Use the link prop in rendering to preserve source navigation.

link is accepted by GitRefProps and forwarded by GitRefFromSource, but never consumed in GitRef. That drops clickable navigation and leaves dead prop plumbing.

Suggested fix
 export default function GitRef({ url, branch, file, dir, link, size = 'xs' }: GitRefProps) {
   if (!url && !file) return null;
   const textClass = SIZE_CLASSES[size];

   return (
     <span className={`inline-flex items-center gap-[1mm] flex-wrap ${textClass}`}>
       <Icon name="Git" size={size === 'xs' ? 10 : 12} />
-      {url && <Tag className={textClass}>{url}{branch ? ` @ ${branch}` : ''}</Tag>}
+      {url &&
+        (link ? (
+          <a href={link} className="inline-flex">
+            <Tag className={textClass}>{url}{branch ? ` @ ${branch}` : ''}</Tag>
+          </a>
+        ) : (
+          <Tag className={textClass}>{url}{branch ? ` @ ${branch}` : ''}</Tag>
+        ))}
       {dir && <Tag className={`text-gray-400 ${textClass}`}>{dir}/</Tag>}
       {file && <Tag className={`text-gray-400 ${textClass}`}>{file}</Tag>}
     </span>
   );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/GitRef.tsx` around lines 34 - 43, The GitRef component
currently ignores the link prop causing loss of navigation; update GitRef to use
the link prop (from GitRefProps) by rendering the relevant parts
(url/branch/dir/file Tag elements) inside an anchor when link is provided: wrap
either the whole content that represents the source (e.g., the url and branch
Tag or the combined Tags) with an <a> whose href is link and add target="_blank"
and rel="noopener noreferrer" for safety, otherwise keep the existing Tag-only
rendering; ensure Icon remains visible and styling (textClass, text-gray-400) is
preserved and that clickable area includes the Tag(s) representing the source so
GitRefFromSource-provided links work.
Makefile-42-50 (1)

42-50: ⚠️ Potential issue | 🟡 Minor

Remove duplicate Tailwind variable and rule declarations.

Lines 39–50 define the same variables (TAILWIND_VERSION, TAILWIND_JS) and target rule ($(TAILWIND_JS)) twice in succession. GNU Make will use the final recipe and emit override warnings.

Proposed fix
 TAILWIND_VERSION ?= 3.4.17
 TAILWIND_JS = auth/oidc/static/tailwind.min.js
 
 $(TAILWIND_JS):
 	curl -sL "https://cdn.tailwindcss.com/$(TAILWIND_VERSION)" -o $(TAILWIND_JS)
-
-TAILWIND_VERSION ?= 3.4.17
-TAILWIND_JS = auth/oidc/static/tailwind.min.js
-
-$(TAILWIND_JS):
-	curl -sL "https://cdn.tailwindcss.com/$(TAILWIND_VERSION)" -o $(TAILWIND_JS)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 42 - 50, Remove the duplicated TAILWIND_VERSION,
TAILWIND_JS and $(TAILWIND_JS) target block: keep a single declaration of
TAILWIND_VERSION (use the existing default assignment TAILWIND_VERSION ?=
3.4.17), a single assignment for TAILWIND_JS, and one recipe for the
$(TAILWIND_JS) target that downloads the file; delete the repeated copy of those
three lines so Make doesn’t emit override warnings and the recipe/variables are
defined only once.
cmd/catalog_tree.go-136-140 (1)

136-140: ⚠️ Potential issue | 🟡 Minor

Validate --direction before calling query.ConfigTree.

The flag help documents all, incoming, and outgoing, but any string is accepted and cast through. Typos will currently fall through to the query layer and are likely to fail ambiguously or return confusing results.

Also applies to: 149-150
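A minimal validation sketch, assuming a `--direction` string flag as described in the comment (names mirror the review, not necessarily the final code):

```go
package main

import "fmt"

// validateDirection rejects anything outside the documented values before
// the string is cast into the query layer, so typos fail fast with a clear
// message instead of producing ambiguous query results.
func validateDirection(dir string) error {
	switch dir {
	case "all", "incoming", "outgoing":
		return nil
	default:
		return fmt.Errorf("invalid --direction %q (expected one of: all, incoming, outgoing)", dir)
	}
}

func main() {
	fmt.Println(validateDirection("incoming"))
	fmt.Println(validateDirection("inward"))
}
```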

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_tree.go` around lines 136 - 140, Validate the --direction flag
value before calling query.ConfigTree: check the treeDirection string (used to
build query.ConfigTreeOptions) against allowed values "all", "incoming", and
"outgoing" (or the enum/consts you use for RelationDirection) and return a clear
error if it's invalid; update the parsing/validation around the treeDirection
variable prior to the call to query.ConfigTree (and the similar use at lines
149-150) so only accepted values are mapped to
query.RelationDirection(treeDirection) and anything else fails fast with a
helpful message.
cmd/catalog_get.go-285-287 (1)

285-287: ⚠️ Potential issue | 🟡 Minor

Don't truncate multi-valued external IDs.

ExternalID is a slice, but this view only renders the first element. For a command that is supposed to show the full config item, that silently drops data.

Proposed fix
-	if len(c.ExternalID) > 0 {
-		items = append(items, api.KeyValuePair{Key: "External ID", Value: c.ExternalID[0]})
-	}
+	if len(c.ExternalID) == 1 {
+		items = append(items, api.KeyValuePair{Key: "External ID", Value: c.ExternalID[0]})
+	} else if len(c.ExternalID) > 1 {
+		items = append(items, api.KeyValuePair{Key: "External IDs", Value: strings.Join(c.ExternalID, ", ")})
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_get.go` around lines 285 - 287, The code currently only renders
the first element of c.ExternalID when building items (items = append(items,
api.KeyValuePair{Key: "External ID", Value: c.ExternalID[0]})), which truncates
multi-valued external IDs; change the logic to include all values (either join
c.ExternalID into a single string like strings.Join(c.ExternalID, ", ") for the
Value field, or append a separate api.KeyValuePair per element) and ensure the
updated construction uses c.ExternalID (the full slice) instead of
c.ExternalID[0] so no ExternalID entries are silently dropped.
report/components/BackupsSection.tsx-41-41 (1)

41-41: ⚠️ Potential issue | 🟡 Minor

Filter the “Failed Backups” section to actual failures.

status !== 'success' also pulls in queued/running/warning states, so this table now shows non-failed backups under a failure heading. Restrict it to failed/error statuses only.

Suggested fix
-  const failedRows = backups.filter((b) => b.status !== 'success');
+  const failedRows = backups.filter((b) => {
+    const status = b.status.toLowerCase();
+    return status.includes('fail') || status.includes('error');
+  });

Also applies to: 59-81

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupsSection.tsx` at line 41, The current failedRows
calculation (const failedRows = backups.filter((b) => b.status !== 'success'))
is too broad and includes queued/running/warning states; change the filter to
only include true failure statuses (e.g., b.status === 'failed' || b.status ===
'error' or whichever canonical failure enums you use) and update the other
identical filter usages between lines 59-81 to the same predicate so the "Failed
Backups" table shows only actual failures; reference the failedRows variable and
any other backups.filter(...) calls in this component/class and replace the
predicate accordingly.
cmd/rbac.go-168-169 (1)

168-169: ⚠️ Potential issue | 🟡 Minor

Validate --view instead of silently accepting typos.

Any value other than "user" currently falls through to the default report path, so --view users or --view foo will generate the wrong output instead of failing fast. Please add an explicit allowlist check before export/render.

Suggested fix
+func validateRBACView(view string) error {
+	switch view {
+	case "resource", "user", "matrix":
+		return nil
+	default:
+		return fmt.Errorf("invalid --view %q (expected one of: resource, user, matrix)", view)
+	}
+}
+
 var ExportRBAC = &cobra.Command{
 	Use:   "export [application.yaml | config-query...]",
@@
 	RunE: func(cmd *cobra.Command, args []string) error {
+		if err := validateRBACView(rbacView); err != nil {
+			return err
+		}
+
 		logger.UseSlog()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/rbac.go` around lines 168 - 169, Validate the rbacView value before
rendering/exporting: add an allowlist check for the variable rbacView (accepted
values "resource", "user", "matrix") in the command handling for ExportRBAC
(before calling the export/render logic that may use report.SourceDir), and if
the value is not one of those, return an error or fail fast with a clear message
(e.g., "invalid --view: must be one of resource,user,matrix") so typos like
"users" or "foo" do not silently fall through.
report/components/RBACChanges.tsx-37-40 (1)

37-40: ⚠️ Potential issue | 🟡 Minor

The summary-card colors are semantically flipped.

Granted and net-positive are styled as warning/amber, while Revoked and net-negative are styled as positive/green. That conflicts with the row badges below and makes the rollup harder to read correctly.

Also applies to: 46-79

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/RBACChanges.tsx` around lines 37 - 40, The summary-card
color mapping is inverted: currently grantedCount and netCount > 0 use 'orange'
and revokedCount/netCount < 0 use 'green'. Update the mappings so Granted and
net-positive use the positive color (e.g., 'green') and Revoked and net-negative
use the warning/amber color (e.g., 'orange'); specifically change netColor and
any direct uses of grantedCount/revokedCount color logic (symbols: grantedCount,
revokedCount, netCount, netColor) and flip the color assignments in the
corresponding badge rendering code in the 46-79 region so row badges and rollup
colors are consistent.
report/components/AuditPage.tsx-142-149 (1)

142-149: ⚠️ Potential issue | 🟡 Minor

Use a membership-scoped row id here.

userId is not unique once a user leaves and later rejoins the same group. Those are distinct audit rows, but they will share the same id/key in this table.

Possible fix
-                    id: m.userId,
+                    id: `${m.userId}:${m.membershipAddedAt}:${m.membershipDeletedAt ?? ''}`,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/AuditPage.tsx` around lines 142 - 149, The table row id
currently uses m.userId which can collide when a user leaves and rejoins; change
the id in the g.members.map mapping to a membership-scoped identifier (for
example concatenate userId with a membership timestamp or use an existing
membershipId), e.g. replace the id: m.userId entry with a unique key built from
m.userId plus m.membershipAddedAt (or m.membershipId if available) so each audit
row is distinct.
🧹 Nitpick comments (7)
report/components/ViewResultSection.tsx (1)

1027-1027: Use nullish fallback instead of truthy fallback for variable defaults.

v.default || '-' will incorrectly hide valid falsy defaults like 0 and false. Use nullish handling so only null/undefined fall back.

💡 Proposed fix
-              value={v.default || '-'}
+              value={v.default == null ? '-' : String(v.default)}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ViewResultSection.tsx` at line 1027, The value prop uses a
truthy fallback that hides valid falsy defaults: replace the expression
value={v.default || '-'} in ViewResultSection.tsx with a nullish fallback so
only null/undefined fall back (i.e., use v.default ?? '-' instead of v.default
|| '-'), ensuring values like 0 and false are preserved.
report/components/ConfigChangesExamples.tsx (2)

10-12: Avoid full-list scans when only a small capped sample is needed.

At Line 11, filter(...).slice(...) walks the entire array even when limit is small. A bounded collector exits early and avoids extra work on large change sets.

♻️ Suggested refactor
 function pickMatching(changes: ConfigChange[], predicate: (change: ConfigChange) => boolean, limit: number): ConfigChange[] {
-  return changes.filter(predicate).slice(0, limit);
+  const matches: ConfigChange[] = [];
+  for (const change of changes) {
+    if (!predicate(change)) continue;
+    matches.push(change);
+    if (matches.length === limit) break;
+  }
+  return matches;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigChangesExamples.tsx` around lines 10 - 12, The
pickMatching function currently does a full-filter then slice which scans the
entire changes array; replace it with an early-exit bounded collector that
iterates over changes, applies the predicate, pushes matching ConfigChange items
into a result array and returns as soon as result.length === limit; keep the
same signature and semantics (respecting predicate and limit) and ensure it
handles limit <= 0 by returning an empty array immediately.

25-37: Extract typed-kind matching to a module-level constant.

At Lines 27–36, the array literal is recreated on each predicate call. Moving it to a Set improves readability and avoids repeated allocations.

♻️ Suggested refactor
+const TYPED_DIFF_KINDS = new Set([
+  'ConfigChange/v1',
+  'Promotion/v1',
+  'Scale/v1',
+  'Restore/v1',
+  'Deployment/v1',
+  'Rollback/v1',
+  'Scaling/v1',
+  'CostChange/v1',
+]);
+
 const typedDiffs = pickMatching(
   available,
-  (change) => [
-    'ConfigChange/v1',
-    'Promotion/v1',
-    'Scale/v1',
-    'Restore/v1',
-    'Deployment/v1',
-    'Rollback/v1',
-    'Scaling/v1',
-    'CostChange/v1',
-  ].includes(change.typedChange?.kind ?? ''),
+  (change) => TYPED_DIFF_KINDS.has(change.typedChange?.kind ?? ''),
   5,
 );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigChangesExamples.tsx` around lines 25 - 37, Extract
the repeated array literal of allowed kinds into a module-level Set (e.g. const
ALLOWED_CHANGE_KINDS = new Set(['ConfigChange/v1', ...])) and update the
predicate passed to pickMatching in ConfigChangesExamples (the callback
receiving change and checking change.typedChange?.kind) to use
ALLOWED_CHANGE_KINDS.has(kind) instead of recreating the array on each call;
this avoids repeated allocations and improves readability while keeping the
existing pickMatching(available, ...) behavior and the 5 limit.
report/components/RBACChangelogSection.tsx (1)

47-48: Consider using a stable key instead of array index.

Using i (array index) as the key can cause subtle rendering bugs if the list is ever reordered or filtered client-side. If RBACChangeEntry has a unique identifier (e.g., configId + date combination), prefer that for a more stable key.

♻️ Suggested improvement
-        {changelog.map((entry, i) => (
-          <div key={i} className="flex items-baseline gap-[2mm] text-[6pt] text-gray-600">
+        {changelog.map((entry) => (
+          <div key={`${entry.configId}-${entry.date}`} className="flex items-baseline gap-[2mm] text-[6pt] text-gray-600">
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/RBACChangelogSection.tsx` around lines 47 - 48, The list is
using the array index `i` as the React key in the JSX returned from
`changelog.map(...)`, which can cause rendering bugs; replace `key={i}` with a
stable unique identifier from each `RBACChangeEntry` (for example
`entry.configId`, `entry.date`, or a concatenation like
`${entry.configId}-${entry.date}`) so the element key is stable across
reorders/filters; update the `changelog.map` callback to use that unique field
(or computed stable id) as the `key` instead of `i`.
report/RBACByUserReport.tsx (1)

35-37: Prefer user.userId as the key instead of the array index.

RBACUserReport has a userId field that should be unique. Using this as the key provides more stable reconciliation if the user list changes.

♻️ Suggested improvement
-        {users.map((user, idx) => (
-          <RBACUserSection key={idx} user={user} />
+        {users.map((user) => (
+          <RBACUserSection key={user.userId} user={user} />
         ))}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/RBACByUserReport.tsx` around lines 35 - 37, The map over users in
RBACByUserReport uses the array index as key; change the key on RBACUserSection
to use the stable unique identifier by replacing key={idx} with
key={user.userId} (or key={user.userId ?? user.id} if you need a safe fallback)
in the users.map callback so React can reconcile RBACUserSection instances
correctly.
report/components/ConfigInsightsSection.tsx (1)

64-69: Sorting behavior for missing status/severity may place them unexpectedly first.

When status is undefined or not in statusOrder, indexOf returns -1, causing those items to sort before 'open'. Similarly for severity. If the intent is to sort unknowns last, consider handling the -1 case:

♻️ Suggested improvement for predictable sorting
   const sorted = [...analyses].sort((a, b) => {
     const statusOrder = ['open', 'silenced', 'resolved'];
-    const statusDiff = statusOrder.indexOf(a.status ?? '') - statusOrder.indexOf(b.status ?? '');
+    const aStatusIdx = statusOrder.indexOf(a.status ?? '');
+    const bStatusIdx = statusOrder.indexOf(b.status ?? '');
+    const statusDiff = (aStatusIdx === -1 ? statusOrder.length : aStatusIdx) - (bStatusIdx === -1 ? statusOrder.length : bStatusIdx);
     if (statusDiff !== 0) return statusDiff;
-    return SEVERITY_ORDER.indexOf(a.severity as ConfigSeverity) - SEVERITY_ORDER.indexOf(b.severity as ConfigSeverity);
+    const aSevIdx = SEVERITY_ORDER.indexOf(a.severity as ConfigSeverity);
+    const bSevIdx = SEVERITY_ORDER.indexOf(b.severity as ConfigSeverity);
+    return (aSevIdx === -1 ? SEVERITY_ORDER.length : aSevIdx) - (bSevIdx === -1 ? SEVERITY_ORDER.length : bSevIdx);
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigInsightsSection.tsx` around lines 64 - 69, Sorting
currently treats unknown status/severity (indexOf === -1) as -1 which sorts them
before known values; change the index calculations in the sorted comparator to
map any -1 result to the length of the corresponding order array so unknowns
sort last. Specifically, in the comparator that builds statusDiff and compares
severity, replace the raw indexOf results for a.status/b.status and
a.severity/b.severity with a value like (idx === -1 ? statusOrder.length : idx)
and similarly use SEVERITY_ORDER.length for severity so that unknown/undefined
values are pushed to the end when sorting analyses.
db/rbac_test.go (1)

3-8: Consider using qualified import for ginkgo.

The test uses dot-import for ginkgo, but coding guidelines recommend qualified import (ginkgo "github.com/onsi/ginkgo/v2") to avoid name collisions. This is a minor style inconsistency.

♻️ Suggested import style
 import (
 	"github.com/flanksource/duty/tests/fixtures/dummy"
 	"github.com/google/uuid"
-	. "github.com/onsi/ginkgo/v2"
+	ginkgo "github.com/onsi/ginkgo/v2"
 	. "github.com/onsi/gomega"
 )

Then update Describe, It to ginkgo.Describe, ginkgo.It.

As per coding guidelines: "Use qualified import for ginkgo (ginkgo "github.com/onsi/ginkgo/v2") to avoid name collisions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/rbac_test.go` around lines 3 - 8, Replace the dot-import of ginkgo with a
qualified alias (use ginkgo "github.com/onsi/ginkgo/v2") and update all ginkgo
test calls in this file — e.g., change Describe, Context, It, BeforeEach,
AfterEach, etc. to be prefixed with ginkgo. (Keep the existing dot-import for
gomega if desired; only change the ginkgo import and its usages like Describe
and It to ginkgo.Describe and ginkgo.It.)
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/auth_login.go`:
- Around line 135-136: The current login handler in cmd/auth_login.go prints raw
tokens (tokens.AccessToken and tokens.RefreshToken) to stdout; change this so
raw tokens are not printed by default—remove or gate the fmt.Fprintf calls that
output tokens behind an explicit opt-in flag (e.g., a --show-token or
--print-tokens boolean flag) and do not print the refresh token at all unless an
explicit flag is set; by default, print only a confirmation and token expiry
(tokens.ExpiresAt.Format(...)) or a masked/token-saved message. Update flag
parsing where the command is configured and guard the existing fmt.Fprintf lines
with that flag check, and ensure refresh token output is omitted unless opt-in.
- Around line 161-169: openBrowser currently uses xdg-open for all non-macOS
systems (breaking Windows) and discards the exec.Command(...).Start() error;
change openBrowser to handle runtime.GOOS == "windows" (use a Windows launcher
such as exec.Command("rundll32", "url.dll,FileProtocolHandler", url) or
exec.Command("cmd", "/c", "start", url)) while keeping "darwin" -> "open" and
default -> "xdg-open", and surface the Start() error instead of ignoring it
(e.g., change openBrowser to return error and return the err from
exec.Command(...).Start(), or at minimum log the returned error), referencing
openBrowser, exec.Command and Start() in your changes.

In `@cmd/catalog_get.go`:
- Around line 120-125: Replace the call to time.Now() with the repo clock source
duty.Now() when computing the database time window: after parsing sinceStr with
duration.ParseDuration (the existing since variable), set sinceTime using
duty.Now().Add(-time.Duration(since)) instead of time.Now().Add(...), and update
imports to include the duty package if it's not already imported; adjust any
references to the sinceTime variable as needed (context: the sinceTime
assignment near config.ID in this block).

In `@cmd/catalog_report.go`:
- Around line 147-150: The current parsing of catalogReportSince swallows
duration.ParseDuration errors and silently leaves opts.Since unchanged; change
this so that when duration.ParseDuration(catalogReportSince) returns an error
you return that error from the command's RunE instead of ignoring it. Locate the
block using catalogReportSince and duration.ParseDuration, validate the parsed
value, and on parse failure propagate a wrapped/clear error (e.g., "invalid
--since value: ...") back through RunE so the CLI fails fast rather than
generating a report with the default window.
- Around line 68-73: The command is blocking because shutdown.WaitForSignal() is
called immediately after starting duty (duty.Start) which prevents the one-shot
work from running; instead, defer the stop() returned by duty.Start (or move
shutdown.WaitForSignal() to the very end after the export is complete) and
remove the immediate call to shutdown.WaitForSignal(); keep
shutdown.AddHookWithPriority("database", shutdown.PriorityCritical, stop) as
needed but ensure stop is invoked via defer or after work completes so the
catalog report runs to completion before waiting for process termination.

In `@cmd/connection_browser.go`:
- Around line 471-487: selectBearerToken currently only filters by audience and
scope count, so it can pick a token that doesn't satisfy the required scope;
change selectBearerToken to accept an additional requiredScope parameter (e.g.,
requiredScope string) and when iterating tokens verify that for a candidate jwt
the required scope is present (e.g., jwt.HasScope(requiredScope) or equivalent)
before considering jwt.ScopeCount() for ranking; then update all callers
(notably runBrowserLogin and saveConnection) to pass flags.RequireBearerScope
into selectBearerToken so the chosen/saved bearer always satisfies both
RequireBearerAud and RequireBearerScope.

In `@cmd/context.go`:
- Around line 251-258: The current code creates a new MCContext from flags
(using contextAddName, contextAddServer, contextAddDB, contextAddToken) and
calls cfg.SetContext, which overwrites any existing entry and clears fields like
Token, DB, and Properties; instead, fetch the existing context via
cfg.GetContext(contextAddName), merge only the flag-provided fields into that
existing MCContext (e.g., update Server using
strings.TrimRight(contextAddServer, "/") only if the server flag was provided,
update DB/Token only if corresponding flags are non-empty), and then call
cfg.SetContext with the merged context so unspecified fields (including
Properties) are preserved.
- Around line 311-329: The code currently treats any JSON-like probe as valid
and persists stored.Server + "/api"; change the probe validation to require a
successful HTTP status (e.g., resp.StatusCode in 200..299) in addition to the
content-type/body checks before mutating stored.Server and saving the config. In
practice, update the conditional logic around ct/body (which reads
resp.Header.Get("Content-Type") and body) to also check resp.StatusCode, then
only set stored.Server = strings.TrimRight(stored.Server, "/") + "/api",
ctx.Server = stored.Server and call SaveConfig(cfg) when the status is a
success; keep using LoadConfig(), cfg.GetContext(ctx.Name) and SaveConfig(cfg)
as currently referenced.

In `@db/applications.go`:
- Around line 629-640: The query building for
Select(...).Joins(...).Joins(...).Order(...) uses inner JOINs which drop audit
rows when config_items or external_users are deleted; change the Joins calls to
left (outer) joins so config_access_logs rows are preserved even if referenced
config_items or external_users are missing: replace the "JOIN config_items ON
..." and "JOIN external_users ON ..." calls with LEFT JOIN (or LEFT OUTER JOIN)
variants in the same query that selects from config_access_logs, keeping the
same selected columns (config_access_logs.*, config_items.name AS config_name,
external_users.name AS user_name, etc.) so the audit trail remains intact.

In `@Makefile`:
- Around line 214-215: The Makefile rule for target report/kitchen-sink.json is
missing the schema file used by report/build-kitchen-sink.ts; add
duty/schema/openapi/change-types.schema.json as an explicit prerequisite for the
report/kitchen-sink.json rule so changes to that schema will trigger rebuilds
(update the rule that currently lists report/build-kitchen-sink.ts and
report/testdata/kitchen-sink.yaml to also include
duty/schema/openapi/change-types.schema.json).

In `@rbac_report/report.go`:
- Around line 195-223: The code picks an arbitrary subject via first :=
configItems[0], which is unstable when query.GetConfigsByIDs returns multiple
configs; change resolveSubjectAndParents to deterministically choose a subject
(e.g., sort configItems by their ID string with sort.SliceStable and then pick
the first element) before building parents, so the chosen subject and resulting
parents are stable across runs; ensure you still use the selected subject (not
the original order) when walking ParentID and when storing into configMap; keep
references to resolveSubjectAndParents, configItems, configMap, ParentID, and
query.GetConfigsByIDs when making the change.

In `@report/Application.tsx`:
- Around line 44-55: Split the single <Page> into multiple pages by wrapping
each major component (ApplicationDetails, AccessControlSection,
IncidentsSection, BackupsSection, FindingsSection, DynamicSection,
LocationsSection) in its own <Page> and insert a <PageBreak /> between pages;
for conditional sections use the same data checks as rendering logic (e.g.,
data.incidents.length > 0, (data.backups.length > 0 || data.restores.length >
0)) so you only render the <PageBreak /> + <Page> for sections that have data,
ensuring no section overflows or is clipped by the footer.

In `@report/catalog/export.go`:
- Around line 28-43: Export currently lets BuildReport normalize options
internally, causing audit metadata drift because the raw opts are passed to
buildAudit; normalize opts first in Export (call opts = opts.WithDefaults() or
equivalent) before calling BuildReport and buildAudit so both use the same
normalized Options instance (reference functions Export, BuildReport, buildAudit
and the opts.WithDefaults method used in report.go).

In `@report/catalog/report_test.go`:
- Around line 16-19: Move the Ginkgo bootstrap out of report_test.go: remove the
TestCatalogReport function (the RegisterFailHandler(ginkgo.Fail) and
ginkgo.RunSpecs(t, "CatalogReport") call) from report_test.go and place it into
a new suite_test.go file in the same package as the tests; implement the same
top-level TestCatalogReport(t *testing.T) that calls
RegisterFailHandler(ginkgo.Fail) and ginkgo.RunSpecs(t, "CatalogReport") so the
suite has a single entrypoint.

In `@report/catalog/report.go`:
- Around line 288-295: The current code caps access logs after fetching all
rows, causing excessive scanning; modify getAccessLogs(ctx, targetIDs,
sinceTime) to accept an integer limit (derived from opts.effectiveMax(0)) and
apply that limit in the SQL query (or query builder) so the DB returns at most
the effective max; update all call sites (including the block at lines shown and
the similar block around 438-458) to pass the computed limit and remove the
in-memory slice truncation (logs = logs[:limit]) so the cap is enforced in SQL
rather than after scanning.
- Around line 274-286: The branch that handles opts.Sections.Access populates
entry.RBACResources and AccessCount but never sets entry.Access, so merged
reports later (via BuildReport and CatalogAccessSection) see no access data;
after grouping RBAC rows with groupRBACByConfig (using
GetRBACAccessByConfigIDs), aggregate those grouped RBAC entries into
entry.Access (the same structure BuildReport expects) — e.g., transform or copy
the grouped results into entry.Access so BuildReport/CatalogAccessSection will
include the access info; ensure this assignment happens in the same block that
currently sets entry.RBACResources and AccessCount.
- Around line 301-303: The report currently only sets entry.RelationshipTree,
but the renderer expects the top-level payload under entry.Data (fields
data.relationshipTree / data.relationships / data.relatedConfigs); update the
block that runs when opts.Sections.Relationships && tree != nil to populate the
Data fields as well — e.g. set entry.Data.RelationshipTree using
configTreeNodeToReport(tree) and also populate entry.Data.Relationships and
entry.Data.RelatedConfigs (derive them from the same tree or existing helpers)
so the renderer can read the relationship data; keep
configTreeNodeToReport(tree) as the source and ensure both
entry.RelationshipTree (if still needed) and entry.Data.* are set.

In `@report/CatalogReport.tsx`:
- Around line 154-191: The grouped-by-config rendering path misses per-entry
access logs because the code renders CatalogAccessLogsSection only for
data.configGroups but the builder attaches logs to each entry
(entry.accessLogs); update the JSX under the data.groupBy === 'config' branch to
render <CatalogAccessLogsSection logs={entry.accessLogs} /> for each entry (near
ConfigGroupHeader/CategorizedChangesSection/ConfigInsightsSection) or
alternatively ensure the report builder populates data.configGroups with
accessLogs; specifically modify the map over (data.entries || []) to include
CatalogAccessSection and CatalogAccessLogsSection using entry.access and
entry.accessLogs so grouped reports display access logs.

In `@report/components/change-section-utils.ts`:
- Around line 170-217: inferChangeSectionVariant currently only inspects
explicit category strings from getCategoryKey and falls back to generic; update
it to, when getCategoryKey(change) returns falsy, use the existing type
classifiers isRBACChange(change), isBackupChange(change), and
isDeploymentChange(change) to count those kinds of changes (and similarly update
categorizeChanges to apply the same fallback logic), so that changes like
PermissionGranted/BackupFailed/ScalingReplicaSet are recognized even when CEL
category mappings are absent.
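Assuming the classifiers exist roughly as described (the stand-in types and regex-based classifiers below are illustrative, not the real implementations in change-section-utils.ts), the fallback could look like:

```typescript
// Stand-ins for the real helpers; isRBACChange/isBackupChange/isDeploymentChange
// and getCategoryKey live in change-section-utils.ts and likely differ in detail.
type Change = { changeType?: string; category?: string };
const getCategoryKey = (c: Change) => c.category;
const isRBACChange = (c: Change) => /permission|role/i.test(c.changeType ?? '');
const isBackupChange = (c: Change) => /backup/i.test(c.changeType ?? '');
const isDeploymentChange = (c: Change) => /scaling|deploy|rollback/i.test(c.changeType ?? '');

function inferChangeSectionVariant(change: Change): string {
  const key = getCategoryKey(change);
  if (key) return key; // explicit CEL category mapping still wins
  // Fallback: classify by change type when no category mapping exists.
  if (isRBACChange(change)) return 'rbac';
  if (isBackupChange(change)) return 'backup';
  if (isDeploymentChange(change)) return 'deployment';
  return 'generic';
}
```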

In `@views/render_facet.go`:
- Around line 109-116: resolveFacetConnection returns a timestampURL that
renderFacetWithData currently discards (baseURL, token, _, err :=
resolveFacetConnection(...)); capture that value instead of using the blank
identifier, and thread it into the rendering calls by updating report.RenderHTTP
and report.RenderCLI (and any intermediate functions they call) to accept a
timestampURL parameter so the timestamp URL from resolveFacetConnection is
forwarded into the rendering flow; update renderFacetWithData to call
report.RenderHTTP(ctx, baseURL, token, timestampURL, data, format,
viewEntryFile) or report.RenderCLI(ctx, baseURL, token, timestampURL, data,
format, viewEntryFile) as appropriate and propagate the new parameter through
the call chain.

---

Outside diff comments:
In `@cmd/connection_browser.go`:
- Around line 444-468: extractBearerTokens currently picks the token with the
largest scope per audience but can let an expired high-scope token override a
fresh lower-scope token; update extractBearerTokens to check JWT expiration (use
the JWT expiry/exp claim via the decoded jwt object) and prefer non-expired
tokens: when encountering a token for jwt.Audience, if the existing
tokens[jwt.Audience] is expired and the new token is not, replace it regardless
of scope; if both are non-expired, keep the one with greater jwt.ScopeCount();
if both expired, you may keep the one with greater scope as before. Use the jwt
variable and tokens/jwt.Audience/scopeCounts to implement these rules so stale
tokens can't displace usable ones.

---

Minor comments:
In `@cmd/catalog_get.go`:
- Around line 285-287: The code currently only renders the first element of
c.ExternalID when building items (items = append(items, api.KeyValuePair{Key:
"External ID", Value: c.ExternalID[0]})), which truncates multi-valued external
IDs; change the logic to include all values (either join c.ExternalID into a
single string like strings.Join(c.ExternalID, ", ") for the Value field, or
append a separate api.KeyValuePair per element) and ensure the updated
construction uses c.ExternalID (the full slice) instead of c.ExternalID[0] so no
ExternalID entries are silently dropped.

In `@cmd/catalog_tree.go`:
- Around line 136-140: Validate the --direction flag value before calling
query.ConfigTree: check the treeDirection string (used to build
query.ConfigTreeOptions) against allowed values "all", "incoming", and
"outgoing" (or the enum/consts you use for RelationDirection) and return a clear
error if it's invalid; update the parsing/validation around the treeDirection
variable prior to the call to query.ConfigTree (and the similar use at lines
149-150) so only accepted values are mapped to
query.RelationDirection(treeDirection) and anything else fails fast with a
helpful message.

In `@cmd/rbac.go`:
- Around line 168-169: Validate the rbacView value before rendering/exporting:
add an allowlist check for the variable rbacView (accepted values "resource",
"user", "matrix") in the command handling for ExportRBAC (before calling the
export/render logic that may use report.SourceDir), and if the value is not one
of those, return an error or fail fast with a clear message (e.g., "invalid
--view: must be one of resource,user,matrix") so typos like "users" or "foo" do
not silently fall through.

In `@Makefile`:
- Around line 42-50: Remove the duplicated TAILWIND_VERSION, TAILWIND_JS and
$(TAILWIND_JS) target block: keep a single declaration of TAILWIND_VERSION (use
the existing default assignment TAILWIND_VERSION ?= 3.4.17), a single assignment
for TAILWIND_JS, and one recipe for the $(TAILWIND_JS) target that downloads the
file; delete the repeated copy of those three lines so Make doesn’t emit
override warnings and the recipe/variables are defined only once.

In `@report/components/AuditPage.tsx`:
- Around line 142-149: The table row id currently uses m.userId which can
collide when a user leaves and rejoins; change the id in the g.members.map
mapping to a membership-scoped identifier (for example concatenate userId with a
membership timestamp or use an existing membershipId), e.g. replace the id:
m.userId entry with a unique key built from m.userId plus m.membershipAddedAt
(or m.membershipId if available) so each audit row is distinct.
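A sketch of the membership-scoped id; note that m.membershipAddedAt is an assumed field name, per the suggestion above:

```typescript
// Combine the user id with a membership-scoped attribute so a user who leaves
// and rejoins a group produces distinct audit rows.
type Member = { userId: string; membershipAddedAt: string };
const membershipRowId = (m: Member) => `${m.userId}-${m.membershipAddedAt}`;
```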

In `@report/components/BackupsSection.tsx`:
- Line 41: The current failedRows calculation (const failedRows =
backups.filter((b) => b.status !== 'success')) is too broad and includes
queued/running/warning states; change the filter to only include true failure
statuses (e.g., b.status === 'failed' || b.status === 'error' or whichever
canonical failure enums you use) and update the other identical filter usages
between lines 59-81 to the same predicate so the "Failed Backups" table shows
only actual failures; reference the failedRows variable and any other
backups.filter(...) calls in this component/class and replace the predicate
accordingly.
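A sketch of the narrowed predicate; the canonical failure enums ('failed', 'error') are an assumption and should match whatever status values the backend actually emits:

```typescript
type Backup = { status?: string };

// Only true failure states; queued/running/warning no longer count as failed.
const FAILURE_STATUSES = new Set(['failed', 'error']);
const isFailedBackup = (b: Backup) => FAILURE_STATUSES.has(b.status ?? '');

const failedRows = (backups: Backup[]) => backups.filter(isFailedBackup);
```

Extracting `isFailedBackup` also keeps the filters at lines 59-81 in sync with a single predicate.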

In `@report/components/ConfigChangesExamples.tsx`:
- Line 54: The Section component is using an internal identifier as its title
(title="ConfigChangesExamples"); change the title prop on Section to a
human-facing string (for example "Configuration Changes" or "Config Changes —
Examples") so the section header is user-friendly and consistent with other
headings; locate the Section invocation (Section variant="hero" title=...) and
replace the title value accordingly.

In `@report/components/GitRef.tsx`:
- Around line 34-43: The GitRef component currently ignores the link prop
causing loss of navigation; update GitRef to use the link prop (from
GitRefProps) by rendering the relevant parts (url/branch/dir/file Tag elements)
inside an anchor when link is provided: wrap either the whole content that
represents the source (e.g., the url and branch Tag or the combined Tags) with
an <a> whose href is link and add target="_blank" and rel="noopener noreferrer"
for safety, otherwise keep the existing Tag-only rendering; ensure Icon remains
visible and styling (textClass, text-gray-400) is preserved and that clickable
area includes the Tag(s) representing the source so GitRefFromSource-provided
links work.

In `@report/components/RBACChanges.tsx`:
- Around line 37-40: The summary-card color mapping is inverted: currently
grantedCount and netCount > 0 use 'orange' and revokedCount/netCount < 0 use
'green'. Update the mappings so Granted and net-positive use the positive color
(e.g., 'green') and Revoked and net-negative use the warning/amber color (e.g.,
'orange'); specifically change netColor and any direct uses of
grantedCount/revokedCount color logic (symbols: grantedCount, revokedCount,
netCount, netColor) and flip the color assignments in the corresponding badge
rendering code in the 46-79 region so row badges and rollup colors are
consistent.
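One way to keep the rollup and row badges consistent is to centralize the mapping in a single helper (the color names follow the suggestion above; the actual badge API is not shown here):

```typescript
// Grants and a positive net take the positive color; revocations and a
// negative net take the warning color.
function netColor(netCount: number): 'green' | 'orange' | 'gray' {
  if (netCount > 0) return 'green';
  if (netCount < 0) return 'orange';
  return 'gray';
}
```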

---

Duplicate comments:
In `@report/build-kitchen-sink.ts`:
- Line 21: The schemaPath constant currently resolves
'../../duty/schema/openapi/change-types.schema.json' relative to __dirname which
points outside the repository; update the path so it resolves inside this repo
(use '../duty/schema/openapi/change-types.schema.json' or otherwise compute from
the report directory) by changing the schemaPath assignment in
report/build-kitchen-sink.ts (the schemaPath variable that uses
resolve(__dirname, ...)) to the corrected relative path so the file is located
within the repository and schema loading no longer breaks.

---

Nitpick comments:
In `@db/rbac_test.go`:
- Around line 3-8: Replace the dot-import of ginkgo with a qualified alias (use
ginkgo "github.com/onsi/ginkgo/v2") and update all ginkgo test calls in this
file — e.g., change Describe, Context, It, BeforeEach, AfterEach, etc. to be
prefixed with ginkgo. (Keep the existing dot-import for gomega if desired; only
change the ginkgo import and its usages like Describe and It to ginkgo.Describe
and ginkgo.It.)

In `@report/components/ConfigChangesExamples.tsx`:
- Around line 10-12: The pickMatching function currently does a full-filter then
slice which scans the entire changes array; replace it with an early-exit
bounded collector that iterates over changes, applies the predicate, pushes
matching ConfigChange items into a result array and returns as soon as
result.length === limit; keep the same signature and semantics (respecting
predicate and limit) and ensure it handles limit <= 0 by returning an empty
array immediately.
- Around line 25-37: Extract the repeated array literal of allowed kinds into a
module-level Set (e.g. const ALLOWED_CHANGE_KINDS = new Set(['ConfigChange/v1',
...])) and update the predicate passed to pickMatching in ConfigChangesExamples
(the callback receiving change and checking change.typedChange?.kind) to use
ALLOWED_CHANGE_KINDS.has(kind) instead of recreating the array on each call;
this avoids repeated allocations and improves readability while keeping the
existing pickMatching(available, ...) behavior and the 5 limit.

In `@report/components/ConfigInsightsSection.tsx`:
- Around line 64-69: Sorting currently treats unknown status/severity (indexOf
=== -1) as -1 which sorts them before known values; change the index
calculations in the sorted comparator to map any -1 result to the length of the
corresponding order array so unknowns sort last. Specifically, in the comparator
that builds statusDiff and compares severity, replace the raw indexOf results
for a.status/b.status and a.severity/b.severity with a value like (idx === -1 ?
statusOrder.length : idx) and similarly use SEVERITY_ORDER.length for severity
so that unknown/undefined values are pushed to the end when sorting analyses.

In `@report/components/RBACChangelogSection.tsx`:
- Around line 47-48: The list is using the array index `i` as the React key in
the JSX returned from `changelog.map(...)`, which can cause rendering bugs;
replace `key={i}` with a stable unique identifier from each `RBACChangeEntry`
(for example `entry.configId`, `entry.date`, or a concatenation like
`${entry.configId}-${entry.date}`) so the element key is stable across
reorders/filters; update the `changelog.map` callback to use that unique field
(or computed stable id) as the `key` instead of `i`.

In `@report/components/ViewResultSection.tsx`:
- Line 1027: The value prop uses a truthy fallback that hides valid falsy
defaults: replace the expression value={v.default || '-'} in
ViewResultSection.tsx with a nullish fallback so only null/undefined fall back
(i.e., use v.default ?? '-' instead of v.default || '-'), ensuring values like 0
and false are preserved.
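The difference is easy to verify in isolation: `||` falls back on any falsy value, while `??` only falls back on null/undefined:

```typescript
const withOr = (v: { default?: unknown }) => v.default || '-';
const withNullish = (v: { default?: unknown }) => v.default ?? '-';
// withOr({ default: 0 }) loses the 0; withNullish({ default: 0 }) keeps it.
```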

In `@report/RBACByUserReport.tsx`:
- Around line 35-37: The map over users in RBACByUserReport uses the array index
as key; change the key on RBACUserSection to use the stable unique identifier by
replacing key={idx} with key={user.userId} (or key={user.userId ?? user.id} if
you need a safe fallback) in the users.map callback so React can reconcile
RBACUserSection instances correctly.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 41b78828-6891-475b-a251-99d9690534ce

📥 Commits

Reviewing files that changed from the base of the PR and between f6e4a4c and f1cc0d3.

⛔ Files ignored due to path filters (2)
  • go.sum is excluded by !**/*.sum
  • report/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (143)
  • .gavel.yaml
  • .gitignore
  • Makefile
  • api/application.go
  • api/catalog_report.go
  • api/global.go
  • api/rbac_report.go
  • api/scraper_report.go
  • api/v1/playbook_actions.go
  • api/v1/view_types.go
  • api/v1/zz_generated.deepcopy.go
  • api/view.go
  • api/zz_generated.deepcopy.go
  • application/application.go
  • application/render_facet.go
  • artifacts/controllers.go
  • auth/basic.go
  • auth/kratos.go
  • auth/middleware.go
  • auth/oidc/login.go
  • auth/oidc/models.go
  • auth/oidc/routes.go
  • auth/oidc_test.go
  • auth/oidc_validate.go
  • auth/oidcclient/oidcclient.go
  • cmd/application.go
  • cmd/auth_login.go
  • cmd/catalog.go
  • cmd/catalog_get.go
  • cmd/catalog_get_test.go
  • cmd/catalog_report.go
  • cmd/catalog_tree.go
  • cmd/connection_browser.go
  • cmd/connection_test_cmd.go
  • cmd/context.go
  • cmd/rbac.go
  • cmd/view.go
  • config/crds/mission-control.flanksource.com_applications.yaml
  • config/crds/mission-control.flanksource.com_playbooks.yaml
  • config/crds/mission-control.flanksource.com_views.yaml
  • config/schemas/application.schema.json
  • config/schemas/playbook-spec.schema.json
  • config/schemas/playbook.schema.json
  • config/schemas/view.schema.json
  • connection/jwt.go
  • db/applications.go
  • db/rbac.go
  • db/rbac_test.go
  • fixtures/views/ui-ref-examples.yaml
  • main.go
  • mcp/access.go
  • rbac_report/export.go
  • rbac_report/render_facet.go
  • rbac_report/report.go
  • report/.gitignore
  • report/Application.tsx
  • report/CatalogReport.tsx
  • report/FindingsReport.tsx
  • report/KitchenSink.tsx
  • report/MatrixDemo.tsx
  • report/RBACByUserReport.tsx
  • report/RBACMatrixReport.tsx
  • report/RBACReport.tsx
  • report/ViewReport.tsx
  • report/build-kitchen-sink.ts
  • report/catalog-report-types.ts
  • report/catalog/change_mappings.go
  • report/catalog/change_mappings_test.go
  • report/catalog/default-settings.yaml
  • report/catalog/export.go
  • report/catalog/render_facet.go
  • report/catalog/report.go
  • report/catalog/report_test.go
  • report/catalog/settings.go
  • report/catalog/settings_test.go
  • report/components/ApplicationDetails.tsx
  • report/components/ArtifactAppendix.tsx
  • report/components/AuditPage.tsx
  • report/components/BackupActivityCalendar.tsx
  • report/components/BackupChanges.tsx
  • report/components/BackupsSection.tsx
  • report/components/CatalogAccessLogsSection.tsx
  • report/components/CatalogAccessSection.tsx
  • report/components/CatalogList.tsx
  • report/components/ConfigChangesExamples.tsx
  • report/components/ConfigChangesSection.tsx
  • report/components/ConfigInsightsSection.tsx
  • report/components/ConfigItemCard.tsx
  • report/components/ConfigLink.tsx
  • report/components/ConfigRelationshipGraph.tsx
  • report/components/ConfigTreeSection.tsx
  • report/components/CoverPage.tsx
  • report/components/DeploymentChanges.tsx
  • report/components/DynamicSection.tsx
  • report/components/FindingsSection.tsx
  • report/components/GitRef.tsx
  • report/components/IncidentsSection.tsx
  • report/components/PageFooter.tsx
  • report/components/PageHeader.tsx
  • report/components/RBACChangelogSection.tsx
  • report/components/RBACChanges.tsx
  • report/components/RBACCoverContent.tsx
  • report/components/RBACMatrixSection.tsx
  • report/components/RBACResourceSection.tsx
  • report/components/RBACSummarySection.tsx
  • report/components/RBACUserSection.tsx
  • report/components/ScraperCard.tsx
  • report/components/ViewResultSection.tsx
  • report/components/change-section-utils.ts
  • report/components/rbac-visual.tsx
  • report/components/utils.ts
  • report/config-types.ts
  • report/embed.go
  • report/facet.go
  • report/finding-schema.json
  • report/icons.ts
  • report/kitchen-sink-data.ts
  • report/kitchen-sink.json
  • report/kitchen-sink/ApplicationPage.tsx
  • report/kitchen-sink/CatalogPage.tsx
  • report/kitchen-sink/ChangesPage.tsx
  • report/kitchen-sink/ConfigComponentsPage.tsx
  • report/kitchen-sink/DynamicSectionsPage.tsx
  • report/kitchen-sink/InsightsAndGraphPage.tsx
  • report/kitchen-sink/KitchenSinkTypes.ts
  • report/kitchen-sink/LayoutComponentsPage.tsx
  • report/kitchen-sink/RBACPage.tsx
  • report/kitchen-sink/ViewPage.tsx
  • report/mission-control.ts
  • report/package.json
  • report/rbac-types.ts
  • report/sample-findings.json
  • report/scraper-types.ts
  • report/scraper/scraper.go
  • report/testdata/kitchen-sink.yaml
  • report/types.ts
  • report/view-types.ts
  • sdk/client.go
  • sdk/client_test.go
  • tests/e2e/oidc/oidc_login_test.go
  • tests/e2e/oidc/suite_test.go
  • views/render_facet.go
  • views/render_facet_test.go
💤 Files with no reviewable changes (5)
  • auth/basic.go
  • report/RBACReport.tsx
  • report/components/RBACResourceSection.tsx
  • auth/kratos.go
  • auth/oidc/login.go
✅ Files skipped from review due to trivial changes (31)
  • .gavel.yaml
  • api/global.go
  • .gitignore
  • report/.gitignore
  • tests/e2e/oidc/suite_test.go
  • auth/oidc_validate.go
  • report/components/PageHeader.tsx
  • report/components/PageFooter.tsx
  • fixtures/views/ui-ref-examples.yaml
  • config/crds/mission-control.flanksource.com_applications.yaml
  • report/components/ConfigItemCard.tsx
  • report/components/RBACCoverContent.tsx
  • report/catalog/default-settings.yaml
  • report/catalog/render_facet.go
  • report/kitchen-sink/KitchenSinkTypes.ts
  • report/components/ConfigLink.tsx
  • report/kitchen-sink-data.ts
  • report/components/CatalogAccessSection.tsx
  • report/components/ConfigRelationshipGraph.tsx
  • report/sample-findings.json
  • report/catalog/change_mappings_test.go
  • api/scraper_report.go
  • report/kitchen-sink/ApplicationPage.tsx
  • config/schemas/application.schema.json
  • report/kitchen-sink/ChangesPage.tsx
  • report/scraper/scraper.go
  • report/scraper-types.ts
  • report/types.ts
  • report/config-types.ts
  • report/MatrixDemo.tsx
  • report/catalog-report-types.ts
🚧 Files skipped from review as they are similar to previous changes (50)
  • report/view-types.ts
  • main.go
  • auth/oidcclient/oidcclient.go
  • config/schemas/playbook-spec.schema.json
  • report/components/ApplicationDetails.tsx
  • report/package.json
  • report/mission-control.ts
  • cmd/view.go
  • report/components/RBACSummarySection.tsx
  • report/components/RBACUserSection.tsx
  • application/application.go
  • report/kitchen-sink/ViewPage.tsx
  • report/kitchen-sink/DynamicSectionsPage.tsx
  • sdk/client_test.go
  • config/schemas/playbook.schema.json
  • config/crds/mission-control.flanksource.com_playbooks.yaml
  • report/KitchenSink.tsx
  • auth/oidc/models.go
  • report/RBACMatrixReport.tsx
  • sdk/client.go
  • report/components/CatalogAccessLogsSection.tsx
  • report/ViewReport.tsx
  • api/zz_generated.deepcopy.go
  • report/components/DeploymentChanges.tsx
  • report/catalog/settings_test.go
  • report/kitchen-sink/RBACPage.tsx
  • report/components/BackupActivityCalendar.tsx
  • cmd/catalog_get_test.go
  • report/kitchen-sink/CatalogPage.tsx
  • report/kitchen-sink/ConfigComponentsPage.tsx
  • report/components/CoverPage.tsx
  • api/view.go
  • report/embed.go
  • api/v1/playbook_actions.go
  • report/kitchen-sink/LayoutComponentsPage.tsx
  • report/components/ConfigChangesSection.tsx
  • report/rbac-types.ts
  • auth/oidc_test.go
  • config/crds/mission-control.flanksource.com_views.yaml
  • api/application.go
  • report/components/RBACMatrixSection.tsx
  • report/components/ScraperCard.tsx
  • artifacts/controllers.go
  • report/catalog/settings.go
  • report/components/rbac-visual.tsx
  • report/components/utils.ts
  • report/testdata/kitchen-sink.yaml
  • report/facet.go
  • report/components/ArtifactAppendix.tsx
  • report/icons.ts

Comment thread cmd/auth_login.go Outdated
Comment thread cmd/auth_login.go Outdated
Comment thread cmd/catalog_get.go
Comment on lines +120 to +125
since, err := duration.ParseDuration(sinceStr)
if err != nil {
return nil, fmt.Errorf("invalid --since value %q: %w", sinceStr, err)
}
sinceTime := time.Now().Add(-time.Duration(since))
id := config.ID

🛠️ Refactor suggestion | 🟠 Major

Use duty.Now() for the DB time window.

This timestamp is used to filter database-backed sections, so it should come from the repo-standard clock source instead of time.Now().

Proposed fix
 	since, err := duration.ParseDuration(sinceStr)
 	if err != nil {
 		return nil, fmt.Errorf("invalid --since value %q: %w", sinceStr, err)
 	}
-	sinceTime := time.Now().Add(-time.Duration(since))
+	sinceTime := duty.Now().Add(-time.Duration(since))
 	id := config.ID

As per coding guidelines: use duty.Now() (which returns a time.Time) instead of time.Now() for database timestamps and soft deletes.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 	since, err := duration.ParseDuration(sinceStr)
 	if err != nil {
 		return nil, fmt.Errorf("invalid --since value %q: %w", sinceStr, err)
 	}
-	sinceTime := time.Now().Add(-time.Duration(since))
+	sinceTime := duty.Now().Add(-time.Duration(since))
 	id := config.ID
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_get.go` around lines 120 - 125, Replace the call to time.Now()
with the repo clock source duty.Now() when computing the database time window:
after parsing sinceStr with duration.ParseDuration (the existing since
variable), set sinceTime using duty.Now().Add(-time.Duration(since)) instead of
time.Now().Add(...), and update imports to include the duty package if it's not
already imported; adjust any references to the sinceTime variable as needed
(context: the sinceTime assignment near config.ID in this block).

Comment thread cmd/catalog_report.go
Comment thread cmd/catalog_report.go
Comment thread report/catalog/report.go Outdated
Comment thread report/catalog/report.go
Comment on lines +301 to +303
if opts.Sections.Relationships && tree != nil {
entry.RelationshipTree = configTreeNodeToReport(tree)
}

⚠️ Potential issue | 🟠 Major

The relationship data never reaches the top-level report payload.

Only entry.RelationshipTree is assigned here, but the renderer reads data.relationshipTree / data.relationships / data.relatedConfigs. As written, non-grouped reports end up with an empty relationship section even when relationships are enabled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/report.go` around lines 301 - 303, The report currently only
sets entry.RelationshipTree, but the renderer expects the top-level payload
under entry.Data (fields data.relationshipTree / data.relationships /
data.relatedConfigs); update the block that runs when
opts.Sections.Relationships && tree != nil to populate the Data fields as well —
e.g. set entry.Data.RelationshipTree using configTreeNodeToReport(tree) and also
populate entry.Data.Relationships and entry.Data.RelatedConfigs (derive them
from the same tree or existing helpers) so the renderer can read the
relationship data; keep configTreeNodeToReport(tree) as the source and ensure
both entry.RelationshipTree (if still needed) and entry.Data.* are set.

Comment thread report/CatalogReport.tsx
Comment thread report/components/change-section-utils.ts
Comment thread views/render_facet.go Outdated
Comment on lines +109 to +116
baseURL, token, _, err := resolveFacetConnection(ctx, opts)
if err != nil {
return nil, err
}

if baseURL != "" {
return report.RenderHTTP(ctx, baseURL, token, data, format, viewEntryFile)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
rg -n -C2 --type=go 'timestampURL|TimestampURL|timestampUrl|RenderHTTP\(|RenderCLI\(' views report api

Repository: flanksource/mission-control

Length of output: 4092


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Get the full renderFacetWithData function
sed -n '97,124p' views/render_facet.go

# Also check if there are other callers of renderFacetWithData or resolveFacetConnection
rg -n 'renderFacetWithData|resolveFacetConnection' --type=go views/

Repository: flanksource/mission-control

Length of output: 1551


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the full RenderHTTP and RenderCLI implementations
sed -n '110,150p' report/facet.go

# Look for any historical usage of timestampURL in render operations
rg -n 'timestampURL|TimestampURL|timestampUrl' --type=go report/

# Check if FacetOptions is passed to render and if timestampURL is in the payload
rg -n 'FacetOptions' --type=go views/ report/

Repository: flanksource/mission-control

Length of output: 1240


timestampURL is discarded and never forwarded to render functions.

resolveFacetConnection resolves opts.TimestampURL and conn.Properties["timestampUrl"] correctly, but renderFacetWithData discards it immediately (baseURL, token, _, err := resolveFacetConnection(...)). Neither report.RenderHTTP nor report.RenderCLI accepts a timestampURL parameter, so any configuration-based or connection-based timestamp URL is silently dropped. If timestamping was previously available in the rendering flow, this change removes that capability without fallback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@views/render_facet.go` around lines 109 - 116, resolveFacetConnection returns
a timestampURL that renderFacetWithData currently discards (baseURL, token, _,
err := resolveFacetConnection(...)); capture that value instead of using the
blank identifier, and thread it into the rendering calls by updating
report.RenderHTTP and report.RenderCLI (and any intermediate functions they
call) to accept a timestampURL parameter so the timestamp URL from
resolveFacetConnection is forwarded into the rendering flow; update
renderFacetWithData to call report.RenderHTTP(ctx, baseURL, token, timestampURL,
data, format, viewEntryFile) or report.RenderCLI(ctx, baseURL, token,
timestampURL, data, format, viewEntryFile) as appropriate and propagate the new
parameter through the call chain.

Upstream facet 0.1.33-0.1.38 add the Document/Page/Header/Footer
structural exports required by report/Application.tsx,
report/CatalogReport.tsx, and report/ViewReport.tsx. The previously
pinned 0.1.32 did not export Document, causing vite SSR builds in
facet-html/facet-pdf tests and the Playbook facet-pdf-report e2e
to fail with "Document is not exported".

Also regenerates package-lock.json — the previous lockfile was a
thin file: reference so npm install never pulled facet's runtime
deps into the tree.
@socket-security

socket-security bot commented Apr 15, 2026

Warning

Review the following alerts detected in dependencies.

According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.

Action | Severity | Alert (click "▶" to expand/collapse)
Warn | High | Obfuscated code: npm pdf-lib is 90.0% likely obfuscated

Confidence: 0.90

Location: Package overview

From: report/package-lock.json → npm/@flanksource/facet@0.1.38 → npm/pdf-lib@1.17.1

ℹ Read more on: This package | This alert | What is obfuscated code?

Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: Packages should not obfuscate their code. Consider not using packages with obfuscated code.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore npm/pdf-lib@1.17.1. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.

View full report

…and error handling

- Add section visibility flags to conditionally render report components
- Consolidate artifact changes from multiple sources in ArtifactAppendix
- Extract timestamp generation to avoid multiple Date instantiations
- Wrap Document component around Page in MatrixDemo for proper structure
- Add early return for empty views in ViewReport
- Improve error handling in change_mappings with duty API errors
- Optimize date calculations in BackupActivityCalendar using string comparisons
- Extract failure status check into reusable function in BackupsSection
- Fix section title capitalization in ConfigChangesExamples
- Refactor analysis type grouping to handle unknown types gracefully
- Optimize deployment change categorization with single pass
- Add link support to GitRef component
- Improve stale color logic in RBACMatrixSection
- Add TimestampURL option to RenderHTTP for PDF signing
- Use time.RFC3339 constant for date formatting
- Improve error differentiation in scraper query handling
Remove custom StringList type and its Value/Scan implementations in favor of the standard pq.StringArray from lib/pq. This simplifies the codebase by using a well-tested PostgreSQL array type instead of maintaining custom serialization logic.

Also improve error handling in RandomBase64 by checking rand.Read errors instead of silently ignoring them, and add token expiration check in extractBearerTokens to skip expired tokens. Remove unused sessionStorage serialization from saveConnection.
…e rules

- Remove duplicate Tailwind configuration and build rules in Makefile
- Add $(LOCALBIN) dependency to ci-test target
- Remove manual database transaction management from catalog query commands
- Improve error handling in catalog_report and catalog_tree commands by returning errors instead of exiting fatally
- Use duration.ParseDuration for consistent duration parsing with proper error handling in db/applications.go
- Display all external IDs in catalog details instead of just the first one
- Add validation for tree direction parameter
- Enhance error messages with context wrapping in RBAC and views rendering
- Pass timestampURL to RenderHTTP in views rendering
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
auth/oidcclient/oidcclient.go (1)

30-42: ⚠️ Potential issue | 🟠 Major

Missing HTTP status code validation in Discover.

The function now attempts JSON decoding regardless of HTTP status. If the server returns a non-2xx response (404, 500, etc.) with an HTML error page or non-JSON body, the error message will be a confusing JSON decode failure rather than the actual HTTP error.

This is inconsistent with ExchangeCode (line 78-80) and RefreshToken (line 113-115), which both validate resp.StatusCode != http.StatusOK before decoding.

🔧 Suggested fix
 func Discover(discoveryURL string) (*Discovery, error) {
 	resp, err := HTTPClient.Get(discoveryURL)
 	if err != nil {
 		return nil, err
 	}
 	defer resp.Body.Close()

+	if resp.StatusCode != http.StatusOK {
+		return nil, fmt.Errorf("discovery endpoint returned %d", resp.StatusCode)
+	}
+
 	var d Discovery
 	if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
 		return nil, err
 	}
 	return &d, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@auth/oidcclient/oidcclient.go` around lines 30 - 42, Discover currently
decodes the response body without checking HTTP status, causing misleading JSON
errors for non-2xx responses; update the Discover function to check
resp.StatusCode (compare to http.StatusOK) after the GET call and before
json.NewDecoder.Decode, and if it's not OK, read the response body (or at least
include resp.Status and body snippet) and return a clear error that contains the
status and body; reference HTTPClient.Get, resp.StatusCode, and the Discover
function when making this change.
auth/oidc/storage.go (1)

60-73: 🛠️ Refactor suggestion | 🟠 Major

Use duty.Now() in these touched DB timestamp writes.

This file still uses time.Now() in DB timestamp paths in the modified blocks; please switch these to duty.Now().

Proposed patch
 import (
 	gocontext "context"
 	"crypto"
@@
 	"fmt"
 	"time"
 
+	"github.com/flanksource/duty"
 	"github.com/flanksource/duty/context"
 	"github.com/flanksource/duty/models"
@@
 func (s *Storage) CreateAuthRequest(_ gocontext.Context, req *oidc.AuthRequest, _ string) (op.AuthRequest, error) {
+	now := duty.Now()
 	ar := &AuthRequest{
@@
-		CreatedAt:           time.Now(),
-		ExpiresAt:           time.Now().Add(10 * time.Minute),
+		CreatedAt:           now,
+		ExpiresAt:           now.Add(10 * time.Minute),
 	}
@@
 func (s *Storage) CreateAccessAndRefreshTokens(_ gocontext.Context, req op.TokenRequest, currentRefreshToken string) (string, string, time.Time, error) {
@@
 	if currentRefreshToken != "" {
@@
-			s.ctx.DB().Model(&RefreshToken{}).Where("token = ?", hashToken(currentRefreshToken)).
-				Update("expires_at", time.Now())
+			s.ctx.DB().Model(&RefreshToken{}).Where("token = ?", hashToken(currentRefreshToken)).
+				Update("expires_at", duty.Now())
 		}
 	}
 
-	now := time.Now()
+	now := duty.Now()

As per coding guidelines: Use duty.Now() time.Time instead of time.Now() for database timestamps and soft deletes.

Also applies to: 110-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@auth/oidc/storage.go` around lines 60 - 73, Replace direct uses of time.Now()
with duty.Now() for DB timestamp writes in the affected storage functions: in
CreateAuthRequest (replace CreatedAt and ExpiresAt assignments), and similarly
update the other DB timestamp fields referenced around the block noted (lines
~110-127) to use duty.Now(); locate the AuthRequest struct assignments and any
soft-delete or timestamp fields in methods on type Storage and substitute
duty.Now() for time.Now() to follow the duty package convention.
♻️ Duplicate comments (5)
report/CatalogReport.tsx (1)

154-163: ⚠️ Potential issue | 🟠 Major

Grouped entries rendering still skips access/access logs and bypasses section flags.

In the data.groupBy === 'config' + data.entries path, Line 157 and Line 158 render regardless of data.sections, and access/access logs are never rendered there. If grouped data is sourced from entries, those sections are dropped.

Suggested fix
 {data.groupBy === 'config' && (data.entries || []).map((entry, idx) => (
   <React.Fragment key={entry.configItem?.id || idx}>
     <ConfigGroupHeader group={{ configItem: entry.configItem as any, changes: entry.changes, analyses: entry.analyses, access: entry.access, accessLogs: entry.accessLogs }} />
-    <CategorizedChangesSection changes={entry.changes} categoryMappings={data.categoryMappings} hideConfigName />
-    <ConfigInsightsSection analyses={entry.analyses} />
+    {data.sections?.changes && (
+      <CategorizedChangesSection changes={entry.changes} categoryMappings={data.categoryMappings} hideConfigName />
+    )}
+    {data.sections?.insights && <ConfigInsightsSection analyses={entry.analyses} />}
+    {data.sections?.access && <CatalogAccessSection access={entry.access} />}
+    {data.sections?.accessLogs && <CatalogAccessLogsSection logs={entry.accessLogs} />}
     {(entry.rbacResources || []).map((resource, rIdx) => (
       <RBACMatrixSection key={resource.configId || rIdx} resource={resource} />
     ))}
   </React.Fragment>
 ))}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/CatalogReport.tsx` around lines 154 - 163, In the data.groupBy ===
'config' branch where data.entries is mapped, update the rendering logic for
ConfigGroupHeader, CategorizedChangesSection, ConfigInsightsSection and
RBACMatrixSection so they respect data.sections flags and include
access/accessLogs for each entry; specifically, inside the (data.entries ||
[]).map block (entries mapping) conditionally render access and accessLogs and
pass through the same section-filter booleans used elsewhere (e.g., check
data.sections before rendering CategorizedChangesSection, ConfigInsightsSection,
and access/accessLogs) and ensure the hideConfigName flag behavior is preserved
when calling CategorizedChangesSection.
db/applications.go (1)

632-643: ⚠️ Potential issue | 🟠 Major

Use outer joins for access-log history retention.

Line 641 and Line 642 use inner joins, so access-log rows disappear when config_items or external_users rows are removed. That breaks audit/history completeness.

Suggested fix
-		Select(`config_access_logs.config_id,
-			config_items.name AS config_name,
-			config_items.type AS config_type,
-			config_access_logs.external_user_id,
-			external_users.name AS user_name,
+		Select(`config_access_logs.config_id,
+			COALESCE(config_items.name, '') AS config_name,
+			COALESCE(config_items.type, '') AS config_type,
+			config_access_logs.external_user_id,
+			COALESCE(external_users.name, '') AS user_name,
 			config_access_logs.created_at,
 			config_access_logs.mfa,
 			config_access_logs.count,
 			config_access_logs.properties`).
-		Joins("JOIN config_items ON config_items.id = config_access_logs.config_id").
-		Joins("JOIN external_users ON external_users.id = config_access_logs.external_user_id").
+		Joins("LEFT JOIN config_items ON config_items.id = config_access_logs.config_id").
+		Joins("LEFT JOIN external_users ON external_users.id = config_access_logs.external_user_id").
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/applications.go` around lines 632 - 643, The query building block using
Select(...).Joins("JOIN config_items ON config_items.id =
config_access_logs.config_id").Joins("JOIN external_users ON external_users.id =
config_access_logs.external_user_id") drops access-log rows when related
config_items or external_users are deleted; change both inner JOINs to LEFT
JOINs so config_access_logs rows are retained (update the Joins calls to use
"LEFT JOIN config_items ..." and "LEFT JOIN external_users ...") and ensure any
downstream code that expects non-nil config_name/config_type/user_name handles
NULLs appropriately.
cmd/catalog_report.go (1)

68-73: ⚠️ Potential issue | 🔴 Critical

Don't wait for a shutdown signal before doing the work.

shutdown.WaitForSignal() blocks this one-shot command before any configs are resolved or exported, so catalog report ... will just hang until the process is interrupted. Move the signal wait after the work completes, or simply defer stop().

 		ctx, stop, err := duty.Start("mission-control", duty.ClientOnly)
 		if err != nil {
 			return err
 		}
-		shutdown.AddHookWithPriority("database", shutdown.PriorityCritical, stop)
-		shutdown.WaitForSignal()
+		defer stop()


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_report.go` around lines 68 - 73, The command currently calls
duty.Start(...) and then immediately blocks with shutdown.WaitForSignal(),
preventing the one-shot work from running; instead, call
duty.Start("mission-control", duty.ClientOnly) as now but immediately defer
stop() (or add stop via shutdown.AddHookWithPriority as a hook) and remove or
move shutdown.WaitForSignal() so the command performs its work (resolve/export
configs) and only waits for a signal afterward; update the code around
duty.Start, stop, shutdown.AddHookWithPriority and shutdown.WaitForSignal
accordingly (prefer deferring stop() right after duty.Start and deleting the
premature WaitForSignal call).
cmd/catalog_get.go (1)

120-125: ⚠️ Potential issue | 🟠 Major

Use duty.Now() for the DB time window.

This timestamp is used to filter database-backed sections, so it should come from the repo-standard clock source instead of time.Now().

 	since, err := duration.ParseDuration(sinceStr)
 	if err != nil {
 		return nil, fmt.Errorf("invalid --since value %q: %w", sinceStr, err)
 	}
-	sinceTime := time.Now().Add(-time.Duration(since))
+	sinceTime := duty.Now().Add(-time.Duration(since))
 	id := config.ID

As per coding guidelines: Use duty.Now() time.Time instead of time.Now() for database timestamps and soft deletes.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_get.go` around lines 120 - 125, The code parses the --since
duration and builds sinceTime using time.Now(), but DB filters must use the repo
clock; change the creation of sinceTime to use duty.Now() instead of time.Now()
(i.e., replace time.Now().Add(-time.Duration(since)) with
duty.Now().Add(-time.Duration(since))). Ensure the duty package is imported if
not already and update any references to sinceTime that rely on the
repo-standard clock.
Makefile (1)

208-209: ⚠️ Potential issue | 🟠 Major

Add the schema JSON as an explicit prerequisite.

build-kitchen-sink.ts also reads duty/schema/openapi/change-types.schema.json, so schema-only changes will not trigger a rebuild of report/kitchen-sink.json. That leaves the regenerated test data stale until someone manually cleans and reruns the target.

Proposed change
-report/kitchen-sink.json: report/build-kitchen-sink.ts report/testdata/kitchen-sink.yaml
+report/kitchen-sink.json: report/build-kitchen-sink.ts report/testdata/kitchen-sink.yaml duty/schema/openapi/change-types.schema.json
 	cd report && ./node_modules/.bin/tsx build-kitchen-sink.ts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 208 - 209, The Makefile rule for the target
report/kitchen-sink.json must include the OpenAPI schema file as an explicit
prerequisite so schema-only edits trigger a rebuild; update the dependency list
for the target that currently references report/build-kitchen-sink.ts and
report/testdata/kitchen-sink.yaml to also include
duty/schema/openapi/change-types.schema.json (the file read by
build-kitchen-sink.ts) so changes to that schema will cause make to rerun the
build-kitchen-sink.ts command.
🧹 Nitpick comments (8)
report/components/ConfigInsightsSection.tsx (1)

89-95: Tighten byType key typing to prevent accidental key drift.

Using Record<string, ConfigAnalysis[]> loses compile-time safety for valid buckets (AnalysisType | 'other'). Narrowing this map improves maintainability.

💡 Proposed type-safe shape
-  const byType: Record<string, ConfigAnalysis[]> = {};
+  const byType: Partial<Record<AnalysisType | 'other', ConfigAnalysis[]>> = {};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/ConfigInsightsSection.tsx` around lines 89 - 95, The bucket
map currently declared as byType: Record<string, ConfigAnalysis[]> should be
narrowed to exactly the allowed keys (AnalysisType | 'other') to prevent
accidental key drift; change the declaration to use Partial<Record<AnalysisType
| 'other', ConfigAnalysis[]>> (or Record<AnalysisType | 'other',
ConfigAnalysis[]> with explicit initialization) and update uses in the loop and
in typeOrder so accesses like byType[t] are type-safe against ANALYSIS_TYPES,
the loop variable a (ConfigAnalysis) and the computed t value, and ensure any
conditional checks for byType['other'] are updated to satisfy the new narrower
type.
report/components/DeploymentChanges.tsx (1)

50-58: Normalize category to a strict union before counting/rendering.

Current counting is string-indexed (Record<string, number>). If classifier output expands unexpectedly, acc[category] += 1 can become NaN and label lookup can break. Normalize once to scale|policy|spec.

🛠️ Suggested hardening
+type DeploymentCategory = keyof typeof CATEGORY_LABELS;
+
+const toDeploymentCategory = (value: unknown): DeploymentCategory => {
+  const normalized = String(value).toLowerCase();
+  if (normalized === 'scale' || normalized === 'policy' || normalized === 'spec') {
+    return normalized;
+  }
+  return 'spec';
+};
+
   const categorized = relevant.map((change) => ({
     change,
-    category: classifyDeploymentChange(change) ?? 'spec',
+    category: toDeploymentCategory(classifyDeploymentChange(change)),
   }));
 
   const counts = categorized.reduce(
     (acc, { category }) => { acc[category] += 1; return acc; },
-    { scale: 0, policy: 0, spec: 0 } as Record<string, number>,
+    { scale: 0, policy: 0, spec: 0 } as Record<DeploymentCategory, number>,
   );

Also applies to: 109-115

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/DeploymentChanges.tsx` around lines 50 - 58, The code uses
a string-indexed counter which can produce NaN if classifyDeploymentChange
returns something unexpected; define a strict union type like type Category =
'scale' | 'policy' | 'spec', ensure categorized items use category:
(classifyDeploymentChange(change) ?? 'spec') as Category, and change the counts
reducer to use an accumulator typed Record<Category, number> initialized as {
scale: 0, policy: 0, spec: 0 } so acc[category] is always valid; apply the same
typing/fallback pattern for the other counting block referenced (lines ~109-115)
to normalize categories before counting/rendering.
report/CatalogReport.tsx (1)

154-193: Prefer a single grouped data source to prevent duplicate sections.

When data.groupBy === 'config', both data.entries (Line 154) and data.configGroups (Line 185) are rendered. If both are populated, sections can duplicate and inflate report output.

Refactor direction
+const groupedRows = (data.configGroups?.length ?? 0) > 0
+  ? data.configGroups.map((g) => ({
+      key: g.configItem?.id,
+      configItem: g.configItem,
+      changes: g.changes,
+      analyses: g.analyses,
+      access: g.access,
+      accessLogs: g.accessLogs,
+      rbacResources: [],
+    }))
+  : (data.entries ?? []).map((e) => ({
+      key: e.configItem?.id,
+      configItem: e.configItem,
+      changes: e.changes,
+      analyses: e.analyses,
+      access: e.access,
+      accessLogs: e.accessLogs,
+      rbacResources: e.rbacResources ?? [],
+    }));
+
+{data.groupBy === 'config' && groupedRows.map((row, idx) => (
+  <React.Fragment key={row.key || idx}>
+    {/* single rendering path */}
+  </React.Fragment>
+))}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/CatalogReport.tsx` around lines 154 - 193, The report currently
renders both data.entries and data.configGroups when data.groupBy === 'config',
causing duplicate sections; update the rendering logic to use a single grouped
data source (choose one deterministically, e.g., prefer data.configGroups when
it has items, otherwise fall back to data.entries) and change the two blocks
that render
ConfigGroupHeader/CategorizedChangesSection/ConfigInsightsSection/CatalogAccessSection/CatalogAccessLogsSection
and RBACMatrixSection to only iterate the chosen collection (reference symbols:
data.groupBy, data.entries, data.configGroups, ConfigGroupHeader,
CategorizedChangesSection, ConfigInsightsSection, CatalogAccessSection,
CatalogAccessLogsSection, RBACMatrixSection); ensure keys remain stable (use
configItem.id when present) and remove or guard the other iteration so sections
are not duplicated.
db/applications.go (2)

591-598: Use duty.Now() instead of time.Now() in DB time filters.

Line 597, Line 671, and Line 679 should use the project time source for consistency with DB timestamp handling.

Suggested fix
-		staleCutoff = time.Now().Add(-time.Duration(d))
+		staleCutoff = duty.Now().Add(-time.Duration(d))
...
-		q = q.Where("config_access_logs.created_at >= ?", time.Now().Add(-time.Duration(d)))
+		q = q.Where("config_access_logs.created_at >= ?", duty.Now().Add(-time.Duration(d)))
...
-		q = q.Where("config_access_logs.created_at <= ?", time.Now().Add(-time.Duration(d)))
+		q = q.Where("config_access_logs.created_at <= ?", duty.Now().Add(-time.Duration(d)))

As per coding guidelines: use duty.Now() (which returns time.Time) instead of time.Now() for database timestamps and soft deletes.

Also applies to: 666-680

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/applications.go` around lines 591 - 598, Replace usages of time.Now() used
to compute DB filter cutoffs with the project time source duty.Now():
specifically update the stale cutoff calculation in the block that parses
filters.Stale (variable staleCutoff) to use duty.Now().Add(-time.Duration(d))
instead of time.Now(); make the same replacement for the other DB
filter/soft-delete cutoff computations referenced in the same file (the other
occurrences around the filters handling). Ensure the duty package is imported if
not present and that the resulting time.Time types remain compatible with the
surrounding code (e.g., variables like staleCutoff and any comparisons).

586-605: Push stale filtering into SQL to avoid over-fetching.

The stale filter is applied after Find(&rows), so large datasets are fully loaded into memory before filtering.

Suggested fix
 	var staleCutoff time.Time
 	if filters.Stale != "" {
 		d, err := duration.ParseDuration(filters.Stale)
 		if err != nil {
 			return nil, ctx.Oops().Wrapf(err, "invalid stale filter %q", filters.Stale)
 		}
-		staleCutoff = time.Now().Add(-time.Duration(d))
+		staleCutoff = duty.Now().Add(-time.Duration(d))
+		q = q.Where("(config_access_summary.last_signed_in_at IS NULL OR config_access_summary.last_signed_in_at < ?)", staleCutoff)
 	}
...
-		if !staleCutoff.IsZero() && !isStale {
-			continue
-		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/applications.go` around lines 586 - 605, Parse filters.Stale into a cutoff
time before executing q.Find and add the condition to the GORM query so
filtering happens in SQL: compute staleCutoff from filters.Stale (same
duration.ParseDuration logic) and then update q with a Where that matches rows
where LastSignedInAt IS NULL OR LastSignedInAt < staleCutoff (e.g. q =
q.Where("last_signed_in_at IS NULL OR last_signed_in_at < ?", staleCutoff)),
then call q.Find(&rows); remove the in-memory stale check inside the loop
(references: q, filters.Stale, staleCutoff, RBACAccessRow.LastSignedInAt, rows).
report/facet.go (1)

64-73: Unchecked Close() errors on temp files.

While errors during Close() on temp files rarely occur, ignoring them can mask issues like incomplete writes. Consider checking these errors for robustness.

♻️ Proposed fix
 	if _, err := dataFile.Write(dataJSON); err != nil {
 		return nil, fmt.Errorf("write data file: %w", err)
 	}
-	dataFile.Close()
+	if err := dataFile.Close(); err != nil {
+		return nil, fmt.Errorf("close data file: %w", err)
+	}

 	outFile, err := os.CreateTemp("", "facet-output-*."+format)
 	if err != nil {
 		return nil, fmt.Errorf("create output temp file: %w", err)
 	}
-	outFile.Close()
+	if err := outFile.Close(); err != nil {
+		os.Remove(outFile.Name())
+		return nil, fmt.Errorf("close output file: %w", err)
+	}
 	defer os.Remove(outFile.Name())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/facet.go` around lines 64 - 73, The Close() calls on dataFile and
outFile are unchecked; capture their return errors and handle them (returning a
wrapped fmt.Errorf) to avoid masking I/O problems — after writing dataJSON check
err := dataFile.Close() and if non-nil return nil, fmt.Errorf("close data file:
%w", err), and after creating outFile check err := outFile.Close() and if
non-nil return nil, fmt.Errorf("close output temp file: %w", err); use the
existing variable names dataFile and outFile and preserve the current
error-wrapping style.
Makefile (1)

61-63: Keep ci-test on the canonical build path.

This reintroduces a raw go build flow inside ci-test, which can drift from the flags and prerequisites carried by dev. Prefer delegating to $(MAKE) dev before running ginkgo. Based on learnings: don't run go build directly; use make dev instead.

Proposed change
-ci-test: $(TAILWIND_JS) $(LOCALBIN)
-	go build -o ./.bin/$(NAME) main.go
+ci-test:
+	$(MAKE) dev
 	ginkgo -r --skip-package=tests/e2e --keep-going --junit-report junit-report.xml --github-output --output-dir test-reports --succinct
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 61 - 63, The ci-test Makefile target currently runs a
raw go build; replace that build step by invoking the canonical build target via
$(MAKE) dev so ci-test inherits the dev flags and prerequisites: keep the
existing prerequisites ($(TAILWIND_JS) $(LOCALBIN)), remove the direct go build
command, call $(MAKE) dev before running the ginkgo command, and ensure the
target still runs the ginkgo invocation unchanged (references: target name
ci-test and dev).
report/catalog/change_mappings.go (1)

112-129: Wrap runtime mapping failures with ctx.Oops() too.

These evaluation paths fall back to fmt.Errorf, so runtime CEL failures lose the same error categorization/context that constructor-time failures already preserve. Carrying dutyContext.Context on changeMapper would keep this path aligned with the repo’s error-handling convention.

Possible refactor
 type changeMapper struct {
+	ctx      dutyContext.Context
 	mappings []compiledCategoryMapping
 }
@@
-	return &changeMapper{mappings: compiled}, nil
+	return &changeMapper{ctx: ctx, mappings: compiled}, nil
@@
-				return fmt.Errorf("failed to evaluate filter %q: %w", mapping.mapping.Filter, err)
+				return m.ctx.Oops().Wrapf(err, "failed to evaluate filter %q", mapping.mapping.Filter)
@@
-					return fmt.Errorf("failed to evaluate transform %q: %w", mapping.mapping.Transform, err)
+					return m.ctx.Oops().Wrapf(err, "failed to evaluate transform %q", mapping.mapping.Transform)

As per coding guidelines, "Use duty/context.Context for all function signatures unless there's a specific reason not to." and "Use ctx.Oops() method call ... Use .Wrapf() or .Wrap() to wrap underlying errors with context."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/change_mappings.go` around lines 112 - 129, The runtime CEL
evaluation errors returned in change_mappings.go (the error paths after
evaluating mapping.mapping.Filter and evalMappingTransform(mapping.transform,
env)) should be wrapped with the repository's duty context using ctx.Oops();
update changeMapper to carry a duty/context.Context (or add a ctx parameter) and
replace the plain fmt.Errorf returns with ctx.Oops().Wrapf(...) (or .Wrap(...))
so the errors for mapping.mapping.Filter and mapping.mapping.Transform (and the
evalMappingTransform call) preserve duty-context metadata and match
constructor-time error handling.
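duty's `ctx.Oops()` builder isn't reproduced here; the sketch below uses a stand-in `oopsCtx` type to show the shape of the refactor — carrying a context on `changeMapper` so runtime evaluation errors get wrapped with the same metadata as constructor-time ones:

```go
package main

import (
	"errors"
	"fmt"
)

// oopsCtx is a hypothetical stand-in for duty/context.Context's
// Oops() error builder; the real API is not reproduced here.
type oopsCtx struct{ component string }

func (c oopsCtx) Wrapf(err error, format string, args ...any) error {
	return fmt.Errorf("[%s] %s: %w", c.component, fmt.Sprintf(format, args...), err)
}

// changeMapper carries the context so runtime CEL failures are
// wrapped through the same path as constructor-time failures.
type changeMapper struct {
	ctx oopsCtx
}

// evalFilter simulates a runtime CEL evaluation failure and wraps
// it via the carried context instead of a bare fmt.Errorf.
func (m *changeMapper) evalFilter(expr string) error {
	evalErr := errors.New("no such attribute") // stand-in for a CEL error
	return m.ctx.Wrapf(evalErr, "failed to evaluate filter %q", expr)
}

func main() {
	m := &changeMapper{ctx: oopsCtx{component: "catalog"}}
	fmt.Println(m.evalFilter("change.kind == 'Deployment'"))
}
```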
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/catalog_get.go`:
- Around line 407-413: Remove the unused helper function formatDuration from
this file and delete any associated tests referencing it; the project already
uses api.Human() for duration display and has an active duplicate helper in
db/rbac.go, so replace any intent to use formatDuration with api.Human() where
needed and remove the dead code and tests to avoid duplication.

In `@Makefile`:
- Around line 153-155: The dev Makefile target currently writes to
./.bin/$(NAME) but doesn't ensure the directory exists; update the dev target
rule to list $(LOCALBIN) as a prerequisite (i.e., change the target header from
"dev: static" to "dev: static $(LOCALBIN)"), and ensure there is a corresponding
$(LOCALBIN) target that creates the ./.bin directory (mkdir -p ./.bin) so go
build -o succeeds on clean checkouts; reference the dev target and the
$(LOCALBIN) prerequisite and the output path ./.bin/$(NAME) when making the
change.

In `@report/catalog/change_mappings.go`:
- Around line 106-107: The Apply logic incorrectly treats any non-nil
change.TypedChange as "already populated" even when its Kind is empty; update
the checks in Apply (and the surrounding transform/fallback logic that inspects
TypedChange) to treat TypedChange as unset unless TypedChange.Kind (or .kind) is
non-empty — i.e., replace conditions like "change.TypedChange != nil" with
"change.TypedChange != nil && change.TypedChange.Kind != \"\"" (and similarly
guard uses in the transform and fallback branches) so an empty/{} TypedChange
does not block transformation or fallback to Details.

In `@report/CatalogReport.tsx`:
- Around line 124-136: The code dereferences data.configItem when building the
configItem object (const configItem = { ... }) without guarding for its absence;
update the CatalogReport component to first check for data?.configItem and
either return early (e.g., render a fallback) or build the configItem object
using safe access (optional chaining or a default empty object) so fields like
id, name, created_at and updated_at are not accessed on undefined; specifically
modify the block that constructs configItem to use data?.configItem or a
fallback (or wrap in an if (!data?.configItem) return ...) and adjust any
downstream uses accordingly.

In `@report/components/BackupsSection.tsx`:
- Around line 33-39: The success detection is inconsistent: use normalized,
case-insensitive checks instead of strict equality; update successCount
(currently using b.status === 'success') and calendarEntries' status computation
to use a lowercase comparison (e.g., String(status).toLowerCase() === 'success')
alongside the existing isFailed(String(status)) helper so variants like
"Success" or "successful" are counted and rendered correctly (refer to isFailed,
successCount, calendarEntries, and BackupCalendarStatus when making the
changes).

In `@report/components/ConfigInsightsSection.tsx`:
- Around line 64-69: Normalize status and severity values before ranking and
counting: when computing sorted from analyses (the sorted variable) map each
item's status and severity to a canonical value (e.g., lowercase strings and a
fallback like 'unknown' or '' that you include at the end of
statusOrder/SEVERITY_ORDER) so indexOf on statusOrder and SEVERITY_ORDER never
returns -1; update the sort comparator in ConfigInsightsSection (the statusOrder
and SEVERITY_ORDER usage) to use the normalized value and change the counting
logic around the severity cards (the condition referenced on lines ~86-88) to
use the same normalization so unknown or null severities are consistently
classified and included in totals rather than skipped.

In `@report/components/GitRef.tsx`:
- Around line 47-49: Validate the user-controlled link before using it in the
anchor in GitRef.tsx: in the GitRef component, check the scheme of the link
variable (e.g., parse with URL and ensure protocol is "http:" or "https:" or
match your explicit allowlist) and only render <a href={link} ...> when it
passes; otherwise render the content without an anchor (or use a safe fallback).
Ensure this validation occurs where the if (link) branch returns the anchor so
unsafe schemes are never emitted.

In `@report/components/RBACMatrixSection.tsx`:
- Line 142: The JSX currently sets the cell value with a logical-OR fallback
(value using variable v) which masks valid falsy values like 0 and false; update
the assignment in RBACMatrixSection where value is set for v to use a nullish
fallback (nullish coalescing) so only null/undefined fall back to '-' and
0/false are preserved, and run/update any related tests that assert these falsy
values are rendered correctly.
- Around line 29-30: The current assignment row.roles.set(u.role, u) blindly
overwrites existing entries causing loss of info when the same user+role appears
multiple times; update the logic in RBACMatrixSection (where row.roles is
populated) to first check for an existing entry via row.roles.get(u.role) and
deterministically merge fields (e.g., preserve direct grants over indirect, OR
flags like overdue, merge metadata) rather than replacing the object; ensure the
merge is deterministic (consistent precedence rules) and then set the merged
result back on row.roles using the same key.

In `@report/facet.go`:
- Around line 176-199: The response bodies from the render POST and subsequent
GET are not closed, causing leaks; after the POST that produces response
(variable response) add a deferred close (defer response.Body.Close())
immediately after verifying the request succeeded, and after the GET that
produces pdfResponse add defer pdfResponse.Body.Close() immediately after
checking the request/response (i.e., right after pdfResponse is assigned from
client.R(ctx).Get and after any basic status checks) so both response.Body and
pdfResponse.Body are closed in the function that contains these variables.

In `@report/ViewReport.tsx`:
- Around line 9-23: In ViewCoverPage, the reduce currently uses falsy-coalescing
which replaces valid falsy defaults; change the accumulator to use nullish
coalescing for variable values (use v.default ?? '-') so 0, '' or false are
preserved, and ensure the CoverPage title prop is always a string by using a
safe fallback like data.title ?? data.name ?? '' so title is never undefined;
update references in the function (variableTags, ViewCoverPage, CoverPage)
accordingly.
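The GitRef.tsx scheme allowlist suggested above translates naturally to Go's net/url. `isSafeLink` is a hypothetical helper showing the check, not code from this PR:

```go
package main

import (
	"fmt"
	"net/url"
)

// isSafeLink allows only http/https links, mirroring the allowlist
// check suggested before rendering an <a href> from user input.
// Anything unparseable or with another scheme (javascript:, data:)
// is rejected.
func isSafeLink(link string) bool {
	u, err := url.Parse(link)
	if err != nil {
		return false
	}
	return u.Scheme == "http" || u.Scheme == "https"
}

func main() {
	for _, l := range []string{
		"https://github.com/flanksource/mission-control",
		"javascript:alert(1)",
		"data:text/html;base64,AAAA",
	} {
		fmt.Println(l, isSafeLink(l))
	}
}
```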

---

Outside diff comments:
In `@auth/oidc/storage.go`:
- Around line 60-73: Replace direct uses of time.Now() with duty.Now() for DB
timestamp writes in the affected storage functions: in CreateAuthRequest
(replace CreatedAt and ExpiresAt assignments), and similarly update the other DB
timestamp fields referenced around the block noted (lines ~110-127) to use
duty.Now(); locate the AuthRequest struct assignments and any soft-delete or
timestamp fields in methods on type Storage and substitute duty.Now() for
time.Now() to follow the duty package convention.

In `@auth/oidcclient/oidcclient.go`:
- Around line 30-42: Discover currently decodes the response body without
checking HTTP status, causing misleading JSON errors for non-2xx responses;
update the Discover function to check resp.StatusCode (compare to http.StatusOK)
after the GET call and before json.NewDecoder.Decode, and if it's not OK, read
the response body (or at least include resp.Status and body snippet) and return
a clear error that contains the status and body; reference HTTPClient.Get,
resp.StatusCode, and the Discover function when making this change.

---

Duplicate comments:
In `@cmd/catalog_get.go`:
- Around line 120-125: The code parses the --since duration and builds sinceTime
using time.Now(), but DB filters must use the repo clock; change the creation of
sinceTime to use duty.Now() instead of time.Now() (i.e., replace
time.Now().Add(-time.Duration(since)) with
duty.Now().Add(-time.Duration(since))). Ensure the duty package is imported if
not already and update any references to sinceTime that rely on the
repo-standard clock.

In `@cmd/catalog_report.go`:
- Around line 68-73: The command currently calls duty.Start(...) and then
immediately blocks with shutdown.WaitForSignal(), preventing the one-shot work
from running; instead, call duty.Start("mission-control", duty.ClientOnly) as
now but immediately defer stop() (or add stop via shutdown.AddHookWithPriority
as a hook) and remove or move shutdown.WaitForSignal() so the command performs
its work (resolve/export configs) and only waits for a signal afterward; update
the code around duty.Start, stop, shutdown.AddHookWithPriority and
shutdown.WaitForSignal accordingly (prefer deferring stop() right after
duty.Start and deleting the premature WaitForSignal call).

In `@db/applications.go`:
- Around line 632-643: The query building block using Select(...).Joins("JOIN
config_items ON config_items.id = config_access_logs.config_id").Joins("JOIN
external_users ON external_users.id = config_access_logs.external_user_id")
drops access-log rows when related config_items or external_users are deleted;
change both inner JOINs to LEFT JOINs so config_access_logs rows are retained
(update the Joins calls to use "LEFT JOIN config_items ..." and "LEFT JOIN
external_users ...") and ensure any downstream code that expects non-nil
config_name/config_type/user_name handles NULLs appropriately.

In `@Makefile`:
- Around line 208-209: The Makefile rule for the target report/kitchen-sink.json
must include the OpenAPI schema file as an explicit prerequisite so schema-only
edits trigger a rebuild; update the dependency list for the target that
currently references report/build-kitchen-sink.ts and
report/testdata/kitchen-sink.yaml to also include
duty/schema/openapi/change-types.schema.json (the file read by
build-kitchen-sink.ts) so changes to that schema will cause make to rerun the
build-kitchen-sink.ts command.

In `@report/CatalogReport.tsx`:
- Around line 154-163: In the data.groupBy === 'config' branch where
data.entries is mapped, update the rendering logic for ConfigGroupHeader,
CategorizedChangesSection, ConfigInsightsSection and RBACMatrixSection so they
respect data.sections flags and include access/accessLogs for each entry;
specifically, inside the (data.entries || []).map block (entries mapping)
conditionally render access and accessLogs and pass through the same
section-filter booleans used elsewhere (e.g., check data.sections before
rendering CategorizedChangesSection, ConfigInsightsSection, and
access/accessLogs) and ensure the hideConfigName flag behavior is preserved when
calling CategorizedChangesSection.

---

Nitpick comments:
In `@db/applications.go`:
- Around line 591-598: Replace usages of time.Now() used to compute DB filter
cutoffs with the project time source duty.Now(): specifically update the stale
cutoff calculation in the block that parses filters.Stale (variable staleCutoff)
to use duty.Now().Add(-time.Duration(d)) instead of time.Now(); make the same
replacement for the other DB filter/soft-delete cutoff computations referenced
in the same file (the other occurrences around the filters handling). Ensure the
duty package is imported if not present and that the resulting time.Time types
remain compatible with the surrounding code (e.g., variables like staleCutoff
and any comparisons).
- Around line 586-605: Parse filters.Stale into a cutoff time before executing
q.Find and add the condition to the GORM query so filtering happens in SQL:
compute staleCutoff from filters.Stale (same duration.ParseDuration logic) and
then update q with a Where that matches rows where LastSignedInAt IS NULL OR
LastSignedInAt < staleCutoff (e.g. q = q.Where("last_signed_in_at IS NULL OR
last_signed_in_at < ?", staleCutoff)), then call q.Find(&rows); remove the
in-memory stale check inside the loop (references: q, filters.Stale,
staleCutoff, RBACAccessRow.LastSignedInAt, rows).

In `@Makefile`:
- Around line 61-63: The ci-test Makefile target currently runs a raw go build;
replace that build step by invoking the canonical build target via $(MAKE) dev
so ci-test inherits the dev flags and prerequisites: keep the existing
prerequisites ($(TAILWIND_JS) $(LOCALBIN)), remove the direct go build command,
call $(MAKE) dev before running the ginkgo command, and ensure the target still
runs the ginkgo invocation unchanged (references: target name ci-test and dev).

In `@report/catalog/change_mappings.go`:
- Around line 112-129: The runtime CEL evaluation errors returned in
change_mappings.go (the error paths after evaluating mapping.mapping.Filter and
evalMappingTransform(mapping.transform, env)) should be wrapped with the
repository's duty context using ctx.Oops(); update changeMapper to carry a
duty/context.Context (or add a ctx parameter) and replace the plain fmt.Errorf
returns with ctx.Oops().Wrapf(...) (or .Wrap(...)) so the errors for
mapping.mapping.Filter and mapping.mapping.Transform (and the
evalMappingTransform call) preserve duty-context metadata and match
constructor-time error handling.

In `@report/CatalogReport.tsx`:
- Around line 154-193: The report currently renders both data.entries and
data.configGroups when data.groupBy === 'config', causing duplicate sections;
update the rendering logic to use a single grouped data source (choose one
deterministically, e.g., prefer data.configGroups when it has items, otherwise
fall back to data.entries) and change the two blocks that render
ConfigGroupHeader/CategorizedChangesSection/ConfigInsightsSection/CatalogAccessSection/CatalogAccessLogsSection
and RBACMatrixSection to only iterate the chosen collection (reference symbols:
data.groupBy, data.entries, data.configGroups, ConfigGroupHeader,
CategorizedChangesSection, ConfigInsightsSection, CatalogAccessSection,
CatalogAccessLogsSection, RBACMatrixSection); ensure keys remain stable (use
configItem.id when present) and remove or guard the other iteration so sections
are not duplicated.

In `@report/components/ConfigInsightsSection.tsx`:
- Around line 89-95: The bucket map currently declared as byType: Record<string,
ConfigAnalysis[]> should be narrowed to exactly the allowed keys (AnalysisType |
'other') to prevent accidental key drift; change the declaration to use
Partial<Record<AnalysisType | 'other', ConfigAnalysis[]>> (or
Record<AnalysisType | 'other', ConfigAnalysis[]> with explicit initialization)
and update uses in the loop and in typeOrder so accesses like byType[t] are
type-safe against ANALYSIS_TYPES, the loop variable a (ConfigAnalysis) and the
computed t value, and ensure any conditional checks for byType['other'] are
updated to satisfy the new narrower type.

In `@report/components/DeploymentChanges.tsx`:
- Around line 50-58: The code uses a string-indexed counter which can produce
NaN if classifyDeploymentChange returns something unexpected; define a strict
union type like type Category = 'scale' | 'policy' | 'spec', ensure categorized
items use category: (classifyDeploymentChange(change) ?? 'spec') as Category,
and change the counts reducer to use an accumulator typed Record<Category,
number> initialized as { scale: 0, policy: 0, spec: 0 } so acc[category] is
always valid; apply the same typing/fallback pattern for the other counting
block referenced (lines ~109-115) to normalize categories before
counting/rendering.

In `@report/facet.go`:
- Around line 64-73: The Close() calls on dataFile and outFile are unchecked;
capture their return errors and handle them (returning a wrapped fmt.Errorf) to
avoid masking I/O problems — after writing dataJSON check err :=
dataFile.Close() and if non-nil return nil, fmt.Errorf("close data file: %w",
err), and after creating outFile check err := outFile.Close() and if non-nil
return nil, fmt.Errorf("close output temp file: %w", err); use the existing
variable names dataFile and outFile and preserve the current error-wrapping
style.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0d1de6be-6b26-4a35-94d3-d8d92de1e675

📥 Commits

Reviewing files that changed from the base of the PR and between f1cc0d3 and 5771d74.

⛔ Files ignored due to path filters (1)
  • report/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (27)
  • Makefile
  • auth/oidc/models.go
  • auth/oidc/storage.go
  • auth/oidcclient/oidcclient.go
  • cmd/catalog.go
  • cmd/catalog_get.go
  • cmd/catalog_report.go
  • cmd/catalog_tree.go
  • cmd/connection_browser.go
  • db/applications.go
  • rbac_report/render_facet.go
  • report/CatalogReport.tsx
  • report/KitchenSink.tsx
  • report/MatrixDemo.tsx
  • report/ViewReport.tsx
  • report/catalog/change_mappings.go
  • report/components/BackupActivityCalendar.tsx
  • report/components/BackupsSection.tsx
  • report/components/ConfigChangesExamples.tsx
  • report/components/ConfigInsightsSection.tsx
  • report/components/DeploymentChanges.tsx
  • report/components/GitRef.tsx
  • report/components/RBACMatrixSection.tsx
  • report/facet.go
  • report/package.json
  • report/scraper/scraper.go
  • views/render_facet.go
✅ Files skipped from review due to trivial changes (1)
  • report/MatrixDemo.tsx
🚧 Files skipped from review as they are similar to previous changes (5)
  • report/package.json
  • auth/oidc/models.go
  • report/components/ConfigChangesExamples.tsx
  • report/scraper/scraper.go
  • cmd/connection_browser.go

Comment thread cmd/catalog_get.go
Comment on lines +407 to +413
func formatDuration(d time.Duration) string {
if d >= 24*time.Hour {
days := int(d.Hours() / 24)
return fmt.Sprintf("%dd", days)
}
return d.String()
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if formatDuration is used anywhere in the codebase
rg -n 'formatDuration' --type=go

Repository: flanksource/mission-control

Length of output: 441


🏁 Script executed:

#!/bin/bash
# Check the context around line 494 where the review suggested it should be used
sed -n '490,500p' cmd/catalog_get.go

# Also check the test file to see what it's testing
head -n 220 cmd/catalog_get_test.go | tail -n 25

Repository: flanksource/mission-control

Length of output: 1148


formatDuration is unused and should be removed or put into use.

This helper function is defined but never called within cmd/catalog_get.go. The current code uses api.Human() for duration display instead (line 494). The duplicate definition in db/rbac.go serves the same purpose and is actively used there. Remove this unused function and its tests from cmd/catalog_get.go.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/catalog_get.go` around lines 407 - 413, Remove the unused helper function
formatDuration from this file and delete any associated tests referencing it;
the project already uses api.Human() for duration display and has an active
duplicate helper in db/rbac.go, so replace any intent to use formatDuration with
api.Human() where needed and remove the dead code and tests to avoid
duplication.

Comment thread Makefile
Comment on lines +153 to 155
dev: static
# Disabling CGO because of slow build times in apple silicon (just experimenting)
CGO_ENABLED=0 go build -v -o ./.bin/$(NAME) -gcflags="all=-N -l" main.go

⚠️ Potential issue | 🟠 Major

Add $(LOCALBIN) as a prerequisite for dev.

dev writes to ./.bin/$(NAME) but this target does not create that directory. On a clean checkout, go build -o will fail before the binary is written.

Proposed change
-dev: static
+dev: static $(LOCALBIN)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-dev: static
+dev: static $(LOCALBIN)
 # Disabling CGO because of slow build times in apple silicon (just experimenting)
 CGO_ENABLED=0 go build -v -o ./.bin/$(NAME) -gcflags="all=-N -l" main.go
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 153 - 155, The dev Makefile target currently writes to
./.bin/$(NAME) but doesn't ensure the directory exists; update the dev target
rule to list $(LOCALBIN) as a prerequisite (i.e., change the target header from
"dev: static" to "dev: static $(LOCALBIN)"), and ensure there is a corresponding
$(LOCALBIN) target that creates the ./.bin directory (mkdir -p ./.bin) so go
build -o succeeds on clean checkouts; reference the dev target and the
$(LOCALBIN) prerequisite and the output path ./.bin/$(NAME) when making the
change.

Comment thread report/catalog/change_mappings.go
Comment on lines +106 to +107
category := change.Category
typedChange := change.TypedChange

⚠️ Potential issue | 🟠 Major

Treat TypedChange without kind as unset.

Apply currently uses TypedChange != nil as the “already populated” check, but the rest of this file treats a non-empty kind as the real validity test. If an incoming change carries {} or a map without kind, Line 126 skips the transform and Lines 147-153 skip the fallback from Details, so the invalid value survives unchanged.

Proposed fix
+func hasTypedChangeKind(v map[string]any) bool {
+	kind, _ := v["kind"].(string)
+	return kind != ""
+}
+
 func (m *changeMapper) Apply(change *reportAPI.CatalogReportChange) error {
 	if change == nil {
 		return nil
 	}
 
 	env := changeMappingEnv(change)
 	category := change.Category
-	typedChange := change.TypedChange
+	typedChange := change.TypedChange
+	if !hasTypedChangeKind(typedChange) {
+		typedChange = nil
+	}
@@
-	if change.TypedChange == nil {
+	if !hasTypedChangeKind(change.TypedChange) {
 		if typedChange != nil {
 			change.TypedChange = typedChange
 		} else {
 			change.TypedChange = typedChangeFromDetails(change.Details)
 		}
 	}

Also applies to: 126-153

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/catalog/change_mappings.go` around lines 106 - 107, The Apply logic
incorrectly treats any non-nil change.TypedChange as "already populated" even
when its Kind is empty; update the checks in Apply (and the surrounding
transform/fallback logic that inspects TypedChange) to treat TypedChange as
unset unless TypedChange.Kind (or .kind) is non-empty — i.e., replace conditions
like "change.TypedChange != nil" with "change.TypedChange != nil &&
change.TypedChange.Kind != \"\"" (and similarly guard uses in the transform and
fallback branches) so an empty/{} TypedChange does not block transformation or
fallback to Details.

Comment thread report/CatalogReport.tsx
Comment on lines +124 to +136
const configItem = {
id: data.configItem.id,
name: data.configItem.name,
type: data.configItem.type,
configClass: data.configItem.configClass,
status: data.configItem.status,
health: data.configItem.health,
description: data.configItem.description,
labels: data.configItem.labels,
tags: data.configItem.tags,
createdAt: data.configItem.created_at,
updatedAt: data.configItem.updated_at,
};

⚠️ Potential issue | 🟠 Major

Protect data.configItem dereference to avoid runtime crashes.

Line 125–Line 136 assume data.configItem always exists, but this file already treats it as optional elsewhere (Line 25). Guard it before field access.

Suggested fix
-  const configItem = {
-    id: data.configItem.id,
-    name: data.configItem.name,
-    type: data.configItem.type,
-    configClass: data.configItem.configClass,
-    status: data.configItem.status,
-    health: data.configItem.health,
-    description: data.configItem.description,
-    labels: data.configItem.labels,
-    tags: data.configItem.tags,
-    createdAt: data.configItem.created_at,
-    updatedAt: data.configItem.updated_at,
-  };
+  const sourceConfigItem = data.configItem ?? {};
+  const configItem = {
+    id: sourceConfigItem.id,
+    name: sourceConfigItem.name,
+    type: sourceConfigItem.type,
+    configClass: sourceConfigItem.configClass,
+    status: sourceConfigItem.status,
+    health: sourceConfigItem.health,
+    description: sourceConfigItem.description,
+    labels: sourceConfigItem.labels,
+    tags: sourceConfigItem.tags,
+    createdAt: sourceConfigItem.created_at,
+    updatedAt: sourceConfigItem.updated_at,
+  };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/CatalogReport.tsx` around lines 124 - 136, The code dereferences
data.configItem when building the configItem object (const configItem = { ... })
without guarding for its absence; update the CatalogReport component to first
check for data?.configItem and either return early (e.g., render a fallback) or
build the configItem object using safe access (optional chaining or a default
empty object) so fields like id, name, created_at and updated_at are not
accessed on undefined; specifically modify the block that constructs configItem
to use data?.configItem or a fallback (or wrap in an if (!data?.configItem)
return ...) and adjust any downstream uses accordingly.
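Independent of the component, the guard pattern can be sketched in isolation; the `ConfigItem` shape and `toConfigSummary` helper below are hypothetical stand-ins for the real payload types, chosen only to show the nullish-fallback guard:

```typescript
interface ConfigItem {
  id?: string;
  name?: string;
}

// Hypothetical summary builder: falls back to an empty object so field
// access never dereferences undefined.
function toConfigSummary(data: { configItem?: ConfigItem }) {
  const source = data.configItem ?? {};
  return { id: source.id, name: source.name };
}

const withItem = toConfigSummary({ configItem: { id: 'c-1', name: 'db' } });
const withoutItem = toConfigSummary({}); // no crash: fields are simply undefined
```

The same effect can be achieved with an early `return` rendering a fallback; the empty-object default is just the least invasive change when downstream code already tolerates `undefined` fields.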

Comment on lines +33 to +39
const isFailed = (status: string) => String(status).toLowerCase().includes('fail');
const successCount = backups.filter((b) => b.status === 'success').length;
const failedCount = backups.filter((b) => b.status !== 'success').length;

const failedRows = backups
.filter((b) => b.status !== 'success')
.map((b) => [b.database, formatDateTime(b.date), b.size, b.status]);
const failedCount = backups.filter((b) => isFailed(b.status)).length;
const calendarEntries = backups.map((backup) => ({
date: backup.date,
status: (backup.status === 'success' ? 'success' : isFailed(backup.status) ? 'failed' : 'warning') as BackupCalendarStatus,
label: backup.size || undefined,

⚠️ Potential issue | 🟠 Major

Normalize success detection consistently for stats and calendar status.

Line 34 and Line 38 use strict 'success' equality while failure detection is normalized to lowercase. Status variants like `Success` or `successful` will be undercounted in the stats and rendered as `warning` on the calendar.

Suggested fix
 export default function BackupsSection({ backups, restores }: Props) {
-  const isFailed = (status: string) => String(status).toLowerCase().includes('fail');
-  const successCount = backups.filter((b) => b.status === 'success').length;
+  const normalizeStatus = (status: string) => String(status).toLowerCase();
+  const isFailed = (status: string) => normalizeStatus(status).includes('fail');
+  const isSuccess = (status: string) => normalizeStatus(status).includes('success');
+  const successCount = backups.filter((b) => isSuccess(b.status)).length;
   const failedCount = backups.filter((b) => isFailed(b.status)).length;
   const calendarEntries = backups.map((backup) => ({
     date: backup.date,
-    status: (backup.status === 'success' ? 'success' : isFailed(backup.status) ? 'failed' : 'warning') as BackupCalendarStatus,
+    status: (isFailed(backup.status) ? 'failed' : isSuccess(backup.status) ? 'success' : 'warning') as BackupCalendarStatus,
     label: backup.size || undefined,
   }));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/BackupsSection.tsx` around lines 33 - 39, The success
detection is inconsistent: use normalized, case-insensitive checks instead of
strict equality; update successCount (currently using b.status === 'success')
and calendarEntries' status computation to use a lowercase comparison (e.g.,
String(status).toLowerCase() === 'success') alongside the existing
isFailed(String(status)) helper so variants like "Success" or "successful" are
counted and rendered correctly (refer to isFailed, successCount,
calendarEntries, and BackupCalendarStatus when making the changes).
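The normalized helpers can be exercised standalone; the status strings below are hypothetical examples of the casing variants a backup source might emit, illustrating why strict equality undercounts:

```typescript
const normalizeStatus = (status: string) => String(status).toLowerCase();
const isFailed = (status: string) => normalizeStatus(status).includes('fail');
const isSuccess = (status: string) => normalizeStatus(status).includes('success');

// Hypothetical status values from different backup tools.
const statuses = ['success', 'Success', 'successful', 'FAILED', 'PartialFailure', 'pending'];

const strictSuccesses = statuses.filter((s) => s === 'success').length; // 1 — undercounts
const normalizedSuccesses = statuses.filter(isSuccess).length;          // 3
const failures = statuses.filter(isFailed).length;                      // 2
// 'pending' matches neither helper and would fall through to 'warning'.
```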

Comment on lines +47 to +49
if (link) {
return <a href={link} target="_blank" rel="noopener noreferrer">{content}</a>;
}

⚠️ Potential issue | 🟠 Major

Validate link before using it as an anchor href.

Line 48 renders untrusted link directly. If this value is user-controlled, unsafe schemes can be injected. Please gate to http/https (or your explicit allowlist) before rendering <a>.

🔒 Suggested fix
+function isSafeExternalLink(value: string): boolean {
+  try {
+    const parsed = new URL(value);
+    return parsed.protocol === 'http:' || parsed.protocol === 'https:';
+  } catch {
+    return false;
+  }
+}
+
 export default function GitRef({ url, branch, file, dir, link, size = 'xs' }: GitRefProps) {
   if (!url && !file) return null;
   const textClass = SIZE_CLASSES[size];
@@
-  if (link) {
+  if (link && isSafeExternalLink(link)) {
     return <a href={link} target="_blank" rel="noopener noreferrer">{content}</a>;
   }
   return content;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/GitRef.tsx` around lines 47 - 49, Validate the
user-controlled link before using it in the anchor in GitRef.tsx: in the GitRef
component, check the scheme of the link variable (e.g., parse with URL and
ensure protocol is "http:" or "https:" or match your explicit allowlist) and
only render <a href={link} ...> when it passes; otherwise render the content
without an anchor (or use a safe fallback). Ensure this validation occurs where
the if (link) branch returns the anchor so unsafe schemes are never emitted.
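As a standalone sketch, the `URL` constructor makes the scheme allowlist check straightforward, since it throws on strings it cannot parse; the sample links are illustrative only:

```typescript
function isSafeExternalLink(value: string): boolean {
  try {
    const parsed = new URL(value);
    // Only allow explicit http/https; 'javascript:', 'data:', etc. are rejected.
    return parsed.protocol === 'http:' || parsed.protocol === 'https:';
  } catch {
    return false; // relative paths and malformed strings are rejected too
  }
}

const safe = isSafeExternalLink('https://example.com/repo/blob/main/README.md'); // true
const badScheme = isSafeExternalLink('javascript:alert(1)');                      // false
const malformed = isSafeExternalLink('not a url');                                // false
```

Note that `new URL('javascript:alert(1)')` parses successfully, so the protocol comparison, not the `try/catch`, is what blocks unsafe schemes.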

Comment on lines +29 to +30
row.roles.set(u.role, u);
}

⚠️ Potential issue | 🟠 Major

Avoid lossy overwrite when the same user has duplicate role entries.

Line 29 currently overwrites existing userId + role entries, so direct vs indirect grants (or overdue flags) can be lost depending on input order. Please merge deterministically instead of replacing blindly.

Proposed deterministic merge
   for (const u of resource.users || []) {
     roleSet.add(u.role);
     let row = userMap.get(u.userId);
     if (!row) {
       row = { userId: u.userId, userName: u.userName, email: u.email, roles: new Map() };
       userMap.set(u.userId, row);
     }
-    row.roles.set(u.role, u);
+    const existing = row.roles.get(u.role);
+    if (!existing) {
+      row.roles.set(u.role, u);
+    } else {
+      const existingIndirect = existing.roleSource.startsWith('group:');
+      const incomingIndirect = u.roleSource.startsWith('group:');
+      const preferred = existingIndirect && !incomingIndirect ? u : existing; // prefer direct grant
+      row.roles.set(u.role, {
+        ...preferred,
+        isReviewOverdue: Boolean(existing.isReviewOverdue || u.isReviewOverdue),
+      });
+    }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/RBACMatrixSection.tsx` around lines 29 - 30, The current
assignment row.roles.set(u.role, u) blindly overwrites existing entries causing
loss of info when the same user+role appears multiple times; update the logic in
RBACMatrixSection (where row.roles is populated) to first check for an existing
entry via row.roles.get(u.role) and deterministically merge fields (e.g.,
preserve direct grants over indirect, OR flags like overdue, merge metadata)
rather than replacing the object; ensure the merge is deterministic (consistent
precedence rules) and then set the merged result back on row.roles using the
same key.
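A minimal sketch of the deterministic merge, assuming the `roleSource` convention of `'direct'` vs `'group:<name>'` used in the proposed fix; the `Grant` shape is a simplified stand-in for the real row type:

```typescript
interface Grant {
  role: string;
  roleSource: string; // assumed convention: 'direct' or 'group:<name>'
  isReviewOverdue?: boolean;
}

// Merge two grants for the same user+role: a direct grant takes precedence
// over a group-derived one, and the overdue flag is OR-ed so it is never lost.
function mergeGrants(existing: Grant, incoming: Grant): Grant {
  const existingIndirect = existing.roleSource.startsWith('group:');
  const incomingIndirect = incoming.roleSource.startsWith('group:');
  const preferred = existingIndirect && !incomingIndirect ? incoming : existing;
  return {
    ...preferred,
    isReviewOverdue: Boolean(existing.isReviewOverdue || incoming.isReviewOverdue),
  };
}

const direct: Grant = { role: 'admin', roleSource: 'direct' };
const viaGroup: Grant = { role: 'admin', roleSource: 'group:platform', isReviewOverdue: true };

const a = mergeGrants(direct, viaGroup);
const b = mergeGrants(viaGroup, direct);
// Both input orders yield the same result, which is what makes the merge deterministic.
```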

size="xs"
shape="rounded"
label={k}
value={v || '-'}

⚠️ Potential issue | 🟡 Minor

Preserve valid falsy tag values.

Line 142 uses v || '-', which masks valid values like 0 and false. Use nullish fallback instead.

Small fix
-              value={v || '-'}
+              value={v ?? '-'}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/components/RBACMatrixSection.tsx` at line 142, The JSX currently sets
the cell value with a logical-OR fallback (value using variable v) which masks
valid falsy values like 0 and false; update the assignment in RBACMatrixSection
where value is set for v to use a nullish fallback (nullish coalescing) so only
null/undefined fall back to '-' and 0/false are preserved, and run/update any
related tests that assert these falsy values are rendered correctly.
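The difference between the two operators is easy to demonstrate; both expressions below operate on the same hypothetical value list:

```typescript
const values: Array<string | number | boolean | null | undefined> = [0, false, '', 'x', null, undefined];

// `||` falls back on every falsy value, masking 0, false, and ''.
const withOr = values.map((v) => v || '-');      // ['-', '-', '-', 'x', '-', '-']

// `??` falls back only on null/undefined, preserving valid falsy values.
const withNullish = values.map((v) => v ?? '-'); // [0, false, '', 'x', '-', '-']
```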

Comment thread report/facet.go
Comment on lines +176 to +199
if format == "html" {
return io.ReadAll(response.Body)
}

renderResult, err := response.AsJSON()
if err != nil {
return nil, fmt.Errorf("failed to parse render response: %w", err)
}
resultURL, _ := renderResult["url"].(string)
if resultURL == "" {
return nil, fmt.Errorf("render response missing 'url' field")
}

pdfResponse, err := client.R(ctx).Get(resultURL)
if err != nil {
return nil, fmt.Errorf("failed to fetch rendered result: %w", err)
}
if !pdfResponse.IsOK() {
errBody, _ := pdfResponse.AsString()
return nil, fmt.Errorf("result fetch failed (status %d): %s", pdfResponse.StatusCode, errBody)
}

return io.ReadAll(pdfResponse.Body)
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check how commonshttp handles response body closure and URL resolution
# Look for Response type and its Body field handling
rg -n -A 10 'type Response struct' --type=go

# Check if there's automatic body closure or if caller must close
rg -n 'Body.*Close|Close.*Body' --type=go

# Check how Get handles URLs with BaseURL already set
ast-grep --pattern $'func ($r $_) Get($url $_) {
  $$$
}'

Repository: flanksource/mission-control

Length of output: 2337


🏁 Script executed:

#!/bin/bash
# View the complete RenderHTTP function (lines 114-199)
sed -n '114,199p' report/facet.go | cat -n

# Check imports in report/facet.go to identify commonshttp
head -50 report/facet.go | grep -E "^import|^\s+\"" | cat -n

# Search for commonshttp imports and usage in the codebase
rg 'import.*commonshttp' --type=go
rg 'commonshttp' --type=go -A 2 | head -30

Repository: flanksource/mission-control

Length of output: 3840


🏁 Script executed:

#!/bin/bash
# Search for commonshttp Response usage in the codebase
rg 'R\(ctx\)' --type=go -A 3 | head -40

# Look for any patterns where commonshttp responses are handled
rg 'commonshttp\.NewClient|\.R\(' --type=go -B 2 -A 5 | head -60

# Check if there's a Response type definition or docs in the repo
find . -name "*.go" -path "*/commons/*" -o -name "*commons*" | head -20

Repository: flanksource/mission-control

Length of output: 5546


🏁 Script executed:

#!/bin/bash
# Search for patterns where io.ReadAll is used with response bodies
rg 'io\.ReadAll.*\.Body' --type=go -B 2 -A 2

# Check if there are any other cases in the codebase where response bodies are read without closing
rg 'ReadAll\(.*\.Body\)' --type=go -B 3 -A 3

Repository: flanksource/mission-control

Length of output: 7809


🏁 Script executed:

#!/bin/bash
# Check if commonshttp is an external library and how Get handles absolute URLs
# Search for commonshttp imports to understand the library
rg 'github.com/flanksource/commons' --type=go | head -10

# Check other uses of .Get() with potentially absolute URLs
rg '\.Get\(' --type=go -B 3 | grep -E "\.Get\(|BaseURL" | head -20

# Look for resultURL usage pattern and how it's set
rg 'resultURL' --type=go -B 2 -A 2

Repository: flanksource/mission-control

Length of output: 3666


Add explicit response body closure to prevent resource leaks.

Both response.Body (line 177) and pdfResponse.Body (line 198) are read with io.ReadAll but never closed. This causes resource leaks. Add defer response.Body.Close() after the POST request and defer pdfResponse.Body.Close() after the GET request, following the pattern used throughout the codebase (e.g., in components/logs.go, connection/check.go).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/facet.go` around lines 176 - 199, The response bodies from the render
POST and subsequent GET are not closed, causing leaks; after the POST that
produces response (variable response) add a deferred close (defer
response.Body.Close()) immediately after verifying the request succeeded, and
after the GET that produces pdfResponse add defer pdfResponse.Body.Close()
immediately after checking the request/response (i.e., right after pdfResponse
is assigned from client.R(ctx).Get and after any basic status checks) so both
response.Body and pdfResponse.Body are closed in the function that contains
these variables.

Comment thread report/ViewReport.tsx
Comment on lines +9 to 23
function ViewCoverPage({ data }: { data: ViewReportData }) {
const variableTags = (data.variables || []).reduce((acc, v) => {
acc[v.label || v.key] = v.default || '-';
return acc;
}, {} as Record<string, string>);

function PageFooter() {
const date = new Date().toLocaleDateString('en-US', {
year: 'numeric', month: 'long', day: 'numeric',
});
return (
<div className="flex items-center justify-between px-[10mm] py-[2mm] border-t border-gray-200 text-[8pt] text-gray-400">
<span>Generated {date}</span>
</div>
);
}

function CoverPage({ data }: { data: ViewReportData }) {
const date = new Date().toLocaleDateString('en-US', {
year: 'numeric', month: 'long', day: 'numeric',
});
return (
<div className="flex flex-col justify-center items-center h-full min-h-[200mm] text-center px-[20mm]">
<div className="mb-[8mm]">
{data.icon && (
<div className="mb-[4mm] flex justify-center">
<Icon name={data.icon} className="w-[16mm] h-[16mm]" />
</div>
)}
<h1 className="text-[36pt] font-bold text-slate-900 leading-tight mb-[4mm]">
{data.title || data.name}
</h1>
{data.namespace && (
<div className="text-[10pt] text-gray-500 font-mono bg-gray-50 rounded px-[4mm] py-[2mm] mt-[4mm]">
{data.namespace}/{data.name}
</div>
)}
</div>
{data.variables && data.variables.length > 0 && (
<div className="flex flex-wrap justify-center gap-[2mm] mb-[6mm]">
{data.variables.map((v) => (
<span key={v.key} className="inline-flex items-center bg-blue-50 text-blue-800 text-[9pt] px-[3mm] py-[1mm] rounded">
<span className="font-medium mr-[1mm]">{v.label || v.key}:</span>
{v.default || '-'}
</span>
))}
</div>
)}
<div className="w-[40mm] h-[1px] mb-[8mm]" style={{ backgroundColor: '#2563EB' }} />
<div className="text-[10pt] text-gray-400">Generated on {date}</div>
</div>
<CoverPage
title={data.title || data.name}
icon={data.icon}
query={data.namespace ? `${data.namespace}/${data.name}` : undefined}
tags={Object.keys(variableTags).length > 0 ? variableTags : undefined}
/>
);
}

⚠️ Potential issue | 🟡 Minor

Falsy value handling may incorrectly replace valid defaults.

On line 11, v.default || '-' will replace valid falsy values (like 0, '', or false) with '-'. Consider using nullish coalescing instead.

Additionally, if both data.title and data.name are undefined on line 17, the title prop would be undefined, but CoverPage requires title to be a string.

🛡️ Proposed fix for falsy value handling
 function ViewCoverPage({ data }: { data: ViewReportData }) {
   const variableTags = (data.variables || []).reduce((acc, v) => {
-    acc[v.label || v.key] = v.default || '-';
+    acc[v.label || v.key] = v.default ?? '-';
     return acc;
   }, {} as Record<string, string>);
 
   return (
     <CoverPage
-      title={data.title || data.name}
+      title={data.title || data.name || 'View Report'}
       icon={data.icon}
       query={data.namespace ? `${data.namespace}/${data.name}` : undefined}
       tags={Object.keys(variableTags).length > 0 ? variableTags : undefined}
     />
   );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@report/ViewReport.tsx` around lines 9 - 23, In ViewCoverPage, the reduce
currently uses falsy-coalescing which replaces valid falsy defaults; change the
accumulator to use nullish coalescing for variable values (use v.default ?? '-')
so 0, '' or false are preserved, and ensure the CoverPage title prop is always a
string by using a safe fallback like data.title ?? data.name ?? '' so title is
never undefined; update references in the function (variableTags, ViewCoverPage,
CoverPage) accordingly.

- Handle browser open errors gracefully with fallback message instead of silently failing
- Change JOIN to LEFT JOIN in access logs query to include records with missing config or user references
- Add scope filtering to bearer token selection for more precise token matching
- Improve context add command to preserve existing values when flags are not explicitly changed
- Add HTTP status code validation before parsing response body
- Extract RBAC row conversion to separate function and apply limit at query level
- Fix change categorization logic to properly handle uncategorized changes with fallback detection
- Simplify token output formatting and add Windows support for browser opening

These changes improve the robustness of authentication, data retrieval, and reporting.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
db/applications.go (1)

591-620: Consider pushing stale filter to SQL for better performance.

The current implementation fetches all matching rows and then filters in-memory. For large access tables, this could be inefficient. The stale condition can be expressed in SQL:

+	if filters.Stale != "" {
+		d, err := duration.ParseDuration(filters.Stale)
+		if err != nil {
+			return nil, ctx.Oops().Wrapf(err, "invalid stale filter %q", filters.Stale)
+		}
+		staleCutoff := time.Now().Add(-time.Duration(d))
+		q = q.Where("config_access_summary.last_signed_in_at IS NULL OR config_access_summary.last_signed_in_at < ?", staleCutoff)
+	}
+
 	var rows []RBACAccessRow
 	if err := q.Find(&rows).Error; err != nil {
 		return nil, ctx.Oops().Wrapf(err, "failed to query access")
 	}
-
-	var staleCutoff time.Time
-	if filters.Stale != "" {
-		d, err := duration.ParseDuration(filters.Stale)
-		if err != nil {
-			return nil, ctx.Oops().Wrapf(err, "invalid stale filter %q", filters.Stale)
-		}
-		staleCutoff = time.Now().Add(-time.Duration(d))
-	}
 
 	items := make([]api.AccessItem, 0, len(rows))
 	for _, r := range rows {
-		isStale := !staleCutoff.IsZero() && (r.LastSignedInAt == nil || r.LastSignedInAt.Before(staleCutoff))
-		if !staleCutoff.IsZero() && !isStale {
-			continue
-		}
+		// When stale filter is set, all returned rows are stale
+		isStale := filters.Stale != ""

This reduces memory pressure and leverages database indexing on last_signed_in_at.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/applications.go` around lines 591 - 620, The in-memory stale filtering
using filters.Stale and staleCutoff iterates all rows and should be moved into
the SQL query to improve performance; after parsing filters.Stale into a
duration (as currently done where d is produced), compute the cutoff time and
add a WHERE clause to the query that fetches rows to include "(last_signed_in_at
IS NULL OR last_signed_in_at < :stale_cutoff)" (bind the cutoff time) so only
stale rows are returned, then remove the in-loop early-continue logic that
checks staleCutoff and isStale (references: filters.Stale, staleCutoff, rows,
and the loop that builds api.AccessItem) while still setting IsStale on the
returned items (it will be true for every returned row).
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d3cbd45e-28fe-41cc-b3a5-83dcbc74bc56

📥 Commits

Reviewing files that changed from the base of the PR and between 5771d74 and 391db7d.

📒 Files selected for processing (6)
  • cmd/auth_login.go
  • cmd/connection_browser.go
  • cmd/context.go
  • db/applications.go
  • report/catalog/report.go
  • report/components/change-section-utils.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • cmd/auth_login.go
  • cmd/connection_browser.go
  • report/catalog/report.go

Comment thread cmd/context.go
Comment on lines +310 to +313
resp, err := nethttp.DefaultClient.Do(req)
if err != nil {
return false, err
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the current probe uses DefaultClient (no timeout) and check for timeout-configured HTTP clients in Go files.
rg -n -C2 'DefaultClient\.Do\(' cmd/context.go
rg -n -C2 'Client\{[^}]*Timeout' --type go

Repository: flanksource/mission-control

Length of output: 501


🏁 Script executed:

rg -n 'DefaultClient' --type go

Repository: flanksource/mission-control

Length of output: 742


🏁 Script executed:

sed -n '290,320p' cmd/context.go

Repository: flanksource/mission-control

Length of output: 1015


Add an explicit timeout to the API probe request.

Line 310 uses nethttp.DefaultClient.Do(req) in the critical ensureAPIBase() initialization function with no timeout. A slow or broken network can stall CLI execution indefinitely.

Proposed fix
 import (
 	"encoding/json"
 	"fmt"
 	nethttp "net/http"
 	"net/url"
 	"os"
 	"path/filepath"
 	"strings"
+	"time"

 	"github.com/charmbracelet/huh"
 	"github.com/spf13/cobra"
 )
@@
-	resp, err := nethttp.DefaultClient.Do(req)
+	client := &nethttp.Client{Timeout: 10 * time.Second}
+	resp, err := client.Do(req)
 	if err != nil {
 		return false, err
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/context.go` around lines 310 - 313, The API probe in ensureAPIBase()
currently uses nethttp.DefaultClient.Do(req) with no timeout; replace that call
by creating and using an http.Client with a sensible Timeout (or attach a
context with timeout to req) so the probe cannot hang indefinitely. Locate the
ensureAPIBase function and change the nethttp.DefaultClient.Do(req) usage to use
a dedicated client with Timeout (or req.WithContext(context.WithTimeout(...)))
and handle the timeout error path accordingly.

Comment thread cmd/context.go
Comment on lines +336 to +341
stored := cfg.GetContext(ctx.Name)
if stored == nil {
return false, nil
}
stored.Server = strings.TrimRight(stored.Server, "/") + "/api"
ctx.Server = stored.Server

⚠️ Potential issue | 🟡 Minor

Guard against accidental /api/api persistence.

Line 340 appends "/api" unconditionally to stored.Server. If ctx.Server is stale but config already has /api, this can double-append.

Proposed fix
-	stored.Server = strings.TrimRight(stored.Server, "/") + "/api"
+	base := strings.TrimRight(stored.Server, "/")
+	if !strings.HasSuffix(base, "/api") {
+		base += "/api"
+	}
+	stored.Server = base
 	ctx.Server = stored.Server
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/context.go` around lines 336 - 341, The code unconditionally appends
"/api" to stored.Server which can produce "/api/api"; modify the logic around
cfg.GetContext, stored.Server and ctx.Server to first normalize stored.Server by
trimming trailing slashes, then only append "/api" if it does not already end
with "/api" (e.g., use strings.HasSuffix on stored.Server after trimming) and
finally assign the normalized value to ctx.Server so duplicate "/api" segments
are prevented.

@adityathebe adityathebe self-requested a review April 16, 2026 13:45