Conversation
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the … Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Adds a query logging/timing subsystem, introduces a reusable BaseCatalogSearch and catalog APIs, instruments many query functions with timers and named returns, extends Postgrest config with Arch and passes OS/ARCH to the installer, enhances GCP credential detection, adds an artifact_summary view, and provides template-based Postgres test setup.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Logger as QueryLogger
    participant Resolver as ConfigIDResolver
    participant DB
    participant API as FindCatalogAccess
    Client->>API: call FindCatalogAccess(req)
    API->>Logger: Start("CatalogAccess").Arg("query", req.String())
    API->>Resolver: ResolveConfigIDs(ctx)
    Resolver-->>API: configIDs
    API->>DB: Build query via ApplyClauses + filters
    DB-->>API: Count / Find results
    API->>Logger: Results(results)
    API->>Logger: End(&err)
    API-->>Client: return response
```
Possibly related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Benchstat (RLS)

Base: 📊 2 minor regression(s) (all within 5% threshold)
✅ 11 improvement(s)

Full benchstat output
Benchstat (Other)

Base: 📊 1 minor regression(s) (all within 5% threshold)

Full benchstat output
Actionable comments posted: 8
🧹 Nitpick comments (7)
api/config.go (3)
159-165: `Arch` field not included in `String()` output.

The `PostgrestConfig.String()` method omits the new `Arch` field, which could make debugging binary selection issues harder.

♻️ Suggested fix

```diff
 func (p PostgrestConfig) String() string {
-	return fmt.Sprintf("version:%v port=%d log-level=%v, jwt=%s",
+	return fmt.Sprintf("version:%v arch=%s port=%d log-level=%v, jwt=%s",
 		p.Version,
+		p.Arch,
 		p.Port, p.LogLevel,
 		logger.PrintableSecret(p.JWTSecret))
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/config.go` around lines 159 - 165, The PostgrestConfig.String() method omits the new Arch field; update the PostgrestConfig.String function to include Arch in the returned fmt.Sprintf output (add an appropriate "%v" or "%s" placeholder in the format string and pass p.Arch as an argument) so the architecture is printed alongside Version, Port, LogLevel, and JWTSecret.
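As a sanity check, the amended `String()` can be exercised standalone. This is a minimal sketch: the struct below is a stand-in with fields assumed from the diff above, and `logger.PrintableSecret` is replaced with a plain `"***"` mask so the snippet is self-contained.

```go
package main

import "fmt"

// PostgrestConfig is a minimal stand-in for the real config struct;
// field names are assumed from the suggested diff above.
type PostgrestConfig struct {
	Version, Arch, LogLevel, JWTSecret string
	Port                               int
}

// String includes Arch so binary-selection issues are visible in logs.
// The real code masks the secret via logger.PrintableSecret; "***" stands in here.
func (p PostgrestConfig) String() string {
	return fmt.Sprintf("version:%v arch=%s port=%d log-level=%v, jwt=%s",
		p.Version, p.Arch, p.Port, p.LogLevel, "***")
}

func main() {
	c := PostgrestConfig{Version: "v14.1", Arch: "arm64", Port: 3000, LogLevel: "info", JWTSecret: "secret"}
	fmt.Println(c.String()) // version:v14.1 arch=arm64 port=3000 log-level=info, jwt=***
}
```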
26-33: Darwin fallback uses `runtime.GOARCH` while Arch can be overridden via `PGRST_ARCH`.

The fallback at line 30 checks `runtime.GOARCH == "amd64"`, but now `config.Postgrest.Arch` can be overridden via the `PGRST_ARCH` environment variable. If the intent is to check the configured architecture (not the runtime one), this condition should use `DefaultConfig.Postgrest.Arch`. If the current behavior is intentional (i.e., the fallback applies based on the actual runtime, regardless of config override), a brief inline comment would clarify this.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/config.go` around lines 26 - 33, The fallback currently checks runtime.GOARCH when deciding to override DefaultConfig.Postgrest.Version for darwin, but the architecture can be overridden via PGRST_ARCH stored in DefaultConfig.Postgrest.Arch; update the condition in the Postgrest version fallback to use DefaultConfig.Postgrest.Arch (not runtime.GOARCH) so the configured arch is respected, or if behavior should remain tied to the actual runtime add a concise inline comment above the check clarifying that runtime.GOARCH is intentionally used instead of the configured DefaultConfig.Postgrest.Arch.
153-155: No validation of `PGRST_ARCH` value.

Invalid or unrecognized architecture values from the environment variable are passed through without validation. This could lead to confusing errors downstream when `deps.WithOS` receives an unexpected value. Consider validating against known architectures (e.g., `amd64`, `arm64`) or at minimum logging a warning for unrecognized values.

♻️ Suggested approach

```diff
 if v := os.Getenv("PGRST_ARCH"); v != "" {
-	clone.Arch = v
+	switch v {
+	case "amd64", "arm64", "386", "arm":
+		clone.Arch = v
+	default:
+		logger.Warnf("Unrecognized PGRST_ARCH value: %s, using default: %s", v, clone.Arch)
+	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/config.go` around lines 153 - 155, The code currently assigns os.Getenv("PGRST_ARCH") directly to clone.Arch without validation; update the assignment to validate PGRST_ARCH against an allowlist of known architectures (e.g., "amd64", "arm64", possibly "386", "arm") before setting clone.Arch, and if the value is unknown emit a warning via your logger (or fallback to a safe default / leave unset) so deps.WithOS won't receive unexpected values; key symbols to modify: the env read for "PGRST_ARCH" and the field clone.Arch, adding a small validation function or switch and a warning log path for unrecognized values.

query/query_logger.go (2)
75-77: Misleading "timed out" fallback when `Results()` not called.

When `err` is nil and `t.results` is nil (e.g., `Results()` was never called), the log outputs "timed out". This is inaccurate—it simply means no results were recorded. Consider either:

- Logging something neutral like "no results recorded"
- Requiring `Results()` to be called before `End()`

```diff
 } else if t.results != nil {
 	count := sliceLen(t.results)
 	countStyle := "text-green-600"
 	if count == 0 {
 		countStyle = "text-red-600"
 	}
 	label = label.AddText(fmt.Sprintf("%d", count), countStyle)
 	label = label.AddText(summaryText(t.results, count), "text-gray-400")
 } else {
-	label = label.AddText("timed out", "text-yellow-600")
+	label = label.AddText("done", "text-gray-400")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/query_logger.go` around lines 75 - 77, The fallback message is misleading: when err == nil and t.results == nil (i.e., Results() was never called), do not log "timed out". Change the End() behavior to detect t.results == nil and call label.AddText("no results recorded", "text-yellow-600") instead of "timed out" (or alternatively make End() return an error if Results() wasn't called); update the branch in the End() method that currently uses label.AddText("timed out", ...) to reference t.results and use the neutral message, keeping the existing styling and ensuring any callers/tests that expect the old message are updated.
83-92: `sliceLen` returns 1 for any non-nil pointer, regardless of what it points to.

This handles the case where a single result is returned as a pointer (e.g., `*models.Agent`), returning a count of 1. However, if someone passes a pointer to a slice, this returns 1 instead of the slice length. Consider dereferencing pointers to check for underlying slices:

♻️ Optional improvement to handle pointer-to-slice

```diff
 func sliceLen(v any) int {
 	rv := reflect.ValueOf(v)
+	if rv.Kind() == reflect.Ptr && !rv.IsNil() {
+		rv = rv.Elem()
+	}
 	if rv.Kind() == reflect.Slice {
 		return rv.Len()
 	}
-	if rv.Kind() == reflect.Ptr && !rv.IsNil() {
+	if rv.IsValid() && !rv.IsZero() {
 		return 1
 	}
 	return 0
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@query/query_logger.go` around lines 83 - 92, sliceLen currently treats any non-nil pointer as length 1; update sliceLen to dereference pointers and check the underlying kind so pointer-to-slice returns the slice length. Specifically, in sliceLen (using reflect.Value rv), if rv.Kind() == reflect.Ptr and !rv.IsNil() then get rv = rv.Elem(); if rv.Kind() == reflect.Slice return rv.Len(); otherwise return 1; keep the existing slice branch for non-pointer slices. This uses rv.Elem().Kind() and rv.Elem().Len() to detect pointer-to-slice.

query/config_insights.go (1)
50-53: Confusing pattern: adding a dummy clause only to filter it out later.

Line 51 adds `clause.Eq{Column: "config_id", Value: nil}` which is then filtered out at lines 87-89. This appears to be a workaround but the intent is unclear. If the goal is to avoid the clause generated by `ApplyClauses()`, consider a cleaner approach:

♻️ Suggested simplification

```diff
 if len(configIDs) > 0 {
-	clauses = append(clauses, clause.Eq{Column: clause.Column{Name: "config_id"}, Value: nil})
 	q = q.Where("config_id IN ?", configIDs)
 }
```

Then update the filter at lines 82-91 to only skip `deleted_at`:

```diff
 for _, c := range clauses {
 	if eq, ok := c.(clause.Eq); ok && eq.Column.(clause.Column).Name == "deleted_at" {
 		continue
 	}
-	if eq, ok := c.(clause.Eq); ok && eq.Column.(clause.Column).Name == "config_id" && eq.Value == nil {
-		continue
-	}
 	filteredClauses = append(filteredClauses, c)
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@query/config_insights.go` around lines 50 - 53, Remove the dummy clause.Eq{Column: clause.Column{Name: "config_id"}, Value: nil} used as a placeholder; instead when configIDs is non-empty, directly append the intended filter by adding q = q.Where("config_id IN ?", configIDs) and/or a proper clause targeting config_id to the clauses slice so ApplyClauses() produces the correct SQL. Then simplify the clause-filtering logic used before ApplyClauses() (the code that currently skips certain clauses) to only skip the "deleted_at" clause rather than filtering out placeholder config_id clauses. Update references to the clauses slice, clause.Eq, configIDs, and the ApplyClauses() call accordingly so the intent is explicit and no dummy clause is needed.

query/catalog_search.go (1)
84-118: `ResolveConfigIDs` falls back to `SearchResources` on any UUID parse failure.

If `CatalogID` contains a mix of valid UUIDs and invalid entries (e.g., `"uuid1,uuid2,invalid"`), the entire list falls back to `SearchResources`. This might be intentional for search queries, but could be surprising if a user intended to pass UUIDs with a typo.

Also, the `SearchResources` fallback has a hardcoded `Limit: 200` which could silently truncate results for large searches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 84 - 118, ResolveConfigIDs currently abandons all parsed UUIDs and calls SearchResources if any part fails to parse; change ResolveConfigIDs so it parses CatalogID by splitting on commas and collects valid uuid.UUIDs while also collecting any invalid parts, then: if at least one valid UUID was parsed, use those (set b.configIDs and return them) and return a clear error listing invalid entries (or at minimum log/warn) instead of falling back to SearchResources; only call SearchResources when no UUIDs are present. Also remove the hardcoded Limit: 200 in the SearchResourcesRequest and replace it with a configurable/value constant (e.g., DefaultSearchLimit or a field on BaseCatalogSearch) used by ResolveConfigIDs so large searches aren’t silently truncated. Ensure you update references to ResolveConfigIDs, CatalogID, SearchResources, and SearchResourcesRequest in this change.
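The split-and-partition behavior the prompt asks for can be sketched independently of the real `ResolveConfigIDs`. This is an illustrative stand-in: UUID validation is done with a regexp here, whereas the actual code presumably uses a uuid library, and `partitionIDs` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// uuidRe loosely matches the canonical 8-4-4-4-12 hex UUID form.
var uuidRe = regexp.MustCompile(`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// partitionIDs splits a comma-separated catalog ID string and separates
// valid UUIDs from entries that fail to parse, instead of abandoning the
// whole list as soon as one entry is invalid.
func partitionIDs(catalogID string) (valid, invalid []string) {
	for _, part := range strings.Split(catalogID, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		if uuidRe.MatchString(part) {
			valid = append(valid, part)
		} else {
			invalid = append(invalid, part)
		}
	}
	return valid, invalid
}

func main() {
	valid, invalid := partitionIDs("0b9e6a1c-8f3d-4e2a-9c1b-2d3e4f5a6b7c, not-a-uuid")
	fmt.Println(len(valid), len(invalid)) // 1 1
}
```

With this shape, the caller can use the valid IDs directly and warn about (or reject) the invalid ones, and only fall back to a resource search when no UUIDs were present at all.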
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connection/gcp.go`:
- Around line 47-54: The TokenSource method for GCPConnection currently forces
service-account-only creds via google.CredentialsFromJSONWithType, which is
inconsistent with GCSConnection/GKEConnection; change GCPConnection.TokenSource
to call google.CredentialsFromJSON(ctx, []byte(g.Credentials.ValueStatic),
scopes...) (keeping the same error handling and returning creds.TokenSource) so
non-service-account credential types (authorized user, workload identity, etc.)
are accepted like the sibling implementations.
In `@query/catalog_search.go`:
- Around line 124-158: ApplyClauses currently swallows parse errors from
parseAndBuildFilteringQuery and labels.Parse; change ApplyClauses signature to
return an error (i.e., ([]clause.Expression, func(*gorm.DB) *gorm.DB, error)),
check and propagate errors from parseAndBuildFilteringQuery calls and from
labels.Parse (returning them immediately), and adjust callers to handle the
error; alternatively, if you prefer non-breaking behavior, log the parse errors
inside ApplyClauses (using the project logger) instead of discarding them and
ensure tagsFn remains nil on parse failure.
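The error-propagating shape this comment asks for can be illustrated with a stripped-down stand-in. Everything below is hypothetical: the real `ApplyClauses` returns gorm clause expressions, while plain strings and a toy `parseFilter` stand in here to show the pattern of returning parse errors instead of swallowing them.

```go
package main

import (
	"fmt"
	"strings"
)

// parseFilter is a toy stand-in for parseAndBuildFilteringQuery: it rejects
// obviously malformed input instead of silently producing no clause.
func parseFilter(column, expr string) (string, error) {
	if strings.ContainsAny(expr, ";") {
		return "", fmt.Errorf("invalid filter for %s: %q", column, expr)
	}
	return fmt.Sprintf("%s IN (%s)", column, expr), nil
}

// applyClauses mirrors the suggested signature change: callers now see
// parse errors instead of getting silently weaker filtering.
func applyClauses(agentID, configType string) ([]string, error) {
	var clauses []string
	for _, f := range []struct{ col, val string }{
		{"agent_id", agentID},
		{"type", configType},
	} {
		if f.val == "" {
			continue
		}
		c, err := parseFilter(f.col, f.val)
		if err != nil {
			return nil, err // propagate, don't swallow
		}
		clauses = append(clauses, c)
	}
	return clauses, nil
}

func main() {
	if _, err := applyClauses("local", "Kubernetes::Pod;drop"); err != nil {
		fmt.Println("bad request:", err)
	}
}
```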
In `@query/config_access.go`:
- Around line 47-55: FindConfigAccessByConfigIDs now delegates to
FindCatalogAccess which applies SetDefaults() and enforces PageSize=50, risking
truncated results; update FindConfigAccessByConfigIDs to explicitly set a
sufficiently large PageSize on the CatalogAccessSearch/BaseCatalogSearch (e.g.,
PageSize = len(configIDs) or another large constant) before calling
FindCatalogAccess so pagination won't drop entries, referencing the
FindConfigAccessByConfigIDs and FindCatalogAccess/SetDefaults paths when making
the change.
- Around line 33-44: FindCatalogAccess is missing pagination: use the
BaseCatalogSearch PageSize and Page fields to apply q =
q.Limit(search.PageSize).Offset((search.Page-1)*search.PageSize) (or equivalent)
before q.Find so the query respects pagination; ensure you import/use the same
search struct used by FindCatalogChanges/FindCatalogInsights and update
output.Total to reflect either the page result length (len(output.Access)) or
the overall count if a full count is required.
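The Limit/Offset arithmetic described above is simple but easy to get off by one. A hedged sketch, with field semantics (1-based `Page`, default `PageSize` of 50) assumed from the comment rather than taken from the real `BaseCatalogSearch`:

```go
package main

import "fmt"

// pageWindow converts a 1-based page plus page size into the LIMIT/OFFSET
// pair a query would use via q.Limit(limit).Offset(offset).
// The defaults mirror what SetDefaults() is described as enforcing.
func pageWindow(page, pageSize int) (limit, offset int) {
	if pageSize <= 0 {
		pageSize = 50 // default mentioned in the comment above
	}
	if page < 1 {
		page = 1
	}
	return pageSize, (page - 1) * pageSize
}

func main() {
	limit, offset := pageWindow(3, 50)
	fmt.Println(limit, offset) // 50 100
}
```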
In `@query/config_insights.go`:
- Around line 55-74: The parse errors from parseAndBuildFilteringQuery are being
ignored for req.Status, req.Severity, req.Analyzer, and req.AnalysisType so
invalid filter syntax becomes silent; update the callers in the block that
builds clauses to check parseErr and propagate a descriptive error (or return
it) instead of swallowing it—e.g., after calling parseAndBuildFilteringQuery for
each of those fields, if parseErr != nil return or wrap the error up to the
caller (same behavior as ApplyClauses()), otherwise append the returned clauses
to clauses; reference parseAndBuildFilteringQuery, ApplyClauses(), and the
clause-building variables (clauses, req.Status, req.Severity, req.Analyzer,
req.AnalysisType) when making the change.
- Around line 82-91: The loop over clauses can panic because it assumes
eq.Column is a clause.Column; update the checks inside the loop that reference
clause.Eq and eq.Column (used when filtering "deleted_at" and "config_id") to
perform a safe type assertion on eq.Column before accessing .Name (e.g., assert
col, ok := eq.Column.(clause.Column) and only compare col.Name when ok), and
similarly guard the clause.Eq assertion (ensure c is clause.Eq before using
eq.Value); this makes filteredClauses assembly robust against unexpected clause
types.
In `@query/config_relations.go`:
- Around line 97-107: The code currently calls timer.Results(results) regardless
of whether the DB call (ctx.DB().Raw(...).Find(&results).Error) returned an
error; change this so timer.Results(results) is invoked only when err is nil
(i.e., after the Raw(...).Find(&results).Error call succeeds). Locate the DB
call and the timer.Results(results) call (symbols: ctx.DB().Raw,
Find(&results).Error, results, timer.Results) and add a conditional check around
timer.Results to avoid logging/processing incomplete data, then return results
and err as before.
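The guard this comment asks for — record results on the timer only when the DB call succeeded — can be sketched with a hypothetical timer type. The `Results`/`End` method names are taken from the review; the timer and `rawFind` bodies are illustrative stand-ins, not the project's code.

```go
package main

import "fmt"

// queryTimer is a toy stand-in for the review's query timer.
type queryTimer struct{ recorded int }

func (t *queryTimer) Results(v []string) { t.recorded = len(v) }
func (t *queryTimer) End(err *error)     { /* would log duration and status */ }

// rawFind mimics ctx.DB().Raw(...).Find(&results).Error.
func rawFind(fail bool, out *[]string) error {
	if fail {
		return fmt.Errorf("db error")
	}
	*out = []string{"a", "b"}
	return nil
}

// fetch only passes results to the timer when the query succeeded, so the
// log never reports partial or empty data as a legitimate result set.
func fetch(fail bool, timer *queryTimer) (results []string, err error) {
	defer timer.End(&err)
	err = rawFind(fail, &results)
	if err == nil { // guard suggested by the review
		timer.Results(results)
	}
	return results, err
}

func main() {
	t := &queryTimer{}
	_, _ = fetch(false, t)
	fmt.Println(t.recorded) // 2
}
```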
In `@schema/artifacts.hcl`:
- Around line 104-109: The foreign key artifacts_config_change_fkey currently
uses on_delete = CASCADE which hard-deletes artifacts instead of honoring the
artifact soft-delete fields (deleted_at, expires_at); change the constraint
on_delete behavior to SET_NULL (or NO_ACTION if SET_NULL is unsupported) so that
deleting rows from table.config_changes does not cascade-remove artifacts but
instead nullifies column.config_change_id (consistent with the nullable
config_change_id column and other artifact relationships like
artifacts_checks_fkey and artifacts_playbook_run_action_fkey); update the
foreign_key "artifacts_config_change_fkey" block to set on_delete = SET_NULL and
run migrations to apply the change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a881087c-110f-4986-ba08-71f673f5779d
📒 Files selected for processing (18)
api/config.go, connection/aws.go, connection/gcp.go, models/artifacts.go, models/config.go, postgrest/postgrest.go, query/agent.go, query/artifacts.go, query/catalog_search.go, query/config.go, query/config_access.go, query/config_changes.go, query/config_insights.go, query/config_relations.go, query/query_logger.go, rbac/objects.go, schema/artifacts.hcl, views/046_artifact_summary.sql
```diff
 func (g *GCPConnection) TokenSource(ctx context.Context, scopes ...string) (oauth2.TokenSource, error) {
-	creds, err := google.CredentialsFromJSON(ctx, []byte(g.Credentials.ValueStatic), scopes...) //nolint:staticcheck
+	creds, err := google.CredentialsFromJSONWithType(ctx, []byte(g.Credentials.ValueStatic), google.ServiceAccount, scopes...)
 	if err != nil {
 		return nil, err
 	}
 
-	tokenSource := creds.TokenSource
-	return tokenSource, nil
+	return creds.TokenSource, nil
 }
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check how GCP credentials are handled across the codebase
echo "=== Files using CredentialsFromJSONWithType ==="
rg -n "CredentialsFromJSONWithType" --type go
echo ""
echo "=== Files using CredentialsFromJSON (without type) ==="
rg -n "CredentialsFromJSON[^W]" --type go
echo ""
echo "=== Check callers of GCPConnection.TokenSource ==="
rg -n "\.TokenSource\(" --type go -C2
```

Repository: flanksource/duty
Length of output: 1770

🏁 Script executed:

```shell
# Read the full GCPConnection implementation
head -100 connection/gcp.go
```

Repository: flanksource/duty
Length of output: 2888

🏁 Script executed:

```shell
# Check for any comments or documentation about credential types
rg -A5 -B5 "ServiceAccount" connection/gcp.go
```

Repository: flanksource/duty
Length of output: 432

🏁 Script executed:

```shell
# Check if there are tests showing expected credential types
fd ".*test.*\.go$" --type f | xargs rg -l "GCPConnection|TokenSource" 2>/dev/null | head -5
```

Repository: flanksource/duty
Length of output: 113
Enforcing google.ServiceAccount type creates an inconsistency with sibling GCP connections and may break existing workflows.
Using google.CredentialsFromJSONWithType with google.ServiceAccount rejects credentials that aren't service account credentials (e.g., authorized user, workload identity). This differs from sibling connections:
- `GCSConnection` (gcs.go) uses `CredentialsFromJSON` without type constraint
- `GKEConnection` (gke.go) uses `CredentialsFromJSON` without type constraint
Without documentation explaining this restriction, it appears unintentional. Revert to CredentialsFromJSON unless ServiceAccount-only credentials are required by design.
🔧 Proposed fix
```diff
 func (g *GCPConnection) TokenSource(ctx context.Context, scopes ...string) (oauth2.TokenSource, error) {
-	creds, err := google.CredentialsFromJSONWithType(ctx, []byte(g.Credentials.ValueStatic), google.ServiceAccount, scopes...)
+	creds, err := google.CredentialsFromJSON(ctx, []byte(g.Credentials.ValueStatic), scopes...)
 	if err != nil {
 		return nil, err
 	}
```
```hcl
foreign_key "artifacts_config_change_fkey" {
  columns     = [column.config_change_id]
  ref_columns = [table.config_changes.column.id]
  on_update   = NO_ACTION
  on_delete   = CASCADE
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== References to config_changes / ConfigChange =="
rg -n -C2 --type=go --type=sql 'config_changes|ConfigChange'
echo
echo "== Potential hard delete call sites in Go =="
rg -n -C3 --type=go 'Delete\(|Unscoped\(\)\.Delete\(|Exec\('
echo
echo "== Raw SQL hard deletes/truncates for config_changes =="
rg -n -C2 --type=sql --type=go 'DELETE\s+FROM\s+config_changes|TRUNCATE\s+config_changes'
```

Repository: flanksource/duty
Length of output: 50373

🏁 Script executed:

```shell
cat schema/artifacts.hcl | head -150
```

Repository: flanksource/duty
Length of output: 2818

🏁 Script executed:

```shell
# Find the exact foreign key definition and surrounding context
rg -A 10 -B 10 'artifacts_config_change_fkey' schema/
```

Repository: flanksource/duty
Length of output: 1189

🏁 Script executed:

```shell
# Search for explicit hard deletes of config_changes with context
rg -n 'DELETE\s+FROM\s+config_changes' --type sql -A 2 -B 2
```

Repository: flanksource/duty
Length of output: 983

🏁 Script executed:

```shell
# Search for artifact retention, archival, or lifecycle policies
rg -i 'artifact.*retention|artifact.*archive|artifact.*policy|expire' --type go -A 3 -B 1
```

Repository: flanksource/duty
Length of output: 6209

🏁 Script executed:

```shell
# Check if artifacts are explicitly managed with soft deletes elsewhere
rg -n 'artifacts.*deleted_at|Unscoped.*artifacts' --type go -C 2
```

Repository: flanksource/duty
Length of output: 42
Confirm ON DELETE CASCADE is inconsistent with artifact soft-delete design.
The CASCADE delete on artifacts_config_change_fkey contradicts the artifact table's soft-delete semantics (deleted_at, expires_at columns). When config_changes rows are hard-deleted during config item cleanup (confirmed in views/001_alter_deferred_tables.sql, views/006_config_views.sql), linked artifacts are cascade-deleted instead of marked deleted, causing unintended evidence loss. This pattern differs from other artifact relationships—artifacts_checks_fkey and artifacts_playbook_run_action_fkey both use NO_ACTION to prevent cascades.
Consider safer alternative (if retention is required)
```diff
 foreign_key "artifacts_config_change_fkey" {
   columns     = [column.config_change_id]
   ref_columns = [table.config_changes.column.id]
   on_update   = NO_ACTION
-  on_delete   = CASCADE
+  on_delete   = SET_NULL
 }
```

This preserves artifacts while nullifying the relationship, aligning with the nullable config_change_id column design.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 foreign_key "artifacts_config_change_fkey" {
   columns     = [column.config_change_id]
   ref_columns = [table.config_changes.column.id]
   on_update   = NO_ACTION
-  on_delete   = CASCADE
+  on_delete   = SET_NULL
 }
```
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api/config.go (1)
26-33: ⚠️ Potential issue | 🟡 Minor

Version fallback check should use configured `Arch` instead of `runtime.GOARCH`.

After `ReadEnv()` is called on line 26, `DefaultConfig.Postgrest.Arch` may have been overridden via `PGRST_ARCH`. However, the fallback logic on line 30 still checks `runtime.GOARCH`. This can lead to inconsistent version/architecture combinations when someone explicitly sets `PGRST_ARCH` to a different value.

Proposed fix

```diff
 func init() {
 	DefaultConfig.Postgrest = DefaultConfig.Postgrest.ReadEnv()
 	v := DefaultConfig.Postgrest.Version
+	arch := DefaultConfig.Postgrest.Arch
 	if strings.HasPrefix(v, "v14") && v != "v14.1" && v != "v14.0" &&
-		runtime.GOOS == "darwin" && runtime.GOARCH == "amd64" {
+		runtime.GOOS == "darwin" && arch == "amd64" {
 		logger.Warnf("PostgREST v14.2+ does not have a darwin/arm64 binary, defaulting to v14.1 for darwin/amd64")
 		DefaultConfig.Postgrest.Version = "v14.1"
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/config.go` around lines 26 - 33, The fallback logic after calling DefaultConfig.Postgrest.ReadEnv() should use the configured architecture field rather than the OS runtime value: replace the runtime.GOARCH check in the Postgrest version fallback conditional with DefaultConfig.Postgrest.Arch (or the appropriate field on the Postgrest struct) so the condition reads e.g. strings.HasPrefix(v, "v14") && v != "v14.1" && v != "v14.0" && runtime.GOOS == "darwin" && DefaultConfig.Postgrest.Arch == "amd64"; update the warning text if needed to reflect the configured Arch.
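The fallback condition under discussion reduces to a small pure predicate, which makes the runtime-vs-configured question directly testable. This is a sketch, not the project's code: the function name and parameterization are assumptions, and passing the configured arch (rather than `runtime.GOARCH`) is exactly the change the comment proposes.

```go
package main

import (
	"fmt"
	"strings"
)

// needsDarwinFallback reports whether the PostgREST version should be
// pinned to v14.1: any v14.2+ version on darwin/amd64, mirroring the
// condition quoted in the review. Callers decide whether to pass
// runtime.GOARCH or the configured Arch as the arch argument.
func needsDarwinFallback(version, goos, arch string) bool {
	return strings.HasPrefix(version, "v14") &&
		version != "v14.1" && version != "v14.0" &&
		goos == "darwin" && arch == "amd64"
}

func main() {
	fmt.Println(needsDarwinFallback("v14.2", "darwin", "amd64")) // true
	fmt.Println(needsDarwinFallback("v14.2", "darwin", "arm64")) // false
}
```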
♻️ Duplicate comments (3)
query/config_insights.go (2)
83-88: ⚠️ Potential issue | 🟠 Major

Use safe type assertions when inspecting `clause.Eq.Column`.

Line 84 and Line 87 can panic because `eq.Column.(clause.Column)` is unchecked. Guard with a comma-ok assertion before reading `.Name`.

🛡️ Proposed fix

```diff
 for _, c := range clauses {
-	if eq, ok := c.(clause.Eq); ok && eq.Column.(clause.Column).Name == "deleted_at" {
-		continue
-	}
-	if eq, ok := c.(clause.Eq); ok && eq.Column.(clause.Column).Name == "config_id" && eq.Value == nil {
-		continue
+	if eq, ok := c.(clause.Eq); ok {
+		if col, ok := eq.Column.(clause.Column); ok {
+			if col.Name == "deleted_at" {
+				continue
+			}
+			if col.Name == "config_id" && eq.Value == nil {
+				continue
+			}
+		}
 	}
 	filteredClauses = append(filteredClauses, c)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/config_insights.go` around lines 83 - 88, The loop over clauses is using an unchecked type assertion eq.Column.(clause.Column) which can panic; update the checks inside the for _, c := range clauses loop (where clause.Eq is handled) to perform a safe comma-ok type assertion on eq.Column (e.g., col, ok := eq.Column.(clause.Column)) and only read col.Name when ok is true, otherwise skip/continue; apply this for both the "deleted_at" and "config_id" checks so both uses of eq.Column are guarded.
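The comma-ok pattern the comment prescribes can be shown self-contained. The `Eq` and `Column` types below are local mimics of gorm's clause types (gorm allows either a string or a `clause.Column` in the Column slot, which is why the unchecked assertion can panic), not the real package.

```go
package main

import "fmt"

// Local mimics of gorm's clause types, for illustration only.
type Column struct{ Name string }
type Eq struct {
	Column any // gorm accepts a string or a clause.Column here
	Value  any
}

// filterClauses drops deleted_at clauses and nil-valued config_id clauses,
// using comma-ok assertions so a string-typed column never panics the loop.
func filterClauses(clauses []any) []any {
	var out []any
	for _, c := range clauses {
		if eq, ok := c.(Eq); ok {
			if col, ok := eq.Column.(Column); ok {
				if col.Name == "deleted_at" {
					continue
				}
				if col.Name == "config_id" && eq.Value == nil {
					continue
				}
			}
		}
		out = append(out, c)
	}
	return out
}

func main() {
	in := []any{
		Eq{Column: Column{Name: "deleted_at"}, Value: nil},
		Eq{Column: "raw_string_column", Value: 1}, // would panic an unchecked assertion
		Eq{Column: Column{Name: "config_id"}, Value: nil},
		Eq{Column: Column{Name: "type"}, Value: "Pod"},
	}
	fmt.Println(len(filterClauses(in))) // 2
}
```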
55-74: ⚠️ Potential issue | 🟠 Major — Propagate parse errors for insights-specific filters.
Line 56, Line 61, Line 66, and Line 71 currently ignore parse errors. Invalid filter syntax should fail fast instead of being silently skipped.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/config_insights.go` around lines 55 - 74, The current logic in the insights filter construction silently ignores parse errors from parseAndBuildFilteringQuery for req.Status, req.Severity (via formSeverityQuery), req.Analyzer, and req.AnalysisType; change these blocks (the calls to parseAndBuildFilteringQuery in the function that builds clauses) to check for parseErr and return or propagate the error instead of only appending clauses on nil error—i.e., if parseErr != nil, return the error up to the caller (or wrap it with context) so invalid filter syntax fails fast rather than being skipped.query/catalog_search.go (1)
124-156: ⚠️ Potential issue | 🟠 Major — Do not swallow filter parse errors in `ApplyClauses()`.
Line 129, Line 134, and Line 148 silently ignore invalid user filter syntax. That turns invalid input into weaker filtering instead of a clear bad-request error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 124 - 156, The ApplyClauses method currently swallows parse errors; change its signature from ApplyClauses() ([]clause.Expression, func(*gorm.DB) *gorm.DB) to return an error as a third return value and propagate parse failures instead of ignoring them. Specifically, where it calls parseAndBuildFilteringQuery for AgentID and ConfigType and labels.Parse for Tags, return the error immediately when those functions return a non-nil error; do not append partial clauses. Keep the tagsFn building (jsonColumnRequirementsToSQLClause) unchanged but return an error if labels.Parse or parsedLabelSelector.Requirements() fails. Update all callers of BaseCatalogSearch.ApplyClauses to handle the new error return.
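The signature change the comment asks for can be sketched outside gorm. Everything below (`parseFilter`, the `search` struct) is a hypothetical stand-in for duty's `parseAndBuildFilteringQuery` and `BaseCatalogSearch`, shown only to illustrate returning the parse error instead of appending clauses conditionally:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseFilter stands in for parseAndBuildFilteringQuery: it turns a
// comma-separated filter expression into clause fragments, failing on bad syntax.
func parseFilter(column, expr string) ([]string, error) {
	var clauses []string
	for _, part := range strings.Split(expr, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			return nil, errors.New("empty filter term for " + column)
		}
		clauses = append(clauses, fmt.Sprintf("%s = '%s'", column, part))
	}
	return clauses, nil
}

// search mirrors the shape of BaseCatalogSearch for illustration only.
type search struct {
	AgentID    string
	ConfigType string
}

// ApplyClauses propagates parse errors instead of silently dropping filters.
func (s search) ApplyClauses() ([]string, error) {
	var out []string
	if s.AgentID != "" {
		c, err := parseFilter("agent_id", s.AgentID)
		if err != nil {
			return nil, err // fail fast on malformed input
		}
		out = append(out, c...)
	}
	if s.ConfigType != "" {
		c, err := parseFilter("type", s.ConfigType)
		if err != nil {
			return nil, err
		}
		out = append(out, c...)
	}
	return out, nil
}

func main() {
	if _, err := (search{AgentID: "a,"}).ApplyClauses(); err == nil {
		panic("expected parse error")
	}
	clauses, err := (search{ConfigType: "Kubernetes::Pod"}).ApplyClauses()
	if err != nil {
		panic(err)
	}
	fmt.Println(clauses)
}
```

Callers then surface the error as a bad-request response rather than running a broader query than the user asked for.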
🧹 Nitpick comments (1)
tests/setup/template.go (1)
99-102: Duplicated URL construction logic.
The same logic for constructing database URLs appears in both `SetupTemplate` (lines 99-102) and `SetupNode` (lines 172-176). Consider extracting to a helper.
♻️ Suggested helper function

```go
func buildDBUrl(baseURL string, port int, dbName string) string {
	if strings.Contains(baseURL, "/postgres") {
		return strings.Replace(baseURL, "/postgres", "/"+dbName, 1)
	}
	return fmt.Sprintf("postgres://postgres:postgres@localhost:%d/%s?sslmode=disable", port, dbName)
}
```

Then use it in both places:

```diff
-	templateURL := strings.Replace(adminConn, "/postgres", "/"+templateDB, 1)
-	if !strings.Contains(adminConn, "/postgres") {
-		templateURL = fmt.Sprintf("postgres://postgres:postgres@localhost:%d/%s?sslmode=disable", port, templateDB)
-	}
+	templateURL := buildDBUrl(adminConn, port, templateDB)
```

Also applies to: 172-176
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/setup/template.go` around lines 99 - 102, Duplicate logic builds DB connection URLs in SetupTemplate and SetupNode; extract that logic into a single helper function (e.g., buildDBUrl(baseURL string, port int, dbName string)) that returns the correct URL by checking strings.Contains(baseURL, "/postgres") and either replacing "/postgres" with "/"+dbName or formatting the localhost URL with port and dbName, then replace the inline constructions in SetupTemplate and SetupNode to call buildDBUrl to avoid duplication.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@query/catalog_search.go`:
- Around line 104-117: The catalog selector lookup using SearchResources
(calling SearchResourcesRequest with types.ResourceSelector{Search: b.CatalogID,
Cache: "no-cache"}) currently treats zero matches as success and leaves
b.configIDs empty, allowing downstream global queries; update the catalog
resolution logic in the function that calls SearchResources so that if
response.Configs is empty you return a clear error (e.g., fmt.Errorf("no configs
found for catalog %q", b.CatalogID)) instead of continuing, and ensure
b.configIDs is not set to an empty slice on success—only set b.configIDs and
return ids when at least one config ID is parsed successfully.
In `@tests/setup/template.go`:
- Line 61: The call to telemetry.InitTracer() returns a cleanup function that
you're currently discarding by deferring telemetry.InitTracer(); capture that
returned cleanup (from telemetry.InitTracer) and defer invoking it with a
context (e.g., defer func() { cleanup(context.Background()) }()), or if tracer
cleanup isn't needed for these tests simply call telemetry.InitTracer() without
deferring — update the call site where InitTracer() is invoked to either
capture-and-defer the returned cleanup function or remove the defer.
---
Outside diff comments:
In `@api/config.go`:
- Around line 26-33: The fallback logic after calling
DefaultConfig.Postgrest.ReadEnv() should use the configured architecture field
rather than the OS runtime value: replace the runtime.GOARCH check in the
Postgrest version fallback conditional with DefaultConfig.Postgrest.Arch (or the
appropriate field on the Postgrest struct) so the condition reads e.g.
strings.HasPrefix(v, "v14") && v != "v14.1" && v != "v14.0" && runtime.GOOS ==
"darwin" && DefaultConfig.Postgrest.Arch == "amd64"; update the warning text if
needed to reflect the configured Arch.
---
Duplicate comments:
In `@query/catalog_search.go`:
- Around line 124-156: The ApplyClauses method currently swallows parse errors;
change its signature from ApplyClauses() ([]clause.Expression, func(*gorm.DB)
*gorm.DB) to return an error as a third return value and propagate parse
failures instead of ignoring them. Specifically, where it calls
parseAndBuildFilteringQuery for AgentID and ConfigType and labels.Parse for
Tags, return the error immediately when those functions return a non-nil error;
do not append partial clauses. Keep the tagsFn building
(jsonColumnRequirementsToSQLClause) unchanged but return an error if
labels.Parse or parsedLabelSelector.Requirements() fails. Update all callers of
BaseCatalogSearch.ApplyClauses to handle the new error return.
In `@query/config_insights.go`:
- Around line 83-88: The loop over clauses is using an unchecked type assertion
eq.Column.(clause.Column) which can panic; update the checks inside the for _, c
:= range clauses loop (where clause.Eq is handled) to perform a safe comma-ok
type assertion on eq.Column (e.g., col, ok := eq.Column.(clause.Column)) and
only read col.Name when ok is true, otherwise skip/continue; apply this for both
the "deleted_at" and "config_id" checks so both uses of eq.Column are guarded.
- Around line 55-74: The current logic in the insights filter construction
silently ignores parse errors from parseAndBuildFilteringQuery for req.Status,
req.Severity (via formSeverityQuery), req.Analyzer, and req.AnalysisType; change
these blocks (the calls to parseAndBuildFilteringQuery in the function that
builds clauses) to check for parseErr and return or propagate the error instead
of only appending clauses on nil error—i.e., if parseErr != nil, return the
error up to the caller (or wrap it with context) so invalid filter syntax fails
fast rather than being skipped.
---
Nitpick comments:
In `@tests/setup/template.go`:
- Around line 99-102: Duplicate logic builds DB connection URLs in SetupTemplate
and SetupNode; extract that logic into a single helper function (e.g.,
buildDBUrl(baseURL string, port int, dbName string)) that returns the correct
URL by checking strings.Contains(baseURL, "/postgres") and either replacing
"/postgres" with "/"+dbName or formatting the localhost URL with port and
dbName, then replace the inline constructions in SetupTemplate and SetupNode to
call buildDBUrl to avoid duplication.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e13696b8-4e29-49e6-bb84-5df0e4a3c384
📒 Files selected for processing (16)
- api/config.go
- connection/gcp.go
- postgrest/postgrest.go
- query/agent.go
- query/catalog_search.go
- query/config.go
- query/config_access.go
- query/config_changes.go
- query/config_insights.go
- query/config_relations.go
- query/query_logger.go
- rbac/objects.go
- tests/config_changes_test.go
- tests/e2e-blobs/test.properties
- tests/setup/template.go
- views/046_artifact_summary.sql
✅ Files skipped from review due to trivial changes (5)
- tests/e2e-blobs/test.properties
- rbac/objects.go
- postgrest/postgrest.go
- tests/config_changes_test.go
- views/046_artifact_summary.sql
🚧 Files skipped from review as they are similar to previous changes (6)
- connection/gcp.go
- query/config_relations.go
- query/config_access.go
- query/agent.go
- query/config.go
- query/config_changes.go
```go
response, err := SearchResources(ctx, SearchResourcesRequest{
	Configs: []types.ResourceSelector{{Search: b.CatalogID, Cache: "no-cache"}},
	Limit:   200,
})
if err != nil {
	return nil, fmt.Errorf("failed to resolve catalog query %q: %w", b.CatalogID, err)
}
for _, c := range response.Configs {
	if id, err := uuid.Parse(c.ID); err == nil {
		ids = append(ids, id)
	}
}
b.configIDs = ids
return ids, nil
```
Prevent broad queries when catalog resolution returns zero matches.
At Line 104, a selector lookup that returns no configs currently falls through as success and returns []uuid.UUID{}. Downstream code then skips config_id scoping, which can return unintended global results.
💡 Proposed fix
```diff
 response, err := SearchResources(ctx, SearchResourcesRequest{
 	Configs: []types.ResourceSelector{{Search: b.CatalogID, Cache: "no-cache"}},
 	Limit:   200,
 })
 if err != nil {
 	return nil, fmt.Errorf("failed to resolve catalog query %q: %w", b.CatalogID, err)
 }
 for _, c := range response.Configs {
 	if id, err := uuid.Parse(c.ID); err == nil {
 		ids = append(ids, id)
 	}
 }
+if len(ids) == 0 {
+	return nil, fmt.Errorf("catalog query %q matched no configs", b.CatalogID)
+}
 b.configIDs = ids
 return ids, nil
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
response, err := SearchResources(ctx, SearchResourcesRequest{
	Configs: []types.ResourceSelector{{Search: b.CatalogID, Cache: "no-cache"}},
	Limit:   200,
})
if err != nil {
	return nil, fmt.Errorf("failed to resolve catalog query %q: %w", b.CatalogID, err)
}
for _, c := range response.Configs {
	if id, err := uuid.Parse(c.ID); err == nil {
		ids = append(ids, id)
	}
}
if len(ids) == 0 {
	return nil, fmt.Errorf("catalog query %q matched no configs", b.CatalogID)
}
b.configIDs = ids
return ids, nil
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@query/catalog_search.go` around lines 104 - 117, The catalog selector lookup
using SearchResources (calling SearchResourcesRequest with
types.ResourceSelector{Search: b.CatalogID, Cache: "no-cache"}) currently treats
zero matches as success and leaves b.configIDs empty, allowing downstream global
queries; update the catalog resolution logic in the function that calls
SearchResources so that if response.Configs is empty you return a clear error
(e.g., fmt.Errorf("no configs found for catalog %q", b.CatalogID)) instead of
continuing, and ensure b.configIDs is not set to an empty slice on success—only
set b.configIDs and return ids when at least one config ID is parsed
successfully.
```go
	logger.Errorf("Failed to load test properties: %v", err)
}

defer telemetry.InitTracer()
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify telemetry.InitTracer() signature to understand intended usage
ast-grep --pattern $'func InitTracer() $_ {
$$$
}'
# Also check how it's used elsewhere in the codebase
rg -n 'telemetry\.InitTracer' --type go -C2
```

Repository: flanksource/duty
Length of output: 3570
🏁 Script executed:
```shell
# Check SetupTemplate function signature to understand available context
rg -n 'func SetupTemplate' tests/setup/template.go -A 5
```

Repository: flanksource/duty
rg -n 'func SetupTemplate' tests/setup/template.go -A 5Repository: flanksource/duty
Length of output: 297
🏁 Script executed:
```shell
# Check if context.Background() is used in template.go
rg -n 'context\.' tests/setup/template.go | head -20
```

Repository: flanksource/duty
rg -n 'context\.' tests/setup/template.go | head -20Repository: flanksource/duty
Length of output: 205
The cleanup function returned by `InitTracer()` is discarded and never called.
`InitTracer()` returns a cleanup function that must be invoked to properly shut down the tracer. Using `defer telemetry.InitTracer()` discards this return value, causing a resource leak. The cleanup function requires a `context.Context` argument, which the enclosing function doesn't have directly available.
Use this pattern instead:
```go
cleanup := telemetry.InitTracer()
defer func() {
	cleanup(context.Background())
}()
```

Alternatively, if cleanup is not necessary for this test setup, remove the defer entirely:

```go
telemetry.InitTracer()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/setup/template.go` at line 61, The call to telemetry.InitTracer()
returns a cleanup function that you're currently discarding by deferring
telemetry.InitTracer(); capture that returned cleanup (from
telemetry.InitTracer) and defer invoking it with a context (e.g., defer func() {
cleanup(context.Background()) }()), or if tracer cleanup isn't needed for these
tests simply call telemetry.InitTracer() without deferring — update the call
site where InitTracer() is invoked to either capture-and-defer the returned
cleanup function or remove the defer.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api/config.go (1)
26-33: ⚠️ Potential issue | 🟠 Major — Use configured arch for v14 Darwin fallback (not host arch).
The fallback gating at line 30 uses `runtime.GOARCH`, but the PostgREST installer at `postgrest/postgrest.go:24` uses `config.Postgrest.Arch`. If `PGRST_ARCH` is set (e.g., `PGRST_ARCH=amd64` on a darwin/arm64 host), the fallback condition will not trigger since `runtime.GOARCH` will be "arm64", but the installer will attempt to download a binary using the configured arch (amd64). For v14.2+, which lacks darwin/amd64 binaries, this causes a download failure.
Additionally, the log message on line 31 is incorrect: it states "does not have a darwin/arm64 binary" when v14.2+ actually has arm64 but lacks amd64.
🔧 Proposed fix
```diff
 func init() {
 	DefaultConfig.Postgrest = DefaultConfig.Postgrest.ReadEnv()
 	v := DefaultConfig.Postgrest.Version
 	if strings.HasPrefix(v, "v14") && v != "v14.1" && v != "v14.0" &&
-		runtime.GOOS == "darwin" && runtime.GOARCH == "amd64" {
-		logger.Warnf("PostgREST v14.2+ does not have a darwin/arm64 binary, defaulting to v14.1 for darwin/amd64")
+		runtime.GOOS == "darwin" && DefaultConfig.Postgrest.Arch == "amd64" {
+		logger.Warnf("PostgREST v14.2+ does not have a darwin/amd64 binary, defaulting to v14.1 for darwin/amd64")
 		DefaultConfig.Postgrest.Version = "v14.1"
 	}
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/config.go` around lines 26 - 33, The fallback currently checks runtime.GOARCH but the installer uses the configured arch (Postgrest.Arch), so change the gating to use DefaultConfig.Postgrest.Arch (or a local variable arch := DefaultConfig.Postgrest.Arch) instead of runtime.GOARCH; keep the rest of the version checks and OS check (runtime.GOOS == "darwin"), and update the logger message to correctly state that v14.2+ lacks a darwin/amd64 binary (use the arch variable in the message) before setting DefaultConfig.Postgrest.Version = "v14.1".
♻️ Duplicate comments (4)
query/config_insights.go (2)
83-89: ⚠️ Potential issue | 🟡 Minor — Guard the `clause.Column` assertion before reading `.Name`.
`eq.Column.(clause.Column)` can panic if one of these `Eq` expressions carries a different column representation. A checked assertion keeps this filtering step safe.
Verify each finding against the current code and only fix it if needed. In `@query/config_insights.go` around lines 83 - 89, The loop over clauses currently assumes eq.Column is a clause.Column and may panic; update the checks that inspect eq.Column in the clause.Eq branches (the loop over clauses and the clause.Eq handling) to first perform a safe type assertion like col, ok := eq.Column.(clause.Column) and only read col.Name when ok is true, e.g., replace direct eq.Column.(clause.Column).Name accesses with a guarded check that verifies the assertion before comparing to "deleted_at" or "config_id".
55-74: ⚠️ Potential issue | 🟡 Minor — Reject malformed insight filters instead of ignoring them.
Parse errors for `status`, `severity`, `analyzer`, and `analysis_type` are swallowed here, so bad input returns broader results than requested instead of an `EINVALID` response.
Verify each finding against the current code and only fix it if needed. In `@query/config_insights.go` around lines 55 - 74, The parsing errors from parseAndBuildFilteringQuery called for req.Status, formSeverityQuery(req.Severity), req.Analyzer, and req.AnalysisType are currently ignored (only appending clauses on nil parseErr), causing malformed filters to be treated as absent; change each branch to detect a non-nil parseErr and return an EINVALID-style error (propagate a validation error up to the caller) instead of skipping it, e.g., in the code paths around parseAndBuildFilteringQuery and formSeverityQuery ensure you return a validation error when parseErr != nil so invalid status/severity/analyzer/analysis_type values fail fast rather than producing broader results.query/catalog_search.go (2)
104-117:⚠️ Potential issue | 🟠 MajorTreat empty catalog resolution as a failure.
When
SearchResourcesyields no parsable config IDs, this still returns success withb.configIDsempty.FindCatalogChangesandFindCatalogInsightsboth treatlen(configIDs) == 0as “don’t scope by config”, so a selector that matches nothing expands into a global report.💡 Suggested fix
for _, c := range response.Configs { if id, err := uuid.Parse(c.ID); err == nil { ids = append(ids, id) } } + if len(ids) == 0 { + return nil, fmt.Errorf("catalog query %q matched no configs", b.CatalogID) + } b.configIDs = ids return ids, nil🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 104 - 117, SearchResources call in the catalog resolver can return no parsable IDs causing b.configIDs to be set empty and inadvertently broaden scope in FindCatalogChanges/FindCatalogInsights; change the post-search logic in the function that calls SearchResources (referencing SearchResources, SearchResourcesRequest, b.CatalogID, and b.configIDs) to treat an empty parsed-IDs result as an error: after iterating response.Configs and building ids, if len(ids) == 0 return a non-nil error (include b.CatalogID in the message) instead of silently assigning b.configIDs and succeeding, so selectors that match nothing do not expand into global reports.
128-156: ⚠️ Potential issue | 🟡 Minor — Don't silently drop invalid base filters.
Malformed `agent_id`, `config_type`, or `tags` filters are ignored here, so bad input broadens the query instead of returning a validation error or at least a warning.
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 128 - 156, The code currently swallows parsing errors for b.AgentID, b.ConfigType and b.Tags (calls to parseAndBuildFilteringQuery and labels.Parse) which widens queries on bad input; change the logic in the catalog search builder so that if parseAndBuildFilteringQuery returns a non-nil err (for AgentID or ConfigType) or labels.Parse returns an err (for Tags) you propagate/return a validation error instead of silently ignoring it; update the block around parseAndBuildFilteringQuery (referencing b.AgentID, b.ConfigType and parseAndBuildFilteringQuery) to check err != nil and return that error (or wrap it as a validation error) and likewise check the error from labels.Parse before building tagsFn (referencing labels.Parse and jsonColumnRequirementsToSQLClause) so malformed selectors produce an error response rather than being dropped.
🧹 Nitpick comments (3)
query/config_tree.go (2)
108-124: Return order is non-deterministic.
`lo.Keys(allIDs)` returns map keys in arbitrary order since Go maps are unordered. If downstream code depends on consistent ordering (e.g., for deterministic tree construction or testing), consider sorting the result.
♻️ Option to ensure deterministic ordering

```diff
-	return lo.Keys(allIDs), nil
+	result := lo.Keys(allIDs)
+	slices.SortFunc(result, func(a, b uuid.UUID) int {
+		return strings.Compare(a.String(), b.String())
+	})
+	return result, nil
```

Note: Would require adding `slices` (and `strings`) to imports.
Verify each finding against the current code and only fix it if needed. In `@query/config_tree.go` around lines 108 - 124, The function ExpandConfigChildren collects IDs into a map and returns lo.Keys(allIDs), which yields a non-deterministic order; modify ExpandConfigChildren to extract the keys into a slice, sort that slice (e.g., using slices.Sort or sort.Slice with uuid.UUID's string ordering) and then return the sorted slice so callers receive a deterministic ordering; update imports to include the sorter package used (slices or sort) and ensure you reference ExpandConfigChildren and the allIDs variable when making the change.
25-73: Consider adding query logging instrumentation for consistency.
Other query functions like `GetRelatedConfigs` use `NewQueryLogger` for timing and observability. This function orchestrates multiple database operations and would benefit from similar instrumentation.
♻️ Example instrumentation pattern

```diff
 func ConfigTree(ctx context.Context, configID uuid.UUID, opts ConfigTreeOptions) (*ConfigTreeNode, error) {
+	var err error
+	timer := NewQueryLogger(ctx).Start("ConfigTree").Arg("id", configID)
+	defer timer.End(&err)
+
 	config, err := GetCachedConfig(ctx, configID.String())
 	if err != nil {
 		return nil, err
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/config_tree.go` around lines 25 - 73, Add query logging to ConfigTree by creating a NewQueryLogger at the start of the function and using it to instrument the DB work performed by GetCachedConfig, ExpandConfigChildren, GetConfigsByIDs and GetRelatedConfigs; ensure you capture and pass errors to the logger (call qlog.Done(err) or the equivalent before each early return and at the final return) and include the function name "ConfigTree" as the logger label so timing/observability matches other callers like GetRelatedConfigs while leaving buildConfigTree uninstrumented.
api/config.go (1)
153-155: Validate `PGRST_ARCH` to fail fast on unsupported values.
`PGRST_ARCH` is accepted verbatim at Lines 153-155. Invalid values currently fail later during install, which is harder to diagnose. Consider constraining to supported values (`amd64`, `arm64`) and warning/falling back otherwise.
Verify each finding against the current code and only fix it if needed. In `@api/config.go` around lines 153 - 155, The code currently assigns PGRST_ARCH verbatim to clone.Arch; change this to validate the environment variable (os.Getenv("PGRST_ARCH")) against the supported list ["amd64","arm64"] and handle invalid values by either returning an error from the config loader (preferred fail-fast) or logging a warning and falling back to the default architecture before setting clone.Arch; ensure you update the branch that reads PGRST_ARCH so it only sets clone.Arch when the value is in the allowed set and include a clear error/warning message mentioning the invalid value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connection/gcp.go`:
- Around line 48-53: The code calls deprecated
google.CredentialsFromJSONWithParams (SA1019) on g.Credentials.ValueStatic and
must be replaced by google.CredentialsFromJSONWithTypeAndParams; update the
logic that currently sets creds via CredentialsFromJSONWithParams to instead (1)
parse/inspect the JSON from g.Credentials.ValueStatic to determine the
credential type, (2) only allow expected types (e.g. google.ServiceAccount) and
return an error for unexpected types, and (3) call
google.CredentialsFromJSONWithTypeAndParams(ctx, jsonData, credType,
google.CredentialsParams{Scopes: scopes}) and return the resulting TokenSource.
Ensure the code path that previously returned creds.TokenSource now uses the new
call and rejects unvalidated credential JSON.
In `@query/catalog_search.go`:
- Around line 124-158: ApplyClauses on BaseCatalogSearch emits config-specific
predicates (agent_id, type, created_at, deleted_at, tags) which breaks callers
like FindCatalogInsights that query config_analysis; change ApplyClauses to be
table-aware by adding a parameter or flag (e.g. includeConfigColumns bool or
tableName string) and only emit predicates for columns that exist on the target
table, or remove the table-specific predicates from ApplyClauses and require
callers (FindCatalogInsights) to append their own table-specific clauses; ensure
the tagsFn uses the correct JSON column name for the caller (e.g. "tags" vs the
config_analysis column) and update all call sites to pass the new parameter.
In `@query/config_changes.go`:
- Around line 221-225: Validate req.Severity against the allowed single values
(critical, high, medium, low, info) before calling formSeverityQuery; if
req.Severity is not exactly one of those, return api.Errorf(api.EINVALID, ...)
instead of falling back to the broad default. Update the block that calls
formSeverityQuery (and the same check used by FindCatalogInsights) to perform
this whitelist check on req.Severity, then only call formSeverityQuery and
parseAndBuildFilteringQuery when the value is valid.
In `@query/config_tree.go`:
- Around line 91-94: GetConfigsByIDs call is swallowing errors (items, err :=
GetConfigsByIDs(ctx, parentIDs)) and always returning nil on error or empty
result; either propagate the error or at minimum log it. Update the code around
that call (the block that currently does "if err != nil || len(items) == 0 {
return nil }") to: if err != nil { return nil, err } (or call your logger to
record the error before returning) and keep the existing empty-check behavior
separately (e.g., if len(items) == 0 { return nil, nil }); reference
GetConfigsByIDs, parentIDs, items and err when making the change so parent
resolution failures are not silently swallowed.
In `@tests/setup/template.go`:
- Around line 218-241: ensurePostgres starts the embedded Postgres
(postgresServer) but never registers a shutdown hook, so the DB process remains
orphaned when shutdown.Shutdown() runs; update ensurePostgres to register a
cleanup with the same shutdown facility used in SetupDB (e.g., call
shutdown.After or equivalent) that stops postgresServer when the test suite
shuts down, referencing the postgresServer variable and ensuring you call its
Stop/Close method in the hook so the embedded DB is terminated.
- Around line 164-169: The unmark→clone→remark sequence in SetupNode creates a
race across parallel Ginkgo nodes; change it to only run the ALTER DATABASE
workaround for older Postgres versions by calling the existing PGMajorVersion()
utility and executing the execPostgres("ALTER DATABASE ... is_template = false")
/ CREATE DATABASE / execPostgres("ALTER DATABASE ... is_template = true")
sequence only when PGMajorVersion() < 10 (or otherwise document supported
versions and remove the workaround), keeping the CREATE DATABASE call
unconditional; reference execPostgres, PGMajorVersion, and SetupNode when making
this change.
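For context on the workaround itself: since PostgreSQL 9.5, `ALTER DATABASE ... IS_TEMPLATE` exists, and a database flagged as a template can be cloned directly with `CREATE DATABASE ... TEMPLATE` without toggling the flag off first, which avoids the unmark/clone/remark race across parallel nodes. A minimal sketch of the statement construction (the `cloneStatements` helper is hypothetical, not duty's setup code):

```go
package main

import "fmt"

// cloneStatements builds the SQL for cloning a Postgres template database
// into a per-test-node database. On Postgres 9.5+ the single CREATE DATABASE
// statement suffices when the source has is_template = true.
func cloneStatements(template, target string) []string {
	return []string{
		fmt.Sprintf(`CREATE DATABASE %q TEMPLATE %q`, target, template),
	}
}

func main() {
	for _, s := range cloneStatements("duty_template", "duty_node_1") {
		fmt.Println(s) // → CREATE DATABASE "duty_node_1" TEMPLATE "duty_template"
	}
}
```

Note that Postgres still requires no other active connections to the template database while the clone runs, so serializing clones per node remains necessary.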
---
Outside diff comments:
In `@api/config.go`:
- Around line 26-33: The fallback currently checks runtime.GOARCH but the
installer uses the configured arch (Postgrest.Arch), so change the gating to use
DefaultConfig.Postgrest.Arch (or a local variable arch :=
DefaultConfig.Postgrest.Arch) instead of runtime.GOARCH; keep the rest of the
version checks and OS check (runtime.GOOS == "darwin"), and update the logger
message to correctly state that v14.2+ lacks a darwin/amd64 binary (use the arch
variable in the message) before setting DefaultConfig.Postgrest.Version =
"v14.1".
---
Duplicate comments:
In `@query/catalog_search.go`:
- Around line 104-117: SearchResources call in the catalog resolver can return
no parsable IDs causing b.configIDs to be set empty and inadvertently broaden
scope in FindCatalogChanges/FindCatalogInsights; change the post-search logic in
the function that calls SearchResources (referencing SearchResources,
SearchResourcesRequest, b.CatalogID, and b.configIDs) to treat an empty
parsed-IDs result as an error: after iterating response.Configs and building
ids, if len(ids) == 0 return a non-nil error (include b.CatalogID in the
message) instead of silently assigning b.configIDs and succeeding, so selectors
that match nothing do not expand into global reports.
- Around line 128-156: The code currently swallows parsing errors for b.AgentID,
b.ConfigType and b.Tags (calls to parseAndBuildFilteringQuery and labels.Parse)
which widens queries on bad input; change the logic in the catalog search
builder so that if parseAndBuildFilteringQuery returns a non-nil err (for
AgentID or ConfigType) or labels.Parse returns an err (for Tags) you
propagate/return a validation error instead of silently ignoring it; update the
block around parseAndBuildFilteringQuery (referencing b.AgentID, b.ConfigType
and parseAndBuildFilteringQuery) to check err != nil and return that error (or
wrap it as a validation error) and likewise check the error from labels.Parse
before building tagsFn (referencing labels.Parse and
jsonColumnRequirementsToSQLClause) so malformed selectors produce an error
response rather than being dropped.
In `@query/config_insights.go`:
- Around line 83-89: The loop over clauses currently assumes eq.Column is a
clause.Column and may panic; update the checks that inspect eq.Column in the
clause.Eq branches (the loop over clauses and the clause.Eq handling) to first
perform a safe type assertion like col, ok := eq.Column.(clause.Column) and only
read col.Name when ok is true, e.g., replace direct
eq.Column.(clause.Column).Name accesses with a guarded check that verifies the
assertion before comparing to "deleted_at" or "config_id".
- Around line 55-74: The parsing errors from parseAndBuildFilteringQuery called
for req.Status, formSeverityQuery(req.Severity), req.Analyzer, and
req.AnalysisType are currently ignored (only appending clauses on nil parseErr),
causing malformed filters to be treated as absent; change each branch to detect
a non-nil parseErr and return an EINVALID-style error (propagate a validation
error up to the caller) instead of skipping it, e.g., in the code paths around
parseAndBuildFilteringQuery and formSeverityQuery ensure you return a validation
error when parseErr != nil so invalid status/severity/analyzer/analysis_type
values fail fast rather than producing broader results.
---
Nitpick comments:
In `@api/config.go`:
- Around line 153-155: The code currently assigns PGRST_ARCH verbatim to
clone.Arch; change this to validate the environment variable
(os.Getenv("PGRST_ARCH")) against the supported list ["amd64","arm64"] and
handle invalid values by either returning an error from the config loader
(preferred fail-fast) or logging a warning and falling back to the default
architecture before setting clone.Arch; ensure you update the branch that reads
PGRST_ARCH so it only sets clone.Arch when the value is in the allowed set and
include a clear error/warning message mentioning the invalid value.
In `@query/config_tree.go`:
- Around line 108-124: The function ExpandConfigChildren collects IDs into a map
and returns lo.Keys(allIDs), which yields a non-deterministic order; modify
ExpandConfigChildren to extract the keys into a slice, sort that slice (e.g.,
using slices.Sort or sort.Slice with uuid.UUID's string ordering) and then
return the sorted slice so callers receive a deterministic ordering; update
imports to include the sorter package used (slices or sort) and ensure you
reference ExpandConfigChildren and the allIDs variable when making the change.
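The deterministic-ordering fix can be sketched like this. String keys stand in for uuid.UUID (which sorts the same way via its String() form); the function name is illustrative, not the repo's API.

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys replaces a bare lo.Keys(allIDs) return: Go map iteration order
// is randomized, so the keys are collected and sorted before returning,
// giving callers a deterministic slice.
func sortedKeys(allIDs map[string]struct{}) []string {
	keys := make([]string, 0, len(allIDs))
	for k := range allIDs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	ids := map[string]struct{}{"c": {}, "a": {}, "b": {}}
	fmt.Println(sortedKeys(ids)) // → [a b c], regardless of map iteration order
}
```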
- Around line 25-73: Add query logging to ConfigTree by creating a
NewQueryLogger at the start of the function and using it to instrument the DB
work performed by GetCachedConfig, ExpandConfigChildren, GetConfigsByIDs and
GetRelatedConfigs; ensure you capture and pass errors to the logger (call
qlog.Done(err) or the equivalent before each early return and at the final
return) and include the function name "ConfigTree" as the logger label so
timing/observability matches other callers like GetRelatedConfigs while leaving
buildConfigTree uninstrumented.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d07e790f-a6ae-49bf-bffe-6c2482555eb3
📒 Files selected for processing (17)
- api/config.go
- connection/gcp.go
- postgrest/postgrest.go
- query/agent.go
- query/catalog_search.go
- query/config.go
- query/config_access.go
- query/config_changes.go
- query/config_insights.go
- query/config_relations.go
- query/config_tree.go
- query/query_logger.go
- rbac/objects.go
- tests/config_changes_test.go
- tests/e2e-blobs/test.properties
- tests/setup/template.go
- views/046_artifact_summary.sql
✅ Files skipped from review due to trivial changes (5)
- tests/e2e-blobs/test.properties
- rbac/objects.go
- postgrest/postgrest.go
- views/046_artifact_summary.sql
- tests/config_changes_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
- query/agent.go
- query/config_relations.go
- query/config_access.go
// Unmark template temporarily for cloning (some pg versions need this)
_ = execPostgres(adminURL, fmt.Sprintf("ALTER DATABASE %s WITH is_template = false", info.TemplateDB))
if err := execPostgres(adminURL, fmt.Sprintf("CREATE DATABASE %s TEMPLATE %s", nodeDBName, info.TemplateDB)); err != nil {
	panic(fmt.Sprintf("failed to clone template: %v", err))
}
_ = execPostgres(adminURL, fmt.Sprintf("ALTER DATABASE %s WITH is_template = true", info.TemplateDB))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if there are any comments or docs indicating which PG versions require this pattern
rg -n 'is_template' --type go -C2
Repository: flanksource/duty
Length of output: 1745
🏁 Script executed:
# Check for PostgreSQL version requirements/constraints
rg -i 'postgres.*version|version.*constraint' --type go --type yaml --type toml --type json -C2 | head -100
Length of output: 1819
🏁 Script executed:
# Look for docker/compose files or requirements docs that specify PG version
fd -e yml -e yaml -e dockerfile -e requirements | xargs grep -l -i postgres 2>/dev/null | head -20
Length of output: 91
🏁 Script executed:
# Search for any version checks or conditionals related to template or database operations
ast-grep --pattern $'if .*postgres.*version {
  $$$
}'
Length of output: 42
🏁 Script executed:
# Check for any comments explaining why the unmark/remark pattern is needed
rg -B5 'is_template.*false' tests/setup/template.go | head -50
Length of output: 749
🏁 Script executed:
# Look for docker-compose or other test infrastructure configs
fd -e docker-compose.yml -e docker-compose.yaml -e compose.yml -e Dockerfile tests/ 2>/dev/null | xargs cat 2>/dev/null | grep -i -A5 -B5 postgres | head -100
Length of output: 42
🏁 Script executed:
# Check root for docker-compose or test configs
fd -maxdepth 2 \( -name 'docker-compose*' -o -name 'Dockerfile*' -o -name '.github' -type d \)
Length of output: 290
🏁 Script executed:
# Look for go.mod to see if there are any dependencies or version hints
head -100 go.mod
Length of output: 4051
🏁 Script executed:
# Search for any issue tracking or comments about this pattern
rg -i 'race|concurrent|parallel|template.*clone' tests/setup/template.go -B2 -A2
Length of output: 250
🏁 Script executed:
# Check if there are any comments or issues about PostgreSQL version requirements for cloning
rg -i 'clone|template.*database' tests/setup/template.go -B3 -A3
Length of output: 1176
🏁 Script executed:
# Look at the full SetupNode function to understand context better
sed -n '155,175p' tests/setup/template.go
Length of output: 1232
🏁 Script executed:
# Search for any GitHub issues, TODOs, or FIXMEs related to this pattern
rg 'TODO|FIXME|XXX|BUG|HACK' tests/setup/template.go
Length of output: 42
🏁 Script executed:
# Check if there are version-specific conditions elsewhere in the codebase for database operations
rg 'PGMajorVersion|postgres.*version' --type go -B2 -A2 | head -80
Length of output: 857
Race condition in concurrent template cloning.
The unmark → clone → remark sequence is not atomic. If multiple Ginkgo parallel nodes execute SetupNode simultaneously, one node may remark the template (is_template = true) before another finishes cloning. While the actual impact is likely minimal (errors are ignored, and modern PostgreSQL versions allow cloning from marked templates), this pattern should either be protected with synchronization or removed if PostgreSQL 10+ is the only supported target.
The vague comment "some pg versions need this" suggests a defensive measure. Consider:
- Documenting which PostgreSQL versions require unmarking, or
- Adding a version check using the existing PGMajorVersion() utility if compatibility with older versions is necessary, or
- Removing this pattern if only modern PostgreSQL (≥10) is supported.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/setup/template.go` around lines 164 - 169, The unmark→clone→remark
sequence in SetupNode creates a race across parallel Ginkgo nodes; change it to
only run the ALTER DATABASE workaround for older Postgres versions by calling
the existing PGMajorVersion() utility and executing the execPostgres("ALTER
DATABASE ... is_template = false") / CREATE DATABASE / execPostgres("ALTER
DATABASE ... is_template = true") sequence only when PGMajorVersion() < 10 (or
otherwise document supported versions and remove the workaround), keeping the
CREATE DATABASE call unconditional; reference execPostgres, PGMajorVersion, and
SetupNode when making this change.
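The version-gated variant the prompt describes can be sketched as below. The exec callback stands in for execPostgres and the pgMajor parameter for PGMajorVersion(); both are assumptions about the repo's helpers, and the <10 cutoff follows the reviewer's suggestion, not a verified PostgreSQL requirement.

```go
package main

import (
	"fmt"
	"strings"
)

// cloneFromTemplate runs the unmark/clone/remark workaround only on old
// PostgreSQL versions, keeping the CREATE DATABASE call unconditional so
// parallel test nodes on modern servers never race on the template flag.
func cloneFromTemplate(exec func(sql string) error, pgMajor int, template, target string) error {
	workaround := pgMajor < 10
	if workaround {
		_ = exec(fmt.Sprintf("ALTER DATABASE %s WITH is_template = false", template))
	}
	if err := exec(fmt.Sprintf("CREATE DATABASE %s TEMPLATE %s", target, template)); err != nil {
		return fmt.Errorf("failed to clone template: %w", err)
	}
	if workaround {
		_ = exec(fmt.Sprintf("ALTER DATABASE %s WITH is_template = true", template))
	}
	return nil
}

func main() {
	var stmts []string
	exec := func(sql string) error { stmts = append(stmts, sql); return nil }
	_ = cloneFromTemplate(exec, 16, "tmpl", "node_1")
	fmt.Println(strings.Join(stmts, "; ")) // modern pg: only the CREATE DATABASE
}
```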
Actionable comments posted: 3
♻️ Duplicate comments (2)
query/catalog_search.go (1)
104-117: ⚠️ Potential issue | 🟠 Major
Still unresolved: zero-match catalog resolution is treated as success.
When SearchResources() returns no parseable config IDs, ResolveConfigIDs() still returns success and stores an empty slice. Callers that only add config_id scoping when len(configIDs) > 0 can then execute a much broader query than the selector asked for.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 104 - 117, ResolveConfigIDs currently treats a zero-match result from SearchResources as success; after calling SearchResources and collecting parseable UUIDs into ids (from response.Configs), check if len(ids) == 0 and if so return a non-nil error (and do not set b.configIDs) so callers won't mistakenly treat an empty slice as a valid scope; update the ResolveConfigIDs function to return an error like "no configs matched catalog %q" when ids is empty and only assign b.configIDs = ids and return ids, nil when ids has at least one entry (references: SearchResources, ResolveConfigIDs, b.configIDs).
query/config_changes.go (1)
230-235: ⚠️ Potential issue | 🟡 Minor
Still unresolved: invalid severities are not rejected.
This path still never validates the raw severity input before building the clause, so values like bogus or high,medium can turn into an empty 200 response instead of EINVALID.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/config_changes.go` around lines 230 - 235, The code path in config_changes.go calls formSeverityQuery(req.Severity) and parseAndBuildFilteringQuery without first validating raw req.Severity, so invalid inputs like "bogus" or "high,medium" can produce an empty result instead of EINVALID; before calling parseAndBuildFilteringQuery (inside the if req.Severity != "" block) validate req.Severity against the canonical severity set (e.g., allowed tokens such as "low","medium","high" or the project's defined enum), return api.Errorf(api.EINVALID, ...) on any unknown token, and only then call formSeverityQuery/parseAndBuildFilteringQuery to build and append clauses. Ensure you reference req.Severity, formSeverityQuery and parseAndBuildFilteringQuery when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@query/catalog_search.go`:
- Around line 128-156: The parsing errors for config_type and tags are being
swallowed in parseAndBuildFilteringQuery and labels.Parse inside ApplyClauses,
causing malformed filters to be treated as broad queries; fix this by surfacing
parse errors: either (A) update ApplyClauses (or the function that builds
clauses) to return (clauses, error) and propagate that error to the caller so
callers can convert it to EINVALID, or (B) move parsing/validation into Validate
(call parseAndBuildFilteringQuery for b.ConfigType and labels.Parse(b.Tags)
there) and store parsed results/state so ApplyClauses can assume valid inputs;
reference parseAndBuildFilteringQuery, labels.Parse, ApplyClauses, and Validate
when making the change.
In `@query/config_tree.go`:
- Around line 197-203: The current code reuses the same *ptrNode from
nodes[rc.ID] for "related" edges which preserves the original edgeType/relation
and causes truncation when toConfigTreeNode() sees visited; change the logic in
the related-edge handling (the block that calls relatedToConfigItem(rc) and
writes nodes[rc.ID]) to not overwrite or reattach an existing structural node:
if nodes[rc.ID] already exists (and was created by the parent/child walk) then
create a separate wrapper *ptrNode for the related edge (e.g., new
ptrNode{ConfigItem: nodes[rc.ID].ConfigItem, edgeType: "related", relation:
rc.Relation}) and insert that wrapper into the graph/edges instead of replacing
nodes[rc.ID]; alternatively skip creating/reattaching when the existing node is
structural—apply the same change to the other related-edge blocks noted (the
ranges around 207-225 and 243-249) so toConfigTreeNode() never reuses a
structural node for a related edge.
- Around line 213-221: The code is using the last Path segment as the parent but
models.ConfigItem.SetParent() appends the node's own ID to Path, so
segments[len(segments)-1] returns the node's ID; change the lookup to use the
penultimate segment (segments[len(segments)-2]) as the parent ID when rc.Path
has multiple segments. Update the logic around rc.Path, segments, parentStr and
uuid.Parse to handle single-segment vs multi-segment Paths, then resolve the
parent from nodes[pid] and attach via parent.children = append(parent.children,
node) only when the parsed penultimate UUID exists and passes the same checks
(parent != node and !parentIDs[pid]).
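The off-by-one the prompt describes reduces to picking the penultimate path segment. A minimal sketch, assuming dot-separated path segments (the helper name is illustrative, not the repo's API):

```go
package main

import (
	"fmt"
	"strings"
)

// parentFromPath returns the parent ID encoded in a config item's Path.
// Because SetParent() appends the node's own ID as the last segment, the
// parent is the penultimate segment; single-segment paths are roots.
func parentFromPath(path string) string {
	segments := strings.Split(path, ".")
	if len(segments) < 2 {
		return "" // root: no parent encoded in the path
	}
	return segments[len(segments)-2]
}

func main() {
	fmt.Println(parentFromPath("root.mid.leaf")) // → mid (not leaf, the node itself)
	fmt.Println(parentFromPath("root"))          // → "" (no parent)
}
```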
---
Duplicate comments:
In `@query/catalog_search.go`:
- Around line 104-117: ResolveConfigIDs currently treats a zero-match result
from SearchResources as success; after calling SearchResources and collecting
parseable UUIDs into ids (from response.Configs), check if len(ids) == 0 and if
so return a non-nil error (and do not set b.configIDs) so callers won't
mistakenly treat an empty slice as a valid scope; update the ResolveConfigIDs
function to return an error like "no configs matched catalog %q" when ids is
empty and only assign b.configIDs = ids and return ids, nil when ids has at
least one entry (references: SearchResources, ResolveConfigIDs, b.configIDs).
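The fix this prompt asks for boils down to treating an empty resolution as an error rather than an empty scope. A minimal sketch with a stub in place of SearchResources and string IDs in place of uuid.UUID:

```go
package main

import "fmt"

// resolveConfigIDs returns an error when a selector matches nothing, so a
// caller can never fall through to an unscoped (global) query by mistake.
func resolveConfigIDs(search func(selector string) []string, selector string) ([]string, error) {
	ids := search(selector)
	if len(ids) == 0 {
		return nil, fmt.Errorf("no configs matched catalog %q", selector)
	}
	return ids, nil
}

func main() {
	search := func(sel string) []string {
		if sel == "kind=Deployment" {
			return []string{"id-1", "id-2"}
		}
		return nil
	}
	ids, err := resolveConfigIDs(search, "kind=Deployment")
	fmt.Println(ids, err)
	_, err = resolveConfigIDs(search, "kind=Nothing")
	fmt.Println(err)
}
```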
In `@query/config_changes.go`:
- Around line 230-235: The code path in config_changes.go calls
formSeverityQuery(req.Severity) and parseAndBuildFilteringQuery without first
validating raw req.Severity, so invalid inputs like "bogus" or "high,medium" can
produce an empty result instead of EINVALID; before calling
parseAndBuildFilteringQuery (inside the if req.Severity != "" block) validate
req.Severity against the canonical severity set (e.g., allowed tokens such as
"low","medium","high" or the project's defined enum), return
api.Errorf(api.EINVALID, ...) on any unknown token, and only then call
formSeverityQuery/parseAndBuildFilteringQuery to build and append clauses.
Ensure you reference req.Severity, formSeverityQuery and
parseAndBuildFilteringQuery when making the change.
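The validation-first behavior requested here can be sketched as a standalone check. The allowed set below is an assumption standing in for the project's models.Severity enum; negation and comma-separated lists follow the filter syntax described in the review.

```go
package main

import (
	"fmt"
	"strings"
)

// validateSeverity checks every raw token (including negated ones like
// "!low") against a fixed severity set before any query clause is built,
// so typos return an error instead of an empty 200 response.
func validateSeverity(raw string) error {
	allowed := map[string]bool{"critical": true, "high": true, "medium": true, "low": true, "info": true}
	for _, tok := range strings.Split(raw, ",") {
		tok = strings.TrimPrefix(strings.TrimSpace(tok), "!")
		if !allowed[tok] {
			return fmt.Errorf("invalid severity %q", tok)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSeverity("high,!low")) // valid: nil
	fmt.Println(validateSeverity("bogus"))     // invalid: error
}
```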
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c3ed21b3-a6e0-4bcf-a578-75fac3246b5b
📒 Files selected for processing (8)
- models/config.go
- models/config_access.go
- query/catalog_search.go
- query/config_changes.go
- query/config_relations.go
- query/config_tree.go
- query/query_logger.go
- tests/setup/template.go
🚧 Files skipped from review as they are similar to previous changes (2)
- query/config_relations.go
- tests/setup/template.go
if _, exists := nodes[rc.ID]; !exists {
	nodes[rc.ID] = &ptrNode{
		ConfigItem: relatedToConfigItem(rc),
		edgeType:   "related",
		relation:   rc.Relation,
	}
}
Don't reuse structural nodes for related edges.
If a related config is already present from the parent/child walk, this code reuses the same *ptrNode and reattaches it. The node keeps its original edgeType/relation, and the second occurrence is truncated once toConfigTreeNode() hits visited. Use a separate wrapper node for the related edge, or skip reattaching structural nodes explicitly.
Also applies to: 207-225, 243-249
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@query/config_tree.go` around lines 197 - 203, The current code reuses the
same *ptrNode from nodes[rc.ID] for "related" edges which preserves the original
edgeType/relation and causes truncation when toConfigTreeNode() sees visited;
change the logic in the related-edge handling (the block that calls
relatedToConfigItem(rc) and writes nodes[rc.ID]) to not overwrite or reattach an
existing structural node: if nodes[rc.ID] already exists (and was created by the
parent/child walk) then create a separate wrapper *ptrNode for the related edge
(e.g., new ptrNode{ConfigItem: nodes[rc.ID].ConfigItem, edgeType: "related",
relation: rc.Relation}) and insert that wrapper into the graph/edges instead of
replacing nodes[rc.ID]; alternatively skip creating/reattaching when the
existing node is structural—apply the same change to the other related-edge
blocks noted (the ranges around 207-225 and 243-249) so toConfigTreeNode() never
reuses a structural node for a related edge.
Actionable comments posted: 1
♻️ Duplicate comments (4)
query/config_tree.go (1)
193-225: ⚠️ Potential issue | 🟠 Major
Related-edge wiring is still using the wrong parent and the wrong node instance.
rc.Path is read from the last segment even though models.ConfigItem.SetParent() appends the node's own ID there, so this falls back to targetNode.children instead of the actual parent. When nodes[rc.ID] already exists, the same structural node is also reused for the related edge, which keeps the old edgeType/relation and truncates the second occurrence once toConfigTreeNode() hits visited. Use the penultimate path segment and wrap structural nodes in a separate related node instead of reattaching the original.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/config_tree.go` around lines 193 - 225, The related-edge wiring is using the wrong path segment and reusing structural nodes; change the logic in the loop that processes related to use the penultimate segment of rc.Path (not the last) to resolve the actual parent ID (respecting models.ConfigItem.SetParent behavior), and when attaching a related edge do not reuse the existing structural ptrNode from nodes[rc.ID] — instead create a new wrapper ptrNode (e.g., edge node with edgeType "related" and relation rc.Relation) that contains ConfigItem from relatedToConfigItem(rc) and append that wrapper to the resolved parent (or targetNode if no parent found); keep using parentIDs, wired and toConfigTreeNode() as before to avoid visiting cycles.
query/catalog_search.go (2)
104-117: ⚠️ Potential issue | 🟠 Major
Resolve catalog selectors atomically.
This still treats “no matches” as success and silently caps selector resolution at 200 configs. The empty case lets downstream callers fall back to an unscoped query, and the cap can return partial results with no signal. ResolveConfigIDs() should either fully resolve the selector or return a clear error/overflow condition instead of caching an empty or truncated configIDs set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 104 - 117, The current ResolveConfigIDs implementation calls SearchResources and silently treats zero results as success and a 200-result page as complete; change it so ResolveConfigIDs validates the response: if len(response.Configs) == 0 return a clear "no matches for selector" error referencing b.CatalogID (do not set b.configIDs), and if len(response.Configs) >= request.Limit return an "selector overflow/truncated results" error to signal partial resolution; only populate b.configIDs from parsed UUIDs and return them when you have a definitive, non-truncated result. Use the existing SearchResources call, the b.CatalogID selector, and the b.configIDs field when implementing these checks.
124-158: ⚠️ Potential issue | 🟠 Major
ApplyClauses() is still too table-specific to be shared safely.
It hard-codes config-shaped columns and silently drops config_type/tags parse failures. That means non-config callers can end up targeting the wrong columns, while malformed filters quietly broaden the query instead of surfacing EINVALID. Make this helper table-aware and return an error so callers can fail fast.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/catalog_search.go` around lines 124 - 158, ApplyClauses currently assumes "created_at", "deleted_at", "type"/"agent_id"/"tags" columns and swallows parse errors; change BaseCatalogSearch.ApplyClauses to be table-aware and return an error (e.g., func (b *BaseCatalogSearch) ApplyClauses(tableName string) ([]clause.Expression, func(*gorm.DB) *gorm.DB, error) or add a returned error) so callers can provide the target table and receive parse failures; ensure you use the provided tableName when building column names instead of hard-coding "created_at"/"deleted_at"/"type"/"agent_id" and propagate errors from parseAndBuildFilteringQuery and labels.Parse (do not ignore them) and keep jsonColumnRequirementsToSQLClause usage but pass the resolved table/column names so the tag filter is applied to the correct JSON column; update calling sites to handle the new error/parameter.
query/config_changes.go (1)
169-196: ⚠️ Potential issue | 🟡 Minor
Reject unknown severities before building the predicate.
The HasPrefix("!") fast-path still bypasses the whitelist, so typoed negations make it through clause generation instead of returning EINVALID. For a fixed enum like severity, validate the raw value first and only then translate it into the query form.
Also applies to: 230-235
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@query/config_changes.go` around lines 169 - 196, The function formSeverityQuery currently fast-paths inputs starting with "!" before validating against the allowed severity enum, so typoed negations bypass validation; update formSeverityQuery to first validate the raw severity (check that severity or severity without leading '!' is one of models.Severity values) and only then build/translate the query string (preserve negation handling for validated values), returning "__invalid__" (or the existing invalid sentinel) for unknown values; apply the same validation-first change to the equivalent clause generation at the other occurrence referenced (the code around the similar severity handling later in the file).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@query/config_changes.go`:
- Around line 222-227: The parser return paths for parseAndBuildFilteringQuery
used for req.ChangeType ("change_type") and the fallback "catalog_id" currently
return raw parse errors; change both branches to wrap/return the error as
api.EINVALID (preserving the original parse error as the cause or message) so
malformed filter syntax consistently yields api.EINVALID; update the error
return in the parseErr != nil cases in the change_type handling and the
catalog_id fallback to return api.EINVALID containing the parser error.
---
Duplicate comments:
In `@query/catalog_search.go`:
- Around line 104-117: The current ResolveConfigIDs implementation calls
SearchResources and silently treats zero results as success and a 200-result
page as complete; change it so ResolveConfigIDs validates the response: if
len(response.Configs) == 0 return a clear "no matches for selector" error
referencing b.CatalogID (do not set b.configIDs), and if len(response.Configs)
>= request.Limit return an "selector overflow/truncated results" error to signal
partial resolution; only populate b.configIDs from parsed UUIDs and return them
when you have a definitive, non-truncated result. Use the existing
SearchResources call, the b.CatalogID selector, and the b.configIDs field when
implementing these checks.
- Around line 124-158: ApplyClauses currently assumes "created_at",
"deleted_at", "type"/"agent_id"/"tags" columns and swallows parse errors; change
BaseCatalogSearch.ApplyClauses to be table-aware and return an error (e.g., func
(b *BaseCatalogSearch) ApplyClauses(tableName string) ([]clause.Expression,
func(*gorm.DB) *gorm.DB, error) or add a returned error) so callers can provide
the target table and receive parse failures; ensure you use the provided
tableName when building column names instead of hard-coding
"created_at"/"deleted_at"/"type"/"agent_id" and propagate errors from
parseAndBuildFilteringQuery and labels.Parse (do not ignore them) and keep
jsonColumnRequirementsToSQLClause usage but pass the resolved table/column names
so the tag filter is applied to the correct JSON column; update calling sites to
handle the new error/parameter.
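The shape of the requested ApplyClauses change can be sketched without gorm. Filters are modeled as plain column conditions, and excludeColumns/applyClauses are illustrative names (the real signature uses clause.Expression and *gorm.DB); the two behaviors shown — skipping inapplicable columns and surfacing parse failures — are the ones the review asks for.

```go
package main

import (
	"fmt"
	"slices"
)

// filter models a single column condition; an empty value stands in for a
// filter expression that failed to parse.
type filter struct {
	column string
	value  string
}

// applyClauses skips columns the target table does not have and returns an
// error for unparseable filters instead of silently dropping them.
func applyClauses(filters []filter, excludeColumns []string) ([]string, error) {
	var out []string
	for _, f := range filters {
		if slices.Contains(excludeColumns, f.column) {
			continue // column not applicable to this table
		}
		if f.value == "" {
			return nil, fmt.Errorf("invalid filter on %q: empty value", f.column)
		}
		out = append(out, fmt.Sprintf("%s = '%s'", f.column, f.value))
	}
	return out, nil
}

func main() {
	clauses, err := applyClauses(
		[]filter{{"type", "Kubernetes::Pod"}, {"agent_id", "local"}},
		[]string{"agent_id"}, // this table has no agent_id column
	)
	fmt.Println(clauses, err)
}
```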
In `@query/config_changes.go`:
- Around line 169-196: The function formSeverityQuery currently fast-paths
inputs starting with "!" before validating against the allowed severity enum, so
typoed negations bypass validation; update formSeverityQuery to first validate
the raw severity (check that severity or severity without leading '!' is one of
models.Severity values) and only then build/translate the query string (preserve
negation handling for validated values), returning "__invalid__" (or the
existing invalid sentinel) for unknown values; apply the same validation-first
change to the equivalent clause generation at the other occurrence referenced
(the code around the similar severity handling later in the file).
In `@query/config_tree.go`:
- Around line 193-225: The related-edge wiring is using the wrong path segment
and reusing structural nodes; change the logic in the loop that processes
related to use the penultimate segment of rc.Path (not the last) to resolve the
actual parent ID (respecting models.ConfigItem.SetParent behavior), and when
attaching a related edge do not reuse the existing structural ptrNode from
nodes[rc.ID] — instead create a new wrapper ptrNode (e.g., edge node with
edgeType "related" and relation rc.Relation) that contains ConfigItem from
relatedToConfigItem(rc) and append that wrapper to the resolved parent (or
targetNode if no parent found); keep using parentIDs, wired and
toConfigTreeNode() as before to avoid visiting cycles.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 283b8601-54c0-48a5-af95-d792a8c8c32e
📒 Files selected for processing (8)
- models/config.go
- models/config_access.go
- query/catalog_search.go
- query/config_changes.go
- query/config_relations.go
- query/config_tree.go
- query/query_logger.go
- tests/setup/template.go
🚧 Files skipped from review as they are similar to previous changes (2)
- query/config_relations.go
- tests/setup/template.go
| if req.ChangeType != "" { | ||
| clause, err := parseAndBuildFilteringQuery(req.ChangeType, "change_type", false) | ||
| if err != nil { | ||
| return nil, err | ||
| if c, parseErr := parseAndBuildFilteringQuery(req.ChangeType, "change_type", false); parseErr == nil { | ||
| clauses = append(clauses, c...) | ||
| } else { | ||
| return nil, parseErr | ||
| } |
Malformed filter syntax should consistently return EINVALID.
change_type and the fallback catalog_id branch return raw parser errors here, while the other filters wrap the same failure mode with api.EINVALID. The same class of bad request can therefore surface differently depending on which field was malformed.
Also applies to: 280-286
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@query/config_changes.go` around lines 222 - 227, The parser return paths for
parseAndBuildFilteringQuery used for req.ChangeType ("change_type") and the
fallback "catalog_id" currently return raw parse errors; change both branches to
wrap/return the error as api.EINVALID (preserving the original parse error as
the cause or message) so malformed filter syntax consistently yields
api.EINVALID; update the error return in the parseErr != nil cases in the
change_type handling and the catalog_id fallback to return api.EINVALID
containing the parser error.
Use CredentialsFromJSONWithType with explicit ServiceAccount type instead of deprecated CredentialsFromJSON to remove staticcheck linter suppression.
New view aggregates artifact counts and sizes grouped by content type, storage location, and connection with associated metadata for reporting and analytics.
…aries Introduces QueryLogger/QueryTimer to capture query execution time, argument details, and result counts. Refactors multiple query functions to use new logging infrastructure. Extracts BaseCatalogSearch to consolidate common search parameters and validation logic across catalog search endpoints.
Allow PGRST_ARCH environment variable to override the default runtime architecture when installing postgrest binary, enabling custom architecture targets.
…aversal Implements proper pagination by counting total results before applying limit/offset. Adds new ConfigTree function to build hierarchical config relationships including parents, children, and related configs.
Switch from CredentialsFromJSONWithParams to CredentialsFromJSONWithType to properly handle different GCP credential formats. Add credential type detection helper to parse the JSON and extract the credential type field.
Implement QueryLogSummary interface to provide category-based summaries in query logs. Adds methods to multiple models (ConfigAnalysis, ConfigAccessSummary, ConfigChangeRow, RelatedConfig) and creates groupedSummary function to display counts by category (e.g. "CodeDeployment: 5, BackupCompleted: 3"). Also improves error handling in config tree resolution and adds embedded postgres cleanup hook.
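The category summary format described here ("CodeDeployment: 5, BackupCompleted: 3") can be sketched as below. The real groupedSummary operates on models implementing QueryLogSummary; plain string categories stand in, and the output is sorted for determinism (an assumption about the real formatting).

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// groupedSummary counts items by category and renders "Category: N" pairs
// in sorted order, joined with commas.
func groupedSummary(categories []string) string {
	counts := map[string]int{}
	for _, c := range categories {
		counts[c]++
	}
	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s: %d", k, counts[k]))
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(groupedSummary([]string{"CodeDeployment", "BackupCompleted", "CodeDeployment"}))
	// → BackupCompleted: 1, CodeDeployment: 2
}
```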
- formSeverityQuery returns error instead of "__invalid__" for unknown severity - ApplyClauses returns error and accepts excludeColumns to skip inapplicable columns - FindCatalogInsights/FindCatalogChanges propagate filter parse errors - Remove unsafe type assertion in config_insights clause filtering - timer.Results only called on successful DB query in GetRelatedConfigs - Fix path segment off-by-one in buildConfigTree (use penultimate segment) - Empty config resolution returns empty results instead of global unscoped query - FindConfigAccessByConfigIDs uses large PageSize to avoid truncation - Add Lenient flag to BaseCatalogSearch for global search use cases
Summary by CodeRabbit
New Features
Improvements
Tests