Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Configuration used: defaults · Review profile: CHILL · Plan: Pro
🚥 Pre-merge checks: ✅ 1 passed · ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/controller/translator/v2/objectstore.go (1)
33-58: ⚠️ Potential issue | 🟠 Major

`storageSize` is no longer provisioning real object-store capacity. After parsing `infraSpec.StorageSize`, the only effect is `Master.VolumeSizeLimitMB` on line 58. Nothing in this builder uses that value to size the Seaweed volume servers themselves, so the user-facing storage setting now changes a master-side limit but not the actual storage allocated to the object store.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/controller/translator/v2/objectstore.go` around lines 33 - 58, The parsed storageQuantity is only being applied to Master.VolumeSizeLimitMB (volumeSizeLimitMB) but not used to size the actual volume servers, so update the Seaweed CR construction to apply volumeSizeLimitMB to the volume-server sizing field(s) on the Seaweed spec (e.g., set the appropriate field on seaweedCR.Spec.Volume or seaweedCR.Spec.VolumeServer/VolumeServers such as VolumeSizeLimitMB/SizeLimitMB/CapacityMB depending on the CRD), using the same computed int32 volumeSizeLimitMB (or storageQuantity) and preserving replication logic (replication variable and Master.DefaultReplication); ensure the value is applied in MB units and remove no-op usage so user-facing StorageSize actually affects provisioned object-store capacity.
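The fix described in this prompt can be sketched as follows. The struct shapes and the volume-server field name (`VolumeSizeLimitMB` on the Volume spec) are illustrative stand-ins, not the vendored CRD's verified schema:

```go
package main

import "fmt"

// Illustrative stand-ins for the vendored Seaweed CRD types; the real
// volume-server sizing field (SizeLimitMB vs CapacityMB etc.) depends
// on the CRD and must be checked against the vendored types.
type MasterSpec struct {
	VolumeSizeLimitMB  int32
	DefaultReplication string
}

type VolumeSpec struct {
	Replicas          int32
	VolumeSizeLimitMB int32 // assumed volume-server sizing field
}

type SeaweedSpec struct {
	Master *MasterSpec
	Volume *VolumeSpec
}

// applyStorageSize wires the parsed StorageSize (already converted to MB)
// into both the master-side limit and the volume-server sizing, so the
// user-facing setting actually provisions capacity.
func applyStorageSize(spec *SeaweedSpec, sizeMB int32, replication string) {
	if spec.Master == nil {
		spec.Master = &MasterSpec{}
	}
	spec.Master.VolumeSizeLimitMB = sizeMB
	spec.Master.DefaultReplication = replication // preserve replication logic
	if spec.Volume == nil {
		spec.Volume = &VolumeSpec{Replicas: 1}
	}
	spec.Volume.VolumeSizeLimitMB = sizeMB
}

func main() {
	spec := &SeaweedSpec{}
	applyStorageSize(spec, 10240, "001") // 10 GiB expressed in MB
	fmt.Println(spec.Master.VolumeSizeLimitMB, spec.Volume.VolumeSizeLimitMB)
}
```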
🧹 Nitpick comments (2)
hack/testing-manifests/seaweedfs/seaweedfs.yaml (1)
18-18: Harden test pod spec and pin image for reproducible runs.

Using `chrislusf/seaweedfs:latest` and no container `securityContext` makes test runs non-deterministic and weakens baseline security posture.

Suggested hardening + pinning:

```diff
-        image: chrislusf/seaweedfs:latest
+        image: chrislusf/seaweedfs:v3.74 # or pinned digest
+        securityContext:
+          runAsNonRoot: true
+          allowPrivilegeEscalation: false
+          readOnlyRootFilesystem: true
+          capabilities:
+            drop: ["ALL"]
```

Also applies to: 16-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hack/testing-manifests/seaweedfs/seaweedfs.yaml` at line 18, Replace the unpinned image reference and add a container securityContext: change the image field (currently "chrislusf/seaweedfs:latest") to a specific, immutable tag or digest (e.g., a versioned tag or sha256 digest) and add a securityContext on the container with at minimum runAsNonRoot: true, runAsUser (non-zero UID), readOnlyRootFilesystem: true, allowPrivilegeEscalation: false, and dropCapabilities (e.g., ALL); also consider adding an appropriate podSecurityContext and a seccompProfile if supported to further harden the pod spec (apply same changes to the other similar blocks referenced at lines 16-38).

pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/vendored_helpers.go (1)

5-5: Use `GroupName` in groupversion_info.go to avoid literal drift.

`GroupName` is now defined here, but `GroupVersion` still hardcodes the same string elsewhere. Centralizing on this constant will prevent accidental divergence.

♻️ Suggested follow-up diff:

```diff
--- a/pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/groupversion_info.go
+++ b/pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/groupversion_info.go
@@
-GroupVersion = schema.GroupVersion{Group: "seaweed.seaweedfs.com", Version: "v1"}
+GroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/vendored_helpers.go` at line 5, Replace the hardcoded group string in groupversion_info.go with the exported constant GroupName defined in vendored_helpers.go: locate the GroupVersion / SchemeGroupVersion declaration (the variable that currently sets Group: "seaweed.seaweedfs.com") and change it to use GroupName so the package references the single source of truth; ensure the package import path allows access to GroupName (adjust imports if necessary) and run a quick build to confirm no references break.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/crd/bases/apps.wandb.com_weightsandbiases.yaml`:
- Around line 3026-3028: The schema change replaced a legacy managed
object-store field with accessKey which breaks v2 storage-version manifests;
restore a migration path by keeping the legacy field alongside accessKey in the
CRD schema (add the original legacy property back, mark it deprecated in the
description), and implement a conversion/migration that maps the legacy property
to accessKey during rollout (either via CRD conversion webhooks or an internal
conversion step) so existing v2 manifests continue to validate and get migrated
to accessKey on write. Ensure v2 remains the storageVersion until migration is
complete.
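A minimal sketch of the internal conversion step, assuming a hypothetical `LegacyAccessKey` name for the removed property (the real legacy field name is not shown in this review):

```go
package main

import "fmt"

// objectStoreSpec mirrors the shape under discussion; LegacyAccessKey is a
// hypothetical name for the removed field, used only to illustrate the
// migration path.
type objectStoreSpec struct {
	LegacyAccessKey string // deprecated alias kept in the schema during rollout
	AccessKey       string
}

// migrateAccessKey maps the deprecated field onto AccessKey on write, so old
// v2 manifests keep validating and converge to the new field.
func migrateAccessKey(s *objectStoreSpec) {
	if s.AccessKey == "" && s.LegacyAccessKey != "" {
		s.AccessKey = s.LegacyAccessKey
	}
	s.LegacyAccessKey = "" // drop the alias once migrated
}

func main() {
	s := &objectStoreSpec{LegacyAccessKey: "old-key"}
	migrateAccessKey(s)
	fmt.Println(s.AccessKey)
}
```

In a real rollout this logic would live in a CRD conversion webhook or in the controller's read path, and v2 would stay the storage version until all persisted objects have been rewritten.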
In `@deploy/operator/Chart.yaml`:
- Around line 30-33: The Chart.yaml pins seaweedfs-operator to version 0.1.13
whose CRD is missing fields required by the vendored operator types; update the
chart CRD to match the vendored schema or bump to a chart release that contains
the full CRD schema. Specifically, replace the current CRD bundle for the
seaweedfs-operator with the vendored operator's complete CRD (including Master,
Volume, Filer, Admin, VolumeTopology, Volume.ResourceRequirements,
Volume.Replicas and Status subresources) or set Chart.yaml to a newer chart
version that already includes those fields so the deployed CRD mirrors the
controller's types.
In `@deploy/operator/templates/wandb-operator-seaweedfs-role.yaml`:
- Line 4: The YAML emits raw template scalars which break linting; wrap the
templated role/name and namespace scalars in quotes so they are valid YAML (e.g.
change name: {{ .Release.Name }}-seaweedfs to name: "{{ .Release.Name
}}-seaweedfs" and similarly quote the other two role name occurrences and
namespace: {{ .Release.Namespace }}), updating every metadata.name/role
name/namespace field that uses {{ ... }} so the template remains functional but
produces valid quoted YAML.
In `@deploy/telemetry/templates/telemetry-scrapes.yaml`:
- Around line 247-253: The SeaweedFS CR lacks the required
weightsandbiases.apps.wandb.com/component label so the Prometheus scrape
selector (expecting weightsandbiases.apps.wandb.com/component: seaweedfs)
matches nothing; call and apply the existing BuildWandbObjectStoreLabels when
constructing the Seaweed CR in ToObjectStoreVendorSpec
(internal/controller/translator/v2/objectstore.go) so the component label is
present on pods, and in addition verify that the SeaweedFS pod spec exposes a
container port named "metrics" (or update telemetry-scrapes.yaml to reference
the actual port name) so podMetricsEndpoints.port matches the container port.
In `@hack/testing-manifests/seaweedfs/seaweedfs.yaml`:
- Around line 24-25: The manifest exposes the S3 containerPort as 8333 which
conflicts with the controller's expected Port: "9000" in
internal/controller/infra/managed/objectstore/seaweedfs/conn.go; update the s3
containerPort entries (the one at the shown diff and the other occurrence around
lines 48-49) to 9000 so the test manifest matches the controller's connection
settings and avoids object-store connectivity failures.
- Around line 42-43: The Service name in the manifest must match the endpoint
constructed by NsNameBuilder.ServiceName() (which appends "-s3") used by
buildS3ConnInfo; change the Service metadata.name from "seaweedfs" to
"seaweedfs-s3" so the Kubernetes Service name equals the generated serviceName
(NsNameBuilder.ServiceName()) and the Host created in buildS3ConnInfo resolves
correctly.
In `@internal/controller/infra/managed/objectstore/seaweedfs/config.go`:
- Around line 60-64: The helper extractSecretKey currently returns an empty
string for missing credentials which lets malformed configs persist; change
extractSecretKey to validate the config shape and the specific credential by
matching on accessKey (e.g., accept an accessKey parameter), and return the
secretKey and an error instead of silently returning "". Inside extractSecretKey
check that config.Identities is non-empty, that the identity's Credentials
contains an entry with the expected accessKey, and if found return its
SecretKey; otherwise return a descriptive error. Update all callers of
extractSecretKey (and any write path that persists identities) to handle the
error and abort persistence on missing/invalid credentials to avoid saving empty
secretKey values.
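The validating variant of `extractSecretKey` could look like this; the config struct shapes are assumed stand-ins for what `parseS3IdentityConfig` returns:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the parsed S3 identity config.
type credential struct{ AccessKey, SecretKey string }
type identity struct{ Credentials []credential }
type s3Config struct{ Identities []identity }

// extractSecretKey validates the config shape and returns an error instead of
// an empty string, so malformed configs cannot be persisted silently.
func extractSecretKey(cfg *s3Config, accessKey string) (string, error) {
	if cfg == nil || len(cfg.Identities) == 0 {
		return "", errors.New("s3 identity config has no identities")
	}
	for _, id := range cfg.Identities {
		for _, c := range id.Credentials {
			if c.AccessKey != accessKey {
				continue
			}
			if c.SecretKey == "" {
				return "", fmt.Errorf("empty secretKey for accessKey %q", accessKey)
			}
			return c.SecretKey, nil
		}
	}
	return "", fmt.Errorf("no credential found for accessKey %q", accessKey)
}

func main() {
	cfg := &s3Config{Identities: []identity{
		{Credentials: []credential{{AccessKey: "ak", SecretKey: "sk"}}},
	}}
	key, err := extractSecretKey(cfg, "ak")
	fmt.Println(key, err)
}
```

Callers then abort persistence on a non-nil error rather than writing an empty `secretKey` into the config secret.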
In `@internal/controller/infra/managed/objectstore/seaweedfs/detach.go`:
- Around line 34-35: The current check treats a missing actual.Spec.Volume as OK
when desiredReplicas > 0; update the logic in detach.go (the block referencing
desiredReplicas and actual.Spec.Volume) so that when desiredReplicas > 0 you
treat actual.Spec.Volume == nil as a mismatch as well as a mismatched replica
count — i.e., change the guard to detect either actual.Spec.Volume == nil or
actual.Spec.Volume.Replicas != desiredReplicas and report drift/failure in that
case (use the same reporting path you already use for replica mismatches).
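The corrected guard can be sketched as a small predicate (the type is a stand-in for the vendored Seaweed Volume spec):

```go
package main

import "fmt"

// volumeSpec stands in for the vendored Seaweed Volume spec.
type volumeSpec struct{ Replicas int32 }

// volumeDrifted treats a missing Volume spec as drift whenever replicas are
// desired, in addition to a plain replica-count mismatch.
func volumeDrifted(actual *volumeSpec, desiredReplicas int32) bool {
	if desiredReplicas <= 0 {
		return false // nothing desired, nothing to drift
	}
	return actual == nil || actual.Replicas != desiredReplicas
}

func main() {
	fmt.Println(volumeDrifted(nil, 3), volumeDrifted(&volumeSpec{Replicas: 3}, 3))
}
```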
In `@internal/controller/infra/managed/objectstore/seaweedfs/read.go`:
- Around line 108-117: The S3-outage branch using cr.Status.S3.Replicas and
cr.Status.S3.ReadyReplicas is currently placed after the generic anyRunning
case, making the "red" path unreachable when other components
(master/filer/volume) are running; reorder the switch so the S3-specific
condition (cr.Status.S3.Replicas > 0 && cr.Status.S3.ReadyReplicas == 0) is
evaluated before the anyRunning branch, preserving the intended semantics of
setting status/reason to ConditionFalse/"red" when S3 has desired replicas but
no ready replicas.
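A minimal sketch of the reordered switch; only the "red" value comes from the review, the other return values are placeholders:

```go
package main

import "fmt"

// componentColor sketches the reordered switch: the S3-specific outage check
// must come before the generic anyRunning branch, or it is unreachable
// whenever master/filer/volume pods are running.
func componentColor(s3Replicas, s3Ready int32, anyRunning bool) string {
	switch {
	case s3Replicas > 0 && s3Ready == 0:
		return "red" // S3 has desired replicas but none ready
	case anyRunning:
		return "yellow" // placeholder for the generic running case
	default:
		return "gray" // placeholder
	}
}

func main() {
	// With the old ordering this input would have hit the anyRunning branch.
	fmt.Println(componentColor(2, 0, true))
}
```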
In `@internal/controller/infra/managed/objectstore/seaweedfs/write.go`:
- Around line 168-176: The code currently treats a parse failure of an existing
config as "missing" and generates a new secretKey; update the logic so that if
actual != nil and parseS3IdentityConfig(string(actual.Data[configFileName]))
returns a parse error, the reconcile returns that error instead of generating a
new key, and only call goutils.RandomAlphabetic(20) to mint a secret when actual
is nil or actual.Data[configFileName] is absent; adjust the secretKey assignment
flow around parseS3IdentityConfig, extractSecretKey, actual, configFileName, and
goutils.RandomAlphabetic to enforce this behavior.
In `@internal/controller/translator/objectstore.go`:
- Around line 12-14: The SeaweedImage constant currently uses the floating tag
"chrislusf/seaweedfs:latest"; change the constant SeaweedImage to reference a
specific, tested release (e.g., "chrislusf/seaweedfs:4.18") to ensure
reproducible reconciles, or refactor to read the image tag from configuration if
runtime flexibility is required; update the constant SeaweedImage (and any
places that import it) to use the pinned tag and add a comment explaining why
the pin is necessary.
In `@internal/controller/v2/objectstore.go`:
- Line 75: The change removed MinIO handling (e.g., calls like
seaweedfs.PurgeFinalizer) and must not leave existing ManagedObjectStore tenants
stranded; add a migration/compatibility gate in the
reconcile/status/detach/purge code paths that detects legacy MinIO-backed
instances (inspect ManagedObjectStore.Spec/Status backend markers) and either:
1) route legacy operations to the existing MinIO handlers (the legacy
finalizer/detach/purge logic) until migration completes, or 2) explicitly
block/deny the upgrade by returning a clear error/status condition instructing
an admin to run the migration; update the functions that currently call
seaweedfs.* (the reconcile loop, status updater, detach handler, and
purge/finalizer invocation such as where seaweedfs.PurgeFinalizer is called) to
perform this backend check and branch accordingly so legacy objects are
readable/finalizable until migrated.
---
Outside diff comments:
In `@internal/controller/translator/v2/objectstore.go`:
- Around line 33-58: The parsed storageQuantity is only being applied to
Master.VolumeSizeLimitMB (volumeSizeLimitMB) but not used to size the actual
volume servers, so update the Seaweed CR construction to apply volumeSizeLimitMB
to the volume-server sizing field(s) on the Seaweed spec (e.g., set the
appropriate field on seaweedCR.Spec.Volume or
seaweedCR.Spec.VolumeServer/VolumeServers such as
VolumeSizeLimitMB/SizeLimitMB/CapacityMB depending on the CRD), using the same
computed int32 volumeSizeLimitMB (or storageQuantity) and preserving replication
logic (replication variable and Master.DefaultReplication); ensure the value is
applied in MB units and remove no-op usage so user-facing StorageSize actually
affects provisioned object-store capacity.
---
Nitpick comments:
In `@hack/testing-manifests/seaweedfs/seaweedfs.yaml`:
- Line 18: Replace the unpinned image reference and add a container
securityContext: change the image field (currently "chrislusf/seaweedfs:latest")
to a specific, immutable tag or digest (e.g., a versioned tag or sha256 digest)
and add a securityContext on the container with at minimum runAsNonRoot: true,
runAsUser (non-zero UID), readOnlyRootFilesystem: true,
allowPrivilegeEscalation: false, and dropCapabilities (e.g., ALL); also consider
adding an appropriate podSecurityContext and a seccompProfile if supported to
further harden the pod spec (apply same changes to the other similar blocks
referenced at lines 16-38).
In
`@pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/vendored_helpers.go`:
- Line 5: Replace the hardcoded group string in groupversion_info.go with the
exported constant GroupName defined in vendored_helpers.go: locate the
GroupVersion / SchemeGroupVersion declaration (the variable that currently sets
Group: "seaweed.seaweedfs.com") and change it to use GroupName so the package
references the single source of truth; ensure the package import path allows
access to GroupName (adjust imports if necessary) and run a quick build to
confirm no references break.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 36ee394f-6b26-41a5-9844-192e1e1e01b5
⛔ Files ignored due to path filters (1)
`deploy/operator/Chart.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (56)
- api/v2/weightsandbiases_types.go
- cmd/main.go
- config/crd/bases/apps.wandb.com_weightsandbiases.yaml
- config/rbac/role.yaml
- deploy/operator/Chart.yaml
- deploy/operator/crds/apps.wandb.com_weightsandbiases.yaml
- deploy/operator/templates/wandb-operator-seaweedfs-role.yaml
- deploy/operator/values.yaml
- deploy/telemetry/templates/telemetry-scrapes.yaml
- hack/testing-manifests/minio/minio.yaml
- hack/testing-manifests/seaweedfs/seaweedfs.yaml
- hack/testing-manifests/wandb/kustomize/overlays/disable-objectstore/kustomization.yaml
- hack/testing-manifests/wandb/kustomize/overlays/disable-objectstore/patch.yaml
- internal/controller/infra/managed/minio/tenant/config.go
- internal/controller/infra/managed/minio/tenant/config_test.go
- internal/controller/infra/managed/objectstore/seaweedfs/config.go
- internal/controller/infra/managed/objectstore/seaweedfs/config_test.go
- internal/controller/infra/managed/objectstore/seaweedfs/conn.go
- internal/controller/infra/managed/objectstore/seaweedfs/detach.go
- internal/controller/infra/managed/objectstore/seaweedfs/naming.go
- internal/controller/infra/managed/objectstore/seaweedfs/purge.go
- internal/controller/infra/managed/objectstore/seaweedfs/read.go
- internal/controller/infra/managed/objectstore/seaweedfs/seaweedfs_suite_test.go
- internal/controller/infra/managed/objectstore/seaweedfs/status.go
- internal/controller/infra/managed/objectstore/seaweedfs/write.go
- internal/controller/suite_test.go
- internal/controller/translator/objectstore.go
- internal/controller/translator/v2/objectstore.go
- internal/controller/v2/infra_routes.go
- internal/controller/v2/networking_route_builders_test.go
- internal/controller/v2/objectstore.go
- internal/controller/weightsandbiases_controller.go
- internal/webhook/v2/weightsandbiases_defaulter_objectstore_test.go
- internal/webhook/v2/weightsandbiases_webhook.go
- internal/webhook/v2/weightsandbiases_webhook_test.go
- pkg/vendored/minio-operator/README.md
- pkg/vendored/minio-operator/crds/minio.min.io_tenants.yaml
- pkg/vendored/minio-operator/minio.min.io/v2/constants.go
- pkg/vendored/minio-operator/minio.min.io/v2/conversion.go
- pkg/vendored/minio-operator/minio.min.io/v2/doc.go
- pkg/vendored/minio-operator/minio.min.io/v2/helper.go
- pkg/vendored/minio-operator/minio.min.io/v2/helper_test.go
- pkg/vendored/minio-operator/minio.min.io/v2/labels.go
- pkg/vendored/minio-operator/minio.min.io/v2/names.go
- pkg/vendored/minio-operator/minio.min.io/v2/register.go
- pkg/vendored/minio-operator/minio.min.io/v2/types.go
- pkg/vendored/minio-operator/minio.min.io/v2/utils.go
- pkg/vendored/minio-operator/minio.min.io/v2/vendored_helpers.go
- pkg/vendored/minio-operator/minio.min.io/v2/zz_generated.deepcopy.go
- pkg/vendored/minio-operator/minio.min.io/v2/zz_generated.defaults.go
- pkg/vendored/seaweedfs-operator/README.md
- pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml
- pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/groupversion_info.go
- pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/types.go
- pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/vendored_helpers.go
- pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/zz_generated.deepcopy.go
💤 Files with no reviewable changes (17)
- pkg/vendored/minio-operator/minio.min.io/v2/conversion.go
- pkg/vendored/minio-operator/minio.min.io/v2/doc.go
- pkg/vendored/minio-operator/minio.min.io/v2/vendored_helpers.go
- hack/testing-manifests/minio/minio.yaml
- internal/controller/infra/managed/minio/tenant/config_test.go
- pkg/vendored/minio-operator/minio.min.io/v2/zz_generated.defaults.go
- pkg/vendored/minio-operator/README.md
- pkg/vendored/minio-operator/minio.min.io/v2/helper_test.go
- pkg/vendored/minio-operator/minio.min.io/v2/utils.go
- pkg/vendored/minio-operator/minio.min.io/v2/labels.go
- pkg/vendored/minio-operator/minio.min.io/v2/helper.go
- pkg/vendored/minio-operator/minio.min.io/v2/register.go
- internal/controller/infra/managed/minio/tenant/config.go
- pkg/vendored/minio-operator/minio.min.io/v2/names.go
- pkg/vendored/minio-operator/minio.min.io/v2/types.go
- pkg/vendored/minio-operator/minio.min.io/v2/constants.go
- pkg/vendored/minio-operator/minio.min.io/v2/zz_generated.deepcopy.go
```yaml
properties:
  minioBrowserSetting:
  accessKey:
    type: string
```
Preserve a migration path for the renamed managed object-store field.
v2 is still the storage version here, so replacing the legacy field with accessKey is a breaking schema change for existing v2 manifests. Old specs will start failing validation on upgrade, and legacy values can be dropped on the next write unless you keep a compatibility alias/conversion during the rollout.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/crd/bases/apps.wandb.com_weightsandbiases.yaml` around lines 3026 -
3028, The schema change replaced a legacy managed object-store field with
accessKey which breaks v2 storage-version manifests; restore a migration path by
keeping the legacy field alongside accessKey in the CRD schema (add the original
legacy property back, mark it deprecated in the description), and implement a
conversion/migration that maps the legacy property to accessKey during rollout
(either via CRD conversion webhooks or an internal conversion step) so existing
v2 manifests continue to validate and get migrated to accessKey on write. Ensure
v2 remains the storageVersion until migration is complete.
```yaml
  - name: seaweedfs-operator
    version: 0.1.13
    repository: https://seaweedfs.github.io/seaweedfs-operator/
    condition: seaweedfs-operator.enabled
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Vendored Seaweed API group/version =="
rg -n 'GroupVersion = schema.GroupVersion' \
pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/groupversion_info.go -C 2
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
curl -fsSL \
"https://seaweedfs.github.io/seaweedfs-operator/seaweedfs-operator-0.1.13.tgz" \
-o "$tmp/seaweedfs-operator.tgz"
tar -xzf "$tmp/seaweedfs-operator.tgz" -C "$tmp"
echo
echo "== CRD groups/versions from dependency chart =="
rg -n 'kind: CustomResourceDefinition|group:|name: seaweeds\.seaweed\.seaweedfs\.com|versions:' \
  "$tmp/seaweedfs-operator/crds" -C 2
Repository: wandb/operator
Length of output: 408
🏁 Script executed:
# Check if deploy/operator/Chart.yaml exists and show the dependencies section
cd deploy/operator && cat Chart.yaml | grep -A 20 "dependencies:"
Repository: wandb/operator
Length of output: 755
🏁 Script executed:
# Look for vendored CRD files in the repository
find . -path "*seaweedfs-operator*" -name "*.yaml" -o -name "*.yml" | head -20
Repository: wandb/operator
Length of output: 215
🏁 Script executed:
# Try to find what versions of seaweedfs-operator chart might be available
# Check if there's any documentation or lock files referencing the version
rg "seaweedfs-operator.*0\.1\.13" --type yaml --type json
Repository: wandb/operator
Length of output: 40
🏁 Script executed:
# Check the vendored package structure for CRDs
find pkg/vendored/seaweedfs-operator -type f -name "*.go" | head -20
Repository: wandb/operator
Length of output: 360
🏁 Script executed:
# Examine the vendored CRD file to see what group/version it defines
cat pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml | head -50
Repository: wandb/operator
Length of output: 1652
🏁 Script executed:
# Check if there are other chart-related files that might indicate available versions
find . -name "Chart.lock" -o -name "requirements.lock" -o -name "*.tgz" | head -10
Repository: wandb/operator
Length of output: 86
🏁 Script executed:
# Try to fetch the Helm index from the repository to see available versions
curl -fsSL "https://seaweedfs.github.io/seaweedfs-operator/index.yaml" 2>/dev/null | head -100
Repository: wandb/operator
Length of output: 3875
🏁 Script executed:
# Check for any documentation about seaweedfs operator versions in the repo
rg -i "seaweedfs.*version|0\.1\." --type markdown --type txt | head -20
Repository: wandb/operator
Length of output: 418
🏁 Script executed:
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
# Fetch from the correct GitHub release URL (as shown in Chart.lock)
curl -fsSL \
"https://github.com/seaweedfs/seaweedfs-operator/releases/download/seaweedfs-operator-0.1.13/seaweedfs-operator-0.1.13.tgz" \
-o "$tmp/seaweedfs-operator.tgz"
tar -xzf "$tmp/seaweedfs-operator.tgz" -C "$tmp"
echo "== Chart CRD group/version =="
if [ -d "$tmp/seaweedfs-operator/crds" ]; then
head -30 "$tmp/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml" | grep -E "group:|kind:|name:|version:" || true
else
echo "No crds directory found"
fi
echo
echo "== Vendored CRD group/version (for comparison) =="
head -30 pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml | grep -E "group:|kind:|name:|version:" || true
Repository: wandb/operator
Length of output: 536
🏁 Script executed:
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
curl -fsSL \
"https://github.com/seaweedfs/seaweedfs-operator/releases/download/seaweedfs-operator-0.1.13/seaweedfs-operator-0.1.13.tgz" \
-o "$tmp/seaweedfs-operator.tgz" 2>/dev/null
tar -xzf "$tmp/seaweedfs-operator.tgz" -C "$tmp"
# Compare the full CRD files
if diff -q "$tmp/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml" \
pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml >/dev/null 2>&1; then
echo "✓ Chart CRD and vendored CRD are identical"
else
echo "CRD differences detected:"
diff "$tmp/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml" \
pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml | head -50 || true
fi
Repository: wandb/operator
Length of output: 2669
🏁 Script executed:
# Check file sizes and line counts
echo "Chart CRD size:"
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
curl -fsSL \
"https://github.com/seaweedfs/seaweedfs-operator/releases/download/seaweedfs-operator-0.1.13/seaweedfs-operator-0.1.13.tgz" \
-o "$tmp/seaweedfs-operator.tgz" 2>/dev/null
tar -xzf "$tmp/seaweedfs-operator.tgz" -C "$tmp"
wc -l "$tmp/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml"
echo "Vendored CRD size:"
wc -l pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml
echo
echo "Checking if vendored CRD is truncated (should end with valid YAML):"
tail -5 pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml
Repository: wandb/operator
Length of output: 416
🏁 Script executed:
# Check if the controller code actually references CRD schema fields
# Look for references to Seaweed type fields in the controller code
rg -t go "type Seaweed struct|Admin|Filer|Master|Volume" \
pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/types.go \
  | head -30
Repository: wandb/operator
Length of output: 1683
🏁 Script executed:
# Check reconciliation code to see what fields from the Seaweed CRD it uses
rg -t go "\.Spec\.|\.Status\." . --max-count=10 | grep -i seaweed | head -20
Repository: wandb/operator
Length of output: 1395
Verify chart CRD completeness against vendored schema—chart version 0.1.13 CRD is a subset of vendored types.
The vendored seaweedfs-operator CRD defines a comprehensive schema (10,727 lines) including Master, Volume, Filer, Admin, and VolumeTopology specs. The chart version 0.1.13 deploys a stripped-down CRD (6,269 lines) that omits significant schema fields the controller code actually references (e.g., Volume.ResourceRequirements, Volume.Replicas, Status subresources).
This creates a validation gap: users can submit Seaweed manifests that the chart's CRD accepts but the controller may not fully validate, potentially causing reconciliation issues or runtime failures when the controller references missing or unvalidated fields.
Ensure the pinned chart version fully includes all CRD schema fields required by the vendored types in this operator, or upgrade to a chart version with a complete CRD.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/operator/Chart.yaml` around lines 30 - 33, The Chart.yaml pins
seaweedfs-operator to version 0.1.13 whose CRD is missing fields required by the
vendored operator types; update the chart CRD to match the vendored schema or
bump to a chart release that contains the full CRD schema. Specifically, replace
the current CRD bundle for the seaweedfs-operator with the vendored operator's
complete CRD (including Master, Volume, Filer, Admin, VolumeTopology,
Volume.ResourceRequirements, Volume.Replicas and Status subresources) or set
Chart.yaml to a newer chart version that already includes those fields so the
deployed CRD mirrors the controller's types.
```diff
 kind: ClusterRole
 metadata:
-  name: {{ .Release.Name }}-minio
+  name: {{ .Release.Name }}-seaweedfs
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
rg -n '^\s*name:\s*\{\{.*\}\}.*$' deploy/operator/templates/wandb-operator-seaweedfs-role.yaml
Repository: wandb/operator
Length of output: 179
🏁 Script executed:
cat -n deploy/operator/templates/wandb-operator-seaweedfs-role.yaml
Repository: wandb/operator
Length of output: 1082
🏁 Script executed:
#!/bin/bash
# Check if YAML linting tools are available and test the file
if command -v yamllint &> /dev/null; then
echo "=== yamllint output ==="
yamllint deploy/operator/templates/wandb-operator-seaweedfs-role.yaml 2>&1 || true
else
echo "yamllint not available"
fi
# Also check the context of those lines more carefully
echo "=== Lines around the templated names ==="
sed -n '1,10p' deploy/operator/templates/wandb-operator-seaweedfs-role.yaml
echo "..."
sed -n '25,35p' deploy/operator/templates/wandb-operator-seaweedfs-role.yaml
Repository: wandb/operator
Length of output: 1220
🏁 Script executed:
#!/bin/bash
# Test if quoted version would be valid YAML
# Create a test file with quoted versions
cat > /tmp/test_quoted.yaml << 'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: "{{ .Release.Name }}-seaweedfs"
rules:
- apiGroups: [""]
resources: ["test"]
verbs: ["get"]
EOF
# Test with yamllint
if command -v yamllint &> /dev/null; then
echo "=== Quoted version with yamllint ==="
yamllint /tmp/test_quoted.yaml 2>&1 || true
fi
# Test with python YAML parser
python3 << 'PYTHON'
import yaml
try:
with open('/tmp/test_quoted.yaml', 'r') as f:
yaml.safe_load(f)
print("Quoted version: YAML parsing successful")
except Exception as e:
print(f"Quoted version: YAML parsing failed - {e}")
PYTHON
Repository: wandb/operator
Length of output: 225
Quote the templated role names.
These scalar values are emitted as raw {{ ... }} and cause YAML linting errors. Quoting them resolves the syntax errors while keeping the template valid.
Suggested fix
```diff
-  name: {{ .Release.Name }}-seaweedfs
+  name: "{{ .Release.Name }}-seaweedfs"
```

Applies to lines 4, 30, and 34. Also consider quoting line 38: `namespace: {{ .Release.Namespace }}`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-  name: {{ .Release.Name }}-seaweedfs
+  name: "{{ .Release.Name }}-seaweedfs"
```
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'
(syntax)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/operator/templates/wandb-operator-seaweedfs-role.yaml` at line 4, The
YAML emits raw template scalars which break linting; wrap the templated
role/name and namespace scalars in quotes so they are valid YAML (e.g. change
name: {{ .Release.Name }}-seaweedfs to name: "{{ .Release.Name }}-seaweedfs" and
similarly quote the other two role name occurrences and namespace: {{
.Release.Namespace }}), updating every metadata.name/role name/namespace field
that uses {{ ... }} so the template remains functional but produces valid quoted
YAML.
```yaml
          weightsandbiases.apps.wandb.com/component: seaweedfs
      namespaceSelector:
        matchNames:
          - {{ include "telemetry.namespace" . }}
      podMetricsEndpoints:
        - port: minio-port
          path: /minio/metrics/v3
          scheme: https
          tlsConfig:
            insecureSkipVerify: true
        - port: metrics
          path: /metrics
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Where seaweedfs component labels are set =="
rg -n --iglob '*.{go,yaml,yml}' \
'weightsandbiases\.apps\.wandb\.com/component.*seaweedfs|WandbComponentLabel.*seaweedfs'
echo
echo "== Where seaweedfs metrics ports/endpoints are defined =="
rg -n --iglob '*.{go,yaml,yml}' \
  'seaweedfs|name:\s*metrics|port:\s*metrics|path:\s*/metrics'

Repository: wandb/operator
Length of output: 10427
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Searching for where weightsandbiases.apps.wandb.com/component label is set =="
rg -n 'weightsandbiases\.apps\.wandb\.com/component' --type yaml --type go
echo
echo "== Checking SeaweedFS pod specs and label templates in controller code =="
rg -n -A 5 'podSpec|PodSpec|podTemplate|PodTemplate' \
internal/controller/translator/v2/objectstore.go
echo
echo "== Checking SeaweedFS Seaweed CR creation with labels in object-store code =="
fd -t f 'objectstore' -x rg -l 'labels|Labels|metadata' {} \; | head -20
echo
echo "== Looking for label/metadata patches applied to SeaweedFS resources =="
rg -n 'Labels\[|labels:' internal/controller/infra/managed/objectstore/seaweedfs/ --type go

Repository: wandb/operator
Length of output: 552
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Usages of WandbComponentLabel constant =="
rg -n 'WandbComponentLabel' --type go --type yaml
echo
echo "== Check where seaweedfs component label value 'seaweedfs' is set =="
rg -n '"seaweedfs"' internal/controller/translator/retention.go -B 5 -A 5
echo
echo "== Check retention.go file context around WandbComponentLabel =="
sed -n '1,50p' internal/controller/translator/retention.go

Repository: wandb/operator
Length of output: 648
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Full retention.go file =="
cat -n internal/controller/translator/retention.go
echo
echo "== Check translator/v2/retention.go to see componentName assignment =="
cat -n internal/controller/translator/v2/retention.go | head -50
echo
echo "== Search for 'seaweedfs' string in object-store code that might set component =="
rg -n '"seaweedfs"' internal/controller/infra/managed/objectstore/ --type go
echo
echo "== Check naming.go in seaweedfs package =="
cat -n internal/controller/infra/managed/objectstore/seaweedfs/naming.go

Repository: wandb/operator
Length of output: 1942
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== List files in seaweedfs package =="
ls -la internal/controller/infra/managed/objectstore/seaweedfs/
echo
echo "== Check all .go files in seaweedfs for label/Labels strings =="
rg -n 'Labels\[|labels' internal/controller/infra/managed/objectstore/seaweedfs/ --type go
echo
echo "== Check objectstore v2 controller that handles seaweedfs =="
rg -n -B 5 -A 10 'seaweedfs\.' internal/controller/v2/objectstore.go | head -80
echo
echo "== Check if seaweedfs pods get labeled in write.go =="
cat -n internal/controller/infra/managed/objectstore/seaweedfs/write.go

Repository: wandb/operator
Length of output: 10853
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check ToObjectStoreVendorSpec function that creates SeaweedFS CR =="
rg -n -A 30 'func ToObjectStoreVendorSpec' internal/controller/translator/v2/objectstore.go
echo
echo "== Check ToSeaweed function or similar that builds the CR =="
rg -n 'ToSeaweed\|NewSeaweed' internal/controller/translator/v2/ --type go
echo
echo "== Full objectstore.go translator to see full picture =="
wc -l internal/controller/translator/v2/objectstore.go

Repository: wandb/operator
Length of output: 1148
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Full ToObjectStoreVendorSpec function =="
sed -n '20,120p' internal/controller/translator/v2/objectstore.go
echo
echo "== Check if there are more labels being added after line 50 =="
sed -n '45,75p' internal/controller/translator/v2/objectstore.goRepository: wandb/operator
Length of output: 3782
🏁 Script executed:
#!/bin/bash
set -euo pipeflight
echo "== Check if BuildWandbObjectStoreLabels is used anywhere =="
rg -n 'BuildWandbObjectStoreLabels' --type go
echo
echo "== Check seaweedfs-operator CRD to see what ports are exposed =="
fd -t f 'seaweed' -x rg -l 'port|metrics' {} \; | grep -i crd | head -5
echo
echo "== Search for metrics port in seaweedfs operator vendored code =="
rg -n 'metrics\|Port' pkg/vendored/seaweedfs-operator/ --type yaml --type go | grep -i port | head -20
echo
echo "== Check seaweedfs testing manifest for port definitions =="
cat hack/testing-manifests/seaweedfs/seaweedfs.yaml

Repository: wandb/operator
Length of output: 113
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check if BuildWandbObjectStoreLabels is used anywhere =="
rg -n 'BuildWandbObjectStoreLabels' --type go
echo
echo "== Search for metrics port in seaweedfs operator vendored code =="
rg -n 'metrics\|Port' pkg/vendored/seaweedfs-operator/ --type yaml --type go | grep -i port | head -20
echo
echo "== Check seaweedfs testing manifest for port definitions =="
cat hack/testing-manifests/seaweedfs/seaweedfs.yaml

Repository: wandb/operator
Length of output: 321
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check seaweedfs testing manifest for port definitions =="
cat hack/testing-manifests/seaweedfs/seaweedfs.yaml
echo
echo ""
echo "== Check if metrics port name is exposed by seaweedfs pods =="
rg -n 'containerPort\|port:' pkg/vendored/seaweedfs-operator/crds/ --type yaml | head -30
echo
echo ""
echo "== Search for exposed container ports in seaweed CRD =="
fd -t f '*.yaml' pkg/vendored/seaweedfs-operator/crds/ -x cat {} \; | grep -A 3 -B 1 'containerPort\|port:' | head -50

Repository: wandb/operator
Length of output: 1046
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== List CRD files in vendored seaweedfs-operator =="
find pkg/vendored/seaweedfs-operator/crds -type f -name '*.yaml'
echo
echo "== Check the seaweeds CRD for port definitions =="
cat pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml | grep -A 200 'containerPort\|ports:' | head -100
echo
echo "== Search SeaweedFS types for port/service definitions =="
rg -n 'containerPort\|ServicePort\|metrics' pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/types.go | head -30

Repository: wandb/operator
Length of output: 233
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Full seaweeds CRD file (first 500 lines) =="
head -500 pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml
echo
echo "== Check if metrics is mentioned anywhere in CRD =="
rg -i 'metrics' pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml

Repository: wandb/operator
Length of output: 25183
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Search for metricsPort definition in CRD =="
rg -B 5 -A 5 'metricsPort' pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml | head -100
echo
echo "== Check if metrics port name is exposed in pod spec =="
rg -B 10 -A 10 'metricsPort' pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/types.go | head -80
echo
echo "== Confirm the metrics container port name =="
wc -l pkg/vendored/seaweedfs-operator/crds/seaweed.seaweedfs.com_seaweeds.yaml

Repository: wandb/operator
Length of output: 7036
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check how metricsPort is used in seaweedfs-operator to generate pods =="
rg -n 'metricsPort' pkg/vendored/seaweedfs-operator/seaweed.seaweedfs.com/v1/ -A 2 | head -40
echo
echo "== Check if there's a container port with name 'metrics' exposed =="
rg -n 'name.*metrics\|metrics.*name' pkg/vendored/seaweedfs-operator/ --type go --type yaml
echo
echo "== Verify the controller code that translates metricsPort to container spec =="
fd -t f '*.go' pkg/vendored/seaweedfs-operator/ -x rg -l 'containerPort' {} \;

Repository: wandb/operator
Length of output: 2610
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Final confirmation: check the entire ToObjectStoreVendorSpec to see if labels are patched later =="
sed -n '20,120p' internal/controller/translator/v2/objectstore.go | grep -i label -A 2 -B 2
echo
echo "== Confirm BuildWandbObjectStoreLabels is never actually used =="
rg -C 5 'BuildWandbObjectStoreLabels' internal/controller/translator/v2/objectstore.go
echo
echo "== Double-check if the Seaweed CR gets any label patches applied elsewhere =="
rg -n 'seaweedCR.*Labels\|setLabels\|AddLabels' internal/controller/ --type go

Repository: wandb/operator
Length of output: 1068
Add missing component labels to SeaweedFS custom resource.
The scrape selector at line 247 expects pods labeled with weightsandbiases.apps.wandb.com/component: seaweedfs, but the Seaweed CR created in ToObjectStoreVendorSpec (internal/controller/translator/v2/objectstore.go) only applies the app label. The function BuildWandbObjectStoreLabels exists but is never called. Without this label, the scrape will match zero pods and silently collect no metrics.
Additionally, confirm that SeaweedFS pods expose a container port named metrics at line 252–253, or update the scrape configuration to use the correct port name.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/telemetry/templates/telemetry-scrapes.yaml` around lines 247 - 253,
The SeaweedFS CR lacks the required weightsandbiases.apps.wandb.com/component
label so the Prometheus scrape selector (expecting
weightsandbiases.apps.wandb.com/component: seaweedfs) matches nothing; call and
apply the existing BuildWandbObjectStoreLabels when constructing the Seaweed CR
in ToObjectStoreVendorSpec (internal/controller/translator/v2/objectstore.go) so
the component label is present on pods, and in addition verify that the
SeaweedFS pod spec exposes a container port named "metrics" (or update
telemetry-scrapes.yaml to reference the actual port name) so
podMetricsEndpoints.port matches the container port.
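As a minimal sketch of the label fix (names here are illustrative stand-ins for `BuildWandbObjectStoreLabels` and the CR construction, not the repository's actual API), the component label can be merged into whatever labels the Seaweed CR already carries so the scrape selector matches:

```go
package main

import "fmt"

// buildObjectStoreLabels is a hypothetical stand-in for the unused helper
// the review mentions; it supplies the component label the Prometheus
// scrape selector expects on SeaweedFS pods.
func buildObjectStoreLabels(app string) map[string]string {
	return map[string]string{
		"app": app,
		"weightsandbiases.apps.wandb.com/component": "seaweedfs",
	}
}

// mergeLabels overlays extra onto a copy of base, so CR construction keeps
// its existing app label while gaining the component label.
func mergeLabels(base, extra map[string]string) map[string]string {
	out := make(map[string]string, len(base)+len(extra))
	for k, v := range base {
		out[k] = v
	}
	for k, v := range extra {
		out[k] = v
	}
	return out
}

func main() {
	labels := mergeLabels(map[string]string{"app": "seaweedfs"}, buildObjectStoreLabels("seaweedfs"))
	fmt.Println(labels["weightsandbiases.apps.wandb.com/component"]) // seaweedfs
}
```

Merging rather than replacing keeps any labels the vendored operator already relies on intact.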
- containerPort: 8333
  name: s3
S3 service port is inconsistent with controller connection settings.
This manifest exposes S3 on 8333, but internal/controller/infra/managed/objectstore/seaweedfs/conn.go currently writes connection secrets with Port: "9000" (Line 17-20 there). If this test manifest is used with that controller path, object-store connectivity will fail.
Suggested alignment (manifest-side)
spec:
ports:
- - port: 8333
+ - port: 9000
+ targetPort: s3
  name: s3

Also applies to: 48-49
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hack/testing-manifests/seaweedfs/seaweedfs.yaml` around lines 24 - 25, The
manifest exposes the S3 containerPort as 8333 which conflicts with the
controller's expected Port: "9000" in
internal/controller/infra/managed/objectstore/seaweedfs/conn.go; update the s3
containerPort entries (the one at the shown diff and the other occurrence around
lines 48-49) to 9000 so the test manifest matches the controller's connection
settings and avoids object-store connectivity failures.
if desiredReplicas > 0 && actual.Spec.Volume != nil {
	if actual.Spec.Volume.Replicas != desiredReplicas {
Treat a missing Spec.Volume as a detached-spec mismatch.
With desiredReplicas > 0, this guard now returns success when the detached Seaweed resource has no volume stanza at all. That hides drift instead of reporting it.
Suggested fix
-if desiredReplicas > 0 && actual.Spec.Volume != nil {
-	if actual.Spec.Volume.Replicas != desiredReplicas {
+if desiredReplicas > 0 {
+	if actual.Spec.Volume == nil || actual.Spec.Volume.Replicas != desiredReplicas {
		return []metav1.Condition{

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if desiredReplicas > 0 {
	if actual.Spec.Volume == nil || actual.Spec.Volume.Replicas != desiredReplicas {
		return []metav1.Condition{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/infra/managed/objectstore/seaweedfs/detach.go` around
lines 34 - 35, The current check treats a missing actual.Spec.Volume as OK when
desiredReplicas > 0; update the logic in detach.go (the block referencing
desiredReplicas and actual.Spec.Volume) so that when desiredReplicas > 0 you
treat actual.Spec.Volume == nil as a mismatch as well as a mismatched replica
count — i.e., change the guard to detect either actual.Spec.Volume == nil or
actual.Spec.Volume.Replicas != desiredReplicas and report drift/failure in that
case (use the same reporting path you already use for replica mismatches).
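The nil-as-drift rule above can be isolated into a small predicate, sketched here with an illustrative `volumeSpec` type rather than the vendored Seaweed CRD types:

```go
package main

import "fmt"

type volumeSpec struct{ Replicas int32 }

// volumeDrift reports whether a detached spec diverges from the desired
// replica count. A missing volume stanza counts as drift: treating nil as
// "in sync" is exactly the bug the review flags.
func volumeDrift(desired int32, actual *volumeSpec) bool {
	if desired <= 0 {
		return false // nothing desired, nothing to compare
	}
	return actual == nil || actual.Replicas != desired
}

func main() {
	fmt.Println(volumeDrift(3, nil))                      // true: missing stanza is drift
	fmt.Println(volumeDrift(3, &volumeSpec{Replicas: 3})) // false: in sync
}
```

Keeping the nil check inside the same predicate as the count comparison makes it hard for a later refactor to reintroduce the silent-success path.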
 switch {
 case allReady && anyRunning:
 	status = metav1.ConditionTrue
-case miniov2.HealthStatusYellow:
+	reason = "green"
 case anyRunning:
 	status = metav1.ConditionFalse
 	reason = "yellow"
-case miniov2.HealthStatusRed:
+case cr.Status.S3.Replicas > 0 && cr.Status.S3.ReadyReplicas == 0:
 	status = metav1.ConditionFalse
 	reason = "red"
Prioritize the S3-outage branch before the generic anyRunning branch.
Line 112 makes the "red" path unreachable whenever any non-S3 component is ready. With the current order, master/filer/volume can keep this condition at "yellow" even when S3 has desired replicas and ReadyReplicas == 0, so the controller reports a degraded store instead of a broken S3 endpoint.
Suggested fix
switch {
+case cr.Status.S3.Replicas > 0 && cr.Status.S3.ReadyReplicas == 0:
+ status = metav1.ConditionFalse
+ reason = "red"
case allReady && anyRunning:
status = metav1.ConditionTrue
reason = "green"
case anyRunning:
status = metav1.ConditionFalse
reason = "yellow"
-case cr.Status.S3.Replicas > 0 && cr.Status.S3.ReadyReplicas == 0:
- status = metav1.ConditionFalse
- reason = "red"
default:
status = metav1.ConditionUnknown
reason = ctrlcommon.UnknownReason
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/infra/managed/objectstore/seaweedfs/read.go` around lines
108 - 117, The S3-outage branch using cr.Status.S3.Replicas and
cr.Status.S3.ReadyReplicas is currently placed after the generic anyRunning
case, making the "red" path unreachable when other components
(master/filer/volume) are running; reorder the switch so the S3-specific
condition (cr.Status.S3.Replicas > 0 && cr.Status.S3.ReadyReplicas == 0) is
evaluated before the anyRunning branch, preserving the intended semantics of
setting status/reason to ConditionFalse/"red" when S3 has desired replicas but
no ready replicas.
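Because a Go expression switch takes the first true case top to bottom, the reordering above is behavior-changing. A minimal sketch (field names illustrative, not the controller's actual signatures) shows the corrected precedence:

```go
package main

import "fmt"

// healthReason mirrors the suggested case ordering: the S3-outage check
// must precede the generic anyRunning check, or it can never fire while
// master/filer/volume keep anyRunning true.
func healthReason(s3Replicas, s3Ready int32, allReady, anyRunning bool) string {
	switch {
	case s3Replicas > 0 && s3Ready == 0:
		return "red" // S3 desired but fully down: outage, not degradation
	case allReady && anyRunning:
		return "green"
	case anyRunning:
		return "yellow"
	default:
		return "unknown"
	}
}

func main() {
	// Other components running but S3 fully down: reports "red", not "yellow".
	fmt.Println(healthReason(2, 0, false, true))
	fmt.Println(healthReason(2, 2, true, true))
}
```

With the old ordering the first call would have returned "yellow", masking the S3 outage behind a generic degraded state.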
 if actual != nil {
-	rootPassword = parseMinioConfigFile(string(actual.Data[configFileName])).rootPassword
+	existingConfig, parseErr := parseS3IdentityConfig(string(actual.Data[configFileName]))
+	if parseErr == nil {
+		secretKey = extractSecretKey(existingConfig)
+	}
 }
-if rootPassword == "" {
-	if rootPassword, err = goutils.RandomAlphabetic(20); err != nil {
+if secretKey == "" {
+	if secretKey, err = goutils.RandomAlphabetic(20); err != nil {
 		return nil, err
Don't silently rotate credentials when the existing config is unreadable.
If config.json exists but cannot be parsed, this falls back to generating a fresh secretKey and rewriting the Secret. That turns config corruption into an implicit credential rotation. Fail the reconcile on parse errors and only mint a new key when the file is genuinely absent.
Suggested fix
if actual != nil {
- existingConfig, parseErr := parseS3IdentityConfig(string(actual.Data[configFileName]))
- if parseErr == nil {
- secretKey = extractSecretKey(existingConfig)
- }
+ if raw, ok := actual.Data[configFileName]; ok {
+ existingConfig, parseErr := parseS3IdentityConfig(string(raw))
+ if parseErr != nil {
+ return nil, fmt.Errorf("parse existing %s: %w", configFileName, parseErr)
+ }
+ secretKey = extractSecretKey(existingConfig)
+ }
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/infra/managed/objectstore/seaweedfs/write.go` around
lines 168 - 176, The code currently treats a parse failure of an existing config
as "missing" and generates a new secretKey; update the logic so that if actual
!= nil and parseS3IdentityConfig(string(actual.Data[configFileName])) returns a
parse error, the reconcile returns that error instead of generating a new key,
and only call goutils.RandomAlphabetic(20) to mint a secret when actual is nil
or actual.Data[configFileName] is absent; adjust the secretKey assignment flow
around parseS3IdentityConfig, extractSecretKey, actual, configFileName, and
goutils.RandomAlphabetic to enforce this behavior.
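The absent-versus-corrupt distinction can be sketched as a small helper; `parse` and `mint` below stand in for `parseS3IdentityConfig` and `goutils.RandomAlphabetic`, and the names are illustrative, not the repository's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

var errCorrupt = errors.New("config present but unparseable")

// resolveSecretKey mints a new key only when the config is genuinely
// absent (raw == nil); a parse failure surfaces as an error so config
// corruption never turns into a silent credential rotation.
func resolveSecretKey(raw []byte, parse func([]byte) (string, error), mint func() string) (string, error) {
	if raw == nil {
		return mint(), nil // no existing config: safe to generate
	}
	key, err := parse(raw)
	if err != nil {
		return "", fmt.Errorf("parse existing config: %w", err) // fail the reconcile
	}
	return key, nil
}

func main() {
	parse := func(b []byte) (string, error) {
		if string(b) != "ok" {
			return "", errCorrupt
		}
		return "existing-key", nil
	}
	mint := func() string { return "new-key" }

	k, _ := resolveSecretKey(nil, parse, mint)
	fmt.Println(k) // new-key: absent file mints a fresh credential
	_, err := resolveSecretKey([]byte("garbage"), parse, mint)
	fmt.Println(err != nil) // true: corrupt file errors instead of rotating
}
```

Returning the wrapped parse error keeps the corrupted Secret untouched for an operator to inspect, instead of overwriting it with freshly generated credentials.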
const (
	MinioImage           = "quay.io/minio/minio:latest"
	DevVolumesPerServer  = int32(1)
	ProdVolumesPerServer = int32(4)
	SeaweedImage         = "chrislusf/seaweedfs:latest"
)
🧩 Analysis chain
🌐 Web query:
What is the latest stable production-ready container tag for chrislusf/seaweedfs, and is using the floating :latest tag recommended for Kubernetes operators?
💡 Result:
The latest stable production-ready container tag for chrislusf/seaweedfs is 4.18. This tag was pushed about 2 hours ago and corresponds to the recent GitHub release 4.18 from 2026-04-02. Other variants like 4.18_full and 4.18_large_disk are also available from the same push. Using the floating :latest tag is not recommended for Kubernetes operators in production. Examples from the SeaweedFS operator and Helm charts default to :latest, but general Kubernetes best practices and Bitnami documentation emphasize using specific version tags or immutable tags to avoid unexpected updates and ensure reproducibility. The :latest tag was last pushed 1 day ago, likely pointing to a prior version.
Citations:
- 1: https://hub.docker.com/r/chrislusf/seaweedfs/tags
- 2: https://hub.docker.com/r/chrislusf/seaweedfs/tags?ordering=last_updated&page=1
- 3: https://github.com/seaweedfs/seaweedfs
- 4: https://github.com/seaweedfs/seaweedfs-operator
- 5: https://github.com/seaweedfs/seaweedfs/blob/master/k8s/charts/seaweedfs/values.yaml
Pin the SeaweedFS image to a specific stable version.
chrislusf/seaweedfs:latest makes reconciles non-reproducible and can pull incompatible upstream changes without any code change here. Pin to a tested release tag instead (current stable is 4.18); use configuration if versioning needs flexibility.
Suggested change
- SeaweedImage = "chrislusf/seaweedfs:latest"
+ SeaweedImage = "chrislusf/seaweedfs:4.18"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/translator/objectstore.go` around lines 12 - 14, The
SeaweedImage constant currently uses the floating tag
"chrislusf/seaweedfs:latest"; change the constant SeaweedImage to reference a
specific, tested release (e.g., "chrislusf/seaweedfs:4.18") to ensure
reproducible reconciles, or refactor to read the image tag from configuration if
runtime flexibility is required; update the constant SeaweedImage (and any
places that import it) to use the pinned tag and add a comment explaining why
the pin is necessary.
 specNamespacedName := managedObjectStoreSpecNamespacedName(spec)
 onDeleteRule := translatorv2.ToObjectStoreOnDeleteRule(wandb, wandb.GetRetentionPolicy(spec.ManagedInfraSpec))
-return tenant.PurgeFinalizer(ctx, client, specNamespacedName, onDeleteRule)
+return seaweedfs.PurgeFinalizer(ctx, client, specNamespacedName, onDeleteRule)
Add an upgrade path before switching every managed-object-store operation to SeaweedFS.
These lines remove the last managed MinIO control path from reconcile, status, detach, and purge. An in-place operator upgrade for an existing managed MinIO install would no longer be able to read or finalize the legacy backend, so it can strand the old tenant/data while reconciling SeaweedFS resources instead. Please add a compatibility/migration gate for legacy MinIO-backed ManagedObjectStore instances, or explicitly block the upgrade until a migration has run.
Also applies to: 93-93, 108-108, 136-136, 150-155, 172-179
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/controller/v2/objectstore.go` at line 75, The change removed MinIO
handling (e.g., calls like seaweedfs.PurgeFinalizer) and must not leave existing
ManagedObjectStore tenants stranded; add a migration/compatibility gate in the
reconcile/status/detach/purge code paths that detects legacy MinIO-backed
instances (inspect ManagedObjectStore.Spec/Status backend markers) and either:
1) route legacy operations to the existing MinIO handlers (the legacy
finalizer/detach/purge logic) until migration completes, or 2) explicitly
block/deny the upgrade by returning a clear error/status condition instructing
an admin to run the migration; update the functions that currently call
seaweedfs.* (the reconcile loop, status updater, detach handler, and
purge/finalizer invocation such as where seaweedfs.PurgeFinalizer is called) to
perform this backend check and branch accordingly so legacy objects are
readable/finalizable until migrated.
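One shape such a gate could take (the backend marker and its values are hypothetical; the repository may track the vendor differently) is a single routing function called from reconcile, status, detach, and purge:

```go
package main

import (
	"errors"
	"fmt"
)

// routeBackend is a hypothetical migration gate: inspect a backend marker
// on the stored ManagedObjectStore and refuse to proceed while a legacy
// MinIO tenant exists, rather than silently reconciling SeaweedFS over it.
func routeBackend(backend string) (string, error) {
	switch backend {
	case "", "seaweedfs": // unset means newly created under the new default
		return "seaweedfs", nil
	case "minio":
		return "", errors.New("legacy minio backend detected: run migration before upgrading")
	default:
		return "", fmt.Errorf("unknown object-store backend %q", backend)
	}
}

func main() {
	v, _ := routeBackend("seaweedfs")
	fmt.Println(v) // seaweedfs
	_, err := routeBackend("minio")
	fmt.Println(err != nil) // true: upgrade blocked until migration runs
}
```

Centralizing the check in one function keeps all four code paths (reconcile, status, detach, purge) consistent about what happens to pre-upgrade installs.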
Summary by CodeRabbit