Conversation
Signed-off-by: xonas1101 <aarushsingh1305@gmail.com>
@copilot Review this PR and make changes wherever necessary.

@xonas1101 I've opened a new pull request, #20, to work on those changes. Once the pull request is ready, I'll request review from you.
Pull request overview
This PR extends the logger controller to observe and log Kubernetes Nodes in addition to existing workload resources, and updates RBAC/watch configuration accordingly.
Changes:
- Add Node RBAC permissions and a Node watch to trigger reconciles.
- Implement Node listing/logging in `Reconcile`, gated by a new `nodes` resource selector.
- Consolidate watch enqueueing logic into a single "enqueue all loggers" mapping function.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| internal/controller/logger_controller.go | Adds node logging support, new resource gate (nodes), unified enqueue function, and a Node watch. |
| config/rbac/role.yaml | Grants the manager ClusterRole permissions to get/list/watch Nodes. |
```go
		node.Namespace + " " +
		fmt.Sprintf("%v", node.Status.Capacity)
}
```
nodeLine includes node.Namespace, but Nodes are cluster-scoped, so this field will always be empty and leads to confusing output (extra spacing and a misleading "namespace" column). Consider removing the namespace from the node output, or replacing it with a meaningful field (e.g., node role or labels) if you need a second column.
Suggested change:
```diff
-		node.Namespace + " " +
-		fmt.Sprintf("%v", node.Status.Capacity)
-}
+		fmt.Sprintf("%v", node.Status.Capacity)
+}
```
```go
func nodeLine(node corev1.Node) string {
	return "node/" + node.Name + " " +
		node.Namespace + " " +
		fmt.Sprintf("%v", node.Status.Capacity)
```
node.Status.Capacity is a map (corev1.ResourceList), and formatting it with %v bypasses resource.Quantity's pointer-receiver String() method, so the log shows each Quantity's internal representation rather than readable values, making logs noisy and hard to diff. Consider logging a stable subset (cpu/memory) or formatting the ResourceList with a small deterministic helper.
Suggested change:
```diff
-func nodeLine(node corev1.Node) string {
-	return "node/" + node.Name + " " +
-		node.Namespace + " " +
-		fmt.Sprintf("%v", node.Status.Capacity)
+func formatResourceList(rl corev1.ResourceList) string {
+	if rl == nil {
+		return "{}"
+	}
+	keys := make([]string, 0, len(rl))
+	for k := range rl {
+		keys = append(keys, string(k))
+	}
+	slices.Sort(keys)
+	var b strings.Builder
+	b.WriteString("{")
+	for i, k := range keys {
+		if i > 0 {
+			b.WriteString(" ")
+		}
+		b.WriteString(k)
+		b.WriteString(":")
+		qty := rl[corev1.ResourceName(k)]
+		b.WriteString(qty.String())
+	}
+	b.WriteString("}")
+	return b.String()
+}
+func nodeLine(node corev1.Node) string {
+	return "node/" + node.Name + " " +
+		node.Namespace + " " +
+		formatResourceList(node.Status.Capacity)
```
```go
if shouldLogNodes(&logger) {
	var nodelist corev1.NodeList
	if err := r.List(ctx, &nodelist, opts...); err != nil {
```
opts := listOptionsFromScope(&logger) may include client.InNamespace(...) for namespace-scoped loggers. Passing that into r.List for corev1.NodeList is not semantically valid (Nodes are cluster-scoped) and can lead to incorrect behavior or errors. Consider listing nodes without namespace list options (or skipping node logging when scope type is Namespace).
Suggested change:
```diff
-	if err := r.List(ctx, &nodelist, opts...); err != nil {
+	if err := r.List(ctx, &nodelist); err != nil {
```
```go
).Watches(
	&corev1.Node{},
	handler.EnqueueRequestsFromMapFunc(r.enqueueAllLoggers),
).Named("logger").
```
Adding a watch on corev1.Node{} will enqueue all Logger reconciles on every Node update. Node status changes (heartbeats/conditions) can be very frequent, which may cause reconcile/log storms. Consider removing the Node watch and relying on the existing periodic RequeueAfter, or add predicates to filter Node events down to meaningful changes.
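The filtering idea can be sketched in plain Go with a stand-in for the Node fields worth reacting to (the `nodeMeta` type and `meaningfulNodeChange` are hypothetical; real code would compare `*corev1.Node` objects inside a controller-runtime `predicate.Funcs` `UpdateFunc`):

```go
package main

import (
	"fmt"
	"reflect"
)

// nodeMeta is a hypothetical stand-in for the Node fields a predicate
// might compare; a real UpdateFunc would extract these from *corev1.Node.
type nodeMeta struct {
	Labels map[string]string
	Taints []string
}

// meaningfulNodeChange returns true only when labels or taints differ,
// so status-only heartbeat updates would not enqueue reconciles.
func meaningfulNodeChange(oldN, newN nodeMeta) bool {
	return !reflect.DeepEqual(oldN.Labels, newN.Labels) ||
		!reflect.DeepEqual(oldN.Taints, newN.Taints)
}

func main() {
	a := nodeMeta{Labels: map[string]string{"role": "worker"}}
	b := nodeMeta{Labels: map[string]string{"role": "worker"}}
	fmt.Println(meaningfulNodeChange(a, b)) // heartbeat-style no-op: false
	b.Labels["role"] = "control-plane"
	fmt.Println(meaningfulNodeChange(a, b)) // label change: true
}
```

Create/delete events could still pass through unfiltered, since new or removed nodes are exactly what a node logger should notice.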
```go
func shouldLogNodes(logger *loggerv1.Logger) bool {
	return slices.Contains(logger.Spec.Resources, "nodes")
}
```
There are controller tests covering namespace vs cluster scope for pods/deployments/replicasets, but no tests covering the new nodes resource. Adding a test case for Resources: ["nodes"] (and especially for Namespace scope) would help catch issues like accidentally applying namespace list options to node listing.
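A table-driven sketch of what such cases could assert, using simplified stand-ins for the Logger types (the `LoggerSpec` struct here is hypothetical; the real test would build `loggerv1.Logger` objects against the existing test suite):

```go
package main

import (
	"fmt"
	"slices"
)

// LoggerSpec is a hypothetical, simplified stand-in for loggerv1.LoggerSpec.
type LoggerSpec struct {
	Scope     string // "Cluster" or "Namespace"
	Resources []string
}

// shouldLogNodes mirrors the gate under review: nodes are logged only
// when "nodes" appears in the resource list, regardless of scope.
func shouldLogNodes(spec LoggerSpec) bool {
	return slices.Contains(spec.Resources, "nodes")
}

func main() {
	cases := []struct {
		name string
		spec LoggerSpec
		want bool
	}{
		{"cluster scope, nodes requested", LoggerSpec{"Cluster", []string{"nodes"}}, true},
		{"namespace scope, nodes requested", LoggerSpec{"Namespace", []string{"nodes"}}, true},
		{"nodes not requested", LoggerSpec{"Cluster", []string{"pods"}}, false},
	}
	for _, c := range cases {
		if got := shouldLogNodes(c.spec); got != c.want {
			fmt.Printf("FAIL %s: got %v, want %v\n", c.name, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```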
Adds logging of nodes. Really close to the finish line here; still needs to be tested in a local cluster, though.
Fixes #15