Context
Currently, `ActionsClient` in `actions/k8s/client.go` uses a direct `client.WithWatch` for all K8s operations. Every `Get()` and `List()` call hits the API server directly. Since we already run a Watch loop (`doWatch`) that tracks all TaskAction changes, we can use a controller-runtime informer cache to serve reads from local memory instead. This reduces API server load and improves read latency.
Additionally, the manual Watch loop becomes redundant: the cache internally runs an informer that watches the resource. We replace `doWatch`/`watchLoop` with informer event handlers that feed the existing subscriber notification system.
Originally posted by @machichima in #6902 (comment)
Approach
Use controller-runtime `cache.Cache` for reads (`Get`/`List`) and informer event handlers for watch notifications. Keep the existing `client.WithWatch` (or just `client.Client`) for writes (`Create`, `Delete`, `Status().Update()`).
Files to modify
1. `app/context.go` — Add K8sCache field
   - Add `K8sCache cache.Cache` field to `SetupContext` (may be nil)
   - Add import for `sigs.k8s.io/controller-runtime/pkg/cache`
2. `app/k8s.go` — Add InitK8sCache factory
   - Add new function: `func InitK8sCache(restConfig *rest.Config, scheme *runtime.Scheme) (cache.Cache, error)`
   - Uses `cache.New(restConfig, cache.Options{Scheme: scheme})`
   - Does NOT start the cache (the caller starts it as a background worker)
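The factory could look like the following minimal sketch, assuming the signature above; everything beyond the two calls named in this plan is illustrative:

```go
package app

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// InitK8sCache builds a controller-runtime cache but does NOT start it;
// the caller is expected to run Start(ctx) in a background worker.
func InitK8sCache(restConfig *rest.Config, scheme *runtime.Scheme) (cache.Cache, error) {
	return cache.New(restConfig, cache.Options{Scheme: scheme})
}
```

Keeping `Start` out of the factory matches the existing worker model: lifecycle stays with `SetupContext`, not with construction.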
3. `actions/k8s/client.go` — Core changes
   Struct changes:
   - Replace `k8sClient client.WithWatch` with:
     - `k8sClient client.Client` — for writes only
     - `k8sCache cache.Cache` — for reads + informer event handlers
   - Replace `stopCh chan struct{}` and `watching bool` with `handlerRegistration toolscache.ResourceEventHandlerRegistration`
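The resulting fields would look roughly like this (a sketch; the surrounding subscriber fields and field order are assumptions):

```go
type ActionsClient struct {
	k8sClient client.Client // writes only: Create, Delete, Status().Update()
	k8sCache  cache.Cache   // reads (Get/List) and the source of the informer

	// Replaces stopCh/watching; lets StopWatching call RemoveEventHandler.
	handlerRegistration toolscache.ResourceEventHandlerRegistration

	// ...existing subscriber/notification fields stay unchanged...
}
```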
   Constructor:
   - Update `NewActionsClient` to accept `cache.Cache` as a second parameter
   Read operations — redirect to cache:
   - All 5 `c.k8sClient.Get(...)` calls → `c.k8sCache.Get(...)`:
     - Line 123: Get parent TaskAction in `Enqueue()`
     - Line 165: Get TaskAction in `AbortAction()`
     - Line 182: Get TaskAction in `GetState()`
     - Line 199: Get TaskAction in `PutState()`
     - Line 288: Get TaskAction in `GetTaskAction()`
   - Both `c.k8sClient.List(...)` calls → `c.k8sCache.List(...)`:
     - Line 236: `ListRunActions()`
     - Line 261: `ListChildActions()`
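Each redirect is mechanical because `cache.Cache` implements the same `client.Reader` interface as the client. For example, a `Get` inside `GetState()` would change roughly like this (variable names are illustrative, not taken from the file):

```go
var ta executorv1.TaskAction

// Before: every read hits the API server.
// err := c.k8sClient.Get(ctx, client.ObjectKey{Namespace: ns, Name: name}, &ta)

// After: served from the informer's local store; the signature is identical,
// so only the receiver changes.
err := c.k8sCache.Get(ctx, client.ObjectKey{Namespace: ns, Name: name}, &ta)
```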
   Write operations — unchanged, stay on k8sClient:
   - Line 94: `EnsureNamespaceExists()` (writes)
   - Line 150: `Create()`
   - Line 169: `Delete()`
   - Line 215: `Status().Update()`
   Watch — replace manual loop with informer event handlers:
   - Replace `StartWatching()` — use `c.k8sCache.GetInformer(ctx, &executorv1.TaskAction{})`, then `AddEventHandler(toolscache.ResourceEventHandlerFuncs{...})`:
     - `AddFunc` → calls `handleWatchEvent` with `watch.Added`
     - `UpdateFunc` → calls `handleWatchEvent` with `watch.Modified`
     - `DeleteFunc` → calls `handleWatchEvent` with `watch.Deleted`
   - Remove `watchLoop()` and `doWatch()` — no longer needed
   - Simplify `StopWatching()` — remove the event handler registration via `RemoveEventHandler`, or make it a no-op (the cache lifecycle handles cleanup)
   - `handleWatchEvent()` stays as-is
   - Add import `toolscache "k8s.io/client-go/tools/cache"`
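Put together, the informer-based `StartWatching` might look like the following sketch; the `handleWatchEvent(watch.Event)` signature, the tombstone handling, and the error handling are assumptions about the existing code, not taken from it:

```go
func (c *ActionsClient) StartWatching(ctx context.Context) error {
	informer, err := c.k8sCache.GetInformer(ctx, &executorv1.TaskAction{})
	if err != nil {
		return err
	}
	reg, err := informer.AddEventHandler(toolscache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			c.handleWatchEvent(watch.Event{Type: watch.Added, Object: obj.(*executorv1.TaskAction)})
		},
		UpdateFunc: func(_, newObj interface{}) {
			c.handleWatchEvent(watch.Event{Type: watch.Modified, Object: newObj.(*executorv1.TaskAction)})
		},
		DeleteFunc: func(obj interface{}) {
			// Deletes can arrive wrapped in a DeletedFinalStateUnknown tombstone.
			if tomb, ok := obj.(toolscache.DeletedFinalStateUnknown); ok {
				obj = tomb.Obj
			}
			if ta, ok := obj.(*executorv1.TaskAction); ok {
				c.handleWatchEvent(watch.Event{Type: watch.Deleted, Object: ta})
			}
		},
	})
	if err != nil {
		return err
	}
	c.handlerRegistration = reg // kept so StopWatching can call RemoveEventHandler
	return nil
}
```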
4. `actions/setup.go` — Wire cache creation and startup
   - Create the cache: call `app.InitK8sCache(sc.K8sConfig, executor.Scheme())`
   - Start the cache as a worker: `sc.AddWorker("actions-cache", func(ctx) { k8sCache.Start(ctx) })`
   - Add a readiness check: verify `k8sCache.WaitForCacheSync(ctx)`
   - Pass the cache to `NewActionsClient(sc.K8sClient, k8sCache, ...)`
   - Remove the "actions-watcher" worker (it currently just waits for ctx.Done) — no longer needed since the cache lifecycle manages the informer
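The wiring could be sketched as follows. The `AddWorker` callback signature and the surrounding error handling are assumptions; note that `WaitForCacheSync` only completes once `Start` is running, so the worker must be registered before the sync check:

```go
k8sCache, err := app.InitK8sCache(sc.K8sConfig, executor.Scheme())
if err != nil {
	return err
}
sc.K8sCache = k8sCache

// Run the cache's informers for the process lifetime; Start blocks until
// ctx is cancelled.
sc.AddWorker("actions-cache", func(ctx context.Context) error {
	return k8sCache.Start(ctx)
})

// Readiness: block until the initial List/Watch has populated the cache,
// so the first reads do not see an empty store.
if !k8sCache.WaitForCacheSync(ctx) {
	return fmt.Errorf("actions cache failed to sync")
}

actionsClient := k8s.NewActionsClient(sc.K8sClient, k8sCache /* , existing args... */)
```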
5. `actions/cmd/main.go` — Fix standalone binary
   - Pass `executor.Scheme()` instead of `nil` to `InitKubernetesClient` (needed for the cache to decode TaskAction)
   - Store the config and set `sc.K8sConfig = k8sConfig` (currently discarded with `_`)
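Assuming `InitKubernetesClient` returns the rest config alongside the client (implied by the `_` being discarded today; the exact signature is an assumption), the fix is roughly:

```go
// Was: app.InitKubernetesClient(nil), with the config discarded via _.
k8sClient, k8sConfig, err := app.InitKubernetesClient(executor.Scheme())
if err != nil {
	return err
}
sc.K8sClient = k8sClient
sc.K8sConfig = k8sConfig // needed later by InitK8sCache
```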
6. `actions/k8s/client_test.go` — Update tests
   - Update `ActionsClient` struct literals in tests to include the new `k8sCache` field (`nil` is fine for existing tests that only exercise `notifyRunService` and helpers)
   - Add new tests for the informer-based watch behavior
Verification
- Unit tests: `go test ./actions/k8s/... -v`
- Integration: run the manager (`manager/cmd/main.go`), create a TaskAction, and verify:
  - `GetState`/`GetTaskAction` read from the cache (check logs — no direct API calls)
  - Watch subscribers still receive updates via informer events
  - `Create`/`Delete`/`Status().Update()` still go to the API server
- Standalone: run `actions/cmd/main.go`, verify the cache starts and syncs