fix: extend pod hugepages based on dpdk_base_memory_mb#2399
Conversation
Graphite Automations: "Add anton/matt/sergey/kristina as reviewers on operator PRs" took an action on this PR (03/25/26) — 2 reviewers were added based on Anton Bykov's automation.
Pull request overview
Adjusts the hugepages "offset" used to compute the MEMORY env var for Weka pods, so that it better accounts for DPDK base memory needs on frontend containers.
Changes:
- Update `GetHugePagesOffset` default logic to scale the frontend offset with `NumCores` (64 MiB per core) instead of a fixed 200 MiB.
- Keep the previous 200 MiB default offset for non-frontend containers in the default mode branch.
```go
offset = 64 * container.Spec.NumCores // 64 is same as the value we put int dpdk_base_memory_mb in resources.json
} else {
```
The inline comment has a typo ("put int" should be "put in"). Also, the reference to dpdk_base_memory_mb in resources.json is hard to validate or trace from this codebase, and the raw 64 is an unexplained magic number. Please fix the typo, consider replacing 64 with a named constant (or a config-driven value), and update the comment to point to the actual source of truth for this offset.
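The named-constant suggestion could look like the sketch below. The constant names (`dpdkBaseMemoryMB`, `defaultHugePagesOffsetMB`) and the standalone `hugePagesOffsetMB` helper are hypothetical illustrations, not identifiers from this repository:

```go
package main

import "fmt"

// dpdkBaseMemoryMB mirrors the dpdk_base_memory_mb value in resources.json.
// Hypothetical constant name; the real source of truth lives in the repo.
const dpdkBaseMemoryMB = 64

// defaultHugePagesOffsetMB is the legacy fixed offset for non-frontend containers.
const defaultHugePagesOffsetMB = 200

// hugePagesOffsetMB sketches the reviewed default logic: scale the frontend
// offset with the container's core count, keep the fixed offset otherwise.
func hugePagesOffsetMB(hasFrontend bool, numCores int) int {
	if hasFrontend {
		return dpdkBaseMemoryMB * numCores
	}
	return defaultHugePagesOffsetMB
}

func main() {
	fmt.Println(hugePagesOffsetMB(true, 4))  // 256: 64 MiB per core on frontend
	fmt.Println(hugePagesOffsetMB(false, 4)) // 200: legacy fixed default
}
```

With a named constant, the comment can simply say the value must match resources.json, and a future config-driven value has an obvious place to plug in.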
```go
default:
	offset = 200
	if container.HasFrontend() {
		offset = 64 * container.Spec.NumCores // 64 is same as the value we put int dpdk_base_memory_mb in resources.json
```
Typo in comment: "put int" should be "put in".
```diff
- offset = 64 * container.Spec.NumCores // 64 is same as the value we put int dpdk_base_memory_mb in resources.json
+ offset = 64 * container.Spec.NumCores // 64 is same as the value we put in dpdk_base_memory_mb in resources.json
```
```go
if container.HasFrontend() {
	offset = 64 * container.Spec.NumCores // 64 is same as the value we put int dpdk_base_memory_mb in resources.json
} else {
	offset = 200
}
```
This new frontend-specific offset logic only runs when container.Spec.HugepagesOffset == 0. In the common WekaCluster/WekaClient flows, HugepagesOffset is always set (e.g., via template defaults), so this change may not actually affect the pods you are trying to fix. Consider updating the default hugepages offset where it is populated (templates/factory), or intentionally leaving HugepagesOffset unset so this function's defaults take effect.
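The shadowing concern can be sketched as below: an explicit spec value bypasses the new branch entirely. The function name `effectiveOffsetMB` and its flat signature are hypothetical simplifications of the reviewed code, not the repository's actual API:

```go
package main

import "fmt"

// effectiveOffsetMB sketches the control flow under review: the new
// frontend-aware default is only reached when the spec's offset is zero.
func effectiveOffsetMB(specOffsetMB int, hasFrontend bool, numCores int) int {
	if specOffsetMB != 0 {
		// Template/factory defaults typically populate this field,
		// so the branch below may never run for real pods.
		return specOffsetMB
	}
	if hasFrontend {
		return 64 * numCores // new frontend-specific default
	}
	return 200 // legacy default for non-frontend containers
}

func main() {
	fmt.Println(effectiveOffsetMB(200, true, 4)) // 200: spec value wins, new logic skipped
	fmt.Println(effectiveOffsetMB(0, true, 4))   // 256: new default applies only when unset
}
```

This is why the review suggests either moving the new default into the place that populates HugepagesOffset, or deliberately leaving the field unset so the function-level default is actually exercised.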
Force-pushed 2e1f2f0 to b63609b

No description provided.