Use nvidia runtime handler for the daemonset #966
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: harche. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
/test e2e-bundle-4-19-runc |
/test e2e-bundle-runc |
Signed-off-by: Harshal Patil <12152047+harche@users.noreply.github.com>
/test e2e-bundle-runc |
/test e2e-bundle-4-19-runc |
Signed-off-by: Harshal Patil <12152047+harche@users.noreply.github.com>
/test e2e-bundle-runc |
@harche: all tests passed! Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/lgtm |
NVIDIA/gpu-operator#1578 broke the DAS operator daemonset, so going forward we need to set `runtimeClassName: nvidia` for the DAS operator daemonset, since all access to the GPU is handled via CDI, which takes care of mounting the required NVML libraries.

For testing: if I set `NVIDIA_RUNTIME_SET_AS_DEFAULT=true` in the driver toolkit section of the GPU ClusterPolicy, the DAS daemonset pod starts working again.

Fixes: https://issues.redhat.com/browse/OCPBUGS-65805
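The fix amounts to pinning the daemonset's pods to the `nvidia` runtime class rather than relying on it being the cluster-wide default runtime. A minimal sketch of the relevant pod-spec change (the daemonset name, labels, and image below are illustrative placeholders, not the actual DAS operator manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: das-daemonset            # illustrative name, not the real manifest
spec:
  selector:
    matchLabels:
      app: das-daemonset
  template:
    metadata:
      labels:
        app: das-daemonset
    spec:
      # Select the nvidia runtime handler (RuntimeClass installed by gpu-operator)
      # instead of requiring NVIDIA_RUNTIME_SET_AS_DEFAULT=true cluster-wide.
      runtimeClassName: nvidia
      containers:
      - name: das
        image: example.com/das:latest   # placeholder image
```

With the `nvidia` RuntimeClass selected, CDI injection mounts the required NVML libraries into the pod, so the nvidia runtime no longer needs to be the default for the whole node.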