
nvidia-container-toolkit: 1.9.0 -> 1.15.0-rc.3 #278969

Merged
SomeoneSerge merged 2 commits into NixOS:master from aaronmondal:nvidia-container-toolkit-v1.15.0-rc.1
Feb 13, 2024

Conversation

@aaronmondal
Contributor

@aaronmondal aaronmondal commented Jan 5, 2024

Description of changes

This change bumps nvidia-container-toolkit from 1.9.0 to 1.15.0-rc.3. The bump itself is unlikely to be very noticeable yet, but it allows deprecating the ancient nvidia-docker packages in future commits. It also adds the nvidia-ctk tool, which is part of the toolkit.

Fixes #278155
Fixes #272235

Things done

  • Built on platform(s)
    • x86_64-linux
    • aarch64-linux
    • x86_64-darwin
    • aarch64-darwin
  • For non-Linux: Is sandboxing enabled in nix.conf? (See Nix manual)
    • sandbox = relaxed
    • sandbox = true
  • Tested, as applicable:
  • Tested compilation of all packages that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD". Note: all changes have to be committed, also see nixpkgs-review usage
  • Tested basic functionality of all binary files (usually in ./result/bin/)
  • 24.05 Release Notes (or backporting 23.05 and 23.11 Release notes)
    • (Package updates) Added a release notes entry if the change is major or breaking
    • (Module updates) Added a release notes entry if the change is significant
    • (Module addition) Added a release notes entry if adding a new NixOS module
  • Fits CONTRIBUTING.md.

Add a 👍 reaction to pull requests you find important.

@NixOSInfra NixOSInfra added the 12.first-time contribution This PR is the author's first one; please be gentle! label Jan 5, 2024
@ofborg ofborg bot added 10.rebuild-darwin: 0 This PR does not cause any packages to rebuild on Darwin. 10.rebuild-linux: 1-10 This PR causes between 1 and 10 packages to rebuild on Linux. labels Jan 5, 2024
@aaronmondal
Contributor Author

aaronmondal commented Jan 5, 2024

FYI, bumping to 1.14.3 doesn't seem like a good option since the resulting build segfaults at runtime due to NVIDIA/go-nvml#36.

In the current state of this PR the nvidia-ctk tool doesn't properly autodetect CUDA, but 1.15.0-rc.1 allows setting the CUDA library search path with the new --library-search-path option, like so:

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml --library-search-path=/run/opengl-driver/lib

Regular docker run --gpus=all mode works, but the new CDI variant does not yet:

docker run --rm -ti --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
      ubuntu nvidia-smi -L
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: ldconfig: Can't create temporary cache file /nix/store/9y8pmvk8gdwwznmkzxa6pwyah52xy3nk-glibc-2.38-27/etc/ld.so.cache~: No such file or directory: unknown.

@aaronmondal aaronmondal force-pushed the nvidia-container-toolkit-v1.15.0-rc.1 branch 3 times, most recently from 0618e5e to 55efde8 Compare January 6, 2024 01:41
@bachp
Member

bachp commented Jan 8, 2024

@aaronmondal Did you try to run podman with the generated CDI spec? I think it would make sense to rework the podman nvidia integration to work on top of CDI.

@aaronmondal
Contributor Author

@bachp Not yet, but this is something I want to work on as well. I'll have to rework this after #280087, which should also make it easier to debug the podman CDI setup.

Comment on lines 47 to 74
Contributor

Is there much point giving them the real ldconfig if ldconfig is never going to do the expected thing on NixOS (and shouldn't be expected to, in general)?

Contributor Author

Without this it triggers this error:

nvidia-container-cli: ldcache error: open failed: /sbin/ldconfig: no such file or directory: unknown.

I'm not sure whether this is the right way to fix it though.

Contributor

I think the correct patch would be to make ldconfig optional. Using ldconfig the way they do is wrong by design.

Member

Replace with a link to coreutils true, perhaps?
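That suggestion could be sketched as an overlay fragment like the one below. This is a hypothetical illustration only: the source file name and the exact substitution target are assumptions, not the actual patch in this PR.

```nix
# Hypothetical sketch: point the hardcoded /sbin/ldconfig reference at a
# no-op binary, since a real ldconfig will never do the expected thing on
# NixOS. The file path in substituteInPlace is illustrative.
final: prev: {
  libnvidia-container = prev.libnvidia-container.overrideAttrs (old: {
    postPatch = (old.postPatch or "") + ''
      substituteInPlace src/cli/common.c \
        --replace "/sbin/ldconfig" "${prev.coreutils}/bin/true"
    '';
  });
}
```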

Contributor

@SomeoneSerge SomeoneSerge Jan 11, 2024

In the current state of this PR the nvidia-ctk tool doesn't properly autodetect cuda

Do you know which "mode" it's trying to use to discover the libraries? E.g. libnvidia-container, csv, etc

EDIT: also, in case it wasn't clear, this ugly patch was sufficient to make nvidia-container-cli info (from libnvidia-container) work again, and I believe nvidia-ctk uses nvidia-container-cli in one of the "modes": 35b1062#diff-2b4dc4504c07052fdeb991c058ab1cd1b3fc215f2475fddab960ebea2db772e7

Contributor

Thanks for opening the PR! Fingers crossed jaja

Contributor Author

CDI generation auto-detects mode as "nvml".

It's using different modes during container invocations though. In #280184 it's complaining like this, so I guess there it's failing in "legacy" mode?

docker run --gpus=all -it ubuntu bash

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: ldcache error: open failed: /sbin/ldconfig: no such file or directory: unknown.
ERRO[0000] error waiting for container:

It works in this patch because of one of the changed /sbin/ldconfig paths. I'm not sure which one though 😅 I'm also not sure whether just brute-forcing this is the way to go. It's certainly curious that this works with the nixpkgs variant of ldconfig. It seems to just look for the file but not invoke it?

Contributor

This particular error should be addressed by the linked patch

Contributor Author

I see. The patch approach seems better than the substitutions I'm doing here. Looks like the remaining issue is the runc/crun crash. I believe this is the same issue you also encountered in #279235 (comment). Not being able to get any useful information out of --debug mode certainly makes this somewhat tricky to figure out.

@aaronmondal
Contributor Author

Ok podman doesn't work:

podman run -it --rm --device nvidia.com/gpu=all ubuntu bash

Error: OCI runtime error: crun: {"msg":"error executing hook `/run/current-system/sw/bin/nvidia-ctk` (exit code: 1)","level":"error","time":"..."}

The corresponding CDI spec looks like this:

/etc/cdi/nvidia.yaml
---
cdiVersion: 0.5.0
containerEdits:
  deviceNodes:
  - path: /dev/nvidia-modeset
  - path: /dev/nvidia-uvm
  - path: /dev/nvidia-uvm-tools
  - path: /dev/nvidiactl
  hooks:
  - args:
    - nvidia-ctk
    - hook
    - update-ldcache
    - --folder
    - /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib
    hookName: createContainer
    path: /run/current-system/sw/bin/nvidia-ctk
  mounts:
  - containerPath: /etc/egl/egl_external_platform.d/10_nvidia_wayland.json
    hostPath: /etc/egl/egl_external_platform.d/10_nvidia_wayland.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /etc/egl/egl_external_platform.d/15_nvidia_gbm.json
    hostPath: /etc/egl/egl_external_platform.d/15_nvidia_gbm.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libEGL_nvidia.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libEGL_nvidia.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libGLESv1_CM_nvidia.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libGLESv1_CM_nvidia.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libGLESv2_nvidia.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libGLESv2_nvidia.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libGLX_nvidia.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libGLX_nvidia.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libcuda.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libcuda.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libcudadebugger.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libcudadebugger.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libglxserver_nvidia.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libglxserver_nvidia.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvcuvid.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvcuvid.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-allocator.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-allocator.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-cfg.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-cfg.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-egl-gbm.so.1.1.0
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-egl-gbm.so.1.1.0
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-eglcore.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-eglcore.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-encode.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-encode.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-fbc.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-fbc.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-glcore.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-glcore.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-glsi.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-glsi.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-glvkspirv.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-glvkspirv.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-gpucomp.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-gpucomp.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-ml.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-ml.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-ngx.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-ngx.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-nvvm.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-nvvm.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-opencl.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-opencl.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-opticalflow.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-opticalflow.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-pkcs11-openssl3.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-pkcs11-openssl3.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-pkcs11.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-pkcs11.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-ptxjitcompiler.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-ptxjitcompiler.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-rtcore.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-rtcore.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-tls.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvidia-tls.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvoptix.so.545.29.06
    hostPath: /nix/store/wyfi50q4mcipw1xr0r1hxyzzaimzm593-nvidia-x11-545.29.06-6.1.69/lib/libnvoptix.so.545.29.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /run/current-system/sw/bin/nvidia-cuda-mps-control
    hostPath: /run/current-system/sw/bin/nvidia-cuda-mps-control
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /run/current-system/sw/bin/nvidia-cuda-mps-server
    hostPath: /run/current-system/sw/bin/nvidia-cuda-mps-server
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /run/current-system/sw/bin/nvidia-debugdump
    hostPath: /run/current-system/sw/bin/nvidia-debugdump
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /run/current-system/sw/bin/nvidia-smi
    hostPath: /run/current-system/sw/bin/nvidia-smi
    options:
    - ro
    - nosuid
    - nodev
    - bind
devices:
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
    - path: /dev/dri/card0
    - path: /dev/dri/renderD128
    hooks:
    - args:
      - nvidia-ctk
      - hook
      - create-symlinks
      - --link
      - ../card0::/dev/dri/by-path/pci-0000:00:05.0-card
      - --link
      - ../renderD128::/dev/dri/by-path/pci-0000:00:05.0-render
      hookName: createContainer
      path: /run/current-system/sw/bin/nvidia-ctk
    - args:
      - nvidia-ctk
      - hook
      - chmod
      - --mode
      - "755"
      - --path
      - /dev/dri
      hookName: createContainer
      path: /run/current-system/sw/bin/nvidia-ctk
  name: "0"
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
    - path: /dev/dri/card0
    - path: /dev/dri/renderD128
    hooks:
    - args:
      - nvidia-ctk
      - hook
      - create-symlinks
      - --link
      - ../card0::/dev/dri/by-path/pci-0000:00:05.0-card
      - --link
      - ../renderD128::/dev/dri/by-path/pci-0000:00:05.0-render
      hookName: createContainer
      path: /run/current-system/sw/bin/nvidia-ctk
    - args:
      - nvidia-ctk
      - hook
      - chmod
      - --mode
      - "755"
      - --path
      - /dev/dri
      hookName: createContainer
      path: /run/current-system/sw/bin/nvidia-ctk
  name: all
kind: nvidia.com/gpu

@jmbaur's patches add a new --ldconfig-path option, which changes the first hook in the above example to something like this (assuming --ldconfig-path=myldconfigpath):

  hooks:
  - args:
    - nvidia-ctk
    - hook
    - update-ldcache
    - --ldconfig-path
    - myldconfigpath
    - --folder
    - /nix/store/qhw7ag7945046gm7z2sryx266hk5masw-nvidia-x11-545.29.06-6.1.71/lib
    hookName: createContainer
    path: /run/current-system/sw/bin/nvidia-ctk

But it doesn't seem to have any effect on whatever is failing during runtime.

@ereslibre
Member

ereslibre commented Jan 14, 2024

Thank you for this WIP! I made podman work based on this derivation, but I want to put all pieces together in a better way.

I have set up my NixOS environment with the following setting:

    etc."cdi/nvidia.yaml".text = ''
      ---
      cdiVersion: 0.5.0
      containerEdits:
        deviceNodes:
        - path: /dev/nvidia-modeset
        - path: /dev/nvidia-uvm
        - path: /dev/nvidia-uvm-tools
        - path: /dev/nvidiactl
        hooks:
        - args: []
          hookName: createContainer
          path: /nix/store/fcjkd0v1ybrgd3fvvpljj2m526wvi4f5-container-toolkit-container-toolkit-1.15.0-rc.1/bin/nvidia-ctk
        mounts:
        ... <more data> ...
    ''

I generated this CDI content by running: nvidia-ctk cdi generate --library-search-path /run/opengl-driver/lib --nvidia-ctk-path /nix/store/fcjkd0v1ybrgd3fvvpljj2m526wvi4f5-container-toolkit-container-toolkit-1.15.0-rc.1/bin/nvidia-ctk, based on the realisation of the derivation of this PR.

In this generated file I removed the update-ldcache hook because it was not working for me. Now, if I mount the /nix directory from the host, so that the interpreter of the binaries can be found in the container, it works as expected:

❯ podman run --rm --device nvidia.com/gpu=all -v /nix:/nix ubuntu /run/current-system/sw/bin/nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-c475e08b-0cc5-f5aa-4326-99699429b449)
GPU 1: NVIDIA GeForce RTX 2080 SUPER (UUID: GPU-5cca1a6f-7cee-b649-40f0-2d3ecb0aa207)

I think it would be interesting to expose a NixOS option for CDI, maybe under virtualisation.containers.

I hope to be able to propose something while you are also working on it.

@aaronmondal
Contributor Author

Oh damn this also works with docker!

docker run \
    --rm -ti \
    --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
    -v /nix:/nix \
    ubuntu \
    /run/current-system/sw/bin/nvidia-smi

It seems that manually adjusting the generated CDI to have args: [] instead of the update-ldcache args was the missing piece. In my case I didn't need the --nvidia-ctk-path option though, as this seems to work as well:

hooks:
  - args: []
    hookName: createContainer
    path: /run/current-system/sw/bin/nvidia-ctk

I guess there are a few things that need changing then:

  1. Omit the update-ldcache arguments in generated CDIs entirely by patching it out?
  2. The containerPath: /run/current-system/... and similar hostPath sections should probably use resolved realpaths like ${pkgs.somepackage}/bin/nvidia-cuda-mps-server instead of the symlinks. This way we no longer need to mount all of /nix via -v /nix:/nix.
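Point 2 might look roughly like the following Nix fragment. The package attribute and the spec shape are illustrative assumptions, not module code from this PR.

```nix
# Sketch: emit resolved store paths into the CDI spec instead of
# /run/current-system symlinks, so containers no longer need
# `-v /nix:/nix` to dereference them. `pkgs` is assumed to be in scope.
let
  nvidiaBin = pkgs.linuxPackages.nvidia_x11.bin;
in {
  mounts = [{
    hostPath = "${nvidiaBin}/bin/nvidia-smi";
    containerPath = "${nvidiaBin}/bin/nvidia-smi";
    options = [ "ro" "nosuid" "nodev" "bind" ];
  }];
}
```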

I wonder whether it would make sense to make CDI the default. It's already the recommended approach for podman, and docker will support CDI for the --device syntax in the upcoming release: docker/cli#3864. In the meantime we could use the -e NVIDIA_VISIBLE_DEVICES approach as above. AFAIU we could remove the --gpus usages entirely if there currently are any.

If I remember correctly, the nvidia-gpu-operator also looks at the CDI yaml file and uses it to configure GPU-container launches on K8s (at least with podman, and I wouldn't be surprised if that's already the case for docker as well). My guess is that this is the main use case for many users, and having everything run on CDI by default would make it more straightforward.

@ereslibre
Member

I wonder whether it would make sense to make CDI the default

I agree that we should move to CDI and make that the default.

We also have to take into account the use case of cross-compiling or building a NixOS system for a remote machine, where we cannot run nvidia-ctk cdi generate on the build system in an automatic fashion, because the build system might be completely different from the target host -- it might not even have the GPUs that will be present on the host.

@ereslibre
Member

Although nvidia-smi works in the case I described, an application relying on CUDA (e.g. ludwigai/ludwig-gpu:latest) does not identify the GPUs. I fear this might have to do with the removal of the update-ldcache hook, but I might be wrong.

I will keep this PR updated with what I find.

Contributor

Although nvidia-smi works in the case I described, an application relying on CUDA (e.g. ludwigai/ludwig-gpu:latest) does not identify the GPUs

I don't know yet what exactly update-ldcache refers to, but FHS apps rely on /etc/ld.so.{cache,conf} to discover "global" libraries, whereas NixOS deploys the impure drivers in a predefined location. You'd have to generate the /etc/ld.so.* files in the container so that they are aware of the drivers' location for FHS apps to work, and you would also have to mount the /run/opengl-driver/lib link farm for Nixpkgs apps to work. Alternatively, you can set LD_LIBRARY_PATH for both.

You could test if the Nixpkgs apps currently work for you by building and docker load-ing something simple like

with import ./. { config.allowUnfree = true; };

dockerTools.buildLayeredImage rec {
  name = cudaPackages.saxpy.pname;
  tag = "latest";

  contents = buildEnv {
    inherit name;
    paths = [ cudaPackages.saxpy ];
  };
}

Member

I'm assuming update-ldcache is useful given we inject libraries from the host into the container with the CDI hooks. We cannot assume the container image will be NixOS: chances are it won't be.

I think we probably still want the update-ldcache hook for all containerized and packaged software we can run from within a NixOS host, that get the CUDA libraries injected from the host.

This hook is a no-op if it cannot find /etc/ld.so.{cache,conf}, which is also fine.

Member

@ereslibre ereslibre Jan 16, 2024

Okay, I was able to make LocalAI work just fine with podman and CDI. I removed the ldcache hook, it's not really mandatory. I did a couple of approaches with this:

  1. Improve the update-ldcache subcommand of nvidia-container-toolkit by adding -n to the ldconfig call. This made the command succeed, along with a change to point to an existing ldconfig in the nix store.

  2. Delete the update-ldcache hook and add a CDI hook that alters the LD_LIBRARY_PATH envvar, like so:

    {
      "name": "0",
      "containerEdits": {
        "env": [
          "LD_LIBRARY_PATH=/nix/store/l543ki4i4z56gc9gx5p4qzna2m24aywr-nvidia-x11-545.29.06-6.1.72/lib:$LD_LIBRARY_PATH"
        ],
        "deviceNodes": [
          {
            "path": "/dev/nvidia1"
    ...

I'm feeling more positive about the second option; I think it is probably less brittle. This means ditching the first option and never needing to call update-ldcache on the container root. When I'm happy with the changes I can share them with you.

Also, do you know if we could create a mechanism that generates a CDI spec by calling nvidia-ctk cdi generate on boot and writes the result, with our modifications, to /etc/cdi/nvidia.yaml? I think a systemd unit that performs this on boot would be ideal.

WDYT?
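The boot-time generation idea could be sketched as a NixOS oneshot unit like the following. The unit name and flags here are assumptions for illustration, not an agreed design:

```nix
# Sketch: regenerate the CDI spec on every boot, since the detected GPUs
# are strictly local to the running machine.
{ config, pkgs, ... }: {
  systemd.services.nvidia-cdi-generate = {
    description = "Generate NVIDIA CDI specification";
    wantedBy = [ "multi-user.target" ];
    serviceConfig.Type = "oneshot";
    script = ''
      ${pkgs.nvidia-container-toolkit}/bin/nvidia-ctk cdi generate \
        --output=/etc/cdi/nvidia.yaml \
        --library-search-path=/run/opengl-driver/lib
    '';
  };
}
```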

Contributor

update-ldcache

There are two tiers of needs: there's what NixOS needs, and there's the fact that the Nixpkgs package is broken. At the bare minimum, we want the NixOS module to work. Ideally, we want the Nixpkgs package to work both in FHS and on NixOS with or without a dedicated module.

For the latter we do need, imo, to patch both libnvidia-container and nvidia-docker-toolkit to skip ldcache if there's a static configuration available (e.g. if we taught it about @driverLink@ at build time and the directory happens to exist at runtime, although that's rather specific).

For the former it would seemingly be enough to update the module to deploy etc."cdi/nvidia.yaml".

Also, do you know if we could create a mechanism that generates a CDI spec by calling to nvidia-ctk cdi generate on boot, and writes the result with our modifications on /etc/cdi/nvidia.yaml

I would first consider generating a static nvidia.yaml, described in the Nix language?

Contributor

options.xxxxxxx.cdi.settings = mkOption { type = (pkgs.formats.toml { }).type; /* ... */ } I guess, and etc."cdi/nvidia.yaml" = pkgs.formats.toml.generate config.xcxxxx.cdi.settings

My main motivation is to avoid any weird nvidia stuff doing mutable operations on boot
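That option shape might be sketched like this, using the YAML format generator since the target file is YAML; `xxxxxxx` stands in for whatever namespace is eventually chosen, mirroring the placeholder above:

```nix
# Sketch of a typed CDI settings option backed by pkgs.formats.yaml;
# everything here is a placeholder, not the eventual module.
{ config, lib, pkgs, ... }:
let
  settingsFormat = pkgs.formats.yaml { };
in {
  options.xxxxxxx.cdi.settings = lib.mkOption {
    type = settingsFormat.type;
    default = { };
  };
  config.environment.etc."cdi/nvidia.yaml".source =
    settingsFormat.generate "nvidia-cdi.yaml" config.xxxxxxx.cdi.settings;
}
```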

@ereslibre
Member

ereslibre commented Jan 16, 2024

I would first consider generating a static nvidia.yaml, described in the Nix language?

Do you mean having something like:

virtualisation.containers.cdi = ''
---
cdiVersion: 0.5.0
...
'';

Or, do you mean literally having "typed" CDI settings? I would say the latter is not worth the effort. Just double checking if we are on the same page.

@ereslibre
Member

ereslibre commented Jan 17, 2024

@SomeoneSerge do you know whether, if a runtime supports fully fledged CDI, nvidia-container-runtime is still required instead of running runc/crun directly?

One problem setting LD_LIBRARY_PATH is that we cannot augment this envvar with CDI, only set/replace it:

    {
      "name": "0",
      "containerEdits": {
        "env": [
          "LD_LIBRARY_PATH=/nix/store/l543ki4i4z56gc9gx5p4qzna2m24aywr-nvidia-x11-545.29.06-6.1.72/lib:$LD_LIBRARY_PATH"
        ],
        "deviceNodes": [
          {
            "path": "/dev/nvidia1"
    ...

This ^ is not working, and is not meant to work (confirmed on CNCF Slack on tag-runtime). The pattern "export VAR=/something/else:$VAR" is not going to fly on containerEdits.env[].
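Since containerEdits.env values are taken literally, one CDI-side workaround is to write out the complete value, which replaces whatever the image itself set. Expressed as a Nix fragment (the store path is a placeholder):

```nix
# Illustrative only: CDI performs no shell expansion, so `$LD_LIBRARY_PATH`
# would be passed through verbatim; the full value has to be spelled out,
# clobbering any LD_LIBRARY_PATH the image defines.
{
  containerEdits = {
    env = [ "LD_LIBRARY_PATH=/nix/store/...-nvidia-x11/lib" ];
  };
}
```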

We probably want to track the migration to CDI and nvidia-docker deprecation and removal on some specific issue (so I stop spamming this PR :P). I don't see an issue open for that, do you have any thoughts on that @SomeoneSerge @aaronmondal?

@ereslibre
Member

ereslibre commented Jan 17, 2024

On this document anchor you can find more information about NVIDIA's rationale on their shift to running ldconfig on the container root vs using LD_LIBRARY_PATH with the mapped libraries.

@bachp
Member

bachp commented Jan 17, 2024

@SomeoneSerge do you know whether, if a runtime supports fully fledged CDI, nvidia-container-runtime is still required instead of running runc/crun directly?

Not sure if this is what you are asking, but Podman has full CDI support and I don't think it needs nvidia-container-runtime.

@ereslibre
Member

Not sure if this is what you are asking, but Podman has full CDI support and I don't think it needs nvidia-container-runtime.

Thank you. Yes, this was my question. Podman, CRI-O, containerd, and Moby all seem to have integration with container-device-interface.

I think we can get rid of the runtime wrappers. I was only dubious in case the nvidia runtime performs something missing from CDI, as I didn't check exhaustively.

@ereslibre
Member

ereslibre commented Jan 18, 2024

More information: there is ongoing work to allow configuring the ldconfig path as well as the invocation parameters in https://gitlab.com/nvidia/container-toolkit/container-toolkit/-/merge_requests/525 -- already mentioned in #278969 (comment).

This is going to help with our use case: we will be able to get rid of the ldconfig patching and potentially the generated CDI spec tweaking.

@ereslibre
Member

ereslibre commented Jan 22, 2024

This is the WIP that I have, using this rebased PR as a base:

ereslibre/nixpkgs@nvidia-container-toolkit-v1.15.0-rc.1-orig...ereslibre:nixpkgs:nvidia-container-toolkit-v1.15.0-rc.1

This is a manual way of defining CDI specs together with their content. This instance will be mapped to /etc/cdi/nvidia.json and /etc/cdi/other-provider.json with their respective contents:

virtualisation.containers.cdi = {
  nvidia = builtins.fromJSON ''
    {
      "cdiVersion": "0.5.0",
      ...
    }
  '';
  other-provider = builtins.fromJSON ''
    {
      "cdiVersion": "0.5.0",
      ...
    }
  '';
};

This is an automatic way of generating the CDI spec:

virtualisation.containers.cdi.nvidia = "nvidia-ctk-generate";

I am not sold on the types yet, because you could do something like the following:

virtualisation.containers.cdi = {
  nvidia = "nvidia-ctk-generate";
  other-provider = "nvidia-ctk-generate";
};

And now you would have /etc/cdi/nvidia.json and /etc/cdi/other-provider.json both auto-generated with the same content.

Another problem is that when you ask for the auto-generated CDI spec, /etc/cdi/&lt;provider&gt;.json is generated by a systemd unit on boot, given that this information is strictly local to the machine where it runs. There are a couple of gotchas with this:

  • Is this acceptable in NixOS? Should we instead autogenerate somewhere in /run/current-system/some/path/cdi and symlink /etc/cdi to /run/current-system/some/path/cdi?

  • Cleaning. When we switch from virtualisation.containers.cdi.provider = "nvidia-ctk-generate" and remove this attribute entirely, for example, then /etc/cdi/provider.json needs to be removed; but it was generated by a systemd unit, not linked from the Nix store.

I am sharing this early to get your thoughts. Besides the issues I have mentioned, the integration is working flawlessly for me in my tests.

@bachp
Member

bachp commented Jan 22, 2024

virtualisation.containers.cdi = {
  nvidia = builtins.fromJSON ''
    {
      "cdiVersion": "0.5.0",
      ...
    }
  '';
  other-provider = builtins.fromJSON ''
    {
      "cdiVersion": "0.5.0",
      ...
    }
  '';
};

I like this simple way of defining custom CDI resources.

This is an automatic way of generating the CDI spec:

virtualisation.containers.cdi.nvidia = "nvidia-ctk-generate";

But I was a bit surprised by how the autogeneration works. First I thought you could pass an arbitrary executable and it would just write the output, but then I would have expected to see nvidia-ctk cdi generate here. After looking into the code I realized that there is some magic involved, including post-processing.

This makes me a bit worried about the maintenance burden. Maybe we can push some more things upstream so that less (or no) post-processing is needed.

I am not sold on the types yet, because you could do something like the following:

virtualisation.containers.cdi = {
  nvidia = "nvidia-ctk-generate";
  other-provider = "nvidia-ctk-generate";
};

Another problem is that when you ask for the auto-generated CDI, the /etc/cdi/<provider>.json is auto-generated by a systemd unit on boot, given this information is strictly local to the machine where it's running. There are a couple of gotchas with this:

* Is this acceptable in NixOS? Should we instead autogenerate somewhere in `/run/current-system/some/path/cdi` and symlink `/etc/cdi` to `/run/current-system/some/path/cdi`?

* Cleaning. When we switch from `virtualisation.containers.cdi.provider = "nvidia-ctk-generate"` and remove this attribute entirely, for example, then `/etc/cdi/provider.json` needs to be removed; but it was generated by a systemd unit, not linked from the Nix store.

I think generally this is fine, as the generated CDI data is a kind of auto-detected runtime data, so generating it at boot is OK. Wouldn't putting the CDI files somewhere in /run and using environment.etc to symlink to that location take care of the removal in /etc/cdi?
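A minimal sketch of that suggestion, assuming the nvidia-container-toolkit package from this PR provides nvidia-ctk; the service name and layout here are hypothetical, not the module that later landed:

```nix
{ pkgs, ... }:
{
  # /etc/cdi becomes a symlink owned by the NixOS generation, so it is
  # cleaned up on switch; the actual spec lives in volatile /run/cdi.
  environment.etc."cdi".source = "/run/cdi";

  systemd.services.nvidia-cdi-generate = {
    description = "Generate the NVIDIA CDI spec for this machine";
    wantedBy = [ "multi-user.target" ];
    path = [ pkgs.nvidia-container-toolkit ];
    serviceConfig.Type = "oneshot";
    script = ''
      mkdir -p /run/cdi
      nvidia-ctk cdi generate --output /run/cdi/nvidia.json
    '';
  };
}
```

Because the symlink is managed declaratively while only the spec file itself is generated at boot, removing the option on the next generation would also remove the /etc/cdi entry.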

@ereslibre
Member

ereslibre commented Jan 22, 2024

But I was a bit surprised by how the autogeneration works.

I'm going to give it a second thought, yes. I think it might be worth just adding a services.nvidia-container-toolkit, and keeping virtualisation.containers.cdi only for the static JSON definitions.

This makes me a bit worried about the maintenance burden. Maybe we can push some more things upstream so that less (or no) post-processing is needed.

I agree completely. I'll take ownership of this code and of finding a sweet spot for improving the upstream project; they are very welcoming of this kind of improvement.

I think generally this is fine, as the generated CDI data is a kind of auto-detected runtime data, so generating it at boot is OK. Wouldn't putting the CDI files somewhere in /run and using environment.etc to symlink to that location take care of the removal in /etc/cdi?

Yes, I think that might be fine; still, we don't want to leave dangling symlinks around, so some cleanup is necessary IMO.

@jmbaur
Contributor

jmbaur commented Jan 23, 2024

I do not think it would be good to put NVIDIA CDI devices into the NixOS configuration directly. There is a good amount of logic in nvidia-ctk cdi generate whose complexity we definitely don't want to replicate and manage in Nix directly. I think doing this once at boot is the right choice. This is what we are doing here, for example, granted that is for Jetpack devices, where it is a little easier to use CDI since the libraries and devices to pass through to the container are declared in a CSV file, not auto-detected.

@aaronmondal
Contributor Author

@jmbaur I've pulled in your version-related linker flags from https://github.com/NixOS/nixpkgs/pull/280184/files. I didn't add the patch though. Should we add it for this bump or is it fine to defer those changes to a presumed 1.15.0-rc.2 update?

@jmbaur
Contributor

jmbaur commented Jan 25, 2024

Should we add it for this bump or is it fine to defer those changes to a presumed 1.15.0-rc.2 update?

I haven't yet submitted that patch to their project on GitLab, as I'm not sure it aligns with their goals, but I was messing around with it on an x86_64 workstation I have access to, and it does make nvidia-ctk cdi generate work without having to set LD_LIBRARY_PATH. It uses dlopen() and friends to find the CUDA libs, which I think is a less leaky abstraction than some of the other strategies nvidia-ctk uses to detect host libs. It does depend on us using cudaPackages.autoAddOpenGLRunpathHook, which I see we are already doing in this PR. If we are willing to pull the patch into nixpkgs, at least for now, it would make CDI support better for NixOS. If that's something you think should be solved with the bump in this PR, by all means :)

Also of note: my draft PR has a few other patches that address some of the /sbin/ldconfig problems. Those were already merged into their main branch, just not yet in a tagged release.

@aaronmondal aaronmondal force-pushed the nvidia-container-toolkit-v1.15.0-rc.1 branch from b338e84 to 24134da Compare January 25, 2024 20:06
@aaronmondal aaronmondal changed the title Update nvidia-container-toolkit to v1.15.0-rc.1 Update nvidia-container-toolkit to v1.15.0-rc.2 Jan 25, 2024
@GaetanLepage
Contributor

Result of nixpkgs-review pr 278969 run on x86_64-linux

8 packages built:
  • apptainer
  • apptainer-overriden-nixos
  • nvidia-docker
  • nvidia-podman
  • singularity
  • singularity-overriden-nixos
  • udocker
  • udocker.dist

@aaronmondal
Contributor Author

aaronmondal commented Jan 26, 2024

My findings at the current stage of this PR:

  1. docker run --rm -ti --gpus=all ubuntu nvidia-smi works out of the box, i.e. the legacy --gpus approach seems to work fine.
  2. nvidia-ctk cdi generate --output hello.yaml works without any additional flags and produces a seemingly correct CDI spec. To add --device support on a new system, run this:
sudo nvidia-ctk cdi generate --output /etc/cdi/nvidia.yaml
  3. For podman you can apply this patch to the generated file at /etc/cdi/nvidia.yaml:
10,15c10
<   - args:
<     - nvidia-ctk
<     - hook
<     - update-ldcache
<     - --folder
<     - /run/opengl-driver/lib
---
>   - args: []

Then you can use podman with CDI like so:

podman run \
  --rm \
  --device nvidia.com/gpu=all \
  -v /nix:/nix \
  ubuntu \
  /run/current-system/sw/bin/nvidia-smi -L
  4. Docker with CDI seems broken for me ATM with the following somewhat confusing error:
docker run \
  --rm -it \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
  ubuntu nvidia-smi

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: /nix/store/bdgz70m507gzfjg52yrqj5sa0b3rf04n-nvidia-docker/bin/nvidia-container-runtime did not terminate successfully: exit status 125: unknown flag: --root
See 'docker --help'.

Usage:  docker [OPTIONS] COMMAND
...

Looks like some arguments are not passed correctly to nvidia-container-runtime. I've lost track of where the flags are coming from but I'm not too concerned that this is an issue. Chances are that this resolves itself with Docker 25.

@jmbaur
Contributor

jmbaur commented Jan 26, 2024

3. For podman you can apply this patch to the generated file at `/etc/cdi/nvidia.yaml`:
10,15c10
<   - args:
<     - nvidia-ctk
<     - hook
<     - update-ldcache
<     - --folder
<     - /run/opengl-driver/lib
---
>   - args: []

What does doing this modification solve? Is there some error that occurs later on when spawning containers?

4. Docker with CDI seems broken for me ATM with the following somewhat confusing error:

Docker versions < 25 do not support CDI at all. You will need the correct version of Docker, and you will also need to start the daemon with --experimental. CLI usage for CDI with Docker should be the same as with Podman (you don't need --runtime, just --device=nvidia.com/gpu=all).
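For reference, on Docker ≥ 25 the CDI integration is enabled through the daemon configuration (a sketch based on Docker's CDI documentation; on NixOS this would typically be fed in via virtualisation.docker.daemon.settings rather than a hand-written file):

```json
{
  "experimental": true,
  "features": {
    "cdi": true
  }
}
```

With that in /etc/docker/daemon.json and the daemon restarted, docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi should resolve the device through CDI, mirroring the podman invocation above.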

@ereslibre
Member

ereslibre commented Jan 27, 2024

For podman you can apply this patch to the generated file at /etc/cdi/nvidia.yaml:

10,15c10
<   - args:
<     - nvidia-ctk
<     - hook
<     - update-ldcache
<     - --folder
<     - /run/opengl-driver/lib
---
>   - args: []

This will prevent some containers from finding the mounted libcuda & friends libraries if they have an ldcache already present, since it will not be refreshed.

I'll open a PR for CDI tomorrow with a proposal. I think it's good to have:

  1. A way for users to provide static CDI specs, which will, essentially, write the provided contents to /etc/cdi. The advantage in this case, as opposed to just setting environment.etc directly, is that the JSON is ensured to be syntactically valid.

  2. A systemd service that, if enabled, calls nvidia-ctk cdi generate, performs the needed patches on the generated JSON, and then writes it to /var/run/cdi.

But I think, as things are at this moment, we need to keep the update-ldcache hook.
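Point 1 can be sketched like this (hypothetical attribute names): parsing the user-provided spec with builtins.fromJSON and re-serializing it means a syntax error fails the evaluation instead of producing a broken /etc/cdi entry:

```nix
{ ... }:
let
  # fromJSON throws at evaluation time if nvidia-cdi.json is malformed,
  # so a broken spec can never reach /etc/cdi on the target machine.
  spec = builtins.fromJSON (builtins.readFile ./nvidia-cdi.json);
in
{
  environment.etc."cdi/nvidia.json".text = builtins.toJSON spec;
}
```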

@ereslibre ereslibre mentioned this pull request Jan 28, 2024
@ereslibre
Member

Opened the PR to add support for CDI: #284507.

There are some mounts I need to re-validate to only mount what is really required. Please provide feedback about that integration over there if you feel like it. :)

@aaronmondal aaronmondal force-pushed the nvidia-container-toolkit-v1.15.0-rc.1 branch from 24134da to a55c829 Compare February 1, 2024 16:45
@aaronmondal aaronmondal changed the title Update nvidia-container-toolkit to v1.15.0-rc.2 Update nvidia-container-toolkit to v1.15.0-rc.3 Feb 1, 2024
Contributor

@SomeoneSerge left a comment

@aaronmondal could you update the commit message? Something like nvidia-container-toolkit: 1.9.0 -> 1.15.0-rc.3.

At (yet another) glance this looks good; we should probably merge and move on to libnvidia-container and the CDI PR.

@aaronmondal aaronmondal force-pushed the nvidia-container-toolkit-v1.15.0-rc.1 branch from a55c829 to 9daafdf Compare February 2, 2024 00:33
@aaronmondal aaronmondal changed the title Update nvidia-container-toolkit to v1.15.0-rc.3 nvidia-container-toolkit: 1.9.0 -> 1.15.0-rc.3 Feb 2, 2024
Contributor

@SomeoneSerge left a comment

Aside from ldflags, I suppose this is ready?

Contributor

@SomeoneSerge left a comment

This has been open for a while and there haven't been any objections. I intend to merge this as soon as Ofborg finishes re-evaluation.

@aaronmondal
Contributor Author

Sry for the delay. Currently travelling. Looks good 👌

@SomeoneSerge SomeoneSerge merged commit fcb6b1d into NixOS:master Feb 13, 2024
@nixos-discourse

This pull request has been mentioned on NixOS Discourse. There might be relevant details there:

https://discourse.nixos.org/t/using-nvidia-container-runtime-with-containerd-on-nixos/27865/30

@nixos-discourse

This pull request has been mentioned on NixOS Discourse. There might be relevant details there:

https://discourse.nixos.org/t/nvidia-gpu-support-in-podman-and-cdi-nvidia-ctk/36286/4

@SomeoneSerge
Contributor

Uhm, I screwed up. Ofborg didn't actually rebuild anything, and my commit updating ldflags broke the whole thing because I assumed the wrong ld...


Labels

  • 10.rebuild-darwin: 0 (This PR does not cause any packages to rebuild on Darwin.)
  • 10.rebuild-linux: 1-10 (This PR causes between 1 and 10 packages to rebuild on Linux.)
  • 12.approvals: 1 (This PR was reviewed and approved by one person.)
  • 12.first-time contribution (This PR is the author's first one; please be gentle!)

Development

Successfully merging this pull request may close these issues:

  • Package request: nvidia-ctk
  • Update request: nvidia-container-toolkit 1.9.0 → 1.14.3

9 participants