
feat: Add --allow-rootless flag for rootless Podman support#68

Open
AlexonOliveiraRH wants to merge 3 commits into minc-org:main from AlexonOliveiraRH:feat/allow-rootless

Conversation


@AlexonOliveiraRH AlexonOliveiraRH commented Apr 15, 2026

Summary

This PR adds rootless Podman support to minc, enabling minc create without sudo or root privileges via a new --allow-rootless flag. It resolves all blockers identified in #22.

With all changes applied, all MicroShift system pods reach Running status under rootless Podman on Fedora 43 / kernel 6.19 / crun 1.23.

Problems Solved

Eight distinct issues were discovered and resolved during rootless bringup:

| # | Problem | Solution |
|---|---------|----------|
| 1 | CRI-O cgroup manager fails (`Failed to connect to bus: No medium found`): no D-Bus user session in rootless | CRI-O drop-in (`20-rootless.conf`) switching to `cgroupfs`, overriding `10-microshift.conf` |
| 2 | IPv6 localhost resolution breaks etcd connection | `--sysctl net.ipv6.conf.all.disable_ipv6=1` at container creation |
| 3 | `oom_score_adj` writes fail with EPERM in user namespaces | `crun-rootless` wrapper strips `oomScoreAdj` from the OCI spec via `jq` |
| 4 | cpuset cgroup controller not delegated to user sessions | Host-side systemd drop-in (documented, same as kind rootless) |
| 5 | chown fails with EINVAL for high OpenShift SCC UIDs | Host-side subordinate UID/GID range expansion (documented) |
| 6 | `crio-subid.service` failure inside container | No impact: subordinate IDs managed by host Podman |
| 7 | `/dev/kmsg` not accessible in rootless namespaces | Mount `/dev/null` as `/dev/kmsg` |
| 8 | Kubelet needs user namespace awareness | `KubeletInUserNamespace` feature gate via MicroShift config drop-in |
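To illustrate fix 3, the core of such a wrapper could look like the sketch below. This is not the PR's exact code; the function name and file handling are assumptions, and a real wrapper would also parse crun's `--bundle` argument before delegating.

```shell
# Hypothetical sketch: remove process.oomScoreAdj from an OCI bundle's
# config.json before invoking crun. In user namespaces, lowering
# oom_score_adj fails with EPERM, so the field is simply dropped.
strip_oom_score_adj() {
  bundle="$1"
  # jq rewrites the runtime spec without the offending field.
  jq 'del(.process.oomScoreAdj)' "$bundle/config.json" > "$bundle/config.json.tmp" &&
    mv "$bundle/config.json.tmp" "$bundle/config.json"
}

# A full wrapper would locate the bundle directory from crun's --bundle
# argument, call strip_oom_score_adj "$bundle", then exec the real crun.
```

The wrapper approach keeps CRI-O and crun unmodified; only the spec handed to the runtime changes.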

Code Changes

| File | Change | Purpose |
|------|--------|---------|
| `cmd/minc/main.go` | Modified | `--allow-rootless` persistent flag, rootless marker lifecycle |
| `cmd/minc/config.go` | Modified | `allow-rootless` in default config |
| `pkg/providers/podman/provider.go` | Refactored | Instance fields replacing globals, rootless config generation, dynamic graph root |
| `pkg/providers/options.go` | Modified | Rootless-specific container flags and bind mounts |
| `pkg/providers/moby/provider.go` | Modified | Use shared retry constants |
| `pkg/providers/register/register.go` | Modified | Pass `allowRootless` to provider constructor |
| `pkg/providers/wait.go` | New | Shared MicroShift retry constants (40 retries / ~26 min) |
| `pkg/rootlessmarker/marker.go` | New | Persistent rootless mode marker at `~/.config/minc/.rootless-cluster` |
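The dynamic graph root detection can be pictured with the shell sketch below: ask Podman itself rather than hardcoding a path. This is conceptual, not the provider's Go code, and the fallback path shown is just the conventional rootless default.

```shell
# Resolve the Podman storage graph root by querying Podman, instead of
# assuming /var/lib/containers/storage (which is wrong in rootless mode).
graph_root() {
  if command -v podman >/dev/null 2>&1; then
    podman info --format '{{ .Store.GraphRoot }}'
  else
    # Conventional rootless default, used here only as an illustrative fallback.
    echo "${HOME}/.local/share/containers/storage"
  fi
}

graph_root
```

Rootful Podman typically reports `/var/lib/containers/storage`, while rootless reports a path under `~/.local/share/`, which is why the provider must query rather than assume.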

Key design decisions

  • Rootless marker file: Persists rootless mode between create and delete so subsequent commands (delete, generate-kubeconfig) work without re-specifying --allow-rootless.
  • Provider refactor: Replaced global useSudo variable and sync.Once with instance fields on the provider struct, eliminating shared mutable state.
  • Dynamic storage path: Queries Podman for the actual graph root (Store.GraphRoot) instead of hardcoding /var/lib/containers/storage, which differs in rootless mode.
  • Config file generation: Three files (MicroShift YAML, CRI-O conf, crun wrapper) are written to ~/.config/minc/ and bind-mounted into the container at creation time.
  • Drop-in numbering: 20- prefix ensures rootless configs override MicroShift's 10-microshift.conf.
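For example, the cgroup-manager drop-in might be shaped like the fragment below. The contents are illustrative, not copied from the PR; CRI-O documents that `conmon_cgroup` must be `"pod"` when the cgroupfs manager is used.

```toml
# 20-rootless.conf — sorts after 10-microshift.conf, so these values win.
# cgroupfs avoids the D-Bus user-session requirement of the systemd
# cgroup manager inside the rootless container.
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
```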

Host Prerequisites (One-Time Setup)

Four host-side changes are required before using --allow-rootless. All require root/sudo and persist across reboots.

1. Kernel parameters (sysctl):

```shell
sudo tee /etc/sysctl.d/99-minc-rootless.conf <<'SYSCTL'
net.ipv4.ip_forward = 1
net.ipv4.ip_unprivileged_port_start = 80
fs.inotify.max_user_instances = 1024
SYSCTL
sudo sysctl --system
```

  • ip_forward - required for container networking (packet forwarding)
  • ip_unprivileged_port_start = 80 - allows rootless Podman to bind HTTP/HTTPS ports (80, 443)
  • max_user_instances = 1024 - increases inotify watch limit for MicroShift services

2. Load ip_tables kernel module:

```shell
sudo tee /etc/modules-load.d/minc-rootless.conf <<'MOD'
ip_tables
MOD
sudo modprobe ip_tables
```

3. Delegate cpuset cgroup controller:

```shell
sudo mkdir -p /etc/systemd/system/user@.service.d/
sudo tee /etc/systemd/system/user@.service.d/delegate.conf <<'CONF'
[Service]
Delegate=cpuset cpu io memory pids
CONF
sudo systemctl daemon-reload
```

4. Expand subordinate UID/GID ranges (for OpenShift SCC high UIDs):

```shell
sudo usermod --add-subuids 165536-1265535999 --add-subgids 165536-1265535999 $(whoami)
podman system migrate
```

Note: These steps are only needed once per host. After a reboot the sysctl and module settings load automatically via the files in /etc/sysctl.d/ and /etc/modules-load.d/.

Usage

```shell
# Create a rootless cluster:
minc create --allow-rootless

# Or via config (persistent):
minc config set allow-rootless true
minc create

# Delete (auto-detects rootless mode via marker):
minc delete
```
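The marker-based auto-detection boils down to the lifecycle sketched below. `pkg/rootlessmarker/marker.go` implements this in Go; the shell here is an approximation of its semantics, not a translation of the actual code.

```shell
# Approximation of the rootless marker lifecycle.
marker="${XDG_CONFIG_HOME:-$HOME/.config}/minc/.rootless-cluster"

mkdir -p "$(dirname "$marker")"
touch "$marker"                       # Set(): written by `minc create --allow-rootless`
[ -e "$marker" ] && echo "rootless"   # Present(): later commands detect rootless mode
rm -f "$marker"                       # Remove(): cleared by `minc delete`
```

Because the marker persists on disk, `delete` and `generate-kubeconfig` work without re-specifying the flag.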

Test Environment

| Component | Version |
|-----------|---------|
| OS | Fedora 43 |
| Kernel | 6.19.11-200.fc43.x86_64 |
| Podman | rootless mode |
| CRI-O | 1.32.8 (inside container) |
| crun | 1.23 (inside container) |
| MicroShift | 4.19 (OKD/SCOS) |

Test Results

All 9 MicroShift system pods reach Running status under rootless Podman:

```
NAMESPACE              NAME                                       READY   STATUS    AGE
kube-flannel           kube-flannel-ds-xxxxx                      1/1     Running   ...
kube-proxy             kube-proxy-xxxxx                           1/1     Running   ...
kube-system            csi-snapshot-controller-xxxxx              1/1     Running   ...
openshift-dns          dns-default-xxxxx                          2/2     Running   ...
openshift-dns          node-resolver-xxxxx                        1/1     Running   ...
openshift-ingress      router-default-xxxxx                       1/1     Running   ...
openshift-service-ca   service-ca-xxxxx                           1/1     Running   ...
```

Future Work

  • Automated host prerequisite checks with actionable error messages
  • macOS support (Podman Machine VM prerequisites)
  • Podman Desktop integration (podman-desktop#2861, podman-desktop#15302)
  • Upstream CRI-O/crun improvements (automatic cgroupfs fallback, --rootless flag fixing oom_score_adj)

References

Fixes #22

Enable running MicroShift in Container (minc) under rootless Podman
without requiring sudo or root privileges.

This resolves all blockers identified in issue minc-org#22 by addressing
eight distinct problems discovered during rootless bringup:

1. CRI-O cgroup manager fails without D-Bus user session - switched
   to cgroupfs via a 20-rootless.conf drop-in that overrides
   10-microshift.conf.

2. IPv6 localhost resolution causes etcd connection failure - disabled
   IPv6 inside the container so glibc resolves localhost to 127.0.0.1.

3. oom_score_adj writes fail with EPERM in user namespaces - added a
   crun-rootless wrapper that strips oomScoreAdj from OCI specs
   before invoking crun.

4. cpuset cgroup controller not delegated - documented host-side
   systemd drop-in requirement (same as kind rootless docs).

5. chown of projected volumes fails with EINVAL for high UIDs -
   documented subordinate UID/GID range expansion for OpenShift SCC.

6. crio-subid.service failure inside container - no impact, CRI-O
   continues to function (subordinate IDs managed by host Podman).

7. /dev/kmsg not accessible in rootless namespaces - mount /dev/null
   as /dev/kmsg so kubelet OOM watcher proceeds without error.

8. Kubelet needs user namespace awareness - enabled
   KubeletInUserNamespace feature gate via MicroShift config drop-in.

Code changes:
- New --allow-rootless persistent flag with viper/config support
- Refactored Podman provider from global state to instance fields
- Dynamic Podman graph root detection (rootless uses ~/.local/share/)
- Auto-generated rootless configs (MicroShift, CRI-O, crun wrapper)
  written to ~/.config/minc/ and bind-mounted into the container
- Rootless marker file (~/.config/minc/.rootless-cluster) persists
  mode across create/delete lifecycle
- Shared MicroShift retry constants (40 retries / ~26 min) to
  accommodate slower rootless startup

Fixes minc-org#22

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>

praveenkumar commented Apr 15, 2026

@coderabbitai review


coderabbitai Bot commented Apr 15, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


coderabbitai Bot commented Apr 15, 2026

📝 Walkthrough

Adds rootless container support across the CLI and providers: new persistent flag and default config key, an on-disk rootless marker, provider-level rootless options and config generation, podman provider refactor to instance mode, and centralized retry constants for MicroShift service polling.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Configuration**<br>`cmd/minc/config.go` | Added default config entry `allow-rootless = false` to `defaultConfig`. |
| **CLI & Command Wiring**<br>`cmd/minc/main.go` | Added persistent `--allow-rootless` flag bound to viper; set/remove rootless marker on create/delete and conditional cleanup on failures. |
| **Rootless Marker Module**<br>`pkg/rootlessmarker/marker.go` | New marker implementation: `Path()`, `Present()`, `Set()`, `Remove()` managing `.rootless-cluster` in the user config dir. |
| **Provider Options & Retry Constants**<br>`pkg/providers/options.go`, `pkg/providers/wait.go` | Extended `COptions` with rootless fields (`AllowRootless`, `HostContainerStorage`, `RootlessMicroShiftConfig`, `RootlessCRIOConfig`, `RootlessCrunWrapper`); added `MicroShiftServiceMaxRetries` and `MicroShiftServiceInitialRetryDelay`. |
| **Moby Provider**<br>`pkg/providers/moby/provider.go` | Updated `WaitForMicroShiftService` to use centralized retry constants; removed unused import. |
| **Podman Provider**<br>`pkg/providers/podman/provider.go` | Refactored to instance-based provider (`New(allowRootless bool)`), removed package globals, added instance methods (`podmanCmd`, `fetchProviderInfo`, `checkCGroupsAndRootFulMode`); rootless support: query `store.GraphRoot`, populate `COptions`, and write rootless config artifacts (MicroShift YAML, CRI-O drop-in, crun wrapper). |
| **Provider Registration**<br>`pkg/providers/register/register.go` | `Register()` now determines `allowRootless` from viper `allow-rootless` or marker presence and passes it to `podman.New(allowRootless)`. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as "minc CLI"
    participant Marker as "Rootless Marker"
    participant Registry as "Provider Registry"
    participant Provider as "Podman Provider"
    participant Config as "Rootless Config Generator"

    User->>CLI: minc create --allow-rootless
    CLI->>Marker: Set()
    CLI->>Registry: Register("podman")
    Registry->>Registry: read allow-rootless (viper OR marker)
    Registry->>Provider: New(allowRootless=true)
    CLI->>Provider: Create(options)
    Provider->>Provider: Query Podman store.GraphRoot
    Provider->>Config: generate MicroShift YAML / CRI-O drop-in / crun wrapper
    Config-->>Provider: artifacts written
    Provider-->>CLI: Create complete

    User->>CLI: minc delete
    CLI->>Provider: Delete()
    Provider-->>CLI: Delete complete
    CLI->>Marker: Remove()
    Marker-->>CLI: Removed
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I dug a tiny marker bright,
Flags flipped on for rootless flight,
Podman learned to change its way,
Config scraps written out in play,
Hopping clusters through the night.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The PR title accurately describes the main feature: adding a `--allow-rootless` flag for rootless Podman support, which aligns with the primary code changes across multiple modules. |
| Linked Issues check | ✅ Passed | The PR comprehensively addresses the objectives in issue #22 by enabling rootless Podman support, documenting blockers with solutions, persisting rootless mode via marker files, and adapting storage paths to rootless contexts. |
| Out of Scope Changes check | ✅ Passed | All changes directly support rootless Podman functionality and are in scope for issue #22. Changes include flag/config addition, provider refactoring, retry constants, and rootless marker management. |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/minc/config.go`:
- Line 27: The "allow-rootless" boolean is being written as a string by `config
set` and thus won't be recognized by `writeConfigWithoutDefaults()` as the
default false; update the code path that handles `config set` (the code that
stores raw args as strings) to detect boolean-like strings ("true"/"false") and
coerce them to actual bool types before persisting, or alternatively add a small
normalization step in `writeConfigWithoutDefaults()` to parse string values for
known boolean keys (e.g., "allow-rootless") back to bool; ensure the change
references the `allow-rootless` key and the `writeConfigWithoutDefaults()`
function so the stored JSON uses a real boolean rather than a string.

In `@cmd/minc/main.go`:
- Around line 114-115: The call to rootlessmarker.Remove() is currently ignoring
errors which can leave stale CLI state while still printing success; update the
delete path in main (where rootlessmarker.Remove() is called) to check the
returned error, and if non-nil print a clear error to stderr (e.g., via
fmt.Fprintf(os.Stderr,...)) and exit non-zero (os.Exit(1)) or return the error
from the command handler so the process reports failure; ensure you reference
and handle the error from rootlessmarker.Remove() and avoid printing "Item
deleted" on failure.
- Around line 60-76: The deferred rootlessmarker.Remove() is never reached
because log.Fatal exits immediately after minc.Create fails; change the error
path so the marker is cleaned up explicitly before exiting: after err :=
minc.Create(cType) if err != nil { if allowRL { _ = rootlessmarker.Remove() }
log.Fatal("error creating cluster", "err", err) } or alternatively replace
log.Fatal with a non-exiting log call and then ensure you call
rootlessmarker.Remove() before os.Exit; reference minc.Create,
rootlessmarker.Remove, allowRL and createFailed when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 74c984c0-c2e4-4c57-9e5c-43793fc2ccaa

📥 Commits

Reviewing files that changed from the base of the PR and between 5d70364 and eea65c9.

📒 Files selected for processing (8)
  • cmd/minc/config.go
  • cmd/minc/main.go
  • pkg/providers/moby/provider.go
  • pkg/providers/options.go
  • pkg/providers/podman/provider.go
  • pkg/providers/register/register.go
  • pkg/providers/wait.go
  • pkg/rootlessmarker/marker.go

Comment thread: cmd/minc/config.go

```
"http-port": "9080",
"microshift-config": "",
"disable-overlay-cache": false,
"allow-rootless": false,
```
The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

allow-rootless will not round-trip cleanly through config set.

config set stores raw args as strings, so minc config set allow-rootless false will be written as "false" and writeConfigWithoutDefaults() will never treat it as the default false bool. The new key will linger in the JSON with the wrong type.


Comment thread: cmd/minc/main.go (Outdated)
Comment on lines 114 to 115:

```go
_ = rootlessmarker.Remove()
fmt.Println("Item deleted")
```

⚠️ Potential issue | 🟡 Minor

Don't silently ignore marker cleanup failures after delete.

If rootlessmarker.Remove() fails here, the command still reports success while leaving stale CLI state behind for the next Podman operation.

Possible fix:

```diff
-		_ = rootlessmarker.Remove()
+		if err := rootlessmarker.Remove(); err != nil {
+			fmt.Fprintf(os.Stderr, "warning: deleted cluster but failed to clear rootless marker: %v\n", err)
+		}
 		fmt.Println("Item deleted")
```

Address two issues found during code review:

1. log.Fatal calls os.Exit(1) which skips deferred functions. The
   rootless marker was set before minc.Create() but the deferred
   cleanup never ran on failure, leaving a stale marker that would
   cause future commands to incorrectly assume rootless mode.
   Fix: explicitly remove the marker before log.Fatal.

2. Marker removal errors on delete were silently ignored. If Remove()
   fails (e.g. permission issue), the user now sees a warning instead
   of a false success message.

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>
@AlexonOliveiraRH
Author

@coderabbitai review

Hi @praveenkumar, thanks for the initial investigation on this issue. Your findings around KubeletInUserNamespace and cgroup delegation pointed me in the right direction.

The CreateContainerError you hit was caused by CRI-O defaulting to systemd cgroup management, which fails without a D-Bus user session inside the container. Switching to cgroupfs via a 20- numbered drop-in (to override 10-microshift.conf) fixed that, but five more issues surfaced after that, most notably oom_score_adj EPERM and high-UID chown failures from OpenShift SCCs. The PR description documents all eight problems and their solutions.

Regarding the CodeRabbit comment about config set not round-tripping booleans cleanly: that's a pre-existing issue that also affects disable-overlay-cache, so I left it out of scope for this PR.

I tested this on Fedora 43 / kernel 6.19 / rootless Podman with the 4.19.0-okd-scos.17 image. All system pods come up successfully. Happy to address any feedback or split the PR if you'd prefer smaller chunks.


coderabbitai Bot commented Apr 15, 2026


@AlexonOliveiraRH, thank you for the detailed context — it's very helpful for the review. The explanation of the cgroupfs fallback via the 20-numbered drop-in (to override CRI-O's 10-microshift.conf default), the oom_score_adj EPERM workaround via the crun-rootless wrapper, and the high-UID chown issue from OpenShift SCCs are all well-documented in the PR description.

Noted on the config set boolean round-trip issue being pre-existing and also affecting disable-overlay-cache — I'll record that to avoid flagging it again in future reviews.


✏️ Learnings added
Learnt from: AlexonOliveiraRH
URL: https://github.com/minc-org/minc/pull/68

Timestamp: 2026-04-15T16:36:21.313Z
Learning: In the minc project (cmd/minc/config.go), the `config set` command has a pre-existing issue where boolean configuration values (e.g., `allow-rootless`, `disable-overlay-cache`) do not round-trip cleanly. This is a known limitation and intentionally left out of scope for individual feature PRs.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

AlexonOliveiraRH added a commit to AlexonOliveiraRH/minc-extension that referenced this pull request Apr 15, 2026
Add an "Allow rootless Podman" checkbox to the cluster creation form
that passes --allow-rootless to the minc CLI. Change the Linux audit
from a blocking error to an informational message when running as
non-root, since rootless mode is now a supported configuration.

Depends on minc-org/minc#68.
Comment thread: pkg/providers/wait.go (Outdated)

```go
// MicroShiftServiceMaxRetries bounds how long we poll systemctl is-active for the microshift
// unit inside the container. While the unit is "activating", is-active exits non-zero (often 3),
// so each poll is a retry. Rootless or slow storage can keep the unit activating for many minutes.
// With pkg/retry's delay = InitialDelay * attempt, 40 attempts implies ~26 minutes of backoff before failure.
```

Contributor

How does this imply a ~26 min delay? To me it is 40 attempts with a 2-second delay each, which should be ~80 sec.

Author

> how does this implies ~26 mins delay, to me it is 40 attempts with every 2 second delay which should be ~80 sec.

@praveenkumar, you're right that at first glance it looks like 80s, but pkg/retry/retry.go uses linear backoff (initialDelay * attempt), not constant delay. So attempt 1 sleeps 2s, attempt 2 sleeps 4s, ..., attempt 39 sleeps 78s. The total is 2 * sum(1..39) = 1560s ≈ 26 min. I'll update the comment to make the math clearer.
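The arithmetic from the reply can be reproduced directly. This loop mirrors the linear backoff described for pkg/retry (sleep before attempt i is initialDelay * i):

```shell
# Linear backoff: 40 attempts incur 39 sleeps of 2s, 4s, ..., 78s.
initial=2
retries=40
total=0
i=1
while [ "$i" -lt "$retries" ]; do
  total=$((total + initial * i))
  i=$((i + 1))
done
echo "${total}s"   # 1560s ≈ 26 minutes
```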

@praveenkumar
Contributor

So I just tested this PR without going through the codebase, and during my testing not all of the pods are coming up; there is some issue around mounts:

```console
$ cat /etc/fedora-release 
Fedora release 43 (Forty Three)

$ podman version
Client:        Podman Engine
Version:       5.8.2
API Version:   5.8.2
Go Version:    go1.25.9 X:nodwarf5
Git Commit:    5b263b5f5b48004a87caac44e67349a8266d9ef4
Built:         Tue Apr 14 05:30:00 2026
Build Origin:  Fedora Project
OS/Arch:       linux/amd64

$ oc get pods -A
NAMESPACE                              NAME                                       READY   STATUS              RESTARTS   AGE
kube-kindnet                           kube-kindnet-ds-bc7dr                      1/1     Running             0          167m
kube-proxy                             kube-proxy-8k85h                           1/1     Running             0          167m
kube-system                            csi-snapshot-controller-5ccb457c75-5f4ts   1/1     Running             0          167m
openshift-dns                          dns-default-vwvvz                          0/2     ContainerCreating   0          166m
openshift-dns                          node-resolver-vzsqf                        1/1     Running             0          167m
openshift-ingress                      router-default-b6d64f795-j97j6             0/1     ContainerCreating   0          167m
openshift-operator-lifecycle-manager   catalog-operator-79b5c486cd-nq4sk          0/1     ContainerCreating   0          167m
openshift-operator-lifecycle-manager   olm-operator-79c46b86cb-gxmq9              0/1     ContainerCreating   0          167m
openshift-service-ca                   service-ca-7cd95d84b-kghxg                 0/1     ContainerCreating   0          167m

$ oc describe pod router-default-b6d64f795-j97j6 -n openshift-ingress
[...]
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  168m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         167m                 default-scheduler  Successfully assigned openshift-ingress/router-default-b6d64f795-j97j6 to 127.0.0.1.nip.io
  Warning  FailedMount       167m                 kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_12_00_49.1867628363/token: invalid argument
  Warning  FailedMount       47m (x58 over 163m)  kubelet            (combined from similar events): MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_14_00_57.2050759217/token: invalid argument
  Warning  FailedMount       31s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_14_47_41.2350080494/token: invalid argument
```

The comment said "~26 minutes" but did not show the calculation,
which was confusing because 40 * 2s looks like ~80s at first
glance. pkg/retry uses linear backoff (sleep = initialDelay *
attempt), so the actual total is 2s * sum(1..39) = 1560s ≈ 26 min.

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>
@AlexonOliveiraRH
Author

AlexonOliveiraRH commented Apr 22, 2026

> So I just testing this PR without going through the codebase and during my testing all the pods are not coming up, some issue around mount [...]
  Warning  FailedMount       167m                 kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_12_01_05.1991489494/token: invalid argument
  Warning  FailedMount       166m                 kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_12_01_21.4116446698/token: invalid argument
  Warning  FailedMount       166m                 kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_12_01_53.361839715/token: invalid argument
  Warning  FailedMount       165m                 kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_12_02_57.2700154340/token: invalid argument
  Warning  FailedMount       47m (x58 over 163m)  kubelet            (combined from similar events): MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_14_00_57.2050759217/token: invalid argument
  Warning  FailedMount       31s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-cww82" : chown /var/lib/kubelet/pods/1d609b28-2384-4b4d-862d-4952c4b6e4d4/volumes/kubernetes.io~projected/kube-api-access-cww82/..2026_04_20_14_47_41.2350080494/token: invalid argument

@praveenkumar, this is Problem #5 documented in the PR description. The error chown .../token: invalid argument happens because the default subordinate UID range on the host (165536:165536, mapping up to UID ~331071) is too small for OpenShift's Security Context Constraints, which assign UIDs around 1000100000. This requires the host prerequisite described in the PR under "Host Prerequisites - Step 2":

sudo usermod --add-subuids 165536-1265535999 --add-subgids 165536-1265535999 $(whoami)
podman system migrate

After that, minc delete + minc create --allow-rootless should bring all pods up. Could you try that?
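The range arithmetic can be sanity-checked with a back-of-the-envelope calculation, using the numbers from this thread. The mapping rule assumed here (container UID u, for u >= 1, maps to host UID subuid_start + u - 1, so the range must contain at least u IDs) is standard user-namespace behavior, not minc-specific code:

```shell
# Does an in-container UID fall inside the host's subordinate UID range?
scc_uid=1000100000                            # typical OpenShift SCC-assigned UID
default_count=165536                          # stock allocation per this thread (165536:165536)
expanded_count=$((1265535999 - 165536 + 1))   # range added by the usermod above

[ "$scc_uid" -le "$default_count" ]  && echo "default range: ok" || echo "default range: chown fails (EINVAL)"
[ "$scc_uid" -le "$expanded_count" ] && echo "expanded range: ok" || echo "expanded range: chown fails (EINVAL)"
```

The default allocation tops out around UID 331071, six orders of magnitude short of the SCC range, which is exactly why the kubelet's chown on the projected token returns EINVAL.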

AlexonOliveiraRH added a commit to AlexonOliveiraRH/minc-extension that referenced this pull request May 4, 2026
Add an "Allow rootless Podman" checkbox to the cluster creation form
that passes --allow-rootless to the minc CLI. Change the Linux audit
from a blocking error to an informational message when running as
non-root, since rootless mode is now a supported configuration.

Depends on minc-org/minc#68.

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>
@AlexonOliveiraRH
Author


Hey @praveenkumar, just a follow-up on this: the companion PR on the extension side (minc-org/minc-extension#396) has been tested and approved by @danivilla9. During testing, he also initially hit the same port binding error (exit status 125) when running rootless, which was resolved by applying the host prerequisites documented here, specifically net.ipv4.ip_unprivileged_port_start. After that, the cluster came up successfully both from CLI and from the Podman Desktop extension.

The chown error you encountered is the same scenario, problem #5 in the PR description. It requires expanding the subordinate UID/GID ranges on the host (Step 4 in "Host Prerequisites") so that OpenShift SCC UIDs (~1000100000) fall within the mapped range:

sudo usermod --add-subuids 165536-1265535999 --add-subgids 165536-1265535999 $(whoami)
podman system migrate

Could you give it another try with all four host prerequisites applied? Happy to help troubleshoot if anything else comes up.
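For reference, the net.ipv4.ip_unprivileged_port_start prerequisite mentioned above can be applied as below. This is a host configuration fragment, not minc code; the drop-in file name is a conventional choice, and the threshold of 80 (covering both 80 and 443) is an assumption:

```shell
# One-off, non-persistent:
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# Persistent across reboots:
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-rootless-ports.conf
sudo sysctl --system
```

Without this, rootless Podman cannot bind the ingress router's low ports, which surfaces as the exit status 125 port-binding error mentioned above.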

benoitf pushed a commit to minc-org/minc-extension that referenced this pull request May 5, 2026
* feat: add rootless Podman support for Linux

Add an "Allow rootless Podman" checkbox to the cluster creation form
that passes --allow-rootless to the minc CLI. Change the Linux audit
from a blocking error to an informational message when running as
non-root, since rootless mode is now a supported configuration.

Depends on minc-org/minc#68.

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>

* fix: reword audit message to conditional phrasing

The audit info message now instructs the user to enable the toggle
rather than asserting the cluster will be created in rootless mode,
since the audit runs independently of the form parameters.

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>

---------

Signed-off-by: Alexon Oliveira <alolivei@redhat.com>