The Podman AI Stack provides a secure, configurable, and systemd-native orchestration stack for deploying containerized AI environments (Open WebUI and Ollama).
It leverages Podman Quadlets to integrate seamlessly with systemd and supports both rootless and rootfull deployments on Fedora and other RPM-based distributions.
Pull requests are validated with ShellCheck, actionlint, Markdown, and RPM
checks, plus install smoke tests across Fedora 40, 41, 42, and Rawhide
covering the current-user rootless, service-user rootless, and rootfull
package paths.
- Rootless-first — Run entirely without root privileges
- Systemd-native — Managed via Podman Quadlets
- Secure by default — Isolated networking, read-only root filesystems, dropped capabilities, and strict SELinux boundaries
- Flexible configuration — Environment-based configuration via `/etc/sysconfig/podman-ai-stack`
- Multiple deployment modes — User, dedicated service user, or system-wide
Packages are distributed via a dedicated DNF repository hosted on GitHub Pages:
🌐 https://fedorabee.github.io/podman-ai-stack/rpms/
```shell
sudo tee /etc/yum.repos.d/podman-ai-stack.repo <<'EOF'
[podman-ai-stack]
name=Podman AI Stack - Stable
baseurl=https://fedorabee.github.io/podman-ai-stack/rpms/latest/stable/
enabled=1
gpgcheck=1
gpgkey=https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key

[podman-ai-stack-testing]
name=Podman AI Stack - Testing
baseurl=https://fedorabee.github.io/podman-ai-stack/rpms/latest/testing/
enabled=0
gpgcheck=1
gpgkey=https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key
EOF
```

Then refresh the metadata cache:

```shell
sudo dnf makecache
```

The GPG key is available at https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key.

Fingerprint:

```
8D12 D614 9E1E 5E83 29DD E6FD 9B99 A03F 6577 BF59
```
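Before enabling the repository, you may want to confirm that the downloaded key actually matches this fingerprint. A minimal check, assuming `curl` and `gpg` are installed:

```shell
# Download the published key and print its fingerprint for comparison
# against the value listed above (this does not import the key).
curl -fsSL https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key \
  | gpg --show-keys --fingerprint
```

Only proceed with `gpgcheck=1` installs if the printed fingerprint matches.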
The stack is split into a base package and deployment-specific subpackages. By default, only the Open WebUI service is started.
Ideal for personal workstations.
```shell
sudo dnf install podman-ai-stack
systemctl --user daemon-reload
systemctl --user start podman-ai-stack-pod
```

Monitor logs:

```shell
journalctl --user -u open-webui.service -f
```

Recommended for server-like deployments.
```shell
sudo dnf install podman-ai-stack-user
sudo -u podman-ai systemctl --user start podman-ai-stack-pod
```

ℹ️ Lingering is enabled automatically by the package.

Monitor logs:

```shell
sudo -u podman-ai XDG_RUNTIME_DIR=/run/user/$(id -u podman-ai) \
  journalctl --user -u ollama.service -f
```

```shell
sudo dnf install podman-ai-stack-root
sudo systemctl start podman-ai-stack-pod
```

Monitor logs:

```shell
sudo journalctl -u podman-ai-stack-pod.service -f
```

The stack includes an optional Ollama service.
By default, Open WebUI connects to:

```
http://localhost:11434
```

```shell
# Rootless (current user)
systemctl --user start ollama

# Dedicated user
sudo -u podman-ai systemctl --user start ollama

# Rootfull
sudo systemctl start ollama
```

To point Open WebUI at an external Ollama instance, set:

```
OLLAMA_BASE_URL=<your-server>
```
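As a quick sanity check that Ollama is reachable on its default bind address, you can query its model-list endpoint (adjust the host if you changed `OLLAMA_BASE_URL`):

```shell
# Lists locally available models as JSON; an empty "models" array still
# confirms the API is up and reachable.
curl -s http://localhost:11434/api/tags
```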
By default, the Podman AI Stack binds its ports strictly to 127.0.0.1
(localhost). This "safe-by-default" approach ensures that if you install the
stack on a cloud VPS without a firewall, your LLM models and chat interface are
not instantly exposed to the public internet.
The most secure way to expose Open WebUI is by placing a reverse proxy (like Nginx or Caddy) in front of it to handle TLS/SSL encryption and authentication.
Example Caddyfile:
```
ai.yourdomain.com {
    reverse_proxy 127.0.0.1:3000
}
```

If you are deploying on a trusted local network (LAN) and want the services reachable by other devices without a proxy, you can override the bind address.
If building from source, set the `BIND_IP` variable:

```shell
make BIND_IP=0.0.0.0 rpm
```

If installed via RPM, override the pod definition via a systemd drop-in:

```shell
systemctl --user edit podman-ai-stack-pod.pod
```

Add the following to bind to all interfaces (`0.0.0.0`):

```ini
[Pod]
# Clear existing ports first
PublishPort=
PublishPort=0.0.0.0:3000:8080
PublishPort=0.0.0.0:11434:11434
```

Then reload and restart:

```shell
systemctl --user daemon-reload
systemctl --user restart podman-ai-stack-pod
```

AI workloads require specific hardware considerations, particularly GPU VRAM. For a detailed breakdown of model sizes (e.g., Llama 3 8B vs 70B) and instructions on how to safely tweak CPU and memory constraints via systemd drop-ins, please read the Hardware Guide.
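As one illustrative sketch of such a resource drop-in (the limits below are hypothetical; size them to your own hardware), you could cap the Ollama container via `systemctl --user edit ollama`:

```ini
[Container]
# Hypothetical limits: cap the container at 4 CPUs and 16 GiB of RAM.
# --cpus and --memory are standard podman run flags passed through verbatim.
PodmanArgs=--cpus=4 --memory=16g
```

Follow with `systemctl --user daemon-reload && systemctl --user restart ollama` to apply.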
Configuration files are loaded in order:

1. `/etc/sysconfig/podman-ai-stack`
2. `~/.config/podman-ai-stack.env`

Common options:

- `OLLAMA_BASE_URL`
- `OLLAMA_HOST`
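For example, a user-level override file might look like this (the address is illustrative):

```
# ~/.config/podman-ai-stack.env
# Point Open WebUI at a remote Ollama instance (example address).
OLLAMA_BASE_URL=http://192.168.1.50:11434
```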
Certain parameters (ports, limits, image versions) are defined at build time.
See: DEVELOPMENT.md
User-level Quadlets in `~/.config/containers/systemd/` override the system templates in `/etc/containers/systemd/users/`:

```shell
mkdir -p ~/.config/containers/systemd/
cp /etc/containers/systemd/users/open-webui.container \
  ~/.config/containers/systemd/
systemctl --user daemon-reload
systemctl --user restart open-webui
```

For larger deployments, you can decouple Open WebUI's state from SQLite to
PostgreSQL. Uncomment and configure `DATABASE_URL` in
`/etc/sysconfig/podman-ai-stack`:
```
DATABASE_URL=postgresql://openwebui:openwebui_secret@localhost:5432/openwebui
```

We ship an optional Postgres Quadlet template if you wish to run it within the stack:

```shell
# Start the postgres database
systemctl --user start postgres

# Restart open-webui to pick up the new database connection
systemctl --user restart open-webui
```

Edit `~/.config/containers/systemd/podman-ai-stack.pod`:

```ini
[Pod]
# Network=podman-ai-stack.network
```

Then reload and restart:

```shell
systemctl --user daemon-reload
systemctl --user restart podman-ai-stack-pod
```

For enhanced security, avoid storing database passwords or external API keys (like OpenAI keys) in plain-text configuration files. Podman Quadlets support native secrets.
Initialize your secrets using the `podman secret create` command:

```shell
# Set a PostgreSQL password
echo "my-secret-db-pass" | podman secret create postgres_password -

# (Optional) Set an external database URL for Open WebUI
echo "postgresql://openwebui:my-secret-db-pass@localhost:5432/openwebui" | \
  podman secret create openwebui_database_url -

# (Optional) Set an OpenAI API Key
echo "sk-your-api-key" | podman secret create openai_api_key -
```

(Note: If using the dedicated service user deployment, prefix with
`sudo -u podman-ai`.)
Override your Quadlets to use the created secrets via systemd drop-ins
(`systemctl --user edit open-webui` or `postgres`):

```ini
[Container]
Secret=postgres_password,type=env,target=POSTGRES_PASSWORD
# Secret=openwebui_database_url,type=env,target=DATABASE_URL
# Secret=openai_api_key,type=env,target=OPENAI_API_KEY
```

Or uncomment the `Secret=` directives directly if you manage the `.container`
templates manually.
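To double-check which secrets exist before restarting the services, you can list them by name (`podman secret ls` never prints the secret values):

```shell
# List configured secrets; prefix with `sudo -u podman-ai` for the
# dedicated service user deployment.
podman secret ls
```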
The Quadlet containers are configured to automatically pull new image versions
(AutoUpdate=registry). To operationalize this, enable the Podman auto-update
timer:
```shell
# Rootless (current user or dedicated user)
systemctl --user enable --now podman-auto-update.timer

# Rootfull
sudo systemctl enable --now podman-auto-update.timer
```

ℹ️ For Rootfull deployments, the RPM package automatically enables this timer during installation.
Are you upgrading from a previous version (e.g., v0.4.x to v0.5.x)? Check out our Migration Guide for information on database transitions and backwards compatibility.
Open WebUI and Ollama store important state (chats, configurations, and models) in Podman volumes. We provide a script to safely export these volumes without corrupting active database writes by temporarily pausing the container processes.
Run the included backup script to pause the containers and export their volumes safely:
```shell
./scripts/backup-ai-stack.sh /path/to/backup/dir
```

(Note: If using the dedicated service user deployment, prefix with
`sudo -u podman-ai`.)
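If you want backups on a schedule, one possible approach (not shipped with the package; the unit names and paths below are hypothetical) is a systemd user timer wrapping the script:

```ini
# --- ~/.config/systemd/user/ai-stack-backup.service (example) ---
[Unit]
Description=Back up Podman AI Stack volumes

[Service]
Type=oneshot
# Adjust both paths to your checkout and backup destination.
ExecStart=%h/podman-ai-stack/scripts/backup-ai-stack.sh %h/backups/ai-stack

# --- ~/.config/systemd/user/ai-stack-backup.timer (example) ---
[Unit]
Description=Nightly Podman AI Stack backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user daemon-reload && systemctl --user enable --now ai-stack-backup.timer`.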
To restore from a backup archive:
```shell
# 1. Stop the pod
systemctl --user stop podman-ai-stack-pod

# 2. Import the volume data
podman volume import open-webui /path/to/backup/dir/open-webui-backup.tar
podman volume import ollama /path/to/backup/dir/ollama-backup.tar

# 3. Restart the pod
systemctl --user start podman-ai-stack-pod
```

To restart the stack in each deployment mode:

```shell
# Rootless
systemctl --user restart podman-ai-stack-pod

# Dedicated user
sudo -u podman-ai systemctl --user restart podman-ai-stack-pod

# Rootfull
sudo systemctl restart podman-ai-stack-pod
```

The package repository contains:
- RPM packages: `podman-ai-stack`, `podman-ai-stack-user`, `podman-ai-stack-root`
- Repository metadata (`repodata/`)
- GPG signing key
The project includes a `scripts/gitops-pr-cli-tool.sh` to automate and enforce
the Pull Request workflow. It performs the following checks:

- Branch naming validation.
- Version extraction from the branch name.
- Verification that `CHANGELOG.md` contains the version.
- Verification that the RPM spec file's `Version` field is automatically updated by `scripts/update-rpm-metadata.py` from the `Makefile`'s `VERSION` variable, and that this value is validated.
- Ensuring the `Makefile` version is synchronized with the RPM spec and `CHANGELOG.md`.
- Automatic PR body generation from commit messages.

Requirements:

- GitHub CLI (`gh`): The tool requires the GitHub CLI to be installed and authenticated.
Usage:
```shell
./scripts/gitops-pr-cli-tool.sh --target <branch-name> \
  [--base main] \
  [--title "PR Title"] \
  [--message "PR Body"] \
  [--reviewers user1,user2] \
  [--remote origin] \
  [--dry-run]
```

A `scripts/git-clean-switch-tool.sh` is provided to safely reset the current Git
branch to a remote source, clean the worktree, and prepare a development branch.
This is useful for quickly synchronizing a development environment to a known
good state.
Usage:
```shell
./scripts/git-clean-switch-tool.sh \
  [--base main] \
  [--target dev] \
  [--backup backup-main-timestamp] \
  [--remote origin] \
  [--dry-run]
```

- 🌐 DNF Repository: https://fedorabee.github.io/podman-ai-stack/rpms/
- 📦 Repo Source (gh-pages): https://github.com/fedoraBee/podman-ai-stack/tree/gh-pages
- 💻 Development: https://github.com/fedoraBee/podman-ai-stack
This is an independent project and not affiliated with Fedora.
Use in production environments at your own discretion.