The complete guide to running OpenShift in your home lab, on a single machine.
Many people have used this repo to get OpenShift running on everything from Intel NUCs to enterprise servers. Whether you're learning Kubernetes, building a home lab, or need a portable demo environment, Single Node OpenShift (SNO) is a great way to get started with enterprise-grade Kubernetes!
These step-by-step video guides have helped nearly 15,000 viewers get their SNO clusters up and running:
- Single Node OpenShift Installation Walkthrough — Complete installation using the Assisted Installer
- OpenShift Virtualization — Run containers and virtual machines side-by-side on the same control plane
Before running the day 2 playbook you'll need the following on your workstation:
- Ansible 2.14+: `pip install ansible`
- Ansible collections: `ansible-galaxy collection install kubernetes.core community.general`
- oc CLI: Download from console.redhat.com and add it to your `$PATH`
- kubeconfig: Export your cluster's kubeconfig — `export KUBECONFIG=~/path/to/kubeconfig`
- SSH key: The private key you registered during Assisted Installer image creation. Defaults to `~/.ssh/id_rsa` — override with `-e ssh_key_path=~/.ssh/your_key` if yours differs.
Identify your storage device first. SSH into the SNO node and run `lsblk` to confirm which device is your secondary data disk. It is commonly `/dev/nvme1n1` or `/dev/sdb` but varies by hardware. Set `storage_device` in `vars/defaults.yml` accordingly before running the playbook.
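A quick way to survey candidate disks (run on the SNO node; columns and values will vary with your hardware):

```shell
# List whole disks only (-d, no partitions) with name, size, type, and model
lsblk -d -o NAME,SIZE,TYPE,MODEL
```

The disk that is not your 120 GB boot device is the one to use for `storage_device`.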
Assisted Installer does not wipe secondary disks. If you reinstall SNO on a machine that previously ran LVMS, the secondary disk will still carry stale LVM partition-table entries and device-mapper state from the previous cluster. The day 2 playbook handles this automatically — it runs `dmsetup remove_all`, `partx -d`, and `sgdisk --zap-all` via SSH before creating the LVMCluster. If LVMS still reports `has children block devices and could not be considered` after a fresh install, SSH into the node and run:
```shell
sudo dmsetup remove_all -f
sudo partx -d /dev/nvme1n1   # use your data disk device
sudo sgdisk --zap-all /dev/nvme1n1
```

Then re-run the playbook. The playbook's wipe tasks are skipped if LVM children are already detected (idempotent on re-runs), so this is a safe operation.
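To confirm the disk is actually clean before re-running the playbook, `wipefs` in no-act mode lists any remaining filesystem or LVM signatures without modifying anything (run on the node; `/dev/nvme1n1` is a placeholder for your data disk):

```shell
# --no-act: report signatures only, write nothing; empty output means clean
sudo wipefs --no-act /dev/nvme1n1
```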
`oc debug` cannot wipe disks reliably. The playbook uses SSH directly to the node for all disk operations. `oc debug` sessions hit kernel ioctl limitations (`BLKRRPART: Device or resource busy`) that make partition-table reloads unreliable. SSH as the `core` user avoids this entirely.
Follow the Installation Guide to deploy SNO using Red Hat's Assisted Installer. The process takes about 45 minutes and produces a fully functional cluster.
Everything after installation is automated by a single Ansible playbook:
```shell
# Install the required Ansible collections
ansible-galaxy collection install kubernetes.core community.general

# Clone the repo
git clone https://github.com/ryannix123/single-node-openshift.git
cd single-node-openshift

# Run the day 2 playbook
ansible-playbook sno-day2.yml
```

The playbook will:
- Wipe and prepare your secondary drive for LVM storage
- Install the LVM Storage Operator and create an LVMCluster
- Set `lvms-vg1` as the default StorageClass
- Patch the image registry to use persistent storage
- Patch cluster monitoring (Prometheus) to use persistent storage
- Install OpenShift Virtualization and activate the HyperConverged instance
Each step has readiness gates — it won't proceed until the previous phase is healthy.
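After the run, a few `oc` queries can confirm each phase landed (the namespaces shown are the operator defaults; adjust if your cluster differs):

```shell
oc get storageclass                      # lvms-vg1 should show (default)
oc get pvc -n openshift-image-registry   # registry PVC should be Bound
oc get pvc -n openshift-monitoring       # Prometheus PVCs should be Bound
oc get hyperconverged -n openshift-cnv   # Virtualization instance status
```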
Override any variable on the command line or create a vars file:
```shell
# Common overrides
ansible-playbook sno-day2.yml \
  -e kubeconfig=/path/to/kubeconfig \
  -e storage_device=/dev/nvme1n1 \
  -e registry_size=200Gi \
  -e monitoring_size=80Gi
```

See `vars/defaults.yml` for all available variables and their defaults.
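Equivalently, you can keep overrides in a small vars file and pass it with Ansible's `-e @file` syntax (the values below are illustrative):

```shell
# Write an overrides file (illustrative values; names match the -e flags above)
cat > my-vars.yml <<'EOF'
kubeconfig: /path/to/kubeconfig
storage_device: /dev/nvme1n1
registry_size: 200Gi
monitoring_size: 80Gi
EOF

# Then: ansible-playbook sno-day2.yml -e @my-vars.yml
```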
After the day 2 playbook completes, you can layer in additional capabilities using the pre-built manifests:
| Add-on | Manifests | Notes |
|---|---|---|
| Ansible Automation Platform | `manifests/operators/aap/` | Requires 12+ cores, 48 GB RAM |
| Advanced Cluster Management | `manifests/operators/acm/` | Requires 16+ cores, 64 GB RAM |
| Let's Encrypt TLS certs | `manifests/tls/` | See TLS Guide |
| Setup | CPU | RAM | Storage |
|---|---|---|---|
| Base cluster | 8 cores | 32 GB | 120 GB boot + data disk |
| With Virtualization | 8 cores | 32 GB | 120 GB + 100 GB |
| With AAP | 12 cores | 48 GB | 120 GB + 100 GB |
| With ACM | 16 cores | 64 GB | 120 GB + 200 GB |
| Full stack | 16+ cores | 64 GB+ | 120 GB + 500 GB |
An NVMe drive for your data disk makes a noticeable difference in performance.
```
sno-day2.yml                 # Day 2 operations playbook — start here
vars/
  defaults.yml               # All tunable variables with documentation
docs/
  01-installation.md         # Getting SNO installed via Assisted Installer
  02-tls.md                  # Free Let's Encrypt certificates
  03-optional-operators.md   # AAP, ACM, and other add-ons
manifests/
  operators/
    aap/                     # Ansible Automation Platform 2.6
    acm/                     # Advanced Cluster Management
  tls/                       # Let's Encrypt configuration
scripts/
  sno-shutdown.sh            # Safe shutdown with certificate rotation
  renew-letsencrypt.sh       # Certificate renewal
```
SNO clusters that sit idle can develop certificate problems. Always shut down using the included script:
```shell
./scripts/sno-shutdown.sh
```

It handles certificate rotation automatically before the node powers off.
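If a cluster that sat powered off too long comes back with the node stuck in NotReady, pending certificate signing requests are the usual cause; approving them is a standard OpenShift recovery step (not specific to this repo's script):

```shell
# Approve any pending CSRs so the kubelet can rejoin the cluster
oc get csr -o name | xargs --no-run-if-empty oc adm certificate approve
```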
- OpenShift 4.15 – 4.21
- Ansible Automation Platform 2.5, 2.6
- Advanced Cluster Management 2.10, 2.11, 2.12
Found a bug? Have a better way to do something? PRs and issues are welcome.
Created by Ryan Nix. This is a personal project to help the OpenShift community — not an official Red Hat resource.
If this repo helped you, consider giving it a ⭐. It helps others find it too.