Isolated, disposable Ubuntu Desktop VMs for running Claude Code safely
Claude Code is powerful, and it runs real commands on your real machine.
qdesk gives it a disposable Ubuntu 24.04 Desktop sandbox instead: shared folders
bridge your actual work into the VM, and `--reset` wipes the slate clean when you're done.
Your host stays untouched no matter what Claude does inside.
Claude Code is an agentic coding tool that executes real shell commands, writes files, and installs packages autonomously. Running it directly on your host is convenient, until it isn't.
qdesk puts Claude Code in a box:
- Claude Code operates inside the VM - your home directory, dotfiles, and credentials are never touched
- Shared folders expose only the specific project directories you choose
- Blew something up? `--reset` and you're back to a clean Ubuntu Desktop in under a minute
- Run one VM per project or per Claude session - each is fully isolated from the others
- Claude Code ready - full Ubuntu Desktop + Chromium, SSH access, shared folders for your projects
- Persistent by default - boots straight to the desktop in seconds after first provision
- Reset any time - wipe a VM back to a clean slate with `--reset`, base image untouched
- Multiple VMs, one base - each config file gets its own disk, all sharing `base.qcow2`
- Flexible shared folders - any number of host-to-guest folder mappings, fully independent paths
- SPICE display - smooth GUI access with clipboard integration via `remote-viewer`
- Cloud-init provisioning - install packages, set env vars, drop files, add SSH keys on first boot
- Simple config - one `.conf` file per VM, no YAML manifests, no daemons
```
sudo apt install qemu-system-x86 qemu-utils cloud-image-utils virt-viewer
```

Enable KVM (you'll want this - software emulation is painfully slow):

```
sudo usermod -aG kvm $USER   # re-login after
```

```
chmod +x build-base-image.sh launch-vm.sh
```
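Before building, it's worth confirming KVM is actually usable. A minimal sketch (qdesk's launcher may do its own check; this is just a manual sanity test):

```shell
# KVM is usable when /dev/kvm exists and the current user can write to it
# (membership in the kvm group, as added above, grants this after re-login).
kvm_ready() {
  [ -e /dev/kvm ] && [ -w /dev/kvm ]
}

if kvm_ready; then
  echo "KVM ready - VMs will run with hardware acceleration"
else
  echo "No usable /dev/kvm - expect slow software (TCG) emulation"
fi
```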
```
./build-base-image.sh
```

Downloads the Ubuntu 24.04 server cloud image, provisions it with Ubuntu Desktop
Minimal + Chromium + SPICE + SSH, and saves `base.qcow2`. The source image
is cached in `.cache/` so rebuilds are faster.
```
mkdir research
cp vm.conf.example research/vm.conf
cp user-data.tpl.example research/user-data.tpl
cp meta-data.example research/meta-data
```

Point shared folders at the project(s) you want Claude Code to work on - nothing else on your host will be accessible to the VM:
```
VM_NAME="research"
SSH_HOST_PORT="2222"
SPICE_PORT="5900"
SHARED_FOLDER_1="/home/you/projects/myapp"
SHARED_MOUNT_1="/home/vmuser/myapp"
```

Add your SSH public key inside the `users:` block (top-level placement is
silently ignored by cloud-init for non-default users):
```
users:
  - name: __VM_USER__
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... you@host
```

Uncomment any packages, env vars, or tools you need in the `runcmd` section.
```
./launch-vm.sh research
```

First launch provisions the VM and creates `research/disk.qcow2`.
Every launch after that boots the existing disk - no waiting.
| Method | Command |
|---|---|
| SSH | `ssh -p 2222 vmuser@localhost` |
| GUI | `remote-viewer spice://localhost:5900` |
The VM auto-logs in to a GNOME X11 session. Clipboard sync works once the desktop is fully loaded.
SSH is available ~30-60 s after first boot while cloud-init finishes. Subsequent boots go straight to the desktop.
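If you script against the VM, you can poll until SSH answers instead of guessing at the delay. A small sketch (the port and user are assumptions taken from the example config, not fixed by qdesk):

```shell
# Retry a command once per second until it succeeds or the budget runs out.
wait_for() {
  tries="$1"; shift
  while [ "$tries" -gt 0 ]; do
    if "$@"; then return 0; fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Example: block until SSH is reachable and cloud-init reports done.
# wait_for 90 ssh -p 2222 -o ConnectTimeout=2 vmuser@localhost \
#   'cloud-init status --wait' >/dev/null 2>&1
```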
Each VM lives in its own directory. All share `base.qcow2` - new disks
cost only a few MB at creation:
```
research/ → research/disk.qcow2  (ssh :2222, spice :5900)
dev/      → dev/disk.qcow2       (ssh :2223, spice :5901)
testing/  → testing/disk.qcow2   (ssh :2224, spice :5902)
```
```
mkdir dev
cp vm.conf.example dev/vm.conf
cp user-data.tpl.example dev/user-data.tpl
cp meta-data.example dev/meta-data
# set unique SSH_HOST_PORT and SPICE_PORT in dev/vm.conf
./launch-vm.sh dev
```

Add as many pairs as you need - numbering must be consecutive from 1:
```
# vm.conf
SHARED_FOLDER_1="/home/you/projects"   # must exist on host
SHARED_MOUNT_1="/home/vmuser/projects" # created in guest automatically
SHARED_FOLDER_2="/home/you/notes"
SHARED_MOUNT_2="/home/vmuser/notes"
```

Note: virtio-9p shares directories only. Pointing at a file is caught early with a clear error and a suggested fix.
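The "consecutive from 1" rule can be verified with a small loop. This is a sketch of what a launcher could do internally - the function and variable handling here are illustrative, not qdesk's actual code:

```shell
# Walk SHARED_FOLDER_1, SHARED_FOLDER_2, ... until the first unset entry;
# fail if any configured path is missing or not a directory (9p shares dirs only).
check_shares() {
  i=1
  while true; do
    eval "folder=\${SHARED_FOLDER_$i:-}"
    [ -n "$folder" ] || break
    if [ ! -d "$folder" ]; then
      echo "SHARED_FOLDER_$i is not a directory: $folder" >&2
      return 1
    fi
    i=$((i + 1))
  done
}
```

Because the loop stops at the first unset variable, a gap in the numbering silently truncates the list - which is why the numbers must be consecutive.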
`user-data.tpl` is a standard cloud-config file that runs once on first boot.
Re-run it any time with `--reset`.
Installing packages
```
runcmd:
  - apt-get install -y ripgrep fd-find python3-pip
```

Environment variables
For static values (all sessions, including the GUI), use `/etc/environment`:
```
write_files:
  - path: /etc/environment
    content: |
      MY_API_KEY="secret"
      RUST_LOG="debug"
```

For shell logic or `export`, use `/etc/profile.d/` (login shells only):
```
write_files:
  - path: /etc/profile.d/vm-env.sh
    permissions: '0644'
    content: |
      export PATH="$HOME/.cargo/bin:$PATH"
```

Template tokens
These tokens in `user-data.tpl` are substituted at launch time from the config file:
| Token | Source |
|---|---|
| `__VM_USER__` | `VM_USER` |
| `__VM_PASSWORD__` | `VM_PASSWORD` |
| `__VM_HOSTNAME__` | `VM_NAME` |
| `__SHARED_MOUNTS__` | auto-generated from `SHARED_FOLDER_N` / `SHARED_MOUNT_N` |
| `__SHARED_MKDIR__` | auto-generated |
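Token substitution of this kind is plain text replacement, so it can be sketched with `sed`. This is an illustration of the mechanism, not necessarily `launch-vm.sh`'s actual implementation, and it covers only the simple tokens (the auto-generated `__SHARED_MOUNTS__` / `__SHARED_MKDIR__` blocks need extra logic):

```shell
# Render a template file into its final form by replacing placeholder tokens
# with values from the environment. Assumes values contain no "/" characters.
render_template() {
  tpl="$1"; out="$2"
  sed -e "s/__VM_USER__/${VM_USER}/g" \
      -e "s/__VM_HOSTNAME__/${VM_NAME}/g" \
      "$tpl" > "$out"
}
```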
| Command | Effect |
|---|---|
| `./launch-vm.sh` | Boot (provision on first run, resume on subsequent) |
| `./launch-vm.sh --reset` | Wipe disk, reprovision on next launch |
| `./build-base-image.sh` | Rebuild base image (all disks become stale - reset each VM after) |
| `Ctrl-A x` | Kill VM from the terminal |
| `Ctrl-A c` | Switch to QEMU monitor |
After a base image rebuild, stale overlays are detected automatically:
```
[FAIL] vm-disk.qcow2 is stale - base.qcow2 was rebuilt after the overlay was created.
       Run: ./launch-vm.sh --reset
```
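The detection itself only needs file timestamps. A hedged sketch of the idea (not necessarily qdesk's exact check):

```shell
# An overlay is stale when the base image was modified after the overlay
# was created, i.e. base.qcow2 is newer than disk.qcow2.
is_stale() {
  base="$1"; overlay="$2"
  [ "$base" -nt "$overlay" ]
}
```

A timestamp check is conservative: rebuilding the base always invalidates every overlay, because qcow2 overlays read unmodified blocks from their backing file and a changed backing file silently corrupts them.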
```
qdesk/
├── base.conf              # Base image build settings (shared, safe to commit)
├── base.qcow2             # Built once - shared by all VMs
├── build-base-image.sh
├── launch-vm.sh
├── vm.conf.example        # Copy into a VM directory to get started
├── user-data.tpl.example  # Copy into a VM directory to get started
├── meta-data.example      # Copy into a VM directory to get started
├── research/              # One directory per VM
│   ├── vm.conf            # VM config (gitignored - may contain secrets)
│   ├── user-data.tpl      # Cloud-init template (gitignored)
│   ├── meta-data          # NoCloud metadata (gitignored)
│   ├── disk.qcow2         # Persistent disk - created on first launch
│   └── seed.iso           # Cloud-init ISO - created on first launch
└── .cache/                # Cached cloud image download
```
SSH times out on first boot
Cloud-init is still running. Wait 60-90 s and retry, or watch progress in the SPICE console:

```
sudo cloud-init status --wait
```

Shared folder not mounted
Check the host path exists and is a directory, then inside the VM:
```
mount | grep 9p
dmesg | grep 9p
```

Chromium missing
The snap install can fail if snapd is slow at first boot. SSH in and run:

```
sudo snap install chromium
```

To make it permanent, rebuild the base image.
No clipboard
Check that `spice-vdagentd` is running inside the VM:

```
systemctl status spice-vdagentd
```

Clipboard only works over the SPICE connection, not over SSH.
Built through an iterative conversation with Claude by Anthropic: design, implementation, and documentation written entirely by AI based on requirements and feedback.
A tool for safely running Claude Code, built by Claude.