jeromer/qdesk
🖥️ qdesk

Isolated, disposable Ubuntu Desktop VMs for running Claude Code safely

Claude Code is powerful, and it runs real commands on your real machine. qdesk gives it a disposable Ubuntu 24.04 Desktop sandbox instead: shared folders bridge your actual work into the VM, and --reset wipes the slate clean when you're done. Your host stays untouched no matter what Claude does inside.



🎯 Why qdesk

Claude Code is an agentic coding tool that executes real shell commands, writes files, and installs packages autonomously. Running it directly on your host is convenient, until it isn't.

qdesk puts Claude Code in a box:

  • Claude Code operates inside the VM; your home directory, dotfiles, and credentials are never touched
  • Shared folders expose only the specific project directories you choose
  • Blew something up? --reset and you're back to a clean Ubuntu Desktop in under a minute
  • Run one VM per project or per Claude session; each is fully isolated from the others

✨ Features

  • Claude Code ready: full Ubuntu Desktop + Chromium, SSH access, shared folders for your projects
  • Persistent by default: boots straight to the desktop in seconds after first provision
  • Reset any time: wipe a VM back to a clean slate with --reset, base image untouched
  • Multiple VMs, one base: each config file gets its own disk, all sharing base.qcow2
  • Flexible shared folders: any number of host↔guest folder mappings, fully independent paths
  • SPICE display: smooth GUI access with clipboard integration via remote-viewer
  • Cloud-init provisioning: install packages, set env vars, drop files, add SSH keys on first boot
  • Simple config: one .conf file per VM, no YAML manifests, no daemons

📋 Requirements

sudo apt install qemu-system-x86 qemu-utils cloud-image-utils virt-viewer

Enable KVM (you'll want this; software emulation is painfully slow):

sudo usermod -aG kvm $USER   # re-login after

🚀 Quick start

1 - Build the base image (once, ~15–40 min)

chmod +x build-base-image.sh launch-vm.sh
./build-base-image.sh

Downloads the Ubuntu 24.04 server cloud image, provisions it with Ubuntu Desktop Minimal + Chromium + SPICE + SSH, and saves base.qcow2. The source image is cached in .cache/ so rebuilds are faster.

2 - Create a VM directory from the examples

mkdir research
cp vm.conf.example        research/vm.conf
cp user-data.tpl.example  research/user-data.tpl
cp meta-data.example      research/meta-data

3 - Edit research/vm.conf

Point shared folders at the project(s) you want Claude Code to work on; nothing else on your host will be accessible to the VM:

VM_NAME="research"
SSH_HOST_PORT="2222"
SPICE_PORT="5900"

SHARED_FOLDER_1="/home/you/projects/myapp"
SHARED_MOUNT_1="/home/dev/myapp"

4 - Edit research/user-data.tpl

Add your SSH public key inside the users: block (top-level placement is silently ignored by cloud-init for non-default users):

users:
  - name: __VM_USER__
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... you@host

Uncomment any packages, env vars, or tools you need in the runcmd section.

5 - Launch

./launch-vm.sh research

First launch provisions the VM and creates research/disk.qcow2. Every launch after that boots the existing disk, with no waiting.

🔌 Connecting

Method  Command
SSH     ssh -p 2222 vmuser@localhost
GUI     remote-viewer spice://localhost:5900

The VM auto-logs in to a GNOME X11 session. Clipboard sync works once the desktop is fully loaded.

SSH is available ~30–60 s after first boot while cloud-init finishes. Subsequent boots go straight to the desktop.


πŸ—‚οΈ Multiple VMs

Each VM lives in its own directory. All share base.qcow2; new disks cost only a few MB at creation:

research/   →  research/disk.qcow2   (ssh :2222, spice :5900)
dev/        →  dev/disk.qcow2        (ssh :2223, spice :5901)
testing/    →  testing/disk.qcow2    (ssh :2224, spice :5902)

mkdir dev
cp vm.conf.example       dev/vm.conf
cp user-data.tpl.example dev/user-data.tpl
cp meta-data.example     dev/meta-data
# set unique SSH_HOST_PORT and SPICE_PORT in dev/vm.conf
./launch-vm.sh dev

πŸ“ Shared folders

Add as many pairs as you need; numbering must be consecutive from 1:

# vm.conf
SHARED_FOLDER_1="/home/you/projects"   # must exist on host
SHARED_MOUNT_1="/home/vmuser/projects" # created in guest automatically

SHARED_FOLDER_2="/home/you/notes"
SHARED_MOUNT_2="/home/vmuser/notes"

Note: virtio-9p shares directories only. Pointing at a file is caught early with a clear error and a suggested fix.


βš™οΈ Provisioning

user-data.tpl is a standard cloud-config file that runs once on first boot. Re-run it any time with --reset.

Installing packages

runcmd:
  - apt-get install -y ripgrep fd-find python3-pip

Environment variables

For static values (all sessions including GUI), use /etc/environment:

write_files:
  - path: /etc/environment
    content: |
      MY_API_KEY="secret"
      RUST_LOG="debug"

For shell logic or export, use /etc/profile.d/ (login shells only):

write_files:
  - path: /etc/profile.d/vm-env.sh
    permissions: '0644'
    content: |
      export PATH="$HOME/.cargo/bin:$PATH"

Template tokens

These tokens in user-data.tpl are substituted at launch time from the config file:

Token              Source
__VM_USER__        VM_USER
__VM_PASSWORD__    VM_PASSWORD
__VM_HOSTNAME__    VM_NAME
__SHARED_MOUNTS__  auto-generated from SHARED_FOLDER_N / SHARED_MOUNT_N
__SHARED_MKDIR__   auto-generated

🔄 Lifecycle

Command                 Effect
./launch-vm.sh          Boot (provision on first run, resume on subsequent)
./launch-vm.sh --reset  Wipe disk, reprovision on next launch
./build-base-image.sh   Rebuild base image (all disks become stale; reset each VM after)
Ctrl-A x                Kill VM from the terminal
Ctrl-A c                Switch to QEMU monitor

After a base image rebuild, stale overlays are detected automatically:

[FAIL] vm-disk.qcow2 is stale — base.qcow2 was rebuilt after the overlay was created.
       Run: ./launch-vm.sh --reset

πŸ—ƒοΈ Project layout

qdesk/
├── base.conf                # Base image build settings (shared, safe to commit)
├── base.qcow2               # Built once, shared by all VMs
├── build-base-image.sh
├── launch-vm.sh
├── vm.conf.example          # Copy into a VM directory to get started
├── user-data.tpl.example    # Copy into a VM directory to get started
├── meta-data.example        # Copy into a VM directory to get started
├── research/                # One directory per VM
│   ├── vm.conf              # VM config (gitignored, may contain secrets)
│   ├── user-data.tpl        # Cloud-init template (gitignored)
│   ├── meta-data            # NoCloud metadata (gitignored)
│   ├── disk.qcow2           # Persistent disk, created on first launch
│   └── seed.iso             # Cloud-init ISO, created on first launch
└── .cache/                  # Cached cloud image download

πŸ› Troubleshooting

SSH times out on first boot

Cloud-init is still running. Wait 60–90 s and retry, or watch progress in the SPICE console:

sudo cloud-init status --wait

Shared folder not mounted

Check that the host path exists and is a directory, then run inside the VM:

mount | grep 9p
dmesg | grep 9p

Chromium missing

The snap install can fail if snapd is slow at first boot. SSH in and run:

sudo snap install chromium

To make it permanent, rebuild the base image.

No clipboard

Check spice-vdagentd is running inside the VM:

systemctl status spice-vdagentd

Clipboard only works over the SPICE connection, not over SSH.


Built through an iterative conversation with Claude by Anthropic; design, implementation, and documentation written entirely by AI based on requirements and feedback.

A tool for safely running Claude Code, built by Claude.
