feat: add --workspace flag for host directory mounting #160

aniketmaurya merged 3 commits into main from
Conversation
…+ overlayfs

Mount a host directory read-only into the guest via QEMU's virtio-9p passthrough, with an overlayfs layer on top so the agent can read and write freely inside /workspace without touching the host files.

- New WorkspaceMount type and VMConfig.workspace_mounts field
- QEMU command builder emits -fsdev + virtio-9p-device/pci args
- Guest-side mount via SSH: modprobe 9p, mount 9p lower, overlay on top
- Firecracker backend rejects workspace mounts with clear error
- Snapshot guard blocks VMs with workspace mounts
- CLI: `smolvm create --workspace ~/my-project`
- SDK: `SmolVM(config, workspace="~/my-project")`

Closes #157

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
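The QEMU-builder bullet above could translate into something like this sketch; the function name, the fsdev id scheme, and the exact option strings are assumptions, not necessarily the PR's actual code:

```python
from dataclasses import dataclass

@dataclass
class WorkspaceMount:
    # Minimal stand-in for the PR's WorkspaceMount type
    host_path: str
    guest_path: str = "/workspace"
    mount_tag: str = ""

def build_9p_args(mounts):
    """Build QEMU -fsdev/-device argument pairs for each workspace mount."""
    args = []
    for i, m in enumerate(mounts):
        tag = m.mount_tag or f"workspace{i}"
        fsdev_id = f"fsdev-{tag}"
        # Host side: read-only local fsdev; mapped-xattr stores guest
        # ownership in xattrs instead of chowning host files
        args += ["-fsdev",
                 f"local,id={fsdev_id},path={m.host_path},"
                 f"security_model=mapped-xattr,readonly=on"]
        # Guest side: virtio-9p device exposing the share under mount_tag
        args += ["-device", f"virtio-9p-pci,fsdev={fsdev_id},mount_tag={tag}"]
    return args
```

The overlayfs layer is then stacked inside the guest over this read-only 9p lower directory, which is why the fsdev can stay `readonly=on`.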
📝 Walkthrough

This pull request introduces a workspace mounting feature to SmolVM. Users can now mount host directories into guest VMs via the `--workspace` flag.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant CLI
    participant Facade as SmolVM Facade
    participant Config as VMConfig
    participant QEMU
    participant Guest
    User->>CLI: smolvm create --workspace /path
    CLI->>Facade: SmolVM(config, workspace=/path)
    Facade->>Facade: Validate & normalize workspace
    Facade->>Config: Add to workspace_mounts
    Note over Config: workspace_mounts = [WorkspaceMount(...)]
    User->>Facade: vm.start()
    Facade->>QEMU: (during boot) -fsdev local,id=fsdev-workspace0
    QEMU->>QEMU: -device virtio-9p-pci,fsdev=fsdev-workspace0
    Note over Guest: 9p share available at guest_path
    Facade->>Facade: Check can_run_commands()
    Facade->>Facade: Wait for SSH ready
    Facade->>Guest: modprobe 9pnet_virtio
    Facade->>Guest: mount -t 9p with overlayfs
    Guest->>Guest: Mount succeeded or error
    Guest->>User: ✓ Workspace mounted and accessible
```
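The guest-side steps in the diagram (modprobe, 9p lower mount, overlay on top) might look like the following shell sequence, built here by a hypothetical helper; the staging paths under /run/smolvm are assumptions, not the PR's actual layout:

```python
def guest_mount_script(tag: str, guest_path: str) -> str:
    """Shell sequence run over SSH inside the guest: mount the 9p share
    read-only as the overlay lower layer, then stack a writable overlayfs
    on top at guest_path."""
    lower = f"/run/smolvm/{tag}/lower"   # assumed staging paths
    upper = f"/run/smolvm/{tag}/upper"
    work = f"/run/smolvm/{tag}/work"
    return "\n".join([
        "modprobe 9p 9pnet_virtio",
        f"mkdir -p {lower} {upper} {work} {guest_path}",
        f"mount -t 9p -o trans=virtio,version=9p2000.L {tag} {lower}",
        f"mount -t overlay overlay "
        f"-o lowerdir={lower},upperdir={upper},workdir={work} {guest_path}",
    ])
```

Writes land in the overlay's upper directory inside the guest, so the host copy stays untouched even though the agent sees a fully writable tree.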
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

Right then. Listen carefully. What you're looking at here is a carefully orchestrated feature—nothing careless, nothing hasty. The workspace mounting system works like a proper operation: every component in its place, every validation deliberate.

The facade layer, that's your command structure. It takes the workspace path, vets it proper, ensures it's legitimate before anything gets mounted.

The QEMU configuration—that's your territory now.

The tests, they're comprehensive. Every angle covered—validation, rejection paths, the whole picture. Someone's thought this through, and that matters.

Now, this is a solid bit of work. Just remember: it's not just about the code. It's about understanding why each guard rail's there, why certain backends are rejected, why the sequence matters. Read it like you'd read a map before a job. The details will tell you everything you need to know.

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/smolvm/facade.py`:
- Around line 1543-1595: In _mount_workspaces, guard the privileged
modprobe/mount operations by verifying the SSH session is root before running
mount_script: run a quick command like 'id -u' via self._ssh (or check an
existing self._ssh.user/is_root property if available) and if the returned UID
is not 0 raise a SmolVMError (include vm_id and mount_tag) that clearly
instructs the caller to use a root SSH account or open a dedicated root session
for workspace mounting; alternatively implement opening a dedicated root SSH
connection for the mount steps (use a helper like start()/a new root SSH method)
if you prefer automatic escalation instead of failing fast.
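The fail-fast variant of this suggestion could look like the sketch below; `ensure_root_session` and its callable parameter are hypothetical stand-ins for the real SSH session API:

```python
class SmolVMError(RuntimeError):
    """Stand-in for smolvm's error type."""

def ensure_root_session(run_over_ssh, vm_id: str, mount_tag: str) -> None:
    """Fail fast if the SSH session is not root, since modprobe/mount need it.

    run_over_ssh is any callable that executes a command in the guest and
    returns its stdout (a stand-in for the real SSH session object).
    """
    uid = run_over_ssh("id -u").strip()
    if uid != "0":
        raise SmolVMError(
            f"VM {vm_id}: workspace mount '{mount_tag}' requires a root SSH "
            f"session (current uid={uid}); use a root SSH account or open a "
            f"dedicated root session for workspace mounting"
        )
```

Checking `id -u` once before the mount script runs turns a cryptic in-guest `mount: permission denied` into an actionable error up front.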
In `@src/smolvm/types.py`:
- Around line 362-382: The validator validate_workspace_mounts currently
computes a fallback tag (mount_tag or f"workspace{index}") but doesn't persist
it (models are frozen), forcing vm.py to duplicate the same logic; fix by
centralizing tag generation and persisting it: either (A) create a single helper
generate_workspace_tag(index) and use it from validate_workspace_mounts and the
callers in vm.py, or (B) have validate_workspace_mounts assign the computed tag
back into each WorkspaceMount by replacing the item with a new instance (e.g.,
using WorkspaceMount.model_copy(update={"mount_tag": tag}) or equivalent) when
mount.mount_tag is falsy so downstream code (vm.py) can rely on mount.mount_tag
being set; update references to validate_workspace_mounts and vm.py usage
accordingly.
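Option (B) could be sketched like this, using a frozen dataclass and `dataclasses.replace` as stand-ins for the frozen Pydantic model and `model_copy(update=...)`:

```python
from dataclasses import dataclass, replace

def generate_workspace_tag(index: int) -> str:
    """Single source of truth for fallback mount tags."""
    return f"workspace{index}"

@dataclass(frozen=True)
class WorkspaceMount:
    # Frozen stand-in for the PR's (frozen) Pydantic model
    host_path: str
    guest_path: str = "/workspace"
    mount_tag: str = ""

def validate_workspace_mounts(mounts):
    """Persist generated tags so downstream code (vm.py) can rely on
    mount.mount_tag being set; replace() plays the role of
    WorkspaceMount.model_copy(update=...) on the real frozen model."""
    return [
        m if m.mount_tag else replace(m, mount_tag=generate_workspace_tag(i))
        for i, m in enumerate(mounts)
    ]
```

Because the validator returns new instances instead of recomputing the fallback elsewhere, the tag logic lives in exactly one place.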
In `@src/smolvm/vm.py`:
- Around line 750-755: The current workspace_mounts guard only blocks
BACKEND_FIRECRACKER; change the logic to allow workspace_mounts only for
BACKEND_QEMU (e.g., check backend != BACKEND_QEMU and raise SmolVMError) in the
existing block (the function where effective_config.workspace_mounts is checked)
so Libkrun and any other non-QEMU backends are rejected up front; additionally
add the same backend != BACKEND_QEMU guard to SmolVMManager.async_create() (and
ensure you do not rely on _start_libkrun() to handle 9p wiring) so
async_create() also throws the contract error when
effective_config.workspace_mounts is present for non-QEMU backends.
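The suggested allow-list guard, sketched here with assumed backend constant names:

```python
BACKEND_QEMU = "qemu"                # assumed constant values
BACKEND_FIRECRACKER = "firecracker"
BACKEND_LIBKRUN = "libkrun"

class SmolVMError(RuntimeError):
    """Stand-in for smolvm's error type."""

def check_workspace_backend(backend: str, workspace_mounts: list) -> None:
    """Allow-list rather than deny-list: only QEMU wires up virtio-9p,
    so every other backend is rejected up front."""
    if workspace_mounts and backend != BACKEND_QEMU:
        raise SmolVMError(
            f"workspace mounts require the QEMU backend (got '{backend}')"
        )
```

Inverting the check means a future backend is rejected by default until someone deliberately adds 9p support for it, instead of silently falling through.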
📒 Files selected for processing (8)
src/smolvm/__init__.py
src/smolvm/cli.py
src/smolvm/facade.py
src/smolvm/types.py
src/smolvm/vm.py
tests/test_cli.py
tests/test_facade.py
tests/test_workspace.py
…ogic

- Add fail-fast check in _mount_workspaces for non-root ssh_user
- Block workspace mounts on all non-QEMU backends (not just Firecracker)
- Add resolved_tag() helper to WorkspaceMount, use it everywhere
- Add tests: libkrun rejection, resolved_tag, non-root SSH guard

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…yntax

- `--mount ~/project` mounts at /workspace (single mount default)
- `--mount ~/project:/code` mounts at custom guest path
- `--mount ~/a --mount ~/b:/data` supports multiple mounts
- Colon-separated syntax avoids argparse ambiguity with space-separated args
- SDK kwarg renamed from workspace= to mounts= (list of spec strings)
- _parse_mount_specs() handles the HOST[:GUEST] parsing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
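The `HOST[:GUEST]` parsing described above could look like this sketch; the indexed default guest path for multiple unnamed mounts is an assumption, not necessarily the PR's actual scheme:

```python
import os

def parse_mount_specs(specs):
    """Parse --mount values of the form HOST[:GUEST] into (host, guest) pairs.

    A lone HOST defaults to /workspace when there is a single mount; with
    several mounts, unnamed ones get an indexed default (assumed scheme).
    """
    mounts = []
    for i, spec in enumerate(specs):
        host, sep, guest = spec.partition(":")
        if not sep or not guest:
            guest = "/workspace" if len(specs) == 1 else f"/workspace{i}"
        mounts.append((os.path.expanduser(host), guest))
    return mounts
```

Using a colon separator keeps each mount a single argv token, which is what sidesteps the argparse ambiguity the commit message mentions.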
Summary
- `--mount HOST_PATH[:GUEST_PATH]` flag on `smolvm create` that mounts a host directory inside the guest
- `mounts=` kwarg on `SmolVM()` and a `WorkspaceMount` type for advanced use

How it works
Usage
Python SDK
Closes #157
Test plan

- `tests/test_workspace.py` covering:
  - `WorkspaceMount` validation (valid dir, nonexistent path, file-not-dir, relative guest_path, custom tag/path, resolved_tag)
  - `VMConfig.workspace_mounts` (default empty, duplicate guest_path rejected, duplicate tag rejected)
  - QEMU command builder (`-fsdev` and `virtio-9p-device` args with `readonly=on`, `security_model=mapped-xattr`)
  - CLI `--mount` (single, with guest path, multiples, default None)
  - `_parse_mount_specs` (host-only, host:guest, indexed defaults, mixed)
- Manual: `smolvm create --mount /tmp/test-dir && smolvm ssh <id>`, then verify `/workspace` is mounted and writable

🤖 Generated with Claude Code