From ca65df58d6b0f47999a71266ffce9da8f83fb579 Mon Sep 17 00:00:00 2001 From: Claude Date: Tue, 17 Feb 2026 14:34:44 +0000 Subject: [PATCH 1/6] feat: add Podman runtime support Add support for using Podman as an alternative container runtime to Docker. Changes: - Add RUNTIME build arg to Dockerfile.base for conditional Docker installation - Skip Docker daemon setup in entrypoint when DOCKER_HOST is set - Add runtime config option to AgentConfig type - Skip privileged mode and Docker-in-Docker volume for Podman workspaces - Add comprehensive Podman documentation When runtime is set to "podman", workspaces connect to an external container engine via DOCKER_HOST instead of running Docker-in-Docker. Co-Authored-By: Claude (anthropic.claude-sonnet-4-5-20250929-v1:0) --- docs/docs/podman.md | 169 ++++++++++++++++++++++ perry/Dockerfile.base | 31 ++-- perry/internal/src/commands/entrypoint.ts | 29 +++- src/shared/types.ts | 1 + src/workspace/manager.ts | 53 ++++--- 5 files changed, 246 insertions(+), 37 deletions(-) create mode 100644 docs/docs/podman.md diff --git a/docs/docs/podman.md b/docs/docs/podman.md new file mode 100644 index 00000000..fa14b91a --- /dev/null +++ b/docs/docs/podman.md @@ -0,0 +1,169 @@ +--- +sidebar_position: 10 +--- + +# Podman Support + +Perry supports running with Podman as an alternative to Docker. This allows you to use Perry in environments where Podman is preferred or required. + +## Overview + +When using Podman, Perry workspaces connect to an external container engine instead of running Docker-in-Docker. This is achieved through a podman-in-podman sidecar pattern where the workspace container connects to the host's Podman socket. 
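As a rough sketch of this wiring (the endpoint below is the image's `RUNTIME=podman` default; adjust it to match how you expose the socket):

```shell
# Inside a podman-runtime workspace there is no local dockerd; the docker
# CLI talks to the host's Podman service via DOCKER_HOST instead.
export DOCKER_HOST=tcp://host.containers.internal:2375
echo "engine endpoint: $DOCKER_HOST"
# With the host socket reachable, ordinary commands are routed there:
#   docker ps      # lists containers managed by the host's Podman
#   docker build . # image builds run on the host engine
```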
+ +## Prerequisites + +- **Podman** - [Install Podman](https://podman.io/getting-started/installation) +- **Podman socket enabled** - Required for container management +- **macOS or Linux** - Windows via WSL2 + +Verify Podman is running: + +```bash +podman info +``` + +Enable the Podman socket: + +```bash +# On systemd-based systems +systemctl --user enable --now podman.socket + +# Verify socket is running +systemctl --user status podman.socket +``` + +## Configuration + +Add the `runtime` field to your Perry configuration file (`~/.perry/config.json`): + +```json +{ + "runtime": "podman", + "port": 7391, + "host": "0.0.0.0", + "credentials": { + "env": {}, + "files": {} + }, + "scripts": { + "post_start": [], + "fail_on_error": false + } +} +``` + +The `runtime` field accepts two values: +- `"docker"` (default) - Use Docker with Docker-in-Docker +- `"podman"` - Use external Podman engine + +## Building the Workspace Image + +When building a workspace image for Podman, use the `RUNTIME` build argument: + +```bash +# Build for Podman +podman build \ + --build-arg RUNTIME=podman \ + -t perry-workspace:podman \ + -f perry/Dockerfile.base \ + . + +# Build for Docker (default) +docker build \ + -t perry-workspace:latest \ + -f perry/Dockerfile.base \ + . +``` + +The `RUNTIME=podman` build argument: +- Skips Docker CE installation +- Omits containerd.io and Docker plugins +- Sets `DOCKER_HOST=tcp://host.containers.internal:2375` environment variable + +## Podman-in-Podman Sidecar Pattern + +Perry workspaces running with Podman use an external container engine. The workspace container connects to the host's Podman socket through the `DOCKER_HOST` environment variable. 
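In shell terms, the wiring amounts to the following (a simplified sketch; the endpoint value is illustrative):

```shell
# An external engine is assumed whenever DOCKER_HOST is present in the
# environment; otherwise the workspace falls back to Docker-in-Docker.
DOCKER_HOST=tcp://host.containers.internal:2375   # illustrative value
if [ -n "${DOCKER_HOST:-}" ]; then
  mode="external engine at $DOCKER_HOST"   # podman: dockerd is never started
else
  mode="docker-in-docker"                  # docker: start and wait for dockerd
fi
echo "$mode"
```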
+ +### Container Creation + +When `runtime: "podman"` is configured, Perry: +- Does NOT set `privileged: true` on workspace containers +- Skips the Docker-in-Docker volume (`workspace-name-docker` → `/var/lib/docker`) +- Relies on `DOCKER_HOST` for container operations + +### Entrypoint Behavior + +The workspace entrypoint (`perry/internal/src/commands/entrypoint.ts`) checks for the `DOCKER_HOST` environment variable: +- If set: Skips `ensureDockerd()` and `waitForDocker()` +- If not set: Starts Docker daemon as normal (Docker-in-Docker) + +All other initialization (SSH, Tailscale, user scripts) proceeds normally. + +## Networking + +When using Podman, ensure the workspace container can reach the host's Podman socket: + +```bash +# Start workspace with host network access +podman run \ + --network slirp4netns:allow_host_loopback=true \ + ... +``` + +Or expose the Podman socket on a TCP port: + +```bash +# Expose Podman socket on TCP (development only) +podman system service --time=0 tcp:0.0.0.0:2375 +``` + +**Security Note**: Exposing the Podman socket on TCP without authentication is insecure. Use this only in trusted development environments. + +## Differences from Docker + +| Feature | Docker | Podman | +|---------|--------|--------| +| Privileged mode | Required | Not used | +| Docker-in-Docker volume | Created | Skipped | +| Container engine | Internal (dind) | External (host) | +| Socket location | `/var/run/docker.sock` | Via `DOCKER_HOST` | + +## Troubleshooting + +### Workspace can't connect to Podman + +Check that `DOCKER_HOST` is set correctly: + +```bash +perry exec -- env | grep DOCKER_HOST +``` + +Verify the Podman socket is accessible: + +```bash +podman system connection list +``` + +### Permission denied errors + +Ensure the workspace user has access to the Podman socket. You may need to adjust socket permissions or run Podman in rootless mode. 
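For example, switching to the rootless socket typically looks like this (paths are common defaults and may differ on your distro):

```shell
# Enable the rootless API socket, then point DOCKER_HOST at it:
#   systemctl --user enable --now podman.socket
: "${XDG_RUNTIME_DIR:=/run/user/$(id -u)}"   # typical default location
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"
echo "$DOCKER_HOST"
# Verify with:  podman --remote info   (or: docker info)
```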
+ +### Container operations fail + +Check Podman logs: + +```bash +journalctl --user -u podman.socket -f +``` + +## Limitations + +- Docker Compose may have compatibility issues with Podman +- Some Docker-specific features may not work identically +- Performance characteristics differ from Docker-in-Docker + +## Next Steps + +- [Workspaces](./workspaces.md) - Learn about workspace management +- [Configuration](./configuration/overview.md) - Advanced configuration options +- [Troubleshooting](./troubleshooting.md) - Common issues and solutions diff --git a/perry/Dockerfile.base b/perry/Dockerfile.base index afc56232..167527b7 100644 --- a/perry/Dockerfile.base +++ b/perry/Dockerfile.base @@ -5,6 +5,8 @@ FROM ubuntu:noble +ARG RUNTIME=docker + ENV DEBIAN_FRONTEND=noninteractive # Install prerequisites for adding Docker repository @@ -15,20 +17,18 @@ RUN apt-get update && apt-get install -y --no-install-recommends \ lsb-release \ && rm -rf /var/lib/apt/lists/* -# Add Docker's official GPG key and repository -RUN install -m 0755 -d /etc/apt/keyrings \ - && curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \ - && chmod a+r /etc/apt/keyrings/docker.asc \ - && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ - $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null +# Add Docker's official GPG key and repository (only for docker runtime) +RUN if [ "$RUNTIME" = "docker" ]; then \ + install -m 0755 -d /etc/apt/keyrings \ + && curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \ + && chmod a+r /etc/apt/keyrings/docker.asc \ + && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ + $(. 
/etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null; \
+    fi
 
-# Install Docker Engine, CLI, and essential tools
+# Install Docker Engine, CLI, and essential tools (conditionally install Docker packages)
 RUN apt-get update && apt-get install -y --no-install-recommends \
-    docker-ce \
-    docker-ce-cli \
-    containerd.io \
-    docker-buildx-plugin \
-    docker-compose-plugin \
+    $(if [ "$RUNTIME" = "docker" ]; then echo "docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin"; fi) \
     bash \
     sudo \
     openssh-server \
@@ -101,11 +101,16 @@ ENV BUN_INSTALL=/usr/local
 RUN bash -lc "curl -fsSL https://bun.sh/install | bash" \
     && bun --version
 
+# Set DOCKER_HOST for podman runtime (external container engine)
+RUN if [ "$RUNTIME" = "podman" ]; then \
+    echo "DOCKER_HOST=tcp://host.containers.internal:2375" >> /etc/environment; \
+    fi
+
 # Create workspace user with passwordless sudo
 RUN useradd -m -s /bin/bash workspace \
     && echo "workspace:workspace" | chpasswd \
     && usermod -aG sudo workspace \
-    && usermod -aG docker workspace \
+    && (getent group docker && usermod -aG docker workspace || true) \
     && echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
 
 # Configure npm to use user-writable global directory
diff --git a/perry/internal/src/commands/entrypoint.ts b/perry/internal/src/commands/entrypoint.ts
index 2ffce975..8febe691 100644
--- a/perry/internal/src/commands/entrypoint.ts
+++ b/perry/internal/src/commands/entrypoint.ts
@@ -14,13 +14,21 @@ export const runEntrypoint = async () => {
   } catch (error) {
     console.log(`[entrypoint] Failed to add SSH key (non-fatal): ${(error as Error).message}`);
   }
-  console.log("[entrypoint] Starting Docker daemon...");
-  ensureDockerd();
-  const ready = await waitForDocker();
-  if (!ready) {
-    process.exit(1);
-    return;
+
+  // Skip Docker daemon setup if DOCKER_HOST is set (external container engine)
+  const useExternalDocker = !!process.env.DOCKER_HOST;
+  if 
(!useExternalDocker) { + console.log("[entrypoint] Starting Docker daemon..."); + ensureDockerd(); + const ready = await waitForDocker(); + if (!ready) { + process.exit(1); + return; + } + } else { + console.log("[entrypoint] Using external container engine at DOCKER_HOST"); } + console.log("[entrypoint] Running workspace initialization as workspace user..."); try { await runCommand("sudo", ["-u", "workspace", "-E", "/usr/local/bin/workspace-internal", "init"], { @@ -44,5 +52,12 @@ export const runEntrypoint = async () => { await waitForTailscaled(); } void monitorServices(); - await tailDockerdLogs(); + + // Skip tailing dockerd logs if using external container engine + if (!useExternalDocker) { + await tailDockerdLogs(); + } else { + // Keep process alive for external container engine mode + await new Promise(() => {}); + } }; diff --git a/src/shared/types.ts b/src/shared/types.ts index 91756f27..919118ee 100644 --- a/src/shared/types.ts +++ b/src/shared/types.ts @@ -109,6 +109,7 @@ export interface McpServerDefinition { export interface AgentConfig { port: number; host?: string; + runtime?: 'docker' | 'podman'; credentials: WorkspaceCredentials; scripts: WorkspaceScripts; agents?: CodingAgents; diff --git a/src/workspace/manager.ts b/src/workspace/manager.ts index 078251ec..1e8ad182 100644 --- a/src/workspace/manager.ts +++ b/src/workspace/manager.ts @@ -930,22 +930,29 @@ export class WorkspaceManager { containerEnv.TS_AUTHKEY = this.config.tailscale.authKey; } + const isPodman = this.config.runtime === 'podman'; const dockerVolumeName = `${VOLUME_PREFIX}${name}-docker`; - if (!(await docker.volumeExists(dockerVolumeName))) { + + // Only create Docker-in-Docker volume for docker runtime + if (!isPodman && !(await docker.volumeExists(dockerVolumeName))) { await docker.createVolume(dockerVolumeName); } + const volumes = [{ source: volumeName, target: '/home/workspace', readonly: false }]; + + // Only add Docker-in-Docker volume for docker runtime + if (!isPodman) 
{ + volumes.push({ source: dockerVolumeName, target: '/var/lib/docker', readonly: false }); + } + const containerId = await docker.createContainer({ name: containerName, image: workspaceImage, hostname: name, - privileged: true, + privileged: !isPodman, // Skip privileged mode for podman restartPolicy: 'unless-stopped', env: containerEnv, - volumes: [ - { source: volumeName, target: '/home/workspace', readonly: false }, - { source: dockerVolumeName, target: '/var/lib/docker', readonly: false }, - ], + volumes, ports: [{ hostPort: sshPort, containerPort: 22, protocol: 'tcp' }], labels: { 'workspace.name': name, @@ -1070,22 +1077,29 @@ export class WorkspaceManager { containerEnv.TS_AUTHKEY = this.config.tailscale.authKey; } + const isPodman = this.config.runtime === 'podman'; const dockerVolumeName = `${VOLUME_PREFIX}${name}-docker`; - if (!(await docker.volumeExists(dockerVolumeName))) { + + // Only create Docker-in-Docker volume for docker runtime + if (!isPodman && !(await docker.volumeExists(dockerVolumeName))) { await docker.createVolume(dockerVolumeName); } + const volumes = [{ source: volumeName, target: '/home/workspace', readonly: false }]; + + // Only add Docker-in-Docker volume for docker runtime + if (!isPodman) { + volumes.push({ source: dockerVolumeName, target: '/var/lib/docker', readonly: false }); + } + const containerId = await docker.createContainer({ name: containerName, image: workspaceImage, hostname: name, - privileged: true, + privileged: !isPodman, // Skip privileged mode for podman restartPolicy: 'unless-stopped', env: containerEnv, - volumes: [ - { source: volumeName, target: '/home/workspace', readonly: false }, - { source: dockerVolumeName, target: '/var/lib/docker', readonly: false }, - ], + volumes, ports: [{ hostPort: sshPort, containerPort: 22, protocol: 'tcp' }], labels: { 'workspace.name': name, @@ -1345,17 +1359,22 @@ export class WorkspaceManager { containerEnv.TS_AUTHKEY = this.config.tailscale.authKey; } + const isPodman = 
this.config.runtime === 'podman'; + const volumes = [{ source: cloneVolumeName, target: '/home/workspace', readonly: false }]; + + // Only add Docker-in-Docker volume for docker runtime + if (!isPodman) { + volumes.push({ source: cloneDockerVolume, target: '/var/lib/docker', readonly: false }); + } + const containerId = await docker.createContainer({ name: cloneContainerName, image: workspaceImage, hostname: cloneName, - privileged: true, + privileged: !isPodman, // Skip privileged mode for podman restartPolicy: 'unless-stopped', env: containerEnv, - volumes: [ - { source: cloneVolumeName, target: '/home/workspace', readonly: false }, - { source: cloneDockerVolume, target: '/var/lib/docker', readonly: false }, - ], + volumes, ports: [{ hostPort: sshPort, containerPort: 22, protocol: 'tcp' }], labels: { 'workspace.name': cloneName, From bc82636cb967b2fddd08ce8bea8787145f828178 Mon Sep 17 00:00:00 2001 From: Claude Date: Wed, 18 Feb 2026 06:59:07 +0000 Subject: [PATCH 2/6] refactor: add Podman support to worker client communication When runtime is 'podman', the worker client now communicates with the worker server inside containers via 'docker exec curl' instead of direct HTTP to container IPs. This is necessary because rootless podman-in-podman containers have IPs in nested network namespaces that are unreachable from the host. Changes: - Add execFetch() helper that uses 'docker exec curl' as HTTP transport - Update createWorkerClient() to accept optional runtime parameter - Add runtime-aware health checks in startWorkerServer() - Thread runtime parameter through session agent functions - Update router to pass runtime from config to worker client calls The Docker runtime path is completely unchanged - all changes are gated behind runtime === 'podman' checks. 
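Sketch of the parsing step behind execFetch() (container name and response
body below are illustrative):

```shell
# curl is told to append the HTTP status on its own line (-w '\n%{http_code}');
# the client then splits status and body apart from the exec'd stdout.
# The real invocation is roughly:
#   out=$(podman exec -u workspace workspace-dev \
#           curl -s -w '\n%{http_code}' http://localhost:7392/health)
nl='
'
out="{\"status\":\"ok\"}${nl}200"   # stand-in for the exec output
status=${out##*"$nl"}               # last line: HTTP status code
body=${out%"$nl"*}                  # everything before it: response body
[ "$status" -ge 200 ] && [ "$status" -lt 300 ] && echo "worker healthy: $body"
```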
Co-Authored-By: Claude (anthropic.claude-sonnet-4-5-20250929-v1:0) --- src/agent/router.ts | 19 +- src/sessions/agents/index.ts | 25 ++- src/sessions/agents/worker-provider.ts | 34 ++-- src/worker/client.ts | 258 ++++++++++++++++++------- src/workspace/manager.ts | 99 +++++++--- 5 files changed, 307 insertions(+), 128 deletions(-) diff --git a/src/agent/router.ts b/src/agent/router.ts index ee64b0e8..31b47315 100644 --- a/src/agent/router.ts +++ b/src/agent/router.ts @@ -240,7 +240,9 @@ export function createRouter(ctx: RouterContext) { if (workspace.status === 'running') { try { const containerName = getContainerName(input.name); - const client = await createWorkerClient(containerName); + const client = await createWorkerClient(containerName, { + runtime: ctx.config.get().runtime, + }); const health = await client.health(); workerVersion = health.version; } catch { @@ -951,8 +953,9 @@ export function createRouter(ctx: RouterContext) { } const containerName = `workspace-${input.workspaceName}`; + const runtime = ctx.config.get().runtime; - const rawSessions = await discoverAllSessions(containerName, execInContainer); + const rawSessions = await discoverAllSessions(containerName, execInContainer, runtime); const customNames = await getSessionNamesForWorkspace(ctx.stateDir, input.workspaceName); @@ -973,7 +976,7 @@ export function createRouter(ctx: RouterContext) { const detailsResults = await Promise.all( paginatedRawSessions.map((rawSession) => - getAgentSessionDetails(containerName, rawSession, execInContainer) + getAgentSessionDetails(containerName, rawSession, execInContainer, runtime) ) ); @@ -1057,15 +1060,17 @@ export function createRouter(ctx: RouterContext) { ? toClientAgentType(record.agentType) : input.agentType; + const runtime = ctx.config.get().runtime; result = resolvedAgentType ? 
await getSessionMessages( containerName, agentSessionId, resolvedAgentType, execInContainer, - input.projectPath + input.projectPath, + runtime ) - : await findSessionMessages(containerName, agentSessionId, execInContainer); + : await findSessionMessages(containerName, agentSessionId, execInContainer, runtime); if (result && !record) { const agentType = toRegistryAgentType(result.agentType || resolvedAgentType); @@ -1201,6 +1206,7 @@ export function createRouter(ctx: RouterContext) { } const containerName = `workspace-${input.workspaceName}`; + const runtime = ctx.config.get().runtime; const record = await resolveSessionRecord(input.sessionId); const agentSessionId = record?.agentSessionId || input.sessionId; @@ -1209,7 +1215,8 @@ export function createRouter(ctx: RouterContext) { containerName, agentSessionId, agentType, - execInContainer + execInContainer, + runtime ); if (!result.success) { diff --git a/src/sessions/agents/index.ts b/src/sessions/agents/index.ts index 713bcadf..eba33aa4 100644 --- a/src/sessions/agents/index.ts +++ b/src/sessions/agents/index.ts @@ -27,17 +27,19 @@ const _providers: Record = { export async function discoverAllSessions( containerName: string, - _exec: ExecInContainer + _exec: ExecInContainer, + runtime?: 'docker' | 'podman' ): Promise { - return discoverSessionsViaWorker(containerName); + return discoverSessionsViaWorker(containerName, runtime); } export async function getSessionDetails( containerName: string, rawSession: RawSession, - _exec: ExecInContainer + _exec: ExecInContainer, + runtime?: 'docker' | 'podman' ): Promise { - return getSessionDetailsViaWorker(containerName, rawSession); + return getSessionDetailsViaWorker(containerName, rawSession, runtime); } export async function getSessionMessages( @@ -45,9 +47,10 @@ export async function getSessionMessages( sessionId: string, agentType: AgentType, _exec: ExecInContainer, - _projectPath?: string + _projectPath?: string, + runtime?: 'docker' | 'podman' ): Promise<{ id: 
string; agentType: AgentType; messages: SessionMessage[] } | null> { - const result = await getSessionMessagesViaWorker(containerName, sessionId); + const result = await getSessionMessagesViaWorker(containerName, sessionId, runtime); if (!result) return null; return { ...result, agentType }; } @@ -55,10 +58,11 @@ export async function getSessionMessages( export async function findSessionMessages( containerName: string, sessionId: string, - _exec: ExecInContainer + _exec: ExecInContainer, + runtime?: 'docker' | 'podman' ): Promise<{ id: string; agentType: AgentType; messages: SessionMessage[] } | null> { const { createWorkerClient } = await import('../../worker/client'); - const client = await createWorkerClient(containerName); + const client = await createWorkerClient(containerName, { runtime }); const session = await client.getSession(sessionId); if (!session) { @@ -87,9 +91,10 @@ export async function deleteSession( containerName: string, sessionId: string, _agentType: AgentType, - _exec: ExecInContainer + _exec: ExecInContainer, + runtime?: 'docker' | 'podman' ): Promise<{ success: boolean; error?: string }> { - return deleteSessionViaWorker(containerName, sessionId); + return deleteSessionViaWorker(containerName, sessionId, runtime); } export interface SearchResult { diff --git a/src/sessions/agents/worker-provider.ts b/src/sessions/agents/worker-provider.ts index d90de117..c5f9d748 100644 --- a/src/sessions/agents/worker-provider.ts +++ b/src/sessions/agents/worker-provider.ts @@ -4,17 +4,24 @@ import { createWorkerClient, type WorkerClient } from '../../worker/client'; const clientCache = new Map(); -async function getWorkerClient(containerName: string): Promise { - let client = clientCache.get(containerName); +async function getWorkerClient( + containerName: string, + runtime?: 'docker' | 'podman' +): Promise { + const cacheKey = `${containerName}:${runtime || 'docker'}`; + let client = clientCache.get(cacheKey); if (!client) { - client = await 
createWorkerClient(containerName); - clientCache.set(containerName, client); + client = await createWorkerClient(containerName, { runtime }); + clientCache.set(cacheKey, client); } return client; } -export async function discoverSessionsViaWorker(containerName: string): Promise { - const client = await getWorkerClient(containerName); +export async function discoverSessionsViaWorker( + containerName: string, + runtime?: 'docker' | 'podman' +): Promise { + const client = await getWorkerClient(containerName, runtime); const sessions = await client.listSessions(); return sessions.map((s) => ({ @@ -29,9 +36,10 @@ export async function discoverSessionsViaWorker(containerName: string): Promise< export async function getSessionDetailsViaWorker( containerName: string, - rawSession: RawSession + rawSession: RawSession, + runtime?: 'docker' | 'podman' ): Promise { - const client = await getWorkerClient(containerName); + const client = await getWorkerClient(containerName, runtime); const session = await client.getSession(rawSession.id); if (!session) { @@ -51,9 +59,10 @@ export async function getSessionDetailsViaWorker( export async function getSessionMessagesViaWorker( containerName: string, - sessionId: string + sessionId: string, + runtime?: 'docker' | 'podman' ): Promise<{ id: string; messages: SessionMessage[] } | null> { - const client = await getWorkerClient(containerName); + const client = await getWorkerClient(containerName, runtime); const result = await client.getMessages(sessionId, { limit: 1000, offset: 0 }); if (!result || result.messages.length === 0) { @@ -74,9 +83,10 @@ export async function getSessionMessagesViaWorker( export async function deleteSessionViaWorker( containerName: string, - sessionId: string + sessionId: string, + runtime?: 'docker' | 'podman' ): Promise<{ success: boolean; error?: string }> { - const client = await getWorkerClient(containerName); + const client = await getWorkerClient(containerName, runtime); return 
client.deleteSession(sessionId); } diff --git a/src/worker/client.ts b/src/worker/client.ts index ba6d6bf7..68a07a18 100644 --- a/src/worker/client.ts +++ b/src/worker/client.ts @@ -43,12 +43,42 @@ async function fetchWithTimeout( } } -async function isWorkerRunning(ip: string): Promise { +async function execFetch( + containerName: string, + path: string, + options?: { method?: string; timeout?: number } +): Promise<{ ok: boolean; status: number; json(): Promise; text(): Promise }> { + const method = options?.method || 'GET'; + const url = `http://localhost:${WORKER_PORT}${path}`; + const curlArgs = ['-s', '-w', '\\n%{http_code}', '-X', method, url]; + const result = await execInContainer(containerName, ['curl', ...curlArgs], { user: 'workspace' }); + + const lines = result.stdout.trim().split('\n'); + const statusCode = parseInt(lines.pop() || '0', 10); + const body = lines.join('\n'); + + return { + ok: statusCode >= 200 && statusCode < 300, + status: statusCode, + json: async () => JSON.parse(body), + text: async () => body, + }; +} + +async function isWorkerRunning( + ipOrContainer: string, + runtime?: 'docker' | 'podman' +): Promise { try { - const response = await fetchWithTimeout(`http://${ip}:${WORKER_PORT}/health`, { - timeout: HEALTH_TIMEOUT, - }); - return response.ok; + if (runtime === 'podman') { + const response = await execFetch(ipOrContainer, '/health', { timeout: HEALTH_TIMEOUT }); + return response.ok; + } else { + const response = await fetchWithTimeout(`http://${ipOrContainer}:${WORKER_PORT}/health`, { + timeout: HEALTH_TIMEOUT, + }); + return response.ok; + } } catch { return false; } @@ -66,86 +96,168 @@ async function startWorkerInContainer(containerName: string): Promise { ); } -async function ensureWorkerRunning(containerName: string): Promise { - const ip = await getContainerIp(containerName); - if (!ip) { - throw new Error(`Could not get IP for container: ${containerName}`); - } +async function ensureWorkerRunning( + containerName: 
string, + runtime?: 'docker' | 'podman' +): Promise { + if (runtime === 'podman') { + if (await isWorkerRunning(containerName, runtime)) { + return containerName; + } - if (await isWorkerRunning(ip)) { - return ip; - } + await startWorkerInContainer(containerName); + + const deadline = Date.now() + STARTUP_TIMEOUT; + while (Date.now() < deadline) { + await new Promise((resolve) => setTimeout(resolve, STARTUP_POLL_INTERVAL)); + if (await isWorkerRunning(containerName, runtime)) { + return containerName; + } + } - await startWorkerInContainer(containerName); + throw new Error(`Worker failed to start in container: ${containerName}`); + } else { + const ip = await getContainerIp(containerName); + if (!ip) { + throw new Error(`Could not get IP for container: ${containerName}`); + } - const deadline = Date.now() + STARTUP_TIMEOUT; - while (Date.now() < deadline) { - await new Promise((resolve) => setTimeout(resolve, STARTUP_POLL_INTERVAL)); if (await isWorkerRunning(ip)) { return ip; } - } - throw new Error(`Worker failed to start in container: ${containerName}`); + await startWorkerInContainer(containerName); + + const deadline = Date.now() + STARTUP_TIMEOUT; + while (Date.now() < deadline) { + await new Promise((resolve) => setTimeout(resolve, STARTUP_POLL_INTERVAL)); + if (await isWorkerRunning(ip)) { + return ip; + } + } + + throw new Error(`Worker failed to start in container: ${containerName}`); + } } -export async function createWorkerClient(containerName: string): Promise { - const ip = await ensureWorkerRunning(containerName); - const baseUrl = `http://${ip}:${WORKER_PORT}`; +export async function createWorkerClient( + containerName: string, + options?: { runtime?: 'docker' | 'podman' } +): Promise { + const runtime = options?.runtime; + const ipOrContainer = await ensureWorkerRunning(containerName, runtime); - return { - async health(): Promise { - const response = await fetchWithTimeout(`${baseUrl}/health`); - if (!response.ok) { - throw new Error(`Failed to 
get health: ${response.statusText}`); - } - return response.json(); - }, + if (runtime === 'podman') { + return { + async health(): Promise { + const response = await execFetch(containerName, '/health'); + if (!response.ok) { + throw new Error(`Failed to get health: ${response.status}`); + } + return response.json(); + }, - async listSessions(): Promise { - const response = await fetchWithTimeout(`${baseUrl}/sessions`); - if (!response.ok) { - throw new Error(`Failed to list sessions: ${response.statusText}`); - } - const data = await response.json(); - return data.sessions; - }, - - async getSession(id: string): Promise { - const response = await fetchWithTimeout(`${baseUrl}/sessions/${encodeURIComponent(id)}`); - if (response.status === 404) { - return null; - } - if (!response.ok) { - throw new Error(`Failed to get session: ${response.statusText}`); - } - const data = await response.json(); - return data.session; - }, - - async getMessages( - id: string, - opts: { limit?: number; offset?: number } = {} - ): Promise<{ id: string; messages: Message[]; total: number }> { - const params = new URLSearchParams(); - if (opts.limit !== undefined) params.set('limit', String(opts.limit)); - if (opts.offset !== undefined) params.set('offset', String(opts.offset)); - - const url = `${baseUrl}/sessions/${encodeURIComponent(id)}/messages?${params}`; - const response = await fetchWithTimeout(url); - if (!response.ok) { - throw new Error(`Failed to get messages: ${response.statusText}`); - } - return response.json(); - }, + async listSessions(): Promise { + const response = await execFetch(containerName, '/sessions'); + if (!response.ok) { + throw new Error(`Failed to list sessions: ${response.status}`); + } + const data = await response.json(); + return data.sessions; + }, - async deleteSession(id: string): Promise<{ success: boolean; error?: string }> { - const response = await fetchWithTimeout(`${baseUrl}/sessions/${encodeURIComponent(id)}`, { - method: 'DELETE', - }); - 
return response.json(); - }, - }; + async getSession(id: string): Promise { + const response = await execFetch(containerName, `/sessions/${encodeURIComponent(id)}`); + if (response.status === 404) { + return null; + } + if (!response.ok) { + throw new Error(`Failed to get session: ${response.status}`); + } + const data = await response.json(); + return data.session; + }, + + async getMessages( + id: string, + opts: { limit?: number; offset?: number } = {} + ): Promise<{ id: string; messages: Message[]; total: number }> { + const params = new URLSearchParams(); + if (opts.limit !== undefined) params.set('limit', String(opts.limit)); + if (opts.offset !== undefined) params.set('offset', String(opts.offset)); + + const path = `/sessions/${encodeURIComponent(id)}/messages?${params}`; + const response = await execFetch(containerName, path); + if (!response.ok) { + throw new Error(`Failed to get messages: ${response.status}`); + } + return response.json(); + }, + + async deleteSession(id: string): Promise<{ success: boolean; error?: string }> { + const response = await execFetch(containerName, `/sessions/${encodeURIComponent(id)}`, { + method: 'DELETE', + }); + return response.json(); + }, + }; + } else { + const baseUrl = `http://${ipOrContainer}:${WORKER_PORT}`; + + return { + async health(): Promise { + const response = await fetchWithTimeout(`${baseUrl}/health`); + if (!response.ok) { + throw new Error(`Failed to get health: ${response.statusText}`); + } + return response.json(); + }, + + async listSessions(): Promise { + const response = await fetchWithTimeout(`${baseUrl}/sessions`); + if (!response.ok) { + throw new Error(`Failed to list sessions: ${response.statusText}`); + } + const data = await response.json(); + return data.sessions; + }, + + async getSession(id: string): Promise { + const response = await fetchWithTimeout(`${baseUrl}/sessions/${encodeURIComponent(id)}`); + if (response.status === 404) { + return null; + } + if (!response.ok) { + throw new 
Error(`Failed to get session: ${response.statusText}`); + } + const data = await response.json(); + return data.session; + }, + + async getMessages( + id: string, + opts: { limit?: number; offset?: number } = {} + ): Promise<{ id: string; messages: Message[]; total: number }> { + const params = new URLSearchParams(); + if (opts.limit !== undefined) params.set('limit', String(opts.limit)); + if (opts.offset !== undefined) params.set('offset', String(opts.offset)); + + const url = `${baseUrl}/sessions/${encodeURIComponent(id)}/messages?${params}`; + const response = await fetchWithTimeout(url); + if (!response.ok) { + throw new Error(`Failed to get messages: ${response.statusText}`); + } + return response.json(); + }, + + async deleteSession(id: string): Promise<{ success: boolean; error?: string }> { + const response = await fetchWithTimeout(`${baseUrl}/sessions/${encodeURIComponent(id)}`, { + method: 'DELETE', + }); + return response.json(); + }, + }; + } } export { WORKER_PORT }; diff --git a/src/workspace/manager.ts b/src/workspace/manager.ts index 1e8ad182..1087c6c1 100644 --- a/src/workspace/manager.ts +++ b/src/workspace/manager.ts @@ -493,13 +493,7 @@ export class WorkspaceManager { options: { strictWorker: boolean } ): Promise { const WORKER_PORT = 7392; - const ip = await docker.getContainerIp(containerName); - if (!ip) { - console.warn( - `[sync] Could not get container IP for ${containerName}, skipping worker server` - ); - return; - } + const isPodman = this.config.runtime === 'podman'; const desiredVersion = pkg.version; @@ -510,21 +504,58 @@ export class WorkspaceManager { }) ).exitCode === 0; - try { - const healthResponse = await fetch(`http://${ip}:${WORKER_PORT}/health`, { - signal: AbortSignal.timeout(1000), - }); + const checkHealth = async (): Promise<{ ok: boolean; version?: string }> => { + if (isPodman) { + try { + const result = await docker.execInContainer( + containerName, + ['curl', '-s', '-w', '\\n%{http_code}', 
`http://localhost:${WORKER_PORT}/health`], + { user: 'workspace' } + ); + const lines = result.stdout.trim().split('\n'); + const statusCode = parseInt(lines.pop() || '0', 10); + const body = lines.join('\n'); + if (statusCode >= 200 && statusCode < 300) { + const health = JSON.parse(body); + return { ok: true, version: health.version }; + } + return { ok: false }; + } catch { + return { ok: false }; + } + } else { + const ip = await docker.getContainerIp(containerName); + if (!ip) { + console.warn( + `[sync] Could not get container IP for ${containerName}, skipping worker server` + ); + return { ok: false }; + } + try { + const healthResponse = await fetch(`http://${ip}:${WORKER_PORT}/health`, { + signal: AbortSignal.timeout(1000), + }); + if (healthResponse.ok) { + const health = (await healthResponse.json().catch(() => null)) as { + version?: string; + } | null; + return { ok: true, version: health?.version }; + } + return { ok: false }; + } catch { + return { ok: false }; + } + } + }; - if (healthResponse.ok) { + try { + const health = await checkHealth(); + if (health.ok) { if (!hasSyncedPerry) { return; } - const health = (await healthResponse.json().catch(() => null)) as { - version?: string; - } | null; - - if (health?.version === desiredVersion) { + if (health.version === desiredVersion) { return; } @@ -552,11 +583,8 @@ export class WorkspaceManager { while (Date.now() < deadline) { await new Promise((r) => setTimeout(r, 200)); try { - const response = await fetch(`http://${ip}:${WORKER_PORT}/health`, { - signal: AbortSignal.timeout(500), - }); - - if (!response.ok) { + const health = await checkHealth(); + if (!health.ok) { continue; } @@ -564,8 +592,7 @@ export class WorkspaceManager { return; } - const health = (await response.json().catch(() => null)) as { version?: string } | null; - if (health?.version === desiredVersion) { + if (health.version === desiredVersion) { return; } } catch { @@ -931,6 +958,12 @@ export class WorkspaceManager { } const 
isPodman = this.config.runtime === 'podman'; + + // For podman runtime, pass DOCKER_HOST so entrypoint skips local dockerd + if (isPodman && process.env.DOCKER_HOST) { + containerEnv.DOCKER_HOST = process.env.DOCKER_HOST; + } + const dockerVolumeName = `${VOLUME_PREFIX}${name}-docker`; // Only create Docker-in-Docker volume for docker runtime @@ -948,7 +981,7 @@ export class WorkspaceManager { const containerId = await docker.createContainer({ name: containerName, image: workspaceImage, - hostname: name, + hostname: isPodman ? undefined : name, // Skip hostname for podman (UTS namespace conflict) privileged: !isPodman, // Skip privileged mode for podman restartPolicy: 'unless-stopped', env: containerEnv, @@ -1078,6 +1111,12 @@ export class WorkspaceManager { } const isPodman = this.config.runtime === 'podman'; + + // For podman runtime, pass DOCKER_HOST so entrypoint skips local dockerd + if (isPodman && process.env.DOCKER_HOST) { + containerEnv.DOCKER_HOST = process.env.DOCKER_HOST; + } + const dockerVolumeName = `${VOLUME_PREFIX}${name}-docker`; // Only create Docker-in-Docker volume for docker runtime @@ -1095,7 +1134,7 @@ export class WorkspaceManager { const containerId = await docker.createContainer({ name: containerName, image: workspaceImage, - hostname: name, + hostname: isPodman ? 
undefined : name, // Skip hostname for podman (UTS namespace conflict) privileged: !isPodman, // Skip privileged mode for podman restartPolicy: 'unless-stopped', env: containerEnv, @@ -1360,6 +1399,12 @@ export class WorkspaceManager { } const isPodman = this.config.runtime === 'podman'; + + // For podman runtime, pass DOCKER_HOST so entrypoint skips local dockerd + if (isPodman && process.env.DOCKER_HOST) { + containerEnv.DOCKER_HOST = process.env.DOCKER_HOST; + } + const volumes = [{ source: cloneVolumeName, target: '/home/workspace', readonly: false }]; // Only add Docker-in-Docker volume for docker runtime @@ -1370,7 +1415,7 @@ export class WorkspaceManager { const containerId = await docker.createContainer({ name: cloneContainerName, image: workspaceImage, - hostname: cloneName, + hostname: isPodman ? undefined : cloneName, // Skip hostname for podman (UTS namespace conflict) privileged: !isPodman, // Skip privileged mode for podman restartPolicy: 'unless-stopped', env: containerEnv, From 6020522db26da20120cb6cd94d2d879e37a33365 Mon Sep 17 00:00:00 2001 From: Claude Date: Wed, 18 Feb 2026 06:59:54 +0000 Subject: [PATCH 3/6] fix: ensure runtime config defaults to 'docker' and respect DOCKER_HOST in entrypoint - Add runtime field to config loader with 'docker' default - Skip dockerd monitoring in entrypoint when DOCKER_HOST is set These changes complete the Podman runtime support by ensuring the config is properly loaded and the entrypoint doesn't try to manage dockerd when using an external container engine. 
Co-Authored-By: Claude (anthropic.claude-sonnet-4-5-20250929-v1:0) --- perry/internal/src/lib/services.ts | 3 ++- src/config/loader.ts | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/perry/internal/src/lib/services.ts b/perry/internal/src/lib/services.ts index 143dc192..43714aa9 100644 --- a/perry/internal/src/lib/services.ts +++ b/perry/internal/src/lib/services.ts @@ -49,9 +49,10 @@ const isProcessRunning = async (name: string) => { export const monitorServices = async () => { console.log("[entrypoint] Starting service monitor..."); const hasTailscale = !!process.env.TS_AUTHKEY; + const useExternalDocker = !!process.env.DOCKER_HOST; while (true) { await delay(10000); - if (!(await isProcessRunning("dockerd"))) { + if (!useExternalDocker && !(await isProcessRunning("dockerd"))) { console.log("[entrypoint] Restarting Docker daemon..."); startDockerd(); await delay(2000); diff --git a/src/config/loader.ts b/src/config/loader.ts index d12d61cc..fb43dfa5 100644 --- a/src/config/loader.ts +++ b/src/config/loader.ts @@ -102,6 +102,7 @@ export async function loadAgentConfig(configDir?: string): Promise }, skills: Array.isArray(config.skills) ? config.skills : [], mcpServers: Array.isArray(config.mcpServers) ? config.mcpServers : [], + runtime: config.runtime || 'docker', allowHostAccess: config.allowHostAccess ?? true, ssh: { autoAuthorizeHostKeys: config.ssh?.autoAuthorizeHostKeys ?? true, From ceca2c1516ccb1553f0a933d53f25dbf3ab2ed4e Mon Sep 17 00:00:00 2001 From: Claude Date: Wed, 18 Feb 2026 07:35:08 +0000 Subject: [PATCH 4/6] fix: copy JS dist + bun wrapper instead of compiled binary for podman The compiled perry-worker binary bakes in the host's glibc dynamic linker path (e.g. /nix/store/.../ld-linux-x86-64.so.2) which doesn't exist in the Ubuntu workspace container. For podman runtime, copy the JS dist directory and create a bun wrapper at /usr/local/bin/perry instead. 
Bun is already installed in the workspace image and can run the JS dist directly. The Docker runtime path is unchanged. --- src/workspace/manager.ts | 88 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 88 insertions(+) diff --git a/src/workspace/manager.ts b/src/workspace/manager.ts index 1087c6c1..3bd2f4a7 100644 --- a/src/workspace/manager.ts +++ b/src/workspace/manager.ts @@ -405,6 +405,14 @@ export class WorkspaceManager { } private async copyPerryWorker(containerName: string): Promise { + if (this.config.runtime === 'podman') { + // Compiled binaries use host glibc linker paths (e.g. /nix/store/...) + // that don't exist in the Ubuntu workspace container. Copy JS dist + // and use bun (already installed in the image) as runtime instead. + await this.copyPerryWorkerJs(containerName); + return; + } + const installedPath = path.join(os.homedir(), '.perry', 'bin', 'perry'); const cwdDistPath = path.join(process.cwd(), 'dist', 'perry-worker'); const distDir = path.dirname(new URL(import.meta.url).pathname); @@ -452,6 +460,86 @@ export class WorkspaceManager { }); } + /** + * Podman-specific worker sync: copy JS dist directory + bun wrapper + * instead of a compiled binary that may have incompatible linker paths. 
+ */ + private async copyPerryWorkerJs(containerName: string): Promise { + // Find dist directory containing index.js + const cwdDistDir = path.join(process.cwd(), 'dist'); + const metaDistDir = path.dirname(new URL(import.meta.url).pathname); + + let sourceDistDir: string | null = null; + for (const candidate of [cwdDistDir, metaDistDir]) { + try { + await fs.access(path.join(candidate, 'index.js')); + sourceDistDir = candidate; + break; + } catch { + // Try next + } + } + + if (!sourceDistDir) { + console.warn('[sync] JS dist directory not found, session discovery may not work'); + return; + } + + // Find package.json (needed by bun for module resolution) + const cwdPkgJson = path.join(process.cwd(), 'package.json'); + const parentPkgJson = path.join(sourceDistDir, '..', 'package.json'); + let packageJsonPath: string | null = null; + for (const candidate of [cwdPkgJson, parentPkgJson]) { + try { + await fs.access(candidate); + packageJsonPath = candidate; + break; + } catch { + // Try next + } + } + + try { + // Create destination and copy dist directory + await docker.execInContainer(containerName, ['mkdir', '-p', '/opt/perry'], { user: 'root' }); + await docker.copyToContainer(containerName, sourceDistDir, '/opt/perry/dist', { + timeoutMs: 60_000, + }); + + // Copy package.json if found + if (packageJsonPath) { + await docker.copyToContainer(containerName, packageJsonPath, '/opt/perry/package.json', { + timeoutMs: 10_000, + }); + } + + // Create bun wrapper at /usr/local/bin/perry + await docker.execInContainer( + containerName, + [ + 'sh', + '-c', + 'printf \'#!/bin/sh\\nexec bun /opt/perry/dist/index.js "$@"\\n\' > /usr/local/bin/perry && chmod +x /usr/local/bin/perry', + ], + { user: 'root' } + ); + + // Symlink for workspace user PATH + await docker.execInContainer( + containerName, + [ + 'sh', + '-c', + 'mkdir -p /home/workspace/.local/bin && ln -sf /usr/local/bin/perry /home/workspace/.local/bin/perry', + ], + { user: 'root' } + ); + } catch (err) { + 
const message = err instanceof Error ? err.message : String(err); + throw new Error(`[sync] Failed to copy JS dist to ${containerName}: ${message}`); + } + } + private async ensurePerryOnPath(containerName: string): Promise { await docker.execInContainer( containerName, From b065eea720d76f9d4c84e4296f673270bed00eed Mon Sep 17 00:00:00 2001 From: Claude Date: Wed, 18 Feb 2026 08:09:15 +0000 Subject: [PATCH 5/6] fix: enforce timeout in execFetch via curl --max-time The execFetch function accepted a timeout option but never passed it to curl. In podman environments, unresponsive workers could cause health checks to hang indefinitely. Now passes --max-time to curl when timeout is specified, matching the Docker path's AbortController behavior. --- src/worker/client.ts | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/src/worker/client.ts b/src/worker/client.ts index 68a07a18..2a310bd9 100644 --- a/src/worker/client.ts +++ b/src/worker/client.ts @@ -50,7 +50,11 @@ async function execFetch( ): Promise<{ ok: boolean; status: number; json(): Promise; text(): Promise }> { const method = options?.method || 'GET'; const url = `http://localhost:${WORKER_PORT}${path}`; - const curlArgs = ['-s', '-w', '\\n%{http_code}', '-X', method, url]; + const curlArgs = ['-s', '-w', '\\n%{http_code}', '-X', method]; + if (options?.timeout) { + curlArgs.push('--max-time', String(Math.ceil(options.timeout / 1000))); + } + curlArgs.push(url); const result = await execInContainer(containerName, ['curl', ...curlArgs], { user: 'workspace' }); const lines = result.stdout.trim().split('\n'); From a4a82b8e51c61b8908e60f168a8dc4dabbfab173 Mon Sep 17 00:00:00 2001 From: Claude Date: Wed, 18 Feb 2026 08:18:57 +0000 Subject: [PATCH 6/6] fix: add --max-time to health check curl in podman path The checkHealth function for podman used curl without a timeout, meaning an unresponsive worker could block workspace startup indefinitely. 
Add --max-time 1 to match the Docker path's 1-second timeout behavior. --- src/workspace/manager.ts | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/src/workspace/manager.ts b/src/workspace/manager.ts index 3bd2f4a7..59176c76 100644 --- a/src/workspace/manager.ts +++ b/src/workspace/manager.ts @@ -597,7 +597,15 @@ export class WorkspaceManager { try { const result = await docker.execInContainer( containerName, - ['curl', '-s', '-w', '\\n%{http_code}', `http://localhost:${WORKER_PORT}/health`], + [ + 'curl', + '-s', + '--max-time', + '1', + '-w', + '\\n%{http_code}', + `http://localhost:${WORKER_PORT}/health`, + ], { user: 'workspace' } ); const lines = result.stdout.trim().split('\n');
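The curl-based `checkHealth` and `execFetch` paths in the patches above both rely on the same trick: curl is invoked with `-w '\n%{http_code}'`, which appends the HTTP status code as a final line after the response body, and the caller splits it back off. When curl fails to get a response (for example, when `--max-time` expires), `%{http_code}` is written as `000`, which parses to `0` and fails the `2xx` check. A minimal standalone sketch of that parsing step — the function name `parseCurlOutput` is illustrative, not part of the patch:

```typescript
// Split the trailing status-code line that curl appends when invoked with
// `-w '\n%{http_code}'` from the response body, mirroring the parsing done
// in checkHealth and execFetch in the patches above.
function parseCurlOutput(stdout: string): { ok: boolean; status: number; body: string } {
  const lines = stdout.trim().split('\n');
  // Last line is the %{http_code} value; everything before it is the body.
  const status = parseInt(lines.pop() ?? '0', 10);
  const body = lines.join('\n');
  return { ok: status >= 200 && status < 300, status, body };
}

// A healthy worker responding with its version:
const healthy = parseCurlOutput('{"version":"1.2.3"}\n200');

// A timed-out request (--max-time expired): curl prints no body and
// writes 000 for %{http_code}, which parses to status 0.
const timedOut = parseCurlOutput('\n000');
```

One consequence of this design worth noting: because the status code travels in-band on stdout, the exec-based path never needs a TCP connection from the host to the container, which is what makes it work for the Podman runtime where no container IP is reachable.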