Motivation
The plugin's hook orchestration currently lives in bin/core-pre-tool.sh, bin/core-post-tool.sh, bin/nvim-socket.sh, and bin/nvim-send.sh (~600 lines of bash). This setup has accumulated a few real costs:
Windows is unsupported. Socket discovery globs /var/folders, /tmp, and $XDG_RUNTIME_DIR; Windows uses named pipes (\\.\pipe\nvim.*) and none of the bash tooling (lsof, compgen, kill -0) is available.
nvim-socket.sh is fragile. 129 lines of OS-specific guessing for something the plugin already knows at setup() time (vim.v.servername).
String-interpolation RPC. escape_lua exists only because we build Lua source as bash strings. Structured RPC args remove a whole class of quoting bugs.
External dependencies. jq and lsof are required on the host, and the healthcheck has to verify them.
The Lua-worker pattern is already proven in the codebase: apply-edit.lua, apply-multi-edit.lua, and apply-patch.lua all run via nvim --headless -l. This issue tracks extending that pattern to the orchestration layer.
Non-goals
No user-facing API changes. require('code-preview').setup({...}) works identically before and after.
No config schema changes.
No behavior changes for any existing backend.
If you're a current user: nothing about your setup needs to change.
Plan — four independently shippable phases
Each phase lands as its own PR on main, is individually revertable, and ships to users for validation before the next phase begins. No long-lived rewrite branch.
Phase 1 — Pidfile-based socket discovery. setup() writes vim.v.servername to stdpath('cache')/code-preview/sockets/<hash>. nvim-socket.sh checks the pidfile first and falls back to the existing discovery. Unblocks Windows on its own. No user-visible change.
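A minimal sketch of what the Phase 1 registration could look like (the file layout matches the plan above; the hash scheme and the decision to record the pid alongside the address are assumptions, not settled implementation details):

```lua
-- Sketch: register this instance's RPC address at setup() time.
local function register_socket()
  local dir = vim.fn.stdpath('cache') .. '/code-preview/sockets'
  vim.fn.mkdir(dir, 'p')
  -- vim.v.servername is the address nvim is already listening on:
  -- a socket path on Unix, a \\.\pipe\nvim.* named pipe on Windows.
  local key = vim.fn.sha256(vim.v.servername)
  local f = assert(io.open(dir .. '/' .. key, 'w'))
  -- Recording the pid as well would support the stale-file check
  -- raised in the open questions below.
  f:write(vim.v.servername .. '\n' .. vim.fn.getpid() .. '\n')
  f:close()
end
```

Because the address is written by the instance that owns it, the shell side only has to read a file, which works identically on every OS.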
Phase 2 — Replace nvim-send.sh quoting with structured RPC. Lua entrypoint takes args via nvim_exec_lua's args table. Removes escape_lua and the interpolation surface. Bash on the outside.
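For illustration, the structured-RPC shape could look like the sketch below (the module and function names are hypothetical; only the nvim_exec_lua args-table mechanism is the point):

```lua
-- Sketch: runs as `nvim --headless -l hook.lua <socket> <json-payload>`.
-- nvim -l exposes script arguments via _G.arg, like standalone Lua.
local socket, payload = _G.arg[1], _G.arg[2]

-- Connect to the target instance over its pipe/socket with RPC enabled.
local chan = vim.fn.sockconnect('pipe', socket, { rpc = true })

-- The payload travels as a positional argument to nvim_exec_lua, so it
-- never passes through shell quoting or Lua string interpolation.
vim.rpcrequest(chan, 'nvim_exec_lua', [[
  local payload = ...
  return require('code-preview.hooks').pre_tool(payload)
]], { payload })

vim.fn.chanclose(chan)
```

This is exactly the quoting surface escape_lua was papering over: the argument is serialized by the RPC layer, not pasted into source text.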
Phase 3 — Port core-pre-tool.sh → core-pre-tool.lua, opt-in per backend. Each backend module emits either .sh or nvim --headless -l ...lua in its hook config. Roll out: claudecode first, then opencode, codex, copilot, gemini — one PR per backend.
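One way the per-backend opt-in could be expressed (field names here are hypothetical, meant only to show the shape of "each backend module emits its own hook command"):

```lua
-- Sketch of a backend module declaring its hook command.
local backend = {
  name = 'claudecode',
  -- After its Phase 3 PR lands, the backend points at the Lua worker:
  pre_tool_cmd = { 'nvim', '--headless', '-l', 'core-pre-tool.lua' },
  -- Until then it keeps pointing at the bash script:
  -- pre_tool_cmd = { 'bin/core-pre-tool.sh' },
}
return backend
```

Because the command is data owned by the backend module, flipping one backend never touches the others, which is what makes the one-PR-per-backend rollout safe.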
Phase 4 — Same for core-post-tool.sh, then remove the bash.
Backend work in flight
Codex and Gemini work in progress is unaffected. Continue in bash. When phase 3 lands, each backend's hook command flips from .sh to .lua in a ~10-line follow-up PR. The per-backend logic itself doesn't change.
Open questions / discussion welcome
Cold-start cost. nvim --headless -l is ~50–100ms vs ~5–10ms for bash. On a 20-edit refactor this is ~1–2s total. Acceptable, but if it bites we may want a tiny launcher that does sockconnect + RPC without spawning a second nvim. Real-world feedback after phase 3 will tell us.
Stale pidfile cleanup. When nvim crashes, the pidfile lingers. Options: a kill -0-equivalent check on read, atime-based GC in setup(), or relying on cache-dir hygiene. Leaning toward the first.
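The kill -0-equivalent check is portable through libuv, assuming the pid is stored in the pidfile (a sketch, not a settled design):

```lua
-- Sketch: decide whether a recorded pid still refers to a live process.
local uv = vim.uv or vim.loop  -- vim.uv on newer Neovim, vim.loop on older

local function pid_alive(pid)
  -- Signal 0 delivers nothing; uv.kill only reports whether the
  -- process exists (0 on success, nil + error otherwise).
  return uv.kill(pid, 0) == 0
end
```

The same call works on Windows, which is what makes this option preferable to shelling out to kill.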
Anything else? Comment below.
Out of scope for this issue
Windows support itself (testing, CI, docs) — tracked in Request Windows 11 support #46. Phase 1 unblocks it; the actual Windows enablement is a follow-up.
Refactoring lua/code-preview/diff.lua (tracked separately).