# uzcode

Local LLM coding agent for the CLI, powered by LM Studio + Qwen3. An autonomous coding agent that runs in your terminal: it connects to a local model via LM Studio and supports tool calling (file operations and shell commands), reasoning display, and safe confirmation prompts.
## Project structure

```
source/
  cli.tsx                Entry point (meow CLI flags)
  app.tsx                Root React Ink component + state
  types/
    index.ts             Shared TypeScript types
  agent/
    client.ts            OpenAI client factory (LM Studio)
    loop.ts              Recursive agent loop (tool calling)
    prompt.ts            System prompt
  tools/
    index.ts             Tool definitions + handler registry
    file-tree.ts         Directory tree builder
    list-files.ts        list_files tool
    read-file.ts         read_file tool
    write-file.ts        write_file tool (requires y/n)
    execute-command.ts   shell_execute tool (requires y/n)
  ui/
    banner.tsx           Dynamic banner (model + update check)
    log-panel.tsx        Log items display
    approval-panel.tsx   y/n confirmation panel
    spinner-row.tsx      Loading spinner
    prompt-input.tsx     TextInput wrapper
```
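The agent loop in `source/agent/loop.ts` recursively calls the model until it stops requesting tools. The sketch below shows that control flow in a simplified, dependency-free form; all type names and the injected `chat` function are illustrative assumptions, not the project's actual API:

```typescript
// Simplified sketch of a recursive tool-calling agent loop.
// The model call is injected so the flow is visible without a live server.
type Message =
  | { role: "system" | "user" | "assistant"; content: string }
  | { role: "tool"; content: string; tool_call_id: string };

interface ToolCall { id: string; name: string; args: string }
interface ModelReply { content: string; toolCalls: ToolCall[] }

type Chat = (messages: Message[]) => Promise<ModelReply>;
type Tools = Record<string, (args: unknown) => Promise<string>>;

export async function runAgent(
  chat: Chat,
  tools: Tools,
  messages: Message[],
): Promise<string> {
  const reply = await chat(messages);
  // No tool calls means the model produced its final answer.
  if (reply.toolCalls.length === 0) return reply.content;
  // Otherwise execute each requested tool, append results, and recurse.
  const next: Message[] = [...messages, { role: "assistant", content: reply.content }];
  for (const call of reply.toolCalls) {
    const result = await tools[call.name](JSON.parse(call.args));
    next.push({ role: "tool", content: result, tool_call_id: call.id });
  }
  return runAgent(chat, tools, next);
}
```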
## Requirements

- Node.js >= 18
- LM Studio running with a loaded model (e.g. `qwen/qwen3-14b`)
- Local network server address (default): `http://192.168.0.240:1234/v1`
- Optional override via env: `UZCODE_LM_STUDIO_BASE_URL`
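The base-URL override could be resolved as in this hypothetical helper; only the environment variable name and the default URL come from the requirements above, the function itself is illustrative:

```typescript
// Resolve the LM Studio base URL: UZCODE_LM_STUDIO_BASE_URL wins,
// otherwise fall back to the documented default.
const DEFAULT_BASE_URL = "http://192.168.0.240:1234/v1";

export function resolveBaseUrl(
  env: Record<string, string | undefined> = process.env,
): string {
  return env.UZCODE_LM_STUDIO_BASE_URL ?? DEFAULT_BASE_URL;
}
```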
## Install from source

```sh
git clone https://github.com/Yeb-Ich/uzcode-cli.git
cd uzcode-cli
npm install
npm run build
```

## Usage

```sh
# Interactive mode — type your prompt inside the app
npm start

# Pass a prompt directly from the command line
node dist/cli.js "List all files in the current directory"

# Specify a different model
node dist/cli.js --model=qwen/qwen3-14b "Fix the build errors"
```

## Prebuilt binary

Go to GitHub Releases and download the latest `uzcode` binary.
```sh
# Make it executable
chmod +x uzcode

# Run it
./uzcode "List files in the current directory"

# Or move it to PATH for global access
sudo mv uzcode /usr/local/bin/
uzcode "Fix the build errors"
```

To build the binary yourself:

```sh
npm install
npm run build
npm run bundle
```

The binary will be at `releases/uzcode`.
## Tools

| Tool | Description | Confirmation |
|---|---|---|
| `list_files` | List directory tree | No |
| `read_file` | Read file content | No |
| `write_file` | Write content to file | Yes (y/n) |
| `shell_execute` | Execute shell command | Yes (y/n) |
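The table maps to a tool definition + handler registry in `source/tools/index.ts`. A minimal sketch of how such a registry might look follows; the entry shape and `register` helper are assumptions, while the tool name and confirmation flag come from the table:

```typescript
// Sketch of a tool registry: each entry pairs an OpenAI-style function
// definition with a handler and a flag for the y/n approval panel.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

interface ToolEntry {
  definition: {
    type: "function";
    function: { name: string; description: string; parameters: object };
  };
  requiresConfirmation: boolean;
  handler: ToolHandler;
}

export const registry = new Map<string, ToolEntry>();

function register(entry: ToolEntry): void {
  registry.set(entry.definition.function.name, entry);
}

register({
  definition: {
    type: "function",
    function: {
      name: "write_file",
      description: "Write content to file",
      parameters: {
        type: "object",
        properties: {
          path: { type: "string" },
          content: { type: "string" },
        },
        required: ["path", "content"],
      },
    },
  },
  requiresConfirmation: true, // triggers the y/n approval panel
  handler: async () => "ok", // placeholder body
});
```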
## Startup banner

The startup banner automatically checks:

- LM Studio server — shows `Online`/`Offline` and the active model ID
- GitHub Releases — compares the latest release tag with your version and shows `Update Available: vX.X.X` if a newer version exists
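The update check boils down to comparing the latest release tag (e.g. `v1.2.0`) with the running version. A hypothetical helper sketching that comparison (the function name and parsing are assumptions):

```typescript
// Returns true when the latest release tag is strictly newer than the
// current version, comparing dotted numeric components left to right.
export function isNewerVersion(latestTag: string, current: string): boolean {
  const parse = (v: string) => v.replace(/^v/, "").split(".").map(Number);
  const latest = parse(latestTag);
  const installed = parse(current);
  for (let i = 0; i < Math.max(latest.length, installed.length); i++) {
    const a = latest[i] ?? 0;
    const b = installed[i] ?? 0;
    if (a !== b) return a > b;
  }
  return false; // identical versions: no update banner
}
```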
## Testing

- Build check — make sure TypeScript compiles without errors: `npm run build`
- Quick smoke test — start the CLI with `npm start`, type `List files in the current directory`, and press Enter. You should see the agent call `list_files`, display the result, and return a final answer.
- Safety check — when the agent calls `write_file` or `shell_execute`, a confirmation prompt appears. Type `n` to deny and verify the operation is blocked.
- Reasoning check — if the model returns `reasoning_content`, it is displayed in a gray-bordered box before the final answer.
- Banner check — start the CLI with LM Studio running and verify the banner shows `Online` and the model ID. Stop LM Studio and restart to see `Server Offline`.
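The safety check hinges on how the y/n prompt input is interpreted. A minimal sketch, assuming only an explicit "y"/"yes" approves and everything else (including `n`) denies; the function name is hypothetical:

```typescript
// Interpret the approval-panel input: approve only on an explicit yes,
// so an empty or mistyped answer blocks the operation by default.
export function isApproved(input: string): boolean {
  return ["y", "yes"].includes(input.trim().toLowerCase());
}
```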
## Development

```sh
npm run dev
```

## License

MIT