An experimental skill for iterative visual design exploration using AI. This is the companion tool to Learning to Draw Again With AI — an essay on divergence, taste, and visual exploration in AI-native design workflows.
Design Evolve is a Claude Code skill that runs an iterative design loop on a tldraw canvas. Instead of prompting for a single output and refining it line by line, you explore a broad space of visual possibilities and converge through curation.
The workflow:
SEED → REVIEW → EVOLVE → REVIEW → EVOLVE → ... → CONVERGE
- Seed — Describe what you want. The skill generates multiple diverse UI candidates using image generation.
- Review — Annotate directly on the canvas. Circle what you like, cross out what you don't, add sticky notes with feedback. You can also click the mic to dictate feedback straight onto the canvas.
- Hand off — When you're ready, right-click the canvas → Send Context. The skill picks up your annotations and the current state.
- Evolve — The skill reads your annotations and evolves all candidates, applying your feedback globally.
- Converge — Repeat until you're happy, then export as code, design specs, or a polished image.
The skill generates seed candidates, and you annotate them with visual feedback directly on the tldraw canvas. Feedback on any candidate is applied to all candidates in the next evolution round.
Each evolution round applies your feedback and branches into new variations. Over multiple rounds, the designs converge towards a unified direction.
Once you're satisfied, the skill can export your chosen design as HTML/CSS, a React component, design specs, or a polished high-resolution image.
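The branching loop above can be sketched in plain Python. This is an illustrative model only — the names (`Candidate`, `evolve`) and structure are hypothetical, not the skill's actual implementation — but it shows the key behavior: feedback collected from any candidate is folded into every candidate's prompt, and each candidate branches into new variations per round.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    prompt: str                # generation prompt for this design (hypothetical field)
    generation: int = 0
    lineage: list = field(default_factory=list)

def evolve(candidates, feedback, branches=2):
    """One evolution round: apply the round's feedback to every
    candidate, then branch each into `branches` new variations."""
    next_round = []
    for c in candidates:
        for i in range(branches):
            next_round.append(Candidate(
                prompt=f"{c.prompt}; {feedback} (variation {i + 1})",
                generation=c.generation + 1,
                lineage=c.lineage + [c.prompt],
            ))
    return next_round

# SEED → REVIEW → EVOLVE → REVIEW → EVOLVE → ... → CONVERGE
seeds = [Candidate("dashboard, dark theme"), Candidate("dashboard, light theme")]
round1 = evolve(seeds, "more whitespace, larger type")
round2 = evolve(round1, "keep the card layout from candidate 2")
print(len(round2), round2[0].generation)  # 8 2
```

Note how the population grows geometrically while every lineage carries the full feedback history — which is why curation (crossing out candidates on the canvas) matters as much as generation.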
The fastest way to get started: paste this repo's URL into a Claude Code chat. Claude will read this README and walk you through setup, asking permission at each step.
Alternatively, follow the manual steps below.
- Node.js 18+
- bun — required for image generation scripts (`npm install -g bun`)
- Python 3 — required for canvas operations (`eval_helper.py`)
- Claude Code — the skill runs through Claude Code
- A Gemini API key — free at Google AI Studio
- Recommended: `/install frontend-design` in Claude Code — Anthropic's official plugin produces significantly better HTML/CSS/React output during the final export step
1. Clone the repo and install dependencies:

   ```sh
   git clone https://github.com/Thoughts-and-Experiments/design-evolve.git
   cd design-evolve/paper
   npm install
   ```

2. Create a `.env` file in the `paper/` directory with your Gemini key:

   ```sh
   echo "GEMINI_API_KEY=your_key_here" > .env
   ```

3. Install the skill into Claude Code:

   ```sh
   mkdir -p ~/.claude/skills/design-evolve/scripts
   cp ../skills/design-evolve/SKILL.md ~/.claude/skills/design-evolve/SKILL.md
   cp ../skills/design-evolve/scripts/eval_helper.py ~/.claude/skills/design-evolve/scripts/eval_helper.py
   ```

4. Start the tldraw app and eval server (both are required):

   ```sh
   npm start
   ```

   This launches both the Vite dev server and the eval server concurrently. Your browser should open automatically to http://localhost:5173.

5. In Claude Code, type `/design-evolve` followed by a description of what you want to design. For example:

   ```
   /design-evolve a project creation page for a web app
   ```
When a user shares this repo link in Claude Code, follow these steps — ask permission before each:
1. Clone the repo (if not already local):

   ```sh
   git clone https://github.com/Thoughts-and-Experiments/design-evolve.git ~/Documents/design-evolve
   ```

2. Install dependencies:

   ```sh
   cd ~/Documents/design-evolve/paper && npm install
   ```

3. Install bun (if not present):

   ```sh
   npm install -g bun
   ```

4. Ask the user for their Gemini API key (get one free at https://aistudio.google.com/apikey), then create:

   ```sh
   echo "GEMINI_API_KEY=<their key>" > ~/Documents/design-evolve/paper/.env
   ```

5. Install the skill into Claude Code:

   ```sh
   mkdir -p ~/.claude/skills/design-evolve/scripts
   cp ~/Documents/design-evolve/skills/design-evolve/SKILL.md ~/.claude/skills/design-evolve/SKILL.md
   cp ~/Documents/design-evolve/skills/design-evolve/scripts/eval_helper.py ~/.claude/skills/design-evolve/scripts/eval_helper.py
   ```

6. Start the servers:

   ```sh
   cd ~/Documents/design-evolve/paper && npm start
   ```

7. Open the tldraw canvas at http://localhost:5173 in Chrome (should auto-open).

8. Tell the user: "Setup complete! Type `/design-evolve` followed by a description of what you want to design."
```
design-evolve/
  paper/              # tldraw canvas app + eval server
    scripts/          # Image generation CLI (generate.ts)
    client/           # React app with tldraw
    eval-server.ts    # WebSocket bridge for canvas manipulation
  skills/             # Claude Code skills
    design-evolve/    # The iterative design evolution skill
    agent-ui/         # Agent UI bridge (experimental)
  docs/               # Documentation and images
```
- No status pill on the canvas? Re-run `/design-evolve` — it re-injects the overlay.
- "Send Context" doesn't do anything? The skill waits for clicks via a blocking poll; make sure your Claude Code session is still on the `/design-evolve` turn (not exited to a fresh prompt).
- Health check: `cd paper && source .env && python3 ../skills/design-evolve/scripts/eval_helper.py health` — should return `{"status": "ok", "browserConnected": true}`.
- Image generation fails? Verify `GEMINI_API_KEY` is set in `paper/.env` and that `bun` is on your `PATH`.
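If you want to script the health check (e.g. in a pre-flight step), the JSON response can be validated programmatically. A minimal sketch — it assumes only the two fields shown above (`status`, `browserConnected`):

```python
import json

def canvas_ready(raw: str) -> bool:
    """Return True if the eval server reports a healthy, connected canvas."""
    try:
        status = json.loads(raw)
    except json.JSONDecodeError:
        return False  # server returned non-JSON (e.g. a stack trace)
    return status.get("status") == "ok" and status.get("browserConnected") is True

# Responses in the shape the health check documents:
print(canvas_ready('{"status": "ok", "browserConnected": true}'))   # True
print(canvas_ready('{"status": "ok", "browserConnected": false}'))  # False
```

Feed it the stdout of the `eval_helper.py health` command; a `False` for the second case usually means the browser tab at http://localhost:5173 isn't open.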
For the ideas behind this tool, read the full essay: Learning to Draw Again With AI
See LICENSE.md for details. The tldraw agent components are provided under the tldraw SDK license.

