LLM-Assisted Compiler – Transform architecture specs into production code.
Compose is an LLM-assisted compiler for architecture specifications. Define your application in structured .compose files, and the compiler generates framework-specific code through LLM-powered code generation with reproducible builds via caching.
At its core: a traditional compiler frontend (Lexer → Parser → Semantic Analyzer → IR) combined with an LLM-powered backend for code generation. The .compose DSL is the primary input format, with support for OpenAPI, GraphQL, and existing codebases planned.
Traditional Development:

- Write code → Review code → Version control code
- Architecture lives in docs (which drift from reality)
- Teams struggle to understand big codebases

With Compose:

- Write architecture specs → LLM generates code → Version control both
- Architecture IS the code generator (can't drift)
- Read a 50-line `.compose` file instead of a 50-file codebase
- Reproducible → Caching ensures same input = same output
- Framework-agnostic → Regenerate for different targets
- Incremental → Export maps enable smart regeneration (not everything)
```
# 1. model - Data structures
model User:
  email: text
  role: "admin" | "member"

# 2. feature - What users can do
feature "Authentication":
  - Email/password signup
  - Password reset

# 3. guide - Implementation details (added as you develop)
guide "Security":
  - Rate limit login: 5 attempts per 15 min
  - Use bcrypt cost factor 12
  - Store sessions in Redis
```
That's the entire language. No classes, no functions, no syntax complexity.
✅ **Structured DSL** – Three keywords: `model`, `feature`, `guide`

✅ **Export Maps** – Track all exported symbols for intelligent incremental generation

✅ **LLM Caching** – Reproducible builds via cached responses (commit the cache to git)

✅ **Dependency Tracking** – Regenerate only affected files when specs change

✅ **@ References** – Link to external code; the LLM translates it to the target language

✅ **Multi-Target** – Same spec → Next.js, React, Vue (more coming)

✅ **Framework-Agnostic IR** – Core system works with any input format

✅ **Version Controlled** – Architecture specs in git, not ad-hoc prompts
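To illustrate the export-map idea, here is a hypothetical sketch of what one entry could look like (an assumption for illustration, not Compose's actual on-disk format): each generated file records the symbols it exports and the spec element that produced it.

```json
{
  "generated/web/lib/auth.ts": {
    "exports": ["signup", "login", "resetPassword"],
    "source": "feature:Authentication"
  }
}
```

When `feature "Authentication"` changes, only files whose entries reference it need regeneration; everything else can keep importing the recorded symbols unchanged.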
This is v0.2.0 – here's what's NOT solved yet:

- ❌ Drift detection – No automated validation that new code correctly uses existing exports
- ❌ Model version pinning – LLM provider updates can break reproducibility
- ❌ Complex domain logic – LLMs struggle with intricate business rules
- ❌ Instruction limits – The sweet spot is ~10-20 guides per file; beyond that, quality degrades
- ❌ Perfect determinism – The cache provides reproducibility, but LLMs are probabilistic
Use Cursor/Copilot instead if:
- Quick one-off scripts or prototypes
- You're solo and not maintaining long-term
- Exploring ideas rapidly
Write code manually if:
- Complex algorithmic logic
- Mission-critical systems (banking, healthcare, flight control)
- Edge-case-heavy domains
- You need 100% control over every line
Compose is best for:
- ✅ Multi-developer teams maintaining apps over time
- ✅ CRUD apps, internal tools, MVPs
- ✅ Iterative development (adding features incrementally)
- ✅ Framework migrations (regenerate for new tech stack)
- ✅ Architecture documentation that can't go stale
| Tool | Reproducibility | Version Control | Team Collab | Incremental Gen | Framework Agnostic |
|---|---|---|---|---|---|
| ChatGPT | ❌ | ❌ | ❌ | ❌ | ❌ |
| Cursor/Copilot | ❌ | Partial | ❌ | ❌ | ❌ |
| Compose | ✅ (via cache) | ✅ | ✅ | ✅ | ✅ |
| Manual Coding | ✅ | ✅ | ✅ | ❌ | ❌ |
Trade-off: Compose adds structure (DSL + tooling) in exchange for reproducibility and team collaboration. If you don't need those, simpler tools are better.
```bash
git clone https://github.com/darula-hpp/compose-lang.git
cd compose-lang
npm install
npm link
```

```bash
compose init
# Choose: Vite + React, Express
# Include example files: Yes

cd my-compose-app
compose build
```

```bash
# Frontend
cd generated/frontend
npm install
npm run dev

# Backend (separate terminal)
cd generated/backend
npm install
npm run dev
```

```
# 1. model - Define your data structures
model Todo:
  id: number
  title: text
  completed: boolean
  priority: "low" | "medium" | "high"

model User:
  name: text
  email: text (unique)
  role: "admin" | "member"

# 2. feature - Describe what users can do
feature "Todo Management":
  - Users can create new todos
  - Users can mark todos as complete
  - Users can filter by priority
  - Admins can delete any todo

feature "User Authentication":
  - Email/password signup
  - Password reset via email
  - Session management

# 3. guide - Add implementation details as needed
guide "Todo Features":
  - Sort todos by priority and date
  - Use optimistic UI updates
  - Persist in localStorage

guide "Security":
  - Rate limit login: 5 attempts per 15 min
  - Hash passwords with bcrypt cost 12
  - Sessions expire after 24 hours
```
```
// models/todo.compose
model Todo:
  id: number
  title: text
  completed: boolean
```

```
// models/user.compose
model User:
  name: text
  email: text
```

```
// features/app.compose
import "../models/todo.compose"
import "../models/user.compose"

feature "Todo App":
  - Users can manage their todos
  - Each user sees only their own todos
```
See Language Specification for full syntax.
Initialize a new project with framework scaffolding:

```bash
compose init
# Prompts for:
# - Project name
# - Frontend framework (Vite, Next.js, Remix, Skip)
# - Backend framework (Express, Fastify, Skip)
# - Include example .compose files? (Y/n)
```

Compile `.compose` files to target code:

```bash
compose build
# Detects framework in generated/
# Generates code with LLM
# Merges intelligently into framework structure
```

Result: Production-ready Next.js app in `./generated/web/`
For complex business logic, write it in any language and reference it:
```python
# reference/pricing.py (easy to read, test, audit)
def calculate_discount(user_tier, amount):
    discounts = {'bronze': 0.05, 'silver': 0.10, 'gold': 0.15}
    return amount * discounts.get(user_tier, 0)
```

```
# app.compose
guide "Pricing Logic":
  - Reference: @reference/pricing.py::calculate_discount
  - LLM translates to target language
  - Preserves exact business rules
```

Same logic works for TypeScript, Rust, Go, Swift – the LLM translates!
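For a concrete sense of the translation step, an LLM-generated TypeScript version of the Python function above might look like the following (an illustrative sketch; actual generated code will vary by model and framework):

```typescript
// Hypothetical TypeScript translation of reference/pricing.py,
// preserving the exact business rules of calculate_discount.
const DISCOUNTS: Record<string, number> = {
  bronze: 0.05,
  silver: 0.10,
  gold: 0.15,
};

function calculateDiscount(userTier: string, amount: number): number {
  // Unknown tiers get no discount, mirroring dict.get(user_tier, 0)
  return amount * (DISCOUNTS[userTier] ?? 0);
}
```

The discount table and the unknown-tier fallback carry over one-to-one, which is what makes the reference file auditable independently of the generated code.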
```
my-app/
├── assets/            # Framework-agnostic
│   ├── logo.svg
│   └── images/
└── generated/
    └── web/
        └── public/    # Assets copied here
```
Assets automatically copied to framework output during build.
Traditional development:

```
Senior Dev → Writes React code
           → Reviews PRs
           → Fixes bugs in generated code
```

With Compose:

```
Senior Dev → Writes .compose files
           → Reviews architecture specs
           → Fixes bugs by updating guides
```
The shift: From implementation → architecture. From code review → spec review.
Compose takes your typical Cursor/AI agent workflow and makes it reproducible:
Cursor workflow:

```
You: "Add authentication"
AI: *generates code*
You: "Make it secure"
AI: *adds rate limiting*
You: "Handle edge case"
AI: *updates code*
```

Compose (same vibe, reproducible):

```
feature "Authentication"

guide "Security":
  - Rate limit to 5 attempts per 15 min

guide "Edge Cases":
  - Handle expired tokens
  - Refresh on 401
```
The .compose file IS the conversation history. Rebuild anytime, same result.
```
my-app/
├── app.compose      # Architecture spec (source of truth)
├── compose.json     # Framework/deployment config
├── reference/       # Business logic (Python, SQL, etc.)
│   └── pricing.py
├── assets/          # Static files (logos, images)
└── generated/       # LLM-generated code (don't edit!)
    ├── web/         # Next.js app
    └── mobile/      # React Native app
```

Important: If using import statements, specify entry in compose.json:

```json
{
  "targets": {
    "web": {
      "entry": "./app.compose",  // Required with imports
      "framework": "nextjs"
    }
  }
}
```

Watch mode with automatic rebuilds
## Features
✅ **Three Keywords** – `model` (data), `feature` (behavior), `guide` (implementation). That's the entire language.

✅ **@ References** – Link to external code in any language; the LLM translates it to your target

✅ **Multi-Target** – Generate web, mobile, and API from one specification

✅ **Plain English** – Describe features naturally; the LLM handles implementation

✅ **Framework-Agnostic** – Regenerate for Next.js, Vue, Svelte anytime

✅ **Deterministic** – Cached LLM responses ensure reproducible builds

✅ **Version Controlled** – Track architectural changes in Git
---
## 🚀 Getting Started
### Installation
```bash
git clone https://github.com/darula-hpp/compose-lang.git
cd compose-lang
npm install
npm link
```

```bash
# Initialize project
compose init
# Choose frameworks and include examples

# Build
cd my-project
compose build

# Run generated code
cd generated/web
npm install
npm run dev
```

1. Create `app.compose`:
```
model User:
  name: text
  email: text (unique)

model Post:
  title: text
  content: markdown
  author: User

feature "User Management":
  - Sign up with email
  - Login
  - View profile

feature "Blog":
  - Create posts with markdown
  - List all posts
  - View single post

feature "Theme":
  - Modern, clean design
  - Purple/pink gradient colors

# Add guides as you develop
guide "Performance":
  - Cache blog posts for 10 minutes
  - Use static generation for post pages
```
2. Create compose.json:
```json
{
  "targets": {
    "web": {
      "framework": "nextjs",
      "styling": "tailwindcss",
      "output": "./web"
    },
    "api": {
      "framework": "express",
      "database": "postgresql",
      "output": "./api"
    }
  },
  "llm": {
    "provider": "gemini",
    "apiKey": "${GEMINI_API_KEY}"
  }
}
```

3. Build:
```bash
export GEMINI_API_KEY="your-key"
compose build
```

Useful for:
- Fresh rebuild
- Troubleshooting build issues
- Freeing disk space
Create compose.json in your project root:
```json
{
  "llm": {
    "provider": "gemini",
    "model": "gemini-2.5-flash",
    "apiKey": "${GEMINI_API_KEY}",
    "temperature": 0.2,
    "maxTokens": 8192
  },
  "targets": {
    "frontend": {
      "entry": "./src/frontend/app.compose",
      "type": "react",
      "framework": "vite",
      "output": "./generated/frontend"
    },
    "backend": {
      "entry": "./src/backend/api.compose",
      "type": "node",
      "framework": "express",
      "output": "./generated/backend"
    }
  }
}
```

Supported providers:

- Gemini (Google) - Recommended, fast and cheap
- OpenAI (GPT-4, GPT-4o)
- Anthropic (Coming soon)
- Local models (Planned)
Set your API key:
```bash
export GEMINI_API_KEY="your-api-key"
# or
export OPENAI_API_KEY="your-api-key"
```

Frontend:
- Vite + React ✅
- Next.js ✅
- Remix ✅
- Astro (Planned)
- SolidJS (Planned)
Backend:
- Express ✅
- Fastify (Planned)
- NestJS (Planned)
- Hono (Planned)
```
frontend.page "Home"
  description: "Todo app with CRUD operations"
```
Lexer → Parser → Analyzer → Intermediate Representation

IR + Framework Context → LLM → Production Code

Framework Detection → Injection Strategy → Merged Output
Result: Complete, runnable applications with proper framework structure.
Same input always produces same output. Builds are deterministic and fast.
```bash
# First build: calls LLM
compose build   # 10 seconds

# Second build: uses cache
compose build   # 0.5 seconds
```

Delegates to official tools instead of maintaining templates:
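Conceptually, this kind of cache is a content-addressed store keyed by everything that influences the output. The sketch below is a hypothetical illustration of the idea, not Compose's actual implementation; the function names and key fields are assumptions.

```typescript
import { createHash } from "crypto";

// Cache key covers every input that affects generation:
// the spec text, the target framework, and the LLM model version.
function cacheKey(spec: string, framework: string, model: string): string {
  return createHash("sha256")
    .update(JSON.stringify({ spec, framework, model }))
    .digest("hex");
}

const cache = new Map<string, string>();

function generate(
  spec: string,
  framework: string,
  model: string,
  callLLM: (prompt: string) => string
): string {
  const key = cacheKey(spec, framework, model);
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // cache hit: no LLM call, same output
  const code = callLLM(spec);        // cache miss: call the LLM once
  cache.set(key, code);
  return code;
}
```

Because the key changes whenever the spec, framework, or model changes, a committed cache gives teammates byte-identical rebuilds without any LLM calls.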
```bash
compose init
# Runs: npm create vite@latest
# Then: merges your generated code in
```

No outdated templates. Always fresh scaffolding.
Compose understands framework conventions:
- Vite: Injects routes into `App.jsx`
- Next.js: Uses file-based routing
- Express: Registers routes in `server.js`
```
src/
├── types/
│   └── todo.compose
├── frontend/
│   └── app.compose
└── backend/
    └── api.compose
```
Import and modularize your architecture.
See ROADMAP.md for the full vision.
- VS Code extension
- More framework adapters
- Testing support
- Type generation
- Compose Ingest - Reverse compiler that turns existing code into `.compose` files
- Legacy modernization tool
- Cross-platform migration
- Architecture documentation
We welcome contributions! See CONTRIBUTING.md.
Good first issues:
- Add framework adapters
- Improve error messages
- Add examples
- Write documentation
- SYNTAX.md - Complete language reference
- docs/reference-code.md - Using @ operator and reference code
- docs/compose-json.md - Configuration options
- examples/ - Real-world examples
  - `todo-simple/` - Minimal example
  - `projectflow/` - Complex SaaS app
  - `ecommerce-with-reference/` - Shows @ operator usage
  - `payment-system-evolution.md` - How guides grow over time
- LLM Integration
- Compose Ingest (Future)
- Contributing Guide
- Project Roadmap
Compose is:
- Prompt-first - The `.compose` file is your source of truth
- Framework-agnostic - One description, many targets
- LLM-native - Built for the AI era
- Developer-friendly - Natural language with structure
MIT License - See LICENSE
Built with ❤️ for the AI-native future of software development
GitHub • Documentation • Contributing