AI-powered terminal tool that critiques commit message quality and helps you write clear, high-signal commits from your shell.
CommitLens analyzes Git history, scores message quality, and suggests well-structured Conventional Commit messages from staged changes.
Commit messages are often:

- vague (`fixed bug`)
- noisy (`wip`)
- missing context
- inconsistent across teams
CommitLens turns commit history into actionable feedback and helps teams write clearer commits consistently.
Analyze mode:

- Analyze the last `N` commits from local repositories
- Optionally analyze public remote repositories via `--url`
- AI critique + score (0-10)
- Suggestions for weak commits
- Stats dashboard (average score, vague %, one-word %)

Write mode:

- Reads `git diff --staged`
- Summarizes staged changes
- Suggests a Conventional Commit message
- You always review/edit manually (the tool never runs `git commit`)
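The suggested messages follow the Conventional Commits shape `type(scope): description`. A minimal, illustrative subject check (the type list below is the common conventionalcommits.org set, not necessarily what CommitLens enforces):

```python
import re

# Illustrative Conventional Commit subject check; the type list is an
# assumption based on the conventionalcommits.org convention.
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([^)]+\))?(!)?: \S.*"
)

def is_conventional(subject: str) -> bool:
    """Return True when the subject matches type(scope)!: description."""
    return bool(CONVENTIONAL.match(subject))

print(is_conventional("chore: add MIT license"))  # True
print(is_conventional("fixed bug"))               # False
```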
- Structured LLM output validation with Pydantic
- Rich terminal UX with progress and panels
- Diff filtering for lockfiles/binary assets
- Large diff truncation for prompt safety
- Lightweight eval harness for scoring behavior
- Minimal test suite for parsing/scoring/git-validation logic
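The diff filtering and truncation above can be sketched roughly like this (the skip-list and character budget are made-up values, not the ones in `diff_cleaner.py`):

```python
# Rough sketch of diff filtering/truncation for prompt safety; the suffix
# list and MAX_DIFF_CHARS budget are illustrative assumptions.
SKIP_SUFFIXES = (".lock", ".png", ".jpg", ".woff2")
MAX_DIFF_CHARS = 8000

def clean_diff(per_file_diffs: dict[str, str]) -> str:
    """Drop lockfile/binary diffs, then cap total size for the LLM prompt."""
    kept = [
        f"--- {path} ---\n{diff}"
        for path, diff in per_file_diffs.items()
        if not path.endswith(SKIP_SUFFIXES)
    ]
    combined = "\n".join(kept)
    if len(combined) > MAX_DIFF_CHARS:
        combined = combined[:MAX_DIFF_CHARS] + "\n[... diff truncated ...]"
    return combined
```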
- Python 3.11+
- OpenAI API
- Pydantic
- Rich
- Typer
- python-dotenv
- uv
- Python 3.11+
- Git
- Python package manager (recommended: `uv`, install with `pip install uv`)
- OpenAI API key
```
git clone <your-repo-url>
cd commitlens
cp .env.example .env
# add OPENAI_API_KEY to .env
uv sync
uv run python commit_critic.py --analyze --limit 10
```

Install dependencies with `uv`:

```
uv sync
```

Or without `uv`:

```
python -m venv .venv && source .venv/bin/activate && pip install .
```

Create `.env`:

```
OPENAI_API_KEY=your_key_here
```

- `OPENAI_API_KEY` is read from environment variables (or a local `.env`) at runtime.
- CommitLens does not write your API key to project files.
- `.env` is git-ignored; only `.env.example` is tracked.
- If a key is exposed, rotate it immediately in your OpenAI dashboard.
- Use least-privilege practices: keep keys local; do not paste keys into commit messages, issues, or logs.
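At runtime the key lookup amounts to something like the following (a behavioral sketch only; the real logic lives in `commit_critic/config.py`):

```python
import os

def load_api_key() -> str:
    """Read the key from the environment; python-dotenv merges .env in first."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise SystemExit("OPENAI_API_KEY is not set; add it to .env or your shell.")
    return key
```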
```
# Analyze last 50 commits (local repo)
uv run python commit_critic.py --analyze

# Analyze last 50 commits from a remote public repo
uv run python commit_critic.py --analyze --url="https://github.com/steel-dev/steel-browser"

# Interactive commit writer
uv run python commit_critic.py --write
```

Or without `uv`:

```
source .venv/bin/activate
python commit_critic.py --analyze
python commit_critic.py --analyze --url="https://github.com/steel-dev/steel-browser"
python commit_critic.py --write
```

See all options:

```
uv run python commit_critic.py --help
```

Example `--analyze` output:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💩 COMMITS THAT NEED WORK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─────────────────────────────────────────────────────────────────────────────╮
│ Commit: "add github action and quick start guide" │
│ Score: 3/10 │
│ Issue: Missing type prefix and scope; message is vague and not capitalized. │
│ Better: ci: add GitHub Action and docs: add quick start guide │
╰─────────────────────────────────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💎 WELL-WRITTEN COMMITS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ Commit: "chore: add MIT license and update README with license link │
│ │
│ - Added a new LICENSE file containing the full MIT License text │
│ - Updated README.md to replace placeholder license text with a link to the LICENSE file" │
│ Score: 9/10 │
│ Why it's good: Proper type 'chore' used; clear and descriptive message with useful details. │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────╮
│ Commit: "docs: enhance README with detailed usage, features, examples, and architecture" │
│ Score: 8/10 │
│ Why it's good: Uses 'docs' type correctly and clearly describes the changes made. │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 YOUR STATS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
┌──────────────────┬───────────┐
│ Average score │ 6.8/10 │
│ Vague commits │ 1 (25.0%) │
│ One-word commits │ 0 (0.0%) │
└──────────────────┴───────────┘
Example `--write` output:

Analyzing staged changes... (2 files changed, +22 -1 lines)
╭─────────────────────────────────────────────────────────────────────────────────────────╮
│ SUMMARY: │
│ - Add MIT License file with full text │
│ - Update README to link to the new LICENSE file │
│ │
│ SUBJECT: │
│ chore: add MIT license and update README with license link │
│ │
│ BODY: │
│ - Added a new LICENSE file containing the full MIT License text │
│ - Updated README.md to replace placeholder license text with a link to the LICENSE file │
╰─────────────────────────────────────────────────────────────────────────────────────────╯
Changes detected:
- Add MIT License file with full text
- Update README to link to the new LICENSE file
Suggested commit message:
╭─────────────────────────────────────────────────────────────────────────────────────────╮
│ chore: add MIT license and update README with license link │
│ │
│ - Added a new LICENSE file containing the full MIT License text │
│ - Updated README.md to replace placeholder license text with a link to the LICENSE file │
╰─────────────────────────────────────────────────────────────────────────────────────────╯
Press Enter to accept, or type your own message ():
CommitLens scores each commit from 0 to 10 based on Conventional Commit clarity and specificity.
Score bands:
- 0-4 -> `needs_work`
- 5-7 -> `mid`
- 8-10 -> `well_written`
What the stats mean:
- Average score: arithmetic mean of all analyzed commit scores
- Vague commits: commits with score < 5 (the same threshold as `needs_work`)
- One-word commits: commits whose message contains only one word
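Under those definitions, the score bands and stats can be sketched as follows (thresholds are from this README; the helper names are illustrative, not CommitLens's actual API):

```python
from statistics import mean

def bucket(score: int) -> str:
    """README score bands: 0-4 needs_work, 5-7 mid, 8-10 well_written."""
    if score < 5:
        return "needs_work"
    if score <= 7:
        return "mid"
    return "well_written"

def history_stats(commits: list[tuple[str, int]]) -> dict:
    """commits is a list of (message, score) pairs."""
    scores = [score for _, score in commits]
    return {
        "average": round(mean(scores), 1),
        "vague_pct": 100 * sum(s < 5 for s in scores) / len(scores),
        "one_word_pct": 100 * sum(len(msg.split()) == 1 for msg, _ in commits) / len(commits),
    }
```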
Run eval suite:
```
uv run python evals/run_eval.py
```

Eval report includes:

- Bucket accuracy (`needs_work` / `mid` / `well_written`)
- Score tolerance metric (±1) to allow for LLM variance
- Repeatability check: each eval case is run 5 times to measure consistency across runs
Eval bucket definitions:
- `needs_work`: score < 5
- `mid`: score 5-7
- `well_written`: score >= 8
How to read eval columns:
- Expected: expected bucket from `evals/commits.json`
- Expected Score: target score from `evals/commits.json`
- Pass %: percentage of runs where the predicted bucket matched the expected bucket
- Score μ: mean predicted score across repeated runs
- Score σ: standard deviation of predicted scores across repeated runs
- Tolerance %: percentage of runs where `abs(predicted - expected_score) <= 1`
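Those columns reduce to simple aggregates over the repeated runs; a sketch using the README's thresholds (the function name and the 5-run example are illustrative, not the harness's actual code):

```python
from statistics import mean, pstdev

def bucket(score: int) -> str:
    """README score bands: 0-4 needs_work, 5-7 mid, 8-10 well_written."""
    return "needs_work" if score < 5 else ("mid" if score <= 7 else "well_written")

def eval_row(predicted: list[int], expected_bucket: str, expected_score: int) -> dict:
    """Aggregate one eval case's repeated runs into the report columns."""
    n = len(predicted)
    return {
        "pass_pct": 100 * sum(bucket(p) == expected_bucket for p in predicted) / n,
        "score_mu": mean(predicted),
        "score_sigma": pstdev(predicted),
        "tolerance_pct": 100 * sum(abs(p - expected_score) <= 1 for p in predicted) / n,
    }
```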
Run tests:
```
python -m unittest discover -s tests -v
```

Pipeline:

```
git commits/diff
  -> prompt construction
  -> OpenAI LLM call
  -> Pydantic validation
  -> scoring + stats
  -> Rich terminal rendering
```
Key modules:
- `commit_critic/app.py`: CLI entry and mode orchestration
- `commit_critic/git_ops.py`: git cloning/log/diff utilities
- `commit_critic/llm_client.py`: LLM prompts, API calls, parsing, validation
- `commit_critic/scoring.py`: thresholds and statistics
- `commit_critic/ui.py`: Rich output rendering
- `evals/run_eval.py`: lightweight scoring evaluation harness
CLI options:

- `--limit`: number of commits to analyze (default: 50)
- `--model`: OpenAI model to use (default: `gpt-4.1-mini`); see the available OpenAI models
- `--url`: analyze a remote repository
- `--analyze`: analyze commit history mode
- `--write`: interactive commit writer mode
- Uses a shallow clone: `git clone --depth 200`
- Clones into a temporary directory for analysis
- The temporary clone directory is deleted after the run
- Best used with `--limit` to keep analysis focused on recent commits
- Public repositories are supported by default; private repositories require pre-configured Git credentials
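The remote flow above — shallow clone into a temp directory, analyze, delete — can be sketched like this (a behavioral sketch, not `git_ops.py` itself; the injectable `run` parameter is an assumption made here to keep the sketch testable):

```python
import shutil
import subprocess
import tempfile

def with_shallow_clone(url: str, analyze, run=subprocess.run):
    """Clone shallowly into a temp dir, run analyze(path), always clean up."""
    tmp = tempfile.mkdtemp(prefix="commitlens-")
    try:
        run(["git", "clone", "--depth", "200", url, tmp], check=True)
        return analyze(tmp)
    finally:
        shutil.rmtree(tmp, ignore_errors=True)  # temp clone removed after the run
```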
- OpenAI models are supported
- Remote `--url` analysis supports public Git repositories

Planned:

- Multi-provider support (Anthropic/Gemini)
- Local model support
- Optional caching for repeated analyses
```
.
├── commit_critic/        # Core package logic
│   ├── app.py            # Typer CLI mode orchestration (--analyze / --write)
│   ├── config.py         # .env loading and API key validation
│   ├── git_ops.py        # Git operations (clone/log/diff/repo checks)
│   ├── llm_client.py     # LLM prompts, API calls, response parsing
│   ├── models.py         # Pydantic schemas for critiques/suggestions
│   ├── scoring.py        # Commit bucket logic and aggregate stats
│   ├── ui.py             # Rich terminal rendering
│   └── diff_cleaner.py   # Diff filtering/truncation for prompt safety
├── tests/                # Unit tests (logic and mocked integrations)
├── evals/                # LLM scoring evaluation harness
├── commit_critic.py      # CLI entry point
├── pyproject.toml        # Project metadata and dependencies
├── uv.lock               # Reproducible dependency lockfile
├── .env.example          # Environment variable template
├── README.md             # Documentation
└── LICENSE               # MIT license
```