Hyntx is a CLI tool that analyzes your Claude Code prompts and helps you become a better prompt engineer through retrospective analysis and actionable feedback.
🧪 BETA: This project is functional but still evolving. Feedback and contributions welcome!
Hyntx reads your Claude Code conversation logs and uses AI to detect common prompt engineering anti-patterns. It provides you with:
- Pattern detection: Identifies recurring issues in your prompts (missing context, vague instructions, etc.)
- Actionable suggestions: Specific recommendations with concrete "Before/After" rewrites
- Privacy-first: Automatically redacts secrets and defaults to local AI (Ollama)
- Zero configuration: Interactive setup on first run with auto-save to shell config
Think of it as a retrospective code review for your prompts.
- Offline-first analysis with local Ollama (privacy-friendly, cost-free)
- Multi-provider support: Ollama (local), Anthropic Claude, Google Gemini with automatic fallback
- Before/After rewrites: Concrete examples showing how to improve your prompts
- Automatic secret redaction: API keys, emails, tokens, credentials
- Flexible date filtering: Analyze today, yesterday, specific dates, or date ranges
- Project filtering: Focus on specific Claude Code projects
- Multiple output formats: Beautiful terminal output or markdown reports
- Watch mode: Real-time monitoring and analysis of prompts as you work
- Smart reminders: Oh-my-zsh style periodic reminders (configurable)
- Auto-configuration: Saves settings to your shell config automatically
- Dry-run mode: Preview what will be analyzed before sending to AI
```bash
npm install -g hyntx
# or run once without installing:
npx hyntx
# or with pnpm:
pnpm add -g hyntx
```

Run Hyntx with a single command:
```bash
hyntx
```

On first run, Hyntx will guide you through an interactive setup:
- Select one or more AI providers (Ollama recommended for privacy)
- Configure models and API keys for selected providers
- Set reminder preferences
- Auto-save configuration to your shell (or get manual instructions)
That's it! Hyntx will analyze today's prompts and show you improvement suggestions with concrete "Before/After" examples.
```bash
# Analyze today's prompts
hyntx

# Analyze yesterday
hyntx --date yesterday

# Analyze a specific date
hyntx --date 2025-01-20

# Analyze a date range
hyntx --from 2025-01-15 --to 2025-01-20

# Filter by project name
hyntx --project my-awesome-app

# Save report to file
hyntx --output report.md

# Preview without sending to AI
hyntx --dry-run

# Check reminder status
hyntx --check-reminder

# Watch mode - real-time analysis
hyntx --watch

# Watch specific project only
hyntx --watch --project my-app

# Analysis modes - control speed vs accuracy trade-off
hyntx --analysis-mode batch      # Fast (default): ~300-400ms/prompt
hyntx --analysis-mode individual # Accurate: ~1000-1500ms/prompt
hyntx -m individual              # Short form
```

```bash
# Analyze last week for a specific project
hyntx --from 2025-01-15 --to 2025-01-22 --project backend-api

# Generate markdown report for yesterday
hyntx --date yesterday --output yesterday-analysis.md

# Deep analysis with individual mode for critical project
hyntx -m individual --project production-api --date today

# Fast batch analysis across date range
hyntx --from 2025-01-15 --to 2025-01-20 --analysis-mode batch -o report.md

# Watch mode with individual analysis (slower but detailed)
hyntx --watch -m individual --project critical-app
```

Hyntx offers two analysis modes to balance speed and accuracy based on your needs:

Batch mode (default):
- Speed: ~300-400ms per prompt
- Best for: Daily analysis, quick feedback, large prompt batches
- Accuracy: Good categorization for most use cases
- When to use: Regular check-ins, monitoring prompt quality over time
```bash
hyntx                       # Uses batch mode by default
hyntx --analysis-mode batch # Explicit batch mode
```

Individual mode:

- Speed: ~1000-1500ms per prompt
- Best for: Deep analysis, quality-focused reviews, important prompts
- Accuracy: Better categorization and more nuanced pattern detection
- When to use: Learning sessions, preparing critical prompts, detailed audits
```bash
hyntx --analysis-mode individual # Use individual mode
hyntx -m individual              # Short form
```

| Mode | Speed/Prompt | Use Case | Accuracy | When to Use |
|---|---|---|---|---|
| Batch | ~300-400ms | Daily analysis, monitoring | Good | Quick feedback, large datasets |
| Individual | ~1-1.5s | Deep analysis, learning | Better | Quality-focused reviews, critical prompts |
Speedup: Batch mode is 3-4x faster than individual mode.
Recommendation: Use batch mode (default) for daily analysis to get fast feedback. Switch to individual mode when:
- You need detailed, nuanced feedback on each prompt
- You're learning prompt engineering patterns
- Analyzing high-stakes or complex prompts
- Conducting quality audits or teaching sessions
Performance Note: Numbers based on gemma3:4b on CPU. Actual speed varies by hardware, model size, and prompt complexity.
Detailed Guide: See Analysis Modes Documentation for comprehensive comparison, examples, and decision guidelines.
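The mechanics behind the trade-off are easy to picture. Below is a minimal sketch of how the two modes might orchestrate provider calls; the `Provider` interface and `analyze` method are illustrative assumptions, not Hyntx's actual internals:

```typescript
// Illustrative sketch: batch vs. individual orchestration.
// `Provider` and `analyze` are assumed names, not Hyntx's real internals.
interface Provider {
  analyze(text: string): Promise<string>;
}

// Batch mode: one model call covering many prompts amortizes
// per-request overhead, so per-prompt throughput is higher.
async function analyzeBatch(provider: Provider, prompts: string[]) {
  const combined = prompts.map((p, i) => `Prompt ${i + 1}: ${p}`).join('\n---\n');
  return provider.analyze(combined); // single round-trip
}

// Individual mode: one call per prompt lets the model focus on each
// prompt in isolation (better nuance) at the cost of N round-trips.
async function analyzeIndividually(provider: Provider, prompts: string[]) {
  const results: string[] = [];
  for (const prompt of prompts) {
    results.push(await provider.analyze(prompt)); // N round-trips
  }
  return results;
}
```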
Hyntx allows you to customize which analysis rules are enabled and their severity levels through a `.hyntxrc.json` file in your project root.
- `vague` - Detects vague requests lacking specificity
- `no-context` - Detects missing background information
- `too-broad` - Detects overly broad requests that should be broken down
- `no-goal` - Detects prompts without a clear outcome
- `imperative` - Detects commands without explanation
For each pattern, you can:
- Disable: Set `enabled: false` to skip detection
- Override severity: Set `severity` to `"low"`, `"medium"`, or `"high"`
Create `.hyntxrc.json` in your project root:

```json
{
"rules": {
"imperative": {
"enabled": false
},
"vague": {
"severity": "high"
},
"no-context": {
"severity": "high"
},
"too-broad": {
"severity": "medium"
}
}
}
```

When a pattern is disabled:

- Filtered out: Disabled patterns are completely excluded from analysis results
- No detection: The AI will not look for those specific issues
- Updated stats: Pattern counts and frequency calculations exclude disabled patterns
- Warning: If all patterns are disabled, you'll see a warning that no analysis will occur
When a severity is overridden:

- Changed priority: Patterns are sorted by severity (high → medium → low), then by frequency
- Updated display: The reporter shows severity badges based on your configuration
- No effect on detection: Severity only affects sorting and display, not whether the pattern is detected
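Putting the two behaviors together, here is a minimal sketch of the filtering and sorting described above (the `Pattern` shape is simplified for illustration):

```typescript
// Illustrative sketch of rule filtering and severity sorting.
// The Pattern shape is simplified; real fields may differ.
type Severity = 'low' | 'medium' | 'high';

interface Pattern {
  id: string;
  severity: Severity;
  frequency: number; // 0-1
}

type Rules = Record<string, { enabled?: boolean; severity?: Severity }>;

const severityRank: Record<Severity, number> = { high: 0, medium: 1, low: 2 };

function applyRules(patterns: Pattern[], rules: Rules): Pattern[] {
  return patterns
    // Disabled patterns are excluded entirely from results and stats
    .filter((p) => rules[p.id]?.enabled !== false)
    // Severity overrides change priority, not detection
    .map((p) => ({ ...p, severity: rules[p.id]?.severity ?? p.severity }))
    // Sort by severity (high → medium → low), then by frequency
    .sort(
      (a, b) =>
        severityRank[a.severity] - severityRank[b.severity] ||
        b.frequency - a.frequency,
    );
}
```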
Hyntx will warn you about:
- Invalid pattern IDs: If you specify a pattern ID that doesn't exist
- All patterns disabled: If your configuration disables every pattern
These warnings appear immediately when the configuration is loaded.
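A sketch of what this validation implies, using the pattern IDs listed earlier (illustrative only, not Hyntx's actual code):

```typescript
// Illustrative sketch of .hyntxrc.json validation.
const KNOWN_PATTERNS = ['vague', 'no-context', 'too-broad', 'no-goal', 'imperative'];

function validateRules(rules: Record<string, { enabled?: boolean }>) {
  // Warn on pattern IDs that don't exist
  for (const id of Object.keys(rules)) {
    if (!KNOWN_PATTERNS.includes(id)) {
      console.warn(`Unknown pattern ID in .hyntxrc.json: "${id}"`);
    }
  }
  // Warn if every pattern is disabled
  const allDisabled = KNOWN_PATTERNS.every((id) => rules[id]?.enabled === false);
  if (allDisabled) {
    console.warn('All patterns are disabled; no analysis will occur.');
  }
}
```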
Hyntx uses environment variables for configuration. The interactive setup can auto-save these to your shell config (~/.zshrc, ~/.bashrc).
Configure one or more providers in priority order. Hyntx will try each provider in order and fall back to the next if unavailable.
```bash
# Single provider (Ollama only)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=gemma3:4b

# Multi-provider with fallback (tries Ollama first, then Anthropic)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=gemma3:4b
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here

# Cloud-first with local fallback
export HYNTX_SERVICES=anthropic,ollama
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_OLLAMA_MODEL=gemma3:4b
```

Ollama:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_OLLAMA_MODEL` | `gemma3:4b` | Model to use |
| `HYNTX_OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
Anthropic:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_ANTHROPIC_MODEL` | `claude-3-5-haiku-latest` | Model to use |
| `HYNTX_ANTHROPIC_KEY` | - | API key (required) |
Google:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_GOOGLE_MODEL` | `gemini-2.0-flash-exp` | Model to use |
| `HYNTX_GOOGLE_KEY` | - | API key (required) |
```bash
# Set reminder frequency (7d, 14d, 30d, or never)
export HYNTX_REMINDER=7d
```

```bash
# Add to ~/.zshrc or ~/.bashrc (or let Hyntx auto-save it)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=gemma3:4b
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_REMINDER=14d
# Optional: Enable periodic reminders
hyntx --check-reminder 2>/dev/null
```

Then reload your shell:

```bash
source ~/.zshrc # or source ~/.bashrc
```

Ollama runs AI models locally for privacy and cost savings.
1. Install Ollama: ollama.ai

2. Pull a model:

   ```bash
   ollama pull gemma3:4b
   ```

3. Verify it's running:

   ```bash
   ollama list
   ```

4. Run Hyntx (it will auto-configure on first run):

   ```bash
   hyntx
   ```
1. Get an API key from console.anthropic.com

2. Run Hyntx and select Anthropic during setup, or set manually:

   ```bash
   export HYNTX_SERVICES=anthropic
   export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
   ```
1. Get an API key from ai.google.dev

2. Run Hyntx and select Google during setup, or set manually:

   ```bash
   export HYNTX_SERVICES=google
   export HYNTX_GOOGLE_KEY=your-google-api-key
   ```
Configure multiple providers for automatic fallback:
```bash
# If Ollama is down, automatically try Anthropic
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=gemma3:4b
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
```

When running, Hyntx will show fallback behavior:
```
⚠️ ollama unavailable, trying anthropic...
✅ anthropic connected
```
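Conceptually, the fallback chain is a simple loop over providers in priority order. A minimal sketch (the `isAvailable` method is an assumed name; the library's real entry point for this is the `getAvailableProvider` helper described in the Library Usage section):

```typescript
// Illustrative sketch of provider fallback in priority order.
// `isAvailable` is an assumed method name, not Hyntx's actual API.
interface Provider {
  name: string;
  isAvailable(): Promise<boolean>;
}

async function firstAvailable(providers: Provider[]): Promise<Provider> {
  for (const provider of providers) {
    if (await provider.isAvailable()) {
      console.log(`✅ ${provider.name} connected`);
      return provider;
    }
    console.warn(`⚠️ ${provider.name} unavailable, trying next...`);
  }
  throw new Error('No configured provider is available');
}
```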
```
📊 Hyntx - 2025-01-20
──────────────────────────────────────────────────
📈 Statistics
Prompts: 15
Projects: my-app, backend-api
Score: 6.5/10
⚠️ Patterns (3)
🔴 Missing Context (60%)
• "Fix the bug in auth"
• "Update the component"
💡 Include specific error messages, framework versions, and file paths
Before:
❌ "Fix the bug in auth"
After:
✅ "Fix authentication bug in src/auth/login.ts where users get
'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5."
🟡 Vague Instructions (40%)
• "Make it better"
• "Improve this"
💡 Define specific success criteria and expected outcomes
Before:
❌ "Make it better"
After:
✅ "Optimize the database query to reduce response time from 500ms
to under 100ms. Focus on adding proper indexes."
──────────────────────────────────────────────────
💎 Top Suggestion
"Add error messages and stack traces to debugging requests for
10x faster resolution."
──────────────────────────────────────────────────
```
Hyntx can run as a Model Context Protocol (MCP) server, enabling real-time prompt analysis directly within MCP-compatible clients like Claude Code.
Add hyntx to your Claude Code MCP configuration. You have two options:
Configuration visible only to you, stored in ~/.claude.json:
```bash
# Add using Claude Code CLI
claude mcp add hyntx

# Or manually edit ~/.claude.json
```

```json
{
"mcpServers": {
"hyntx": {
"command": "hyntx",
"args": ["--mcp-server"]
}
}
}
```

Configuration shared with your team via Git, stored in `.mcp.json` at your project root:

```json
{
"mcpServers": {
"hyntx": {
"command": "hyntx",
"args": ["--mcp-server"]
}
}
}
```

After adding the configuration, restart your Claude Code session. The hyntx tools will be available in your conversations.
- Hyntx installed globally: `npm install -g hyntx`
- AI provider configured: Set up Ollama (recommended) or cloud providers via environment variables
If using Ollama (recommended for privacy):
```bash
# Ensure Ollama is running
ollama serve

# Pull a model if needed
ollama pull gemma3:4b

# Set environment variables (add to ~/.zshrc or ~/.bashrc)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=gemma3:4b
```

Hyntx exposes three tools through the MCP interface:
`analyze-prompt`: Analyzes a prompt to detect anti-patterns and issues, and returns improvement suggestions.
Input Schema:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The prompt text to analyze |
| `date` | string | No | Date context in ISO format. Defaults to the current date. |
Example Output:
```json
{
"patterns": [
{
"id": "no-context",
"name": "Missing Context",
"severity": "high",
"frequency": "100%",
"suggestion": "Include specific error messages and file paths",
"examples": ["Fix the bug in auth"]
}
],
"stats": {
"promptCount": 1,
"overallScore": 4.5
},
"topSuggestion": "Add error messages and stack traces for faster resolution"
}
```

`suggest-improvements`: Returns concrete before/after rewrites showing how to improve a prompt.
Input Schema:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The prompt text to analyze for improvements |
| `date` | string | No | Date context in ISO format. Defaults to the current date. |
Example Output:
```json
{
"improvements": [
{
"issue": "Missing Context",
"before": "Fix the bug in auth",
"after": "Fix authentication bug in src/auth/login.ts where users get 'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5.",
"suggestion": "Include specific error messages, framework versions, and file paths"
}
],
"summary": "Found 1 improvement(s)",
"topSuggestion": "Add error messages and stack traces for faster resolution"
}
```

`check-context`: Verifies whether a prompt has sufficient context for effective AI interaction.
Input Schema:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The prompt text to check for context |
| `date` | string | No | Date context in ISO format. Defaults to the current date. |
Example Output:
```json
{
"hasSufficientContext": false,
"score": 4.5,
"issues": ["Missing Context", "Vague Instructions"],
"suggestion": "Include specific error messages and file paths",
"details": "Prompt lacks sufficient context for effective AI interaction"
}
```

Once configured, you can use these tools in your Claude Code conversations:
Analyze a prompt before sending:
Use the analyze-prompt tool to check: "Fix the login bug"
Get improvement suggestions:
Use suggest-improvements on: "Make the API faster"
Check if your prompt has enough context:
Use check-context to verify: "Update the component to handle errors"
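For the curious, tools like these are typically registered over stdio with the MCP TypeScript SDK. The sketch below is illustrative only; the heuristic is a stub and this is not Hyntx's actual server code:

```typescript
// Illustrative sketch of registering an MCP tool over stdio.
// Not Hyntx's actual source; the analysis logic is stubbed.
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({ name: 'hyntx-example', version: '0.0.1' });

server.tool(
  'check-context',
  { prompt: z.string(), date: z.string().optional() },
  async ({ prompt }) => {
    // Stub heuristic standing in for the real analysis call
    const hasSufficientContext = prompt.split(/\s+/).length > 8;
    return {
      content: [
        { type: 'text' as const, text: JSON.stringify({ hasSufficientContext }) },
      ],
    };
  },
);

// stdio transport matches the `hyntx --mcp-server` style of invocation
await server.connect(new StdioServerTransport());
```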
1. Verify hyntx is installed globally:

   ```bash
   which hyntx
   # Should output: /usr/local/bin/hyntx or similar
   ```

2. Test manual startup:

   ```bash
   hyntx --mcp-server
   # Should output: MCP server running on stdio
   ```

3. Check environment variables are set (if using cloud providers):

   ```bash
   echo $HYNTX_SERVICES
   echo $HYNTX_ANTHROPIC_KEY # if using Anthropic
   ```
1. If using Ollama, ensure it's running:

   ```bash
   ollama list
   # If no output, start Ollama:
   ollama serve
   ```

2. If using cloud providers, verify API keys are set:

   ```bash
   # Check if keys are configured
   env | grep HYNTX_
   ```
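You can also script the Ollama check. A minimal reachability sketch against Ollama's standard `/api/tags` endpoint, mirroring the `HYNTX_OLLAMA_HOST` default:

```typescript
// Minimal reachability check against the Ollama HTTP API.
const host = process.env.HYNTX_OLLAMA_HOST ?? 'http://localhost:11434';

try {
  const res = await fetch(`${host}/api/tags`); // lists installed models
  const { models } = (await res.json()) as { models: Array<{ name: string }> };
  console.log(`Ollama reachable; ${models.length} model(s) installed`);
} catch {
  console.error(`Ollama not reachable at ${host}; try running \`ollama serve\``);
}
```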
1. Restart Claude Code completely after config changes

2. Verify the config file exists and is in the correct location:

   - User-scoped: `~/.claude.json`
   - Project-scoped: `.mcp.json` (in project root)

3. Check JSON syntax in the config file:

   ```bash
   # Verify user-scoped config
   cat ~/.claude.json | jq .

   # Or verify project-scoped config
   cat .mcp.json | jq .

   # Or use Claude Code CLI to list MCP servers
   claude mcp list
   ```
- Local Ollama models are fastest when a GPU is available; CPU-only inference is noticeably slower
- Consider using a smaller, faster model: `export HYNTX_OLLAMA_MODEL=gemma3:1b`
- Cloud providers (Anthropic, Google) offer faster responses but require API keys
Hyntx takes your privacy seriously:
- Local-first: Defaults to Ollama for offline analysis
- Automatic redaction: Removes API keys, credentials, emails, tokens before analysis
- Read-only: Never modifies your Claude Code logs
- No telemetry: Hyntx doesn't send usage data anywhere
- OpenAI/Anthropic API keys (`sk-*`, `claude-*`)
- AWS credentials (`AKIA*`, secret keys)
- Bearer tokens
- HTTP credentials in URLs
- Email addresses
- Private keys (PEM format)
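Redaction of this kind typically comes down to a pass of regex replacements. A minimal sketch with simplified patterns (illustrative only; Hyntx's actual rules live in `src/core/sanitizer.ts`):

```typescript
// Illustrative redaction sketch; patterns are simplified examples,
// not Hyntx's actual rules.
const REDACTIONS: Array<[RegExp, string]> = [
  [/sk-[A-Za-z0-9-]{16,}/g, '[REDACTED_API_KEY]'],            // API keys
  [/AKIA[0-9A-Z]{16}/g, '[REDACTED_AWS_KEY]'],                 // AWS access key IDs
  [/Bearer\s+[A-Za-z0-9._-]+/g, 'Bearer [REDACTED]'],          // Bearer tokens
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, '[REDACTED_EMAIL]'],        // Email addresses
  [/https?:\/\/[^\s:@\/]+:[^\s@\/]+@/g, 'https://[REDACTED]@'], // Credentials in URLs
];

function redact(text: string): string {
  return REDACTIONS.reduce((out, [pattern, mask]) => out.replace(pattern, mask), text);
}

// Example: redact('contact me@example.com with key sk-abcdef1234567890abcd')
// -> 'contact [REDACTED_EMAIL] with key [REDACTED_API_KEY]'
```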
- Read logs: Parses Claude Code conversation logs from `~/.claude/projects/`
- Extract prompts: Filters user messages from conversations
- Sanitize: Redacts sensitive information automatically
- Analyze: Sends sanitized prompts to AI provider for pattern detection
- Report: Displays findings with examples and suggestions
- Node.js: 22.0.0 or higher
- Claude Code: Must have Claude Code installed and at least one conversation
- AI Provider: At least one of the following:
- Ollama (recommended for privacy and cost savings)
- Anthropic Claude API key
- Google Gemini API key
For local analysis with Ollama, you need to have a compatible model installed. See docs/MINIMUM_VIABLE_MODEL.md for detailed recommendations and performance benchmarks.
Quick picks:
| Use Case | Model | Parameters | Disk Size | Speed (CPU) | Quality |
|---|---|---|---|---|---|
| Daily use | `gemma3:4b` | 2-3B | ~2GB | ~2-5s/prompt | Good |
| Production | `mistral:7b` | 7B | ~4GB | ~5-10s/prompt | Better |
| Maximum quality | `qwen2.5:14b` | 14B | ~9GB | ~15-30s/prompt | Excellent |
Installation:
```bash
# Install recommended model (gemma3:4b)
ollama pull gemma3:4b

# Or choose a different model
ollama pull mistral:7b
ollama pull qwen2.5:14b
```

For complete model comparison, compatibility info, and performance notes, see the Model Requirements documentation.
Make sure you've used Claude Code at least once. Logs are stored in:
```
~/.claude/projects/<project-hash>/logs.jsonl
```
- Check Ollama is running: `ollama list`
- Start Ollama: `ollama serve`
- Verify the host: `echo $HYNTX_OLLAMA_HOST` (default: `http://localhost:11434`)
- Check the date format: `YYYY-MM-DD`
- Verify you used Claude Code on those dates
- Try `--dry-run` to see what logs are being read
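If you want to script the same preview that `--dry-run` gives you, the library API described in the next section can read logs without contacting any AI provider:

```typescript
import { readLogs, type ExtractedPrompt } from 'hyntx';

// Preview what would be analyzed, without sending anything to an AI provider
const { prompts } = await readLogs({ date: '2025-01-20' });
prompts.forEach((p: ExtractedPrompt) => {
  console.log(p.content.slice(0, 80)); // first 80 characters of each prompt
});
```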
Hyntx can also be used as a library in your Node.js applications for custom integrations, CI/CD pipelines, or building tooling on top of the analysis engine.
```bash
npm install hyntx
# or
pnpm add hyntx
```

```typescript
import {
analyzePrompts,
sanitizePrompts,
readLogs,
createProvider,
getEnvConfig,
type AnalysisResult,
type ExtractedPrompt,
} from 'hyntx';
// Read Claude Code logs for a specific date
const { prompts } = await readLogs({ date: 'today' });
// Sanitize prompts to remove secrets
const { prompts: sanitizedTexts } = sanitizePrompts(
prompts.map((p: ExtractedPrompt) => p.content),
);
// Get environment configuration
const config = getEnvConfig();
// Create an AI provider
const provider = await createProvider('ollama', config);
// Analyze the prompts
const result: AnalysisResult = await analyzePrompts({
provider,
prompts: sanitizedTexts,
date: '2025-12-26',
});
// Use the results
console.log(`Overall score: ${result.stats.overallScore}/10`);
console.log(`Patterns detected: ${result.patterns.length}`);
result.patterns.forEach((pattern) => {
console.log(`- ${pattern.name}: ${pattern.severity}`);
console.log(` Suggestion: ${pattern.suggestion}`);
});
```

CI/CD Integration - Fail builds when prompt quality drops below threshold:

```typescript
import { analyzePrompts, readLogs, createProvider, getEnvConfig } from 'hyntx';
const config = getEnvConfig();
const provider = await createProvider('ollama', config);
const { prompts } = await readLogs({ date: 'today' });
const result = await analyzePrompts({
provider,
prompts: prompts.map((p) => p.content),
date: new Date().toISOString().split('T')[0],
});
// Fail CI if quality score is too low
const QUALITY_THRESHOLD = 7.0;
if (result.stats.overallScore < QUALITY_THRESHOLD) {
console.error(
`Quality score ${result.stats.overallScore} below threshold ${QUALITY_THRESHOLD}`,
);
process.exit(1);
}
```

Custom Analysis - Analyze specific prompts without reading logs:

```typescript
import { analyzePrompts, createProvider, getEnvConfig } from 'hyntx';
const config = getEnvConfig();
const provider = await createProvider('anthropic', config);
const customPrompts = [
'Fix the bug',
'Make it better',
'Refactor the authentication module to use JWT tokens instead of sessions',
];
const result = await analyzePrompts({
provider,
prompts: customPrompts,
date: '2025-12-26',
context: {
role: 'developer',
techStack: ['TypeScript', 'React', 'Node.js'],
},
});
console.log(result.patterns);
```

History Management - Track analysis over time:

```typescript
import {
analyzePrompts,
saveAnalysisResult,
loadAnalysisResult,
compareResults,
type HistoryMetadata,
} from 'hyntx';
// Run analysis
const result = await analyzePrompts({
/* ... */
});
// Save to history
const metadata: HistoryMetadata = {
date: '2025-12-26',
promptCount: result.stats.promptCount,
score: result.stats.overallScore,
projectFilter: undefined,
provider: 'ollama',
};
await saveAnalysisResult(result, metadata);
// Load previous analysis
const previousResult = await loadAnalysisResult('2025-12-19');
// Compare results
const comparison = await compareResults('2025-12-19', '2025-12-26');
console.log(
`Score change: ${comparison.scoreChange > 0 ? '+' : ''}${comparison.scoreChange}`,
);
```

The main exports, grouped by area:

Analysis:

- `analyzePrompts(options: AnalysisOptions): Promise<AnalysisResult>` - Analyze prompts and detect anti-patterns
- `readLogs(options?: ReadLogsOptions): Promise<LogReadResult>` - Read Claude Code conversation logs
- `sanitize(text: string): SanitizeResult` - Remove secrets from a single text
- `sanitizePrompts(prompts: string[]): { prompts: string[]; totalRedacted: number }` - Remove secrets from multiple prompts
Providers:

- `createProvider(type: ProviderType, config: EnvConfig): Promise<AnalysisProvider>` - Create an AI provider instance
- `getAvailableProvider(config: EnvConfig, onFallback?: Function): Promise<AnalysisProvider>` - Get the first available provider with fallback
- `getAllProviders(services: string[], config: EnvConfig): AnalysisProvider[]` - Get all configured providers
History:

- `saveAnalysisResult(result: AnalysisResult, metadata: HistoryMetadata): Promise<void>` - Save analysis to history
- `loadAnalysisResult(date: string): Promise<HistoryEntry | null>` - Load analysis from history
- `listAvailableDates(): Promise<string[]>` - Get the list of dates with saved analyses
- `compareResults(beforeDate: string, afterDate: string): Promise<ComparisonResult>` - Compare two analyses
Utilities:

- `getEnvConfig(): EnvConfig` - Get environment configuration
- `claudeProjectsExist(): boolean` - Check if the Claude projects directory exists
- `parseDate(dateStr: string): Date` - Parse a date string to a Date object
- `groupByDay(prompts: ExtractedPrompt[]): DayGroup[]` - Group prompts by day
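A small usage sketch combining these helpers (the exact `DayGroup` fields may differ; treat this as illustrative):

```typescript
import { claudeProjectsExist, groupByDay, parseDate, readLogs } from 'hyntx';

if (!claudeProjectsExist()) {
  throw new Error('No Claude Code projects directory found');
}

const { prompts } = await readLogs({ date: 'today' });

// Group prompts by calendar day (DayGroup's exact fields may differ)
const days = groupByDay(prompts);
console.log(`${prompts.length} prompt(s) across ${days.length} day group(s)`);

// parseDate normalizes user-supplied date strings
console.log(parseDate('2025-01-20').toISOString());
```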
Caching:

- `generateCacheKey(config: CacheKeyConfig): string` - Generate a cache key for an analysis
- `getCachedResult(cacheKey: string): Promise<AnalysisResult | null>` - Get a cached result
- `setCachedResult(cacheKey: string, result: AnalysisResult, ttlMinutes?: number): Promise<void>` - Cache an analysis result
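And a cache-aware analysis sketch built on these functions (the fields passed to `generateCacheKey` are assumptions; check `CacheKeyConfig` in the type definitions for the real shape):

```typescript
import {
  analyzePrompts,
  createProvider,
  generateCacheKey,
  getCachedResult,
  getEnvConfig,
  setCachedResult,
  type CacheKeyConfig,
} from 'hyntx';

const prompts = ['Fix the login bug in src/auth/login.ts (Invalid token error)'];
const date = '2025-12-26';

// The fields passed here are illustrative; check CacheKeyConfig for the real shape
const cacheKey = generateCacheKey({ date, prompts } as unknown as CacheKeyConfig);

let result = await getCachedResult(cacheKey);
if (result) {
  console.log('Cache hit; skipping provider call');
} else {
  const provider = await createProvider('ollama', getEnvConfig());
  result = await analyzePrompts({ provider, prompts, date });
  await setCachedResult(cacheKey, result, 60); // cache for 60 minutes
}
console.log(`Score: ${result.stats.overallScore}/10`);
```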
Hyntx is written in TypeScript and provides full type definitions. All types are exported:
```typescript
import type {
AnalysisResult,
AnalysisPattern,
AnalysisStats,
ExtractedPrompt,
ProviderType,
EnvConfig,
HistoryEntry,
ComparisonResult,
} from 'hyntx';
```

See the TypeScript definitions for complete API documentation.
```bash
# Clone the repository
git clone https://github.com/jmlweb/hyntx.git
cd hyntx

# Install dependencies
pnpm install

# Run in development mode
pnpm dev

# Build
pnpm build

# Test the CLI
pnpm start
```

```
hyntx/
├── src/
│ ├── index.ts # Library entry point (re-exports api/)
│ ├── cli.ts # CLI entry point
│ ├── api/
│ │ └── index.ts # Public API surface
│ ├── core/ # Core business logic
│ │ ├── setup.ts # Interactive setup (multi-provider)
│ │ ├── reminder.ts # Reminder system
│ │ ├── log-reader.ts # Log parsing
│ │ ├── schema-validator.ts # Log schema validation
│ │ ├── sanitizer.ts # Secret redaction
│ │ ├── analyzer.ts # Analysis orchestration + batching
│ │ ├── reporter.ts # Output formatting (Before/After)
│ │ ├── watcher.ts # Real-time log file monitoring
│ │ └── history.ts # Analysis history management
│ ├── providers/ # AI providers
│ │ ├── base.ts # Interface & prompts
│ │ ├── ollama.ts # Ollama integration
│ │ ├── anthropic.ts # Claude integration
│ │ ├── google.ts # Gemini integration
│ │ └── index.ts # Provider factory with fallback
│ ├── utils/ # Utility functions
│ │ ├── env.ts # Environment config
│ │ ├── shell-config.ts # Shell auto-configuration
│ │ ├── paths.ts # System path constants
│ │ ├── logger-base.ts # Base logger (no CLI deps)
│ │ ├── logger.ts # CLI logger (with chalk)
│ │ └── terminal.ts # Terminal utilities
│ └── types/
│ └── index.ts # TypeScript type definitions
├── docs/
│ └── SPECS.md # Technical specifications
└── package.json
```
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes using Conventional Commits
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
For detailed development roadmap, planned features, and implementation status, see GitHub Issues and GitHub Projects.
MIT License - see LICENSE file for details.
- Built for Claude Code users
- Inspired by retrospective practices in Agile development
- Privacy-first approach inspired by the local-first software movement
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ for better prompt engineering