🤖 Prompt Optimization Agent

A Python-based CLI tool that automatically analyzes and rewrites your prompts using a multi-step AI agent pipeline — powered by the OpenAI API.


✨ Features

  • Complexity-aware optimization — scores your task 1–5 and keeps low-complexity prompts short and snappy
  • Deep prompt analysis — surfaces vague instructions, missing context, and structural weaknesses
  • Smart rewriting — applies chain-of-thought, role assignment, few-shot examples, and more
  • Optional evaluation — on-demand scoring comparing your original vs. optimized prompt
  • Built-in executor — run and compare outputs from both prompts side by side
  • Interactive CLI — conversational loop with save-to-JSON support

🧩 Pipeline Overview

Your Prompt
     │
     ▼
┌──────────────────────┐
│  1. Complexity Check │  Score 1–5 → adapt verbosity
└──────────────────────┘
     │
     ▼
┌──────────────────────┐
│  2. Analyzer         │  Identify issues, score clarity
└──────────────────────┘
     │
     ▼
┌──────────────────────┐
│  3. Optimizer        │  Rewrite with best practices
└──────────────────────┘
     │
     ▼
  Show Result
     │
     ├──▶ [Optional] Evaluate? (y/n)  →  Quality & improvement scores
     │
     └──▶ [Optional] Execute? (y/n)
               ├── Optimized prompt only
               └── Both prompts (side-by-side comparison)

🚀 Getting Started

Prerequisites

  • Python 3 and pip
  • An OpenAI API key

Installation

# Clone or download the script
git clone https://github.com/keyanUB/Prompt-Optimization-Agent.git
cd Prompt-Optimization-Agent

# Install the OpenAI SDK
pip install openai

Set your API key

export OPENAI_API_KEY="your-api-key-here"

On Windows (Command Prompt): set OPENAI_API_KEY=your-api-key-here


💻 Usage

Interactive mode (recommended)

python prompt_optimizer_agent.py

You'll enter a prompt loop where you can optimize prompts one by one, choosing whether to evaluate or execute after each one.

Single prompt via CLI argument

python prompt_optimizer_agent.py "explain how transformers work"

Programmatic usage

from prompt_optimizer_agent import run_optimizer

result = run_optimizer("write something about climate change", verbose=True)

print(result["optimized_prompt"])
print(result["evaluation"])       # None if user skipped
print(result["outputs"])          # {} if user skipped execution
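Because evaluation and execution are optional, the result's `evaluation` field may be None and `outputs` may be empty. A small helper (illustrative only, not part of the script) shows how to handle both cases:

```python
def report(result):
    """Format a result dict, skipping the optional fields when absent.

    `evaluation` is None and `outputs` is {} when the user skipped those steps.
    """
    lines = [result["optimized_prompt"]]
    if result["evaluation"] is not None:
        lines.append(f"Evaluation: {result['evaluation']}")
    for name, output in result["outputs"].items():
        lines.append(f"--- {name} ---\n{output}")
    return "\n".join(lines)

print(report({"optimized_prompt": "Explain X in 3 steps.",
              "evaluation": None,
              "outputs": {}}))
```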

📋 Example

Input:

write something about climate change

Complexity score: 2/5 — Simple descriptive task, concise prompt preferred

Optimized output:

Explain the top 3 causes of climate change and their measurable impact since 1990.
For each cause include: a plain-language explanation, one specific data point, and
one real-world consequence. Keep the response under 400 words, factual, and
accessible to a general audience.

🔧 Configuration

Edit the constants at the top of prompt_optimizer_agent.py:

Variable     Default                Description
MODEL        gpt-5-mini-2025-08-07  OpenAI model to use
TEMPERATURE  1                      Sampling temperature
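In the source these are plain module-level constants, so changing a model is a one-line edit (values shown are the defaults from the table; check the script for the exact declarations):

```python
# Constants at the top of prompt_optimizer_agent.py
MODEL = "gpt-5-mini-2025-08-07"  # OpenAI model to use
TEMPERATURE = 1                  # sampling temperature
```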

📁 Output

When saving results, an optimized_result.json file is created with the following structure:

{
  "original_prompt": "...",
  "complexity": {
    "complexity_score": 2,
    "reasoning": "..."
  },
  "analysis": {
    "issues": ["..."],
    "missing_elements": ["..."],
    "clarity_score": 4,
    "summary": "..."
  },
  "optimized_prompt": "...",
  "techniques_applied": ["..."],
  "evaluation": null,
  "outputs": {}
}
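A saved file can be read back with the standard `json` module; the inline sample below mirrors the schema above (the field values are illustrative):

```python
import json

sample = '''{
  "original_prompt": "write something about climate change",
  "complexity": {"complexity_score": 2, "reasoning": "simple descriptive task"},
  "analysis": {"issues": ["vague"], "missing_elements": ["audience"],
               "clarity_score": 4, "summary": "too open-ended"},
  "optimized_prompt": "Explain the top 3 causes of climate change...",
  "techniques_applied": ["constraint setting"],
  "evaluation": null,
  "outputs": {}
}'''

# For a real file, use: result = json.load(open("optimized_result.json"))
result = json.loads(sample)
print(result["complexity"]["complexity_score"])  # → 2
```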

📦 Dependencies

Package  Purpose
openai   OpenAI API client

Install all dependencies:

pip install openai

📄 License

MIT License. Feel free to use, modify, and distribute.
