A web-based AI-powered tool that reviews Python code, detects bugs, suggests improvements, and generates human-friendly documentation using language models.
This project demonstrates the evolution of a code reviewer app through two versions:
- Version 1: Built with guidance from ChatGPT and tutorials. It uses Google's Gemini API and Streamlit for quick prototyping.
- Version 2: Fully self-written using Flask and a local Ollama LLM (`llama3.2`) for offline, more customizable AI processing.
## Table of Contents

- Project Structure
- Version Highlights
- Tech Stack
- Setup & Installation
- How to Run
- Learnings & Evolution
- Screenshots (optional)
- License
## Project Structure

```
.
├── version_1/               # Streamlit + Gemini API version
│   ├── app.py
│   └── .env                 # Google API key
│
├── version_2/               # Flask + Ollama + Llama3.2 version
│   ├── app.py
│   ├── templates/
│   │   └── index.html
│   └── utils/
│       └── functions.py
│
├── README.md
└── requirements.txt         # Optional, depending on environment
```
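The root `requirements.txt` is marked optional above. If you want a single file covering both versions, a minimal sketch could look like this (package names assumed from the Tech Stack and setup commands below, not copied from the actual file):

```
streamlit
google-generativeai
python-dotenv
flask
ollama
```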
## Version Highlights

### Version 1 (Streamlit + Gemini)

- Built using Streamlit for a fast UI.
- Uses the Google Gemini API via `google.generativeai` (sketched below).
- Offers a simple input/output layout for quick feedback.
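A minimal sketch of how version 1 wires Streamlit, dotenv, and the Gemini API together; the model name, prompt wording, and widget labels are assumptions for illustration, not the exact contents of `version_1/app.py`:

```python
import os

import google.generativeai as genai
import streamlit as st
from dotenv import load_dotenv

load_dotenv()  # reads GOOGLE_API_KEY from version_1/.env
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

st.title("Python Code Reviewer")
code = st.text_area("Paste your Python code here")

if st.button("Review") and code.strip():
    # Model name and prompt are placeholders, not the exact ones used in app.py.
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = (
        "Review the following Python code. Point out bugs, suggest improvements, "
        f"and write short, human-friendly documentation:\n\n{code}"
    )
    response = model.generate_content(prompt)
    st.markdown(response.text)
```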
### Version 2 (Flask + Ollama)

- Built using Flask from scratch.
- Implements Ollama with a locally running `llama3.2` model (sketched below).
- Enhanced prompt engineering for better review quality.
- Beautiful custom HTML/CSS UI for a better user experience.
- Contains robust instructions for context filtering (code-only reviews).
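A minimal sketch of the Flask + Ollama wiring in the same spirit as version 2; the route, the `code` form field, and the prompt text are assumptions, and the real logic lives in `version_2/app.py` and `utils/functions.py`:

```python
import ollama
from flask import Flask, render_template, request

app = Flask(__name__)

# Illustrative review prompt; the actual prompt rules in version_2 are stricter.
REVIEW_PROMPT = (
    "You are a Python code reviewer. Only review code; if the input is not code, say so.\n"
    "List bugs, suggest improvements, and generate short documentation.\n\nCode:\n{code}"
)


@app.route("/", methods=["GET", "POST"])
def index():
    review = None
    if request.method == "POST":
        code = request.form.get("code", "")
        # Talks to the locally running Ollama server (default: http://localhost:11434).
        response = ollama.chat(
            model="llama3.2",
            messages=[{"role": "user", "content": REVIEW_PROMPT.format(code=code)}],
        )
        review = response["message"]["content"]
    return render_template("index.html", review=review)


if __name__ == "__main__":
    app.run(debug=True)
```

For this sketch to run, `templates/index.html` only needs a form that POSTs a `code` field and a place to render `review`.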
## Tech Stack

| Area | Tools Used |
|---|---|
| Frontend | Streamlit (v1), HTML/CSS (v2) |
| Backend | Python, Flask |
| LLM | Google Gemini API (v1), Ollama LLM (llama3.2) (v2) |
| Environment | dotenv (.env), virtualenv |
## Setup & Installation

### Version 1 (Streamlit + Gemini)

```bash
cd version_1
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install streamlit google-generativeai
echo GOOGLE_API_KEY=your_key_here > .env
streamlit run app.py
```

### Version 2 (Flask + Ollama)

```bash
cd version_2
python -m venv venv
source venv/bin/activate
pip install flask ollama
# Make sure Ollama is installed and running
ollama run llama3.2
python app.py
```

## Learnings & Evolution

- **Version 1** taught me the basics of API calls, prompt structuring, and how to quickly prototype with Streamlit.
- **Version 2** was my full-stack build using Flask, local LLMs, and a custom UI, improving my understanding of backend routing, HTML templating, and deploying AI models locally.
- Implemented stricter prompt rules and better error handling in v2 (see the sketch below).
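As a rough illustration of those stricter, code-only rules, a helper like the one below could sit in `utils/functions.py`; the function name, prompt, and syntax-check heuristic are my assumptions, not the project's actual code:

```python
import ast

import ollama

CODE_ONLY_PROMPT = (
    "You are a strict Python code reviewer. If the input is not Python code, "
    "reply only with: 'Please paste Python code to review.'\n\nCode:\n{code}"
)


def review_python(code: str, model: str = "llama3.2") -> str:
    """Review Python code with a local Ollama model, rejecting non-code input early."""
    if not code.strip():
        return "No code provided."
    try:
        # Cheap context filter: refuse input that is not valid Python before calling the LLM.
        ast.parse(code)
    except SyntaxError:
        return "That doesn't look like valid Python. Please paste Python code to review."
    try:
        response = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": CODE_ONLY_PROMPT.format(code=code)}],
        )
        return response["message"]["content"]
    except Exception as exc:  # e.g. the Ollama server is not running
        return f"Review failed: {exc}"
```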
