Put your wardrobe in rows. Snap. Organize. Wear.
Features • Quick Start • Deployment • Architecture • Contributing
Self-hosted wardrobe management with AI-powered outfit recommendations. Take photos of your clothes, let AI tag them, and get daily outfit suggestions based on weather and occasion.
- Photo-based wardrobe - Upload photos, AI extracts clothing details automatically
- Smart recommendations - Outfits matched to weather, occasion, and your preferences
- Scheduled notifications - Daily outfit suggestions via ntfy/Mattermost/email
- Family support - Manage wardrobes for household members
- Wear tracking - History, ratings, and outfit feedback
- Analytics - See what you wear, what you don't, color distribution
- Fully self-hosted - Your data stays on your hardware
- Works with any AI - OpenAI, Ollama, LocalAI, or any OpenAI-compatible API
*Screenshots: Wardrobe Grid, AI Analysis, Suggestions, History Calendar, Analytics, Pairing.*
- Docker and Docker Compose installed
- At least 4GB of RAM available
- An AI service (Ollama recommended for free local AI, or OpenAI API key)
**Option A: Using Ollama** (recommended: free, runs locally)

```bash
# Install Ollama from https://ollama.ai
# Then pull the required models:
ollama pull llava:7b   # Vision model (for analyzing clothing images)
ollama pull gemma3     # Text model (for outfit recommendations)

# Verify it's running:
curl http://localhost:11434/api/tags
```

**Option B: Using OpenAI** (paid API)
Get your API key from https://platform.openai.com/api-keys
```bash
# Clone the repository
git clone https://github.com/yourusername/wardrowbe.git
cd wardrowbe

# Copy the environment template
cp .env.example .env

# IMPORTANT: Edit .env and configure the AI settings.
# For Ollama (the default in .env.example):
# AI_BASE_URL=http://host.docker.internal:11434/v1
# AI_VISION_MODEL=llava:7b
# AI_TEXT_MODEL=gemma3:latest
#
# For OpenAI, uncomment and set:
# AI_BASE_URL=https://api.openai.com/v1
# AI_API_KEY=sk-your-api-key-here
# AI_VISION_MODEL=gpt-4o
# AI_TEXT_MODEL=gpt-4o

# Optional: generate secure secrets for production
# SECRET_KEY=$(openssl rand -hex 32)
# NEXTAUTH_SECRET=$(openssl rand -hex 32)
```

```bash
# Start all containers
docker compose up -d

# Wait for services to become healthy (~30 seconds)
docker compose ps

# Run database migrations (REQUIRED)
docker compose exec backend alembic upgrade head

# Verify everything is working
curl http://localhost:8000/api/v1/health
# Should return: {"status":"healthy"}
```

- Frontend: http://localhost:3000
- API docs: http://localhost:8000/docs
- Login: click "Login" - uses dev credentials by default (no password needed)
For hot reloading during development (auto-rebuilds on code changes):
```bash
# Start in dev mode
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

# Run migrations (first time only)
docker compose exec backend alembic upgrade head

# View logs
docker compose logs -f frontend backend
```

Wardrowbe works with any OpenAI-compatible API. You need two types of models:

- Vision model: analyzes clothing images to extract colors, patterns, and styles
- Text model: generates outfit recommendations and descriptions
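For a sense of what the vision step involves, here is a minimal OpenAI-compatible vision request against a local Ollama. The prompt and image are placeholders, and Wardrowbe's actual prompts and request flow may differ:

```bash
# Hedged sketch: an OpenAI-style chat-completions vision call to local Ollama.
# <BASE64_IMAGE> stands in for a base64-encoded photo; the prompt is illustrative.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava:7b",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this clothing item: type, colors, pattern, style."},
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<BASE64_IMAGE>"}}
      ]
    }]
  }'
```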
**Ollama** - free, runs locally, no API key needed, works offline

1. Install Ollama from https://ollama.ai

2. Pull the models:

   ```bash
   ollama pull llava:7b        # Vision model (4.7GB) - analyzes images
   ollama pull gemma3:latest   # Text model (3.4GB) - generates recommendations

   # Alternative text models you can use:
   # ollama pull llama3:latest   # Good all-around model
   # ollama pull qwen2.5:latest  # Fast and efficient
   # ollama pull mistral:latest  # Great for creative text
   ```

3. Configure in `.env`:

   ```env
   AI_BASE_URL=http://host.docker.internal:11434/v1
   AI_API_KEY=not-needed
   AI_VISION_MODEL=llava:7b
   AI_TEXT_MODEL=gemma3:latest
   ```

Note: Use `host.docker.internal` instead of `localhost` so the Docker containers can reach Ollama on your host.
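On Linux, `host.docker.internal` is not defined automatically. If the project's compose files don't already map it, a typical fix is an override like the following - a sketch, not a shipped file; the service names match the ones used elsewhere in this README, but check your compose file first:

```yaml
# docker-compose.override.yml - hypothetical override, only needed on Linux
# if host.docker.internal isn't already mapped by the shipped compose files
services:
  backend:
    extra_hosts:
      - "host.docker.internal:host-gateway"
  worker:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```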
**OpenAI** - paid API, requires an internet connection

1. Get an API key from https://platform.openai.com/api-keys

2. Configure in `.env`:

   ```env
   AI_BASE_URL=https://api.openai.com/v1
   AI_API_KEY=sk-your-api-key-here
   AI_VISION_MODEL=gpt-4o
   AI_TEXT_MODEL=gpt-4o
   ```
**LocalAI** - self-hosted OpenAI alternative

```env
AI_BASE_URL=http://localai:8080/v1
AI_API_KEY=not-needed
AI_VISION_MODEL=gpt-4-vision-preview
AI_TEXT_MODEL=gpt-3.5-turbo
```

Some models (such as qwen2-vl and llama3.2-vision) can handle both vision and text:

```env
AI_VISION_MODEL=llama3.2-vision:11b
AI_TEXT_MODEL=llama3.2-vision:11b   # Same model for both tasks
```

```
┌─────────────────────────────────────────────────────────────┐
│                          Frontend                           │
│                   (Next.js + React Query)                   │
└─────────────────────────┬───────────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────────┐
│                          Backend                            │
│                   (FastAPI + SQLAlchemy)                    │
└──────────┬──────────────┬──────────────────┬────────────────┘
           │              │                  │
    ┌──────▼──────┐ ┌─────▼─────┐     ┌──────▼──────┐
    │ PostgreSQL  │ │   Redis   │     │ AI Service  │
    │ (Database)  │ │(Job Queue)│     │ (OpenAI/etc)│
    └─────────────┘ └─────┬─────┘     └─────────────┘
                          │
               ┌──────────▼──────────┐
               │  Background Worker  │
               │   (arq - AI Jobs)   │
               └─────────────────────┘
```
| Layer | Technology |
|---|---|
| Frontend | Next.js 14, TypeScript, TanStack Query, Tailwind CSS, shadcn/ui |
| Backend | FastAPI, SQLAlchemy (async), Pydantic, Python 3.11+ |
| Database | PostgreSQL 15 |
| Cache/Queue | Redis 7 |
| Background Jobs | arq |
| Authentication | NextAuth.js (supports OIDC, dev credentials) |
| AI | Any OpenAI-compatible API |
See `docker-compose.prod.yml` for the production configuration.

```bash
docker compose -f docker-compose.prod.yml up -d
docker compose exec backend alembic upgrade head
```

See the `k8s/` directory for Kubernetes manifests, including:
- PostgreSQL and Redis with persistent storage
- Backend API and worker deployments
- Next.js frontend
- Ingress with TLS
- Network policies
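Applying them is the usual kubectl flow; treat this as a sketch, since the exact file layout and any kustomization inside `k8s/` may differ:

```bash
# Sketch only - inspect the manifests before applying
kubectl apply -f k8s/
kubectl get pods   # wait for everything to reach Running/Ready
```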
| Variable | Description | Required |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Yes |
| `SECRET_KEY` | Backend secret for JWT signing | Yes |
| `NEXTAUTH_SECRET` | NextAuth session encryption key | Yes |
| `AI_BASE_URL` | AI service URL | Yes |
| `AI_API_KEY` | AI API key (if your provider requires one) | Depends |

See `.env.example` for all options.
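For the two secret values, any sufficiently random string works; one way to generate them is the same approach suggested in the Quick Start comments:

```bash
# Generate random hex secrets for production
echo "SECRET_KEY=$(openssl rand -hex 32)"
echo "NEXTAUTH_SECRET=$(openssl rand -hex 32)"
```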
- Development Mode (default): Simple email/name login
- OIDC Mode: Authentik, Keycloak, Auth0, or any OIDC provider
- ntfy.sh: Free push notifications
- Mattermost: Team messaging webhook
- Email: SMTP-based
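If you go the ntfy route, you can verify your topic receives messages before wiring it into Wardrowbe. This uses ntfy's standard publish API; the topic name is an example:

```bash
# Publish a test message to an ntfy topic
curl -d "Wardrowbe test notification" https://ntfy.sh/your-wardrowbe-topic
```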
Uses Open-Meteo - free, no API key needed.
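You can confirm Open-Meteo is reachable from your network with a direct query; the coordinates below are Berlin, purely as an example:

```bash
# Query Open-Meteo directly - no API key required
curl "https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current_weather=true"
```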
```bash
cd backend
pip install -r requirements.txt

# Run tests
pytest

# Run with hot reload
uvicorn app.main:app --reload
```

```bash
cd frontend
npm install

# Run the dev server
npm run dev

# Run tests
npm test

# Build
npm run build
```

Available when running:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
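FastAPI also serves the raw OpenAPI schema as JSON by default, which is handy for generating clients (this path assumes the default `openapi_url` hasn't been overridden):

```bash
# Fetch the raw OpenAPI schema (FastAPI's default location)
curl http://localhost:8000/openapi.json
```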
```bash
# Check for type errors
cd frontend
npm install
npx tsc --noEmit
# If you see errors, please report them as a bug
```

**Services won't start**

```bash
# 1. Check all services are healthy
docker compose ps
# 2. Check backend is responding
curl http://localhost:8000/api/v1/health
# 3. Verify migrations ran successfully
docker compose exec backend alembic current
# 4. Check logs for errors
docker compose logs backend frontend --tail=50
# 5. If migrations failed, run them manually
docker compose exec backend alembic upgrade head
```

**AI analysis not working**

```bash
# For Ollama users:
# 1. Verify Ollama is running and accessible
curl http://localhost:11434/api/tags
# Should show your installed models
# 2. Check you have the vision model
ollama list | grep llava
# 3. Verify .env has correct configuration
cat .env | grep AI_
# 4. Check worker logs for AI errors
docker compose logs worker --tail=50
# For OpenAI users:
# 1. Verify API key is valid
curl https://api.openai.com/v1/models \
-H "Authorization: Bearer $AI_API_KEY"
# 2. Check you have credits remaining in your account
```

**Frontend changes not appearing (hot reload)**

```bash
# 1. Ensure you're using the dev compose file
docker compose -f docker-compose.yml -f docker-compose.dev.yml down
docker compose -f docker-compose.yml -f docker-compose.dev.yml build frontend
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
# 2. Check frontend logs
docker compose logs frontend -f
# Alternative: Run frontend locally for development
cd frontend
npm install
npm run dev
# Then access at http://localhost:3000
```

**Port conflicts**

```bash
# Check what's using the ports
sudo lsof -i :3000 # Frontend
sudo lsof -i :8000 # Backend
sudo lsof -i :5432 # PostgreSQL
sudo lsof -i :6379 # Redis
# Stop conflicting services or change ports in .env
```

**Migration problems**

```bash
# Check current migration version
docker compose exec backend alembic current
# View migration history
docker compose exec backend alembic history
# If migrations are corrupted, reset (WARNING: Deletes all data!)
docker compose down -v
docker compose up -d
sleep 10
docker compose exec backend alembic upgrade head
```

**Containers crashing**

```bash
# Check container logs
docker compose logs <service-name> --tail=100
# Common issues:
# - Database not ready: Wait 30 seconds and retry
# - Out of memory: Increase Docker memory limit
# - Permission errors: Check file permissions on volumes
```

**Performance issues**

```bash
# Check resource usage
docker stats
# If backend is slow:
# 1. Check PostgreSQL performance
docker compose exec postgres psql -U wardrobe -c "SELECT * FROM pg_stat_activity;"
# 2. Check Redis connection
docker compose exec redis redis-cli ping
# If AI analysis is slow:
# - For Ollama: Use smaller models (llava:7b faster than llava:13b)
# - For OpenAI: Check your rate limits
```

If you're still stuck:

- Check existing GitHub Issues
- Search Discussions
- Create a new issue with:
  - Output of `docker compose ps`
  - Relevant logs from `docker compose logs`
  - Your `.env` configuration (redact secrets!)
  - Steps to reproduce the problem
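A quick way to gather most of that into one file before opening an issue - the grep filter is a rough sketch, so still review the output for secrets yourself:

```bash
# Collect basic diagnostics for a bug report
docker compose ps > wardrowbe-debug.txt
docker compose logs --tail=200 >> wardrowbe-debug.txt
# Append .env with obvious secrets filtered out - review before sharing!
grep -vi 'key\|secret\|password' .env >> wardrowbe-debug.txt
```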
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
- Docker & Docker Compose
- ~4GB RAM (with local Ollama models)
- Storage for clothing photos
Works great on a Raspberry Pi 5!





