Infrastructure as Code for the Task Management application.
```
devops/
├── docker/
│   ├── docker-compose.yml   # All infrastructure services
│   └── dockerfiles/         # Application Dockerfiles
├── kubernetes/
│   └── manifests/           # K8s deployment files
├── kong/
│   └── kong-setup.sh        # Kong Gateway configuration
└── logstash/
    ├── config/
    │   └── logstash.yml     # Logstash configuration
    └── pipeline/
        └── logstash.conf    # Log processing pipeline
```
| Service | Port(s) | Purpose |
|---|---|---|
| PostgreSQL | 5432 | Main database |
| pgAdmin | 5050 | Database management |
| Zookeeper | 2181 | Kafka coordination |
| Kafka | 9092, 29092 | Event streaming |
| Kafka UI | 8090 | Kafka monitoring |
| MinIO | 9000, 9001 | Object storage |
| Elasticsearch | 9200, 9300 | Log storage |
| Logstash | 5000, 9600 | Log processing |
| Kibana | 5601 | Log visualization |
| Kong Database | - | Kong's PostgreSQL |
| Kong Gateway | 8000, 8001, 8002 | API Gateway |
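As a quick smoke test, the port assignments in the table can be probed from the host. A minimal sketch using bash's `/dev/tcp` pseudo-device (`check_port` is a hypothetical helper, not part of the project):

```bash
#!/bin/bash
# check_port HOST PORT -> prints "up" or "down" depending on TCP reachability.
# Uses bash's built-in /dev/tcp; no extra tools required.
check_port() {
  local host=$1 port=$2
  if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo up
  else
    echo down
  fi
}

# Example: probe a few core services from the table above.
for entry in "PostgreSQL:5432" "Kafka:9092" "Elasticsearch:9200" "Kong:8000"; do
  name=${entry%%:*}; port=${entry##*:}
  echo "${name}: $(check_port 127.0.0.1 "${port}")"
done
```

A closed port reports `down` immediately, so this is safe to run before `docker compose up` as well as after.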
```bash
cd devops/docker

# Start all services
docker compose up -d

# Stop all services
docker compose down

# Stop and remove data
docker compose down -v

# View logs
docker compose logs -f [service-name]

# Restart specific service
docker compose restart [service-name]

# Check status
docker compose ps

# View resource usage
docker stats
```

### PostgreSQL

Image: postgres:16-alpine
Database: taskdb
User: taskuser
Password: taskpass
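For client tools that accept a connection URL, the credentials above can be assembled into one. A small sketch (`pg_url` is a hypothetical convenience helper):

```bash
#!/bin/bash
# pg_url USER PASSWORD HOST PORT DB -> prints a libpq-style connection URL.
pg_url() {
  printf 'postgresql://%s:%s@%s:%s/%s\n' "$1" "$2" "$3" "$4" "$5"
}

# Matches the development credentials documented above.
pg_url taskuser taskpass localhost 5432 taskdb
# -> postgresql://taskuser:taskpass@localhost:5432/taskdb
# e.g. psql "$(pg_url taskuser taskpass localhost 5432 taskdb)"
```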
```bash
# Connect via psql
docker exec -it task-postgres psql -U taskuser -d taskdb

# Backup database
docker exec task-postgres pg_dump -U taskuser taskdb > backup.sql

# Restore database
docker exec -i task-postgres psql -U taskuser taskdb < backup.sql
```

### Kafka

Image: confluentinc/cp-kafka:7.5.0
Bootstrap Servers: localhost:9092
Internal: kafka:29092
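The two bootstrap addresses exist because the broker advertises separate listeners for host clients and in-network clients. A sketch of the relevant compose environment, assuming the Confluent image's standard listener variables (not copied from the project's compose file):

```yaml
# Assumed excerpt from docker-compose.yml for the kafka service:
environment:
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  # Containers on the compose network connect via kafka:29092,
  # processes on the host via localhost:9092.
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
```

If host clients cannot connect while in-container clients can (or vice versa), the advertised listeners are the first thing to check.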
```bash
# List topics
docker exec task-kafka kafka-topics --list --bootstrap-server localhost:9092

# Describe topic
docker exec task-kafka kafka-topics --describe --topic task-events --bootstrap-server localhost:9092

# Consume messages
docker exec task-kafka kafka-console-consumer --topic task-events --from-beginning --bootstrap-server localhost:9092

# Produce test message
docker exec -it task-kafka kafka-console-producer --topic task-events --bootstrap-server localhost:9092
```

### MinIO

Image: minio/minio:latest
Console: http://localhost:9001
API: http://localhost:9000
Credentials: minioadmin / minioadmin
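The `mc` examples below assume an alias named `local` inside the container; if it is missing, it can be created with the credentials above. A hypothetical print-only wrapper that shows the full `docker exec` invocation without requiring the stack:

```bash
#!/bin/bash
# mc_cmd ARGS... -> prints the docker exec command that would run `mc`
# inside the task-minio container (print-only, safe without Docker).
mc_cmd() {
  echo docker exec task-minio mc "$@"
}

# Create the alias once, then list buckets:
mc_cmd alias set local http://localhost:9000 minioadmin minioadmin
mc_cmd ls local/
# To actually execute, run the printed command directly, e.g.:
#   docker exec task-minio mc ls local/
```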
```bash
# List buckets
docker exec task-minio mc ls local/

# List objects in bucket
docker exec task-minio mc ls local/task-attachments/

# Download object
docker exec task-minio mc cp local/task-attachments/file.pdf /tmp/
```

### Elasticsearch

Image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
URL: http://localhost:9200
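Search bodies like the one below can be generated instead of hand-written. A minimal sketch (`es_match` is hypothetical) that builds a single-field match query:

```bash
#!/bin/bash
# es_match FIELD VALUE -> prints an Elasticsearch match-query JSON body.
es_match() {
  printf '{"query":{"match":{"%s":"%s"}}}' "$1" "$2"
}

# Intended usage against the log indices (requires the stack):
#   curl -s "http://localhost:9200/application-logs-*/_search?pretty" \
#        -H 'Content-Type: application/json' -d "$(es_match service task-service)"
es_match service task-service
# -> {"query":{"match":{"service":"task-service"}}}
```

Note that the values are not JSON-escaped, so this only suits simple field/value pairs.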
```bash
# Check cluster health
curl http://localhost:9200/_cluster/health?pretty

# List indices
curl http://localhost:9200/_cat/indices?v

# Search logs
curl -X GET "http://localhost:9200/application-logs-*/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "service": "task-service"
    }
  }
}'
```

### Kong Gateway

Image: kong:3.5
Proxy: http://localhost:8000
Admin API: http://localhost:8001
Manager: http://localhost:8002
```bash
# Check Kong status
curl http://localhost:8001/status

# List services
curl http://localhost:8001/services

# List routes
curl http://localhost:8001/routes

# List plugins
curl http://localhost:8001/plugins
```

### Kong Setup Script

Location: devops/kong/kong-setup.sh
What it does:
- Cleans existing configuration
- Creates services for microservices
- Sets up routes with correct path handling
- Enables CORS plugin
- Configures rate limiting
- Verifies configuration
```bash
cd devops/kong

# Run setup
./kong-setup.sh

# Manual verification
curl http://localhost:8001/services
curl http://localhost:8001/routes
```

Task Service:
- Name: task-service
- URL: http://host.docker.internal:8080
- Route: /api/tasks

Notification Service:

- Name: notification-service
- URL: http://host.docker.internal:8081
- Route: /api/notifications

CORS:

- Origins: http://localhost:4200
- Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS
- Credentials: true

Rate Limiting:

- Limit: 100 requests per minute
- Policy: local
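The setup script's effect can be approximated with plain Admin API calls. A hypothetical print-only sketch (the real commands live in kong-setup.sh; the endpoint shapes follow Kong's Admin API):

```bash
#!/bin/bash
# Print the Admin API calls that register a service and its route.
# Print-only, so it can be inspected without a running gateway.
kong_service() {  # NAME URL
  echo "curl -s -X POST http://localhost:8001/services -d name=$1 -d url=$2"
}
kong_route() {    # SERVICE PATH
  echo "curl -s -X POST http://localhost:8001/services/$1/routes -d paths[]=$2"
}

# The two services and routes documented above:
kong_service task-service http://host.docker.internal:8080
kong_route task-service /api/tasks
kong_service notification-service http://host.docker.internal:8081
kong_route notification-service /api/notifications
```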
```bash
# Reset Kong configuration
cd devops/kong
./kong-setup.sh

# Check Kong logs
docker logs task-kong

# Restart Kong
docker compose restart kong

# Test routing
curl -v http://localhost:8000/api/tasks
```

Elasticsearch configuration:
- Single-node cluster
- No security (development only)
- Heap: 512MB
Data persistence:
- Volume: elasticsearch_data
Pipeline: devops/logstash/pipeline/logstash.conf
Input:
- TCP port 5000 (JSON lines)
- UDP port 5000 (JSON)
Filter:
- Adds service name
- Parses log levels
- Enriches with metadata
Output:
- Elasticsearch (index: application-logs-YYYY.MM.dd)
- Stdout (debugging)
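Put together, the pipeline described above corresponds roughly to a logstash.conf of this shape (a sketch, not the project's actual file; the filter body in particular is an assumption):

```conf
input {
  tcp { port => 5000 codec => json_lines }
  udp { port => 5000 codec => json }
}

filter {
  # Assumed enrichment: default the service name when it is absent.
  if ![service] {
    mutate { add_field => { "service" => "unknown" } }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "application-logs-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }   # debugging
}
```

The `%{+YYYY.MM.dd}` sprintf pattern is what produces the daily index names that the Kibana data view pattern matches.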
Setup:
- Create data view:
  - Pattern: application-logs-*
  - Timestamp: @timestamp
- Go to Discover
- Filter logs: service: "task-service"
Useful Queries:
```
# Task service errors
service: "task-service" AND level: "error"

# Kafka events
logger_name: *TaskEventProducer* OR logger_name: *TaskEventConsumer*

# File uploads
message: *upload*

# Last hour only
@timestamp >= now-1h
```
Type: Bridge
Driver: bridge
Connected Services:
- All infrastructure services
- (Future) Application containers
```bash
# Inspect network
docker network inspect taskmanagement_app-network

# View connected containers
docker network inspect taskmanagement_app-network | grep Name
```

| Volume | Size | Purpose |
|---|---|---|
| taskmanagement_postgres_data | ~100MB | Database |
| taskmanagement_minio_data | Variable | File storage |
| taskmanagement_elasticsearch_data | ~500MB | Logs |
| taskmanagement_kong_data | ~50MB | Kong config |
```bash
# List volumes
docker volume ls | grep taskmanagement

# Inspect volume
docker volume inspect taskmanagement_postgres_data

# Backup volume
docker run --rm -v taskmanagement_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres-backup.tar.gz /data

# Restore volume
docker run --rm -v taskmanagement_postgres_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/postgres-backup.tar.gz --strip 1"

# Remove all volumes (DANGEROUS!)
docker volume prune
```

Current setup (NOT for production):
- ❌ No authentication on services
- ❌ Default credentials
- ❌ HTTP only (no HTTPS)
- ❌ No network isolation
- ❌ Security features disabled
Must-have for production:
- ✅ Enable security in Elasticsearch
- ✅ Use strong passwords
- ✅ Enable HTTPS/TLS
- ✅ Network segmentation
- ✅ Secrets management (Vault, AWS Secrets Manager)
- ✅ Regular security updates
- ✅ Access control lists
- ✅ Firewall rules
```yaml
# docker-compose.yml
services:
  elasticsearch:
    deploy:
      resources:
        limits:
          memory: 1GB
        reservations:
          memory: 512MB
```

JVM heap settings:

```yaml
environment:
  - "ES_JAVA_OPTS=-Xms512m -Xmx512m"   # Elasticsearch
  - "LS_JAVA_OPTS=-Xms256m -Xmx256m"   # Logstash
```

Adjust based on load:
- PostgreSQL max connections: 100
- Kafka partitions: 3
- Kong workers: auto
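As one example of applying these knobs, the PostgreSQL connection ceiling can be set directly on the container command. A sketch (assumed, not copied from the project's compose file):

```yaml
# docker-compose.yml excerpt (assumed):
services:
  postgres:
    image: postgres:16-alpine
    # postgres accepts server settings as -c flags on the command line.
    command: postgres -c max_connections=100
```

Kafka partition counts, by contrast, are set per topic at creation time (`kafka-topics --create --partitions 3 ...`) rather than in compose.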
```bash
# All services health
docker compose ps

# PostgreSQL
docker exec task-postgres pg_isready

# Elasticsearch
curl http://localhost:9200/_cluster/health

# Kafka
docker exec task-kafka kafka-broker-api-versions --bootstrap-server localhost:9092

# Kong
curl http://localhost:8001/status

# MinIO
curl http://localhost:9000/minio/health/live
```

Resource monitoring:

```bash
# Real-time stats
docker stats

# Disk usage
docker system df

# Clean up unused resources
docker system prune
```
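The individual health checks above can be rolled into one status report. A hypothetical helper that formats a check result (the docker/curl probes themselves still require the running stack):

```bash
#!/bin/bash
# report NAME EXIT_CODE -> prints "NAME: OK" for exit code 0, "NAME: FAIL" otherwise.
report() {
  if [ "$2" -eq 0 ]; then echo "$1: OK"; else echo "$1: FAIL"; fi
}

# Intended usage with the probes from this section, e.g.:
#   docker exec task-postgres pg_isready >/dev/null 2>&1; report postgres $?
#   curl -fs http://localhost:8001/status  >/dev/null 2>&1; report kong     $?
report demo 0   # -> demo: OK
```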
Backup script:

```bash
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
docker exec task-postgres pg_dump -U taskuser taskdb > backup_${DATE}.sql
```
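Timestamped dumps accumulate, so a retention sweep pairs naturally with the script above. A sketch (`prune_backups` is hypothetical; it deletes matching files older than the given number of days):

```bash
#!/bin/bash
# prune_backups DIR DAYS -> delete backup_*.sql files in DIR older than DAYS days.
prune_backups() {
  find "$1" -maxdepth 1 -name 'backup_*.sql' -mtime "+$2" -delete
}

# Example: keep the last 7 days of dumps in the current directory.
# prune_backups . 7
```

Running it from cron right after the dump keeps the backup directory bounded.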
Backup all volumes:

```bash
docker compose down
tar czf backup.tar.gz /var/lib/docker/volumes/taskmanagement_*
docker compose up -d
```

Port conflicts:
```bash
# Find process using port
lsof -i :5432
```

Change the port in docker-compose.yml:

```yaml
ports:
  - "5433:5432"
```

Out of disk space:
```bash
# Check disk usage
df -h
docker system df

# Clean up
docker system prune -a --volumes
```

Services not starting:
```bash
# Check logs
docker compose logs [service-name]

# Restart service
docker compose restart [service-name]

# Rebuild service
docker compose up -d --force-recreate [service-name]
```

Network issues:
```bash
# Recreate network
docker compose down
docker network prune
docker compose up -d
```

Resources:

- Docker Compose Documentation
- Kong Gateway Documentation
- Elastic Stack Documentation
- Kafka Documentation

Planned improvements:
- Kubernetes deployment
- Prometheus & Grafana monitoring
- Distributed tracing (Jaeger)
- Service mesh (Istio)
- GitOps with ArgoCD
- CI/CD pipelines
- Infrastructure testing
- Disaster recovery procedures