A real-time, distributed collaborative drawing application designed to demonstrate the principles of Distributed Systems, High Availability, and GitOps-driven Cloud Orchestration.
This project evolved from a single-node VPS deployment into a professional, multi-AZ AWS cloud infrastructure. You can deploy it in two ways:
1. AWS Cloud
   - Infrastructure: AWS EKS (Kubernetes), RDS (PostgreSQL), ElastiCache (Redis), Secrets Manager, ALB.
   - Logic: Fully automated via Terraform (IaC) and GitHub Actions (CI/CD).
2. VPS / Local (Legacy)
   - Infrastructure: Lightweight `k3s` cluster, Docker Compose, or manual Node.js.
   - Logic: Standard `kubectl apply` manifests.
   - Location: Legacy manifests are preserved in the `k83-vps/` directory.
- AWS CLI configured with Admin permissions.
- Terraform 1.5+
- kubectl & Helm
Before deploying the main resources, initialize the remote S3 backend and DynamoDB state lock:
```bash
cd terraform/bootstrap
terraform init && terraform apply
```

Update `terraform/main/provider.tf` with the S3 bucket name from the bootstrap step, then:

```bash
cd terraform/main
terraform init
terraform apply
```

This will provision a custom VPC, an EKS cluster, RDS Postgres, and ElastiCache Redis (~15-20 mins).
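The `provider.tf` update mentioned above might look like the following sketch. The bucket, table, and region values are placeholders standing in for the bootstrap outputs, not the project's actual names:

```hcl
terraform {
  backend "s3" {
    # Placeholder: use the bucket created by the bootstrap step
    bucket         = "my-paint-tfstate-bucket"
    key            = "main/terraform.tfstate"
    # Placeholder region
    region         = "us-east-1"
    # Placeholder: DynamoDB lock table from the bootstrap step
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
```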
- Add your `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to GitHub Repository Secrets.
- Push a change to the `master` branch.
- The deployment workflow will automatically build the images, push them to ECR, and deploy to EKS.
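A workflow along these lines would implement the steps above. This is a minimal sketch, not the repository's actual workflow; the repository name, cluster name, and region are hypothetical placeholders:

```yaml
# .github/workflows/deploy.yml (sketch; names and region are placeholders)
name: Deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1              # placeholder region
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      # Build and push the image to ECR, tagged with the commit SHA
      - run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/paint-backend:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/paint-backend:${{ github.sha }}
      # Roll the EKS deployment to the new image
      - run: |
          aws eks update-kubeconfig --name paint-cluster   # placeholder cluster name
          kubectl set image deployment/paint-backend paint-backend=${{ steps.ecr.outputs.registry }}/paint-backend:${{ github.sha }}
```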
If you are deploying to a standard Linux VPS using k3s:
- Ensure k3s is installed:
  ```bash
  curl -sfL https://get.k3s.io | sh -
  ```
- Deploy the manifests:
  ```bash
  kubectl apply -f k83-vps/
  ```
- Config: Update the `Ingress` in `k83-vps/ingress.yaml` with your VPS IP or public domain.
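The `Ingress` you edit might look roughly like this sketch, which assumes a frontend service on port 80 behind Traefik (k3s's default ingress controller); the host and service names are placeholders:

```yaml
# k83-vps/ingress.yaml (sketch; host and service name are placeholders)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: paint-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: paint.example.com      # replace with your VPS IP or public domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: paint-frontend   # placeholder service name
                port:
                  number: 80
```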
| Component | AWS Technology | VPS/Local Technology |
|---|---|---|
| Kubernetes | Amazon EKS (Managed) | k3s (Lightweight) |
| Database | Amazon RDS (PostgreSQL) | In-cluster Postgres Pod |
| Cache | Amazon ElastiCache (Redis) | In-cluster Redis Pod |
| Ingress | AWS Load Balancer (ALB) | Traefik / Nginx |
| Secrets | AWS Secrets Manager | K8s Secrets (Base64) |
| Registry | Amazon ECR | Docker Hub |
| IaC | Terraform | Manual Manifests |
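As the table notes, the VPS column falls back to plain Kubernetes Secrets, which are only Base64-encoded rather than encrypted. A quick illustration with a made-up placeholder password:

```shell
# Base64-encode a placeholder password for a K8s Secret manifest
echo -n 'supersecret' | base64
# → c3VwZXJzZWNyZXQ=
```

Anyone with read access to the Secret can reverse this with `base64 -d`, which is why the AWS path delegates to Secrets Manager instead.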
The system is built to breathe. We’ve included a "Bot Army" tester to verify Horizontal Pod Autoscaling (HPA).
- Install the tester:
  ```bash
  cd tester
  npm install
  ```
- Run the "Starry Night" attack:
  ```bash
  # Simulates 250 concurrent bots drawing a Van Gogh painting
  CONCURRENCY=250 INTERVAL=10 node flood.js
  ```
- Monitor scaling:
  ```bash
  kubectl get hpa paint-backend -w
  ```
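The autoscaler being watched above could be defined by a manifest like this sketch; the replica bounds and CPU target are illustrative values, not the project's actual tuning:

```yaml
# HPA for the backend (sketch; min/max replicas and CPU target are placeholders)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: paint-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: paint-backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Under the bot flood, `kubectl get hpa paint-backend -w` should show the replica count climbing as average CPU crosses the target.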
This project is licensed under the MIT License - see the LICENSE file for details.