End-to-End DevOps Lab: Node.js + Kubernetes + Terraform
- Node.js & npm
- Docker
- Minikube & kubectl
- Terraform
- GitHub account (for CI/CD)
- Docker Hub account (for image registry)
# 1. Start Minikube
minikube start --driver=docker
# 2. Build Docker image inside Minikube
eval $(minikube -p minikube docker-env)
docker build -t kube-node-app:v1 .
eval $(minikube docker-env -u)
# 3. Deploy with Terraform
terraform init
terraform plan
terraform apply
# 4. Start Minikube tunnel (in separate terminal)
minikube tunnel
# 5. Get LoadBalancer IP and configure hosts
kubectl get svc my-awesome-node-app-service
echo "<EXTERNAL-IP> myapp.local" | sudo tee -a /etc/hosts
# 6. Access the application
# Visit http://myapp.local in your browser

kube-node-app/
├── server.js             # Node.js application
├── package.json          # NPM dependencies
├── Dockerfile            # Container definition
├── main.tf               # Terraform infrastructure (Minikube)
├── variables.tf          # Terraform variables
├── terraform.tfvars      # Variable values
├── aws-infra/            # AWS K3s deployment
│   ├── main.tf           # EC2 + K3s setup
│   ├── provider.tf       # AWS provider config
│   ├── variables.tf      # AWS variables
│   ├── terraform.tfvars
│   ├── deployment.yaml   # Kubernetes deployment
│   ├── service.yaml      # Kubernetes service
│   └── ingress.yaml      # Ingress rules
└── .github/
    └── workflows/
        └── deploy.yaml   # CI/CD pipeline
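The Dockerfile above lives in the repository; as an illustrative sketch only (base image, port, and entry point are assumptions, not the repo's actual contents), it might look like:

```dockerfile
# Illustrative sketch -- see the repository for the actual Dockerfile.
FROM node:18-alpine
WORKDIR /app
# Copy manifests first so dependency installation is layer-cached
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
# The app is assumed to listen on port 3000
EXPOSE 3000
CMD ["node", "server.js"]
```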
Create the project and install dependencies:
mkdir kube-node-app && cd kube-node-app
npm init -y
npm install express

Create server.js with the application code (see repository).
Build the Docker image:
docker build -t kube-node-app:v1 .

Start Minikube and load the image:
minikube start --driver=docker
minikube image load kube-node-app:v1

Deploy infrastructure:
terraform init
terraform plan
terraform apply

Start the Minikube tunnel to expose the LoadBalancer service:
# Start tunnel (run in separate terminal)
minikube tunnel

Get the external IP and add it to your hosts file:
# Get the external IP
kubectl get svc my-awesome-node-app-service
# Add to hosts file (replace <EXTERNAL-IP> with actual IP)
echo "<EXTERNAL-IP> myapp.local" | sudo tee -a /etc/hosts

Access the application at http://myapp.local
- AWS account with access keys
- SSH key pair created in AWS (ap-southeast-1 region)
- AWS CLI configured (optional)
- Configure AWS credentials:
Create aws-infra/terraform.tfvars:
access_key = "your-aws-access-key"
secret_key = "your-aws-secret-key"

- Deploy K3s cluster on EC2:
cd aws-infra
terraform init
terraform plan
terraform apply

- Get server IP and configure access:
# Get the public IP
terraform output server_public_ip
# SSH into the server
ssh -i ~/.ssh/my-laptop-key.pem ubuntu@<SERVER_IP>
# Get kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml

- Configure local kubectl:
# Copy k3s.yaml content to local machine
mkdir -p ~/.kube
# Edit the file and replace 127.0.0.1 with your server's public IP
vim ~/.kube/k3s-config
# Use it
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes

- Deploy your application:
# Build and push to Docker Hub
docker build -t username/kube-node-app:v1 .
docker push username/kube-node-app:v1
# Update deployment.yaml with your Docker Hub username
# Then apply Kubernetes manifests
cd aws-infra
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
# Or apply all at once
kubectl apply -f .

- Verify deployment:
# Check pods
kubectl get pods
# Check service
kubectl get svc node-app-service
# Check ingress
kubectl get ingress

- Access via Ingress:
# Test with curl using Host header
curl -v -H "Host: visal.engineer" http://<SERVER_IP>
# Or configure local DNS
echo "<SERVER_IP> visal.engineer" | sudo tee -a /etc/hosts
# Access in browser
http://visal.engineer

- Instance Type: t3.small (~$0.02/hour)
- OS: Ubuntu 22.04 LTS (Jammy)
- Kubernetes: K3s (lightweight Kubernetes)
- Ingress: Traefik (included with K3s)
- Security Groups: SSH (22), HTTP (80), HTTPS (443), K8s API (6443)
- Elastic IP: Static IP for consistent access
- Deployment: Kubernetes manifests (deployment.yaml, service.yaml, ingress.yaml)
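The three manifests in aws-infra/ can be sketched roughly as follows (resource names, replica count, image, and host are illustrative assumptions; the repository's files are authoritative):

```yaml
# Illustrative sketch of deployment.yaml / service.yaml / ingress.yaml
# (names, image, and host are assumptions -- see the repository files).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: username/kube-node-app:v1
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: node-app-service
spec:
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-app-ingress
spec:
  rules:
    - host: visal.engineer
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: node-app-service
                port:
                  number: 80
```

With K3s, the Ingress is served by the bundled Traefik controller, so no extra ingress installation is needed.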
# Remove Kubernetes resources
cd aws-infra
kubectl delete -f .
# Destroy infrastructure
terraform destroy

Cost Warning: Remember to destroy resources when not in use to avoid charges.
# Check deployment status
kubectl get deployments
kubectl get pods
kubectl get services
# View application logs
kubectl logs -l app=my-awesome-node-app
# Describe resources
kubectl describe pod <pod-name>

# Build standard image
docker build -t my-app:v1 .
# Build inside Minikube (no registry needed)
eval $(minikube -p minikube docker-env)
docker build -t my-app:v1 .
eval $(minikube docker-env -u)
# Push to Docker Hub
docker tag my-app:v1 username/my-app:v1
docker push username/my-app:v1

# Start cluster
minikube start --driver=docker
# Get cluster IP
minikube ip
# Enable tunnel for LoadBalancer
sudo minikube tunnel
# Stop cluster
minikube stop

# Initialize providers
terraform init
# Preview changes
terraform plan
# Apply changes
terraform apply
# Destroy infrastructure
terraform destroy

# View all resources
kubectl get all
# View pods
kubectl get pods
# View logs
kubectl logs <pod-name>
# Check Ingress
kubectl get ingress
kubectl get pods -n ingress-nginx
# For K3s, check Traefik
kubectl get pods -n kube-system | grep traefik

Local (Minikube):
terraform destroy
minikube stop

AWS (K3s):
cd aws-infra
kubectl delete -f .
terraform destroy

Terraform state corrupted:
rm terraform.tfstate*
kubectl delete deployment node-app-deployment-tf
terraform apply

ImagePullBackOff error:
minikube image load kube-node-app:v1

LoadBalancer pending external IP:
# Ensure tunnel is running
minikube tunnel
# Check service status
kubectl get svc my-awesome-node-app-service

Cannot access myapp.local:
# Verify hosts file entry
grep myapp.local /etc/hosts
# Verify LoadBalancer IP matches
kubectl get svc my-awesome-node-app-service

AWS: Ingress not routing:
# Check ingress status
kubectl get ingress
# Verify Traefik is running
kubectl get pods -n kube-system | grep traefik
# Test with Host header
curl -v -H "Host: visal.engineer" http://<SERVER_IP>

GitHub Actions deployment failed:
# Check workflow logs in GitHub Actions tab
# Verify secrets are set correctly
# Settings → Secrets and variables → Actions
# Test SSH connection manually
ssh -i ~/.ssh/my-laptop-key.pem ubuntu@<SERVER_IP>
# Verify kubectl works on EC2
export KUBECONFIG=~/.kube/config
kubectl get nodes

Image not updating after push:
# Check if image was pushed to Docker Hub
# Visit: https://hub.docker.com/r/your-username/kube-node-app
# Force pull new image on EC2
kubectl rollout restart deployment/node-app-deployment
# Check image version in deployment
kubectl describe deployment node-app-deployment | grep Image

This project supports three deployment workflows:
- Local Development (Minikube) - Perfect for learning and testing locally
- Manual Cloud Deployment (AWS K3s) - Deploy to EC2 using kubectl commands
- Automated CI/CD (GitHub Actions) - Push to main branch and auto-deploy to AWS
The local setup uses minikube tunnel and /etc/hosts, while AWS provides a real public IP with K3s cluster. The CI/CD pipeline automates the entire build-push-deploy workflow.
Automated deployment pipeline that:
- Builds Docker image on every push to the main branch
- Pushes image to Docker Hub with latest and commit SHA tags
- Deploys to EC2 K3s cluster via SSH
- Updates deployment with zero-downtime rollout
- Configure GitHub Secrets:
Go to repository Settings → Secrets and variables → Actions, add:
DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password
AWS_HOST_IP=your-ec2-public-ip
AWS_SSH_USER=ubuntu
AWS_SSH_KEY=your-private-key-content
- GitHub Actions Workflow:
The .github/workflows/deploy.yaml file handles:
- Docker build and push to Docker Hub
- SSH connection to EC2
- Kubernetes deployment update with new image
- Rollout status verification
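A .github/workflows/deploy.yaml implementing those steps might be sketched as follows (action versions, the SSH action, and the deployment/container names are assumptions; the secret names match the ones configured above):

```yaml
# Illustrative sketch of the CI/CD workflow -- see the repository for the real file.
name: Deploy to K3s
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Build and push image (latest + commit SHA tags)
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }} .
          docker tag ${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }} ${{ secrets.DOCKER_USERNAME }}/kube-node-app:latest
          docker push ${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }}
          docker push ${{ secrets.DOCKER_USERNAME }}/kube-node-app:latest
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.0   # assumption: an SSH action like this one
        with:
          host: ${{ secrets.AWS_HOST_IP }}
          username: ${{ secrets.AWS_SSH_USER }}
          key: ${{ secrets.AWS_SSH_KEY }}
          script: |
            kubectl set image deployment/node-app-deployment node-app=${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }}
            kubectl rollout status deployment/node-app-deployment
```

Tagging with the commit SHA (in addition to latest) is what makes rollbacks and version tracking possible later.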
Trigger: Push to main branch
↓
Build Docker Image
↓
Tag: latest & commit-sha
↓
Push to Docker Hub
↓
SSH to EC2 Server
↓
Update Kubernetes Deployment
↓
Wait for Rollout Complete
↓
✅ Deployment Success

# Make a change to your code
echo "console.log('Updated!');" >> server.js
# Commit and push
git add .
git commit -m "Update application"
git push origin main
# Watch the action in GitHub
# Visit: https://github.com/your-username/kube-node-app/actions
# Verify deployment on EC2
kubectl get pods
kubectl describe pod <pod-name>

# Watch rollout status
kubectl rollout status deployment/node-app-deployment
# Check rollout history
kubectl rollout history deployment/node-app-deployment
# Rollback if needed
kubectl rollout undo deployment/node-app-deployment

- ✅ Automated builds on every code change
- ✅ Zero-downtime deployments with rolling updates
- ✅ Version tracking with commit SHA tags
- ✅ Instant rollback capability
- ✅ Consistent deployments across environments