kube-node-app

End-to-End DevOps Lab: Node.js + Kubernetes + Terraform

Prerequisites

  • Node.js & npm
  • Docker
  • Minikube & kubectl
  • Terraform
  • GitHub account (for CI/CD)
  • Docker Hub account (for image registry)

Quick Start

# 1. Start Minikube
minikube start --driver=docker

# 2. Build Docker image inside Minikube
eval $(minikube -p minikube docker-env)
docker build -t kube-node-app:v1 .
eval $(minikube docker-env -u)

# 3. Deploy with Terraform
terraform init
terraform plan
terraform apply

# 4. Start Minikube tunnel (in separate terminal)
minikube tunnel

# 5. Get LoadBalancer IP and configure hosts
kubectl get svc my-awesome-node-app-service
echo "<EXTERNAL-IP> myapp.local" | sudo tee -a /etc/hosts

# 6. Access the application
# Visit http://myapp.local in your browser

Project Structure

kube-node-app/
├── server.js           # Node.js application
├── package.json        # npm dependencies
├── Dockerfile          # Container definition
├── main.tf             # Terraform infrastructure (Minikube)
├── variables.tf        # Terraform variables
├── terraform.tfvars    # Variable values
├── aws-infra/          # AWS K3s deployment
│   ├── main.tf         # EC2 + K3s setup
│   ├── provider.tf     # AWS provider config
│   ├── variables.tf    # AWS variables
│   ├── terraform.tfvars
│   ├── deployment.yaml # Kubernetes deployment
│   ├── service.yaml    # Kubernetes service
│   └── ingress.yaml    # Ingress rules
└── .github/
    └── workflows/
        └── deploy.yaml # CI/CD pipeline

Step-by-Step Guide

1. Application Setup

Create the project and install dependencies:

mkdir kube-node-app && cd kube-node-app
npm init -y
npm install express

Create server.js with the application code (see repository).
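The repository's server.js is the source of truth; as orientation, a minimal Express app for this setup might look like the following (the port number and response text are assumptions, not taken from the repo):

```javascript
// Minimal Express app (a sketch; the actual server.js lives in the repository).
// Port 3000 and the greeting text are illustrative assumptions.
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

// Single route returning a plain-text greeting
app.get('/', (req, res) => {
  res.send('Hello from kube-node-app!');
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
```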

2. Containerization

Build the Docker image:

docker build -t kube-node-app:v1 .
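The Dockerfile in the repository defines the actual image; a typical Node.js Dockerfile for this layout might look like this (the base image tag and exposed port are assumptions):

```dockerfile
# Sketch of a Node.js Dockerfile for this project; node:18-alpine and
# port 3000 are assumptions, not taken from the repository.
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```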

3. Minikube Setup

Start Minikube and load the locally built image into the cluster (an alternative to building inside Minikube's Docker daemon, as shown in the Quick Start):

minikube start --driver=docker
minikube image load kube-node-app:v1

4. Terraform Deployment

Deploy infrastructure:

terraform init
terraform plan
terraform apply
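main.tf in the repository defines the actual resources; for orientation, a minimal Terraform configuration using the Kubernetes provider for this kind of deployment might look like the following (resource names, labels, replica count, and ports are illustrative assumptions):

```hcl
# Illustrative sketch only: a deployment plus LoadBalancer service via the
# Terraform kubernetes provider. Names and ports are assumptions.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_deployment" "app" {
  metadata { name = "node-app-deployment" }
  spec {
    replicas = 2
    selector {
      match_labels = { app = "my-awesome-node-app" }
    }
    template {
      metadata { labels = { app = "my-awesome-node-app" } }
      spec {
        container {
          name              = "node-app"
          image             = "kube-node-app:v1"
          image_pull_policy = "Never" # use the image loaded into Minikube
        }
      }
    }
  }
}

resource "kubernetes_service" "app" {
  metadata { name = "my-awesome-node-app-service" }
  spec {
    selector = { app = "my-awesome-node-app" }
    port {
      port        = 80
      target_port = 3000
    }
    type = "LoadBalancer"
  }
}
```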

5. Access Application

Start the Minikube tunnel to expose the LoadBalancer service:

# Start tunnel (run in separate terminal)
minikube tunnel

Get the external IP and add it to your hosts file:

# Get the external IP
kubectl get svc my-awesome-node-app-service

# Add to hosts file (replace <EXTERNAL-IP> with actual IP)
echo "<EXTERNAL-IP> myapp.local" | sudo tee -a /etc/hosts

Access the application at http://myapp.local

AWS Deployment (K3s on EC2)

Prerequisites

  • AWS account with access keys
  • SSH key pair created in AWS (ap-southeast-1 region)
  • AWS CLI configured (optional)

Setup

  1. Configure AWS credentials:

Create aws-infra/terraform.tfvars:

access_key = "your-aws-access-key"
secret_key = "your-aws-secret-key"

  2. Deploy K3s cluster on EC2:
cd aws-infra
terraform init
terraform plan
terraform apply

  3. Get server IP and configure access:
# Get the public IP
terraform output server_public_ip

# SSH into the server
ssh -i ~/.ssh/my-laptop-key.pem ubuntu@<SERVER_IP>

# Get kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml

  4. Configure local kubectl:
# Paste the k3s.yaml content into a local kubeconfig file
mkdir -p ~/.kube
# Edit ~/.kube/k3s-config and replace 127.0.0.1 with your server's public IP
vim ~/.kube/k3s-config

# Use it
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes

  5. Deploy your application:
# Build and push to Docker Hub
docker build -t username/kube-node-app:v1 .
docker push username/kube-node-app:v1

# Update deployment.yaml with your Docker Hub username
# Then apply Kubernetes manifests
cd aws-infra
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

# Or apply all at once
kubectl apply -f .
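For orientation, a minimal deployment.yaml for this app might look like the following (replace `username` with your Docker Hub username; the container port and replica count are assumptions, and the real manifest lives in aws-infra/):

```yaml
# Illustrative sketch of aws-infra/deployment.yaml; not copied from the repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: username/kube-node-app:v1   # your Docker Hub username
          ports:
            - containerPort: 3000
```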

  6. Verify deployment:
# Check pods
kubectl get pods

# Check service
kubectl get svc node-app-service

# Check ingress
kubectl get ingress

  7. Access via Ingress:
# Test with curl using Host header
curl -v -H "Host: visal.engineer" http://<SERVER_IP>

# Or configure local DNS
echo "<SERVER_IP> visal.engineer" | sudo tee -a /etc/hosts

# Access in browser
http://visal.engineer
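The Host-header routing above is driven by the Ingress rule; a minimal Traefik-compatible ingress.yaml might look like this (the service name matches this guide; the path rule is an assumption):

```yaml
# Illustrative sketch of aws-infra/ingress.yaml. K3s ships Traefik as the
# default ingress controller, so no ingressClassName is strictly required.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-app-ingress
spec:
  rules:
    - host: visal.engineer
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: node-app-service
                port:
                  number: 80
```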

AWS Infrastructure Details

  • Instance Type: t3.small (~$0.02/hour)
  • OS: Ubuntu 22.04 LTS (Jammy)
  • Kubernetes: K3s (lightweight Kubernetes)
  • Ingress: Traefik (included with K3s)
  • Security Groups: SSH (22), HTTP (80), HTTPS (443), K8s API (6443)
  • Elastic IP: Static IP for consistent access
  • Deployment: Kubernetes manifests (deployment.yaml, service.yaml, ingress.yaml)

AWS Cleanup

# Remove Kubernetes resources
cd aws-infra
kubectl delete -f .

# Destroy infrastructure
terraform destroy

Cost Warning: Remember to destroy resources when not in use to avoid charges.

Common Commands

Verification

# Check deployment status
kubectl get deployments
kubectl get pods
kubectl get services

# View application logs
kubectl logs -l app=my-awesome-node-app

# Describe resources
kubectl describe pod <pod-name>

Docker

# Build standard image
docker build -t my-app:v1 .

# Build inside Minikube (no registry needed)
eval $(minikube -p minikube docker-env)
docker build -t my-app:v1 .
eval $(minikube docker-env -u)

# Push to Docker Hub
docker tag my-app:v1 username/my-app:v1
docker push username/my-app:v1

Minikube

# Start cluster
minikube start --driver=docker

# Get cluster IP
minikube ip

# Enable tunnel for LoadBalancer services (may prompt for sudo to bind privileged ports)
minikube tunnel

# Stop cluster
minikube stop

Terraform

# Initialize providers
terraform init

# Preview changes
terraform plan

# Apply changes
terraform apply

# Destroy infrastructure
terraform destroy

Kubernetes Debugging

# View all resources
kubectl get all

# View pods
kubectl get pods

# View logs
kubectl logs <pod-name>

# Check Ingress resources
kubectl get ingress

# For Minikube's ingress addon, check the controller
kubectl get pods -n ingress-nginx

# For K3s, check Traefik
kubectl get pods -n kube-system | grep traefik

Cleanup

Local (Minikube):

terraform destroy
minikube stop

AWS (K3s):

cd aws-infra
kubectl delete -f .
terraform destroy

Troubleshooting

Terraform state corrupted:

rm terraform.tfstate*
kubectl delete deployment node-app-deployment-tf
terraform apply

ImagePullBackOff error:

minikube image load kube-node-app:v1

LoadBalancer pending external IP:

# Ensure tunnel is running
minikube tunnel

# Check service status
kubectl get svc my-awesome-node-app-service

Cannot access myapp.local:

# Verify hosts file entry
grep myapp.local /etc/hosts

# Verify LoadBalancer IP matches
kubectl get svc my-awesome-node-app-service

AWS: Ingress not routing:

# Check ingress status
kubectl get ingress

# Verify Traefik is running
kubectl get pods -n kube-system | grep traefik

# Test with Host header
curl -v -H "Host: visal.engineer" http://<SERVER_IP>

GitHub Actions deployment failed:

# Check workflow logs in GitHub Actions tab

# Verify secrets are set correctly
# Settings → Secrets and variables → Actions

# Test SSH connection manually
ssh -i ~/.ssh/my-laptop-key.pem ubuntu@<SERVER_IP>

# Verify kubectl works on EC2
export KUBECONFIG=~/.kube/config
kubectl get nodes

Image not updating after push:

# Check if image was pushed to Docker Hub
# Visit: https://hub.docker.com/r/your-username/kube-node-app

# Force pull new image on EC2
kubectl rollout restart deployment/node-app-deployment

# Check image version in deployment
kubectl describe deployment node-app-deployment | grep Image
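If the deployment pins a reusable tag such as `:latest`, `rollout restart` only pulls a fresh image when `imagePullPolicy` is `Always`; pointing the deployment at the immutable commit-SHA tag the pipeline pushes is more reliable (the container name `node-app` is an assumption):

```shell
# Switch to the immutable commit-SHA tag (placeholders shown)
kubectl set image deployment/node-app-deployment \
  node-app=username/kube-node-app:<commit-sha>
```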

Deployment Options

This project supports three deployment workflows:

  1. Local Development (Minikube) - Perfect for learning and testing locally
  2. Manual Cloud Deployment (AWS K3s) - Deploy to EC2 using kubectl commands
  3. Automated CI/CD (GitHub Actions) - Push to main branch and auto-deploy to AWS

The local setup uses minikube tunnel and /etc/hosts for access, while the AWS setup exposes the app on a real public IP backed by a K3s cluster. The CI/CD pipeline automates the entire build-push-deploy workflow.

CI/CD Pipeline (GitHub Actions)

Overview

Automated deployment pipeline that:

  1. Builds Docker image on every push to main branch
  2. Pushes image to Docker Hub with latest and commit SHA tags
  3. Deploys to EC2 K3s cluster via SSH
  4. Updates deployment with zero-downtime rollout

Setup

  1. Configure GitHub Secrets:

Go to repository Settings → Secrets and variables → Actions, add:

DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password
AWS_HOST_IP=your-ec2-public-ip
AWS_SSH_USER=ubuntu
AWS_SSH_KEY=your-private-key-content

  2. GitHub Actions Workflow:

The .github/workflows/deploy.yaml file handles:

  • Docker build and push to Docker Hub
  • SSH connection to EC2
  • Kubernetes deployment update with new image
  • Rollout status verification
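The actual workflow lives in .github/workflows/deploy.yaml; a sketch of its likely shape, wired to the secrets listed above (the step layout and the third-party SSH action are assumptions, not the repo's exact file):

```yaml
# Illustrative sketch of .github/workflows/deploy.yaml; not the repo's file.
name: Deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Build and push image
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }} .
          docker tag ${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }} ${{ secrets.DOCKER_USERNAME }}/kube-node-app:latest
          docker push ${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }}
          docker push ${{ secrets.DOCKER_USERNAME }}/kube-node-app:latest
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.0   # assumed SSH action
        with:
          host: ${{ secrets.AWS_HOST_IP }}
          username: ${{ secrets.AWS_SSH_USER }}
          key: ${{ secrets.AWS_SSH_KEY }}
          script: |
            kubectl set image deployment/node-app-deployment node-app=${{ secrets.DOCKER_USERNAME }}/kube-node-app:${{ github.sha }}
            kubectl rollout status deployment/node-app-deployment
```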

Pipeline Workflow

Trigger: Push to main branch
        ↓
Build Docker Image
        ↓
Tag: latest & commit-sha
        ↓
Push to Docker Hub
        ↓
SSH to EC2 Server
        ↓
Update Kubernetes Deployment
        ↓
Wait for Rollout Complete
        ↓
✅ Deployment Success

Testing the Pipeline

# Make a change to your code
echo "console.log('Updated!');" >> server.js

# Commit and push
git add .
git commit -m "Update application"
git push origin main

# Watch the action in GitHub
# Visit: https://github.com/your-username/kube-node-app/actions

# Verify deployment on EC2
kubectl get pods
kubectl describe pod <pod-name>

Monitoring Deployments

# Watch rollout status
kubectl rollout status deployment/node-app-deployment

# Check rollout history
kubectl rollout history deployment/node-app-deployment

# Rollback if needed
kubectl rollout undo deployment/node-app-deployment

Benefits

  • Automated builds on every code change
  • Zero-downtime deployments with rolling updates
  • Version tracking with commit SHA tags
  • Instant rollback capability
  • Consistent deployments across environments
