A Node.js application deployed to AWS EKS with full CI/CD pipelines (Jenkins, GitHub Actions, CircleCI).
Developer → Git Push → CI/CD Pipeline → Docker Hub → EKS Cluster → LoadBalancer → Users
- AWS Account with IAM user (programmatic access)
- AWS CLI installed and configured
- Terraform installed (v1.3+)
- kubectl installed
- Docker installed
- Helm installed
- A Docker Hub account
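Before starting, it can save time to confirm every prerequisite CLI is actually on your PATH. A minimal preflight sketch (tool list mirrors the prerequisites above; versions are not checked):

```shell
#!/usr/bin/env bash
# Preflight: report which prerequisite CLIs are installed.
have() { command -v "$1" >/dev/null 2>&1; }

for tool in aws terraform kubectl docker helm git; do
  if have "$tool"; then
    echo "ok:      $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

Run it once before `terraform init` so a missing tool fails fast instead of mid-provisioning.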
git clone https://github.com/LandmakTechnology/devopsapp.git
cd devopsapp

Build the image using your own Docker Hub account:
# Replace with your Docker Hub account and repo name
export DOCKER_REPO="your-dockerhub-username/devopsapp"
export IMAGE_TAG="v1"
# Build
docker build -t ${DOCKER_REPO}:${IMAGE_TAG} .
docker tag ${DOCKER_REPO}:${IMAGE_TAG} ${DOCKER_REPO}:latest
# Login
docker login -u your-dockerhub-username
# Push
docker push ${DOCKER_REPO}:${IMAGE_TAG}
docker push ${DOCKER_REPO}:latest

Provision the VPC and EKS cluster with 2 x t3.medium nodes:
cd terraform
terraform init
terraform plan
terraform apply -auto-approve

This creates:
- VPC with 2 public subnets (tagged for ELB)
- Internet Gateway + Route Table
- EKS Cluster with IAM roles
- Node Group (2 x t3.medium)
aws eks update-kubeconfig --region us-east-1 --name landmark-eks-cluster
kubectl get nodes

Install the AWS Load Balancer Controller, which is required for the LoadBalancer service to provision an ELB:
# 1. Create OIDC provider
eksctl utils associate-iam-oidc-provider --cluster landmark-eks-cluster --region us-east-1 --approve
# 2. Create IAM policy
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.6.1/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
# 3. Create service account
eksctl create iamserviceaccount \
--cluster=landmark-eks-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
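The `<ACCOUNT_ID>` placeholder in the policy ARN can be resolved with the AWS CLI rather than looked up by hand. A sketch (the stub fallback is only so the snippet runs without credentials; with a configured CLI it uses your real account ID):

```shell
#!/usr/bin/env bash
set -u
# Resolve the AWS account ID for the policy ARN used by eksctl above.
# Falls back to a stub value when no credentials are available.
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text 2>/dev/null || true)
ACCOUNT_ID=${ACCOUNT_ID:-123456789012}   # stub fallback for illustration only
POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy"
echo "$POLICY_ARN"
```

Substitute `$POLICY_ARN` into the `--attach-policy-arn` flag of the `eksctl create iamserviceaccount` command.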
# 4. Install via Helm
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=landmark-eks-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
# 5. Verify
kubectl get deployment -n kube-system aws-load-balancer-controller

Replace the image placeholder in the manifest with your actual image, then deploy:
# Replace the placeholder with your image
sed -i 's|ACCOUNT/REPO:TAG|your-dockerhub-username/devopsapp:v1|g' kubernetes/03-deployment/deployment.yaml
# Deploy
kubectl apply -f kubernetes/01-namespace/namespace.yaml
kubectl apply -f kubernetes/04-configmap/configmap.yaml
kubectl apply -f kubernetes/03-deployment/deployment.yaml
kubectl apply -f kubernetes/03-deployment/service.yaml

# Get the LoadBalancer URL
kubectl get svc landmark-app-service -n landmark
# Or extract just the hostname
kubectl get svc landmark-app-service -n landmark -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Open the URL in your browser on port 80. It may take 2-3 minutes for the ELB to become active.
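Rather than refreshing the browser, you can poll the service until the ELB passes health checks. A sketch, assuming the service/namespace names used above (the `kubectl` call is guarded so the snippet is safe to source without a configured cluster):

```shell
#!/usr/bin/env bash
# Poll a URL until it returns success or the attempt budget is exhausted.
wait_for_url() {
  local url=$1 attempts=${2:-30} delay=${3:-10} i
  for ((i = 1; i <= attempts; i++)); do
    if curl -fsS -o /dev/null --max-time 5 "$url"; then
      echo "up: $url"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out: $url" >&2
  return 1
}

# Only resolve the hostname if kubectl can actually reach a cluster.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  HOST=$(kubectl get svc landmark-app-service -n landmark \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  wait_for_url "http://${HOST}" 30 10
fi
```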
Choose one of the following CI/CD tools to automate the build and deploy process.
All pipelines use a DOCKER_REPO environment variable (e.g., landmark/devopsapp). Update this in the pipeline file to match your Docker Hub account/repo. The pipelines automatically replace the ACCOUNT/REPO:TAG placeholder in the Kubernetes manifests at deploy time.
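The placeholder substitution each pipeline performs can be sketched as follows. `DOCKER_REPO` and `IMAGE_TAG` stand in for the pipeline's environment variables, and the sample line mirrors the image field in `kubernetes/03-deployment/deployment.yaml` (a temp file is used here so the sketch is self-contained):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Replace the ACCOUNT/REPO:TAG placeholder the same way the pipelines do.
DOCKER_REPO="${DOCKER_REPO:-landmark/devopsapp}"
IMAGE_TAG="${IMAGE_TAG:-v1}"

manifest=$(mktemp)
printf '        image: ACCOUNT/REPO:TAG\n' > "$manifest"

# GNU sed in-place edit; on the real manifest the pipeline runs the same command.
sed -i "s|ACCOUNT/REPO:TAG|${DOCKER_REPO}:${IMAGE_TAG}|g" "$manifest"
cat "$manifest"
```

In the pipelines the same `sed` runs against the checked-out manifest just before `kubectl apply`.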
File: Jenkinsfile
- Deploy a Jenkins server (use the `jenkins/` folder or a Docker container): `docker run -d -p 8080:8080 jenkins/jenkins:latest`
- Access Jenkins at `http://<JENKINS_IP>:8080`
- Get the initial password: `docker exec <container_id> cat /var/jenkins_home/secrets/initialAdminPassword`
- Install suggested plugins
| Credential ID | Type | Description |
|---|---|---|
| `DOCKER` | Username/Password | Docker Hub credentials |
| `AWS_ACCESS_KEY` | Secret text | AWS Access Key ID |
| `AWS_SECRET_ACCESS_KEY` | Secret text | AWS Secret Access Key |
- New Item → Pipeline
- Pipeline Definition → Pipeline script from SCM
- SCM: Git
- Repository URL: `https://github.com/LandmakTechnology/devopsapp.git`
- Branch: `*/main`
- Script Path: `Jenkinsfile`
- Save and Build
Git Checkout → Build Docker Image → Push to Docker Hub → Deploy to EKS
- In GitHub: Settings → Webhooks → Add webhook
- Payload URL: `http://<JENKINS_IP>:8080/github-webhook/`
- Content type: `application/json`
- In Jenkins: Pipeline → Configure → Build Triggers → Select "GitHub hook trigger for GITScm polling"
File: .github/workflows/deploy.yml
Add these in GitHub → Settings → Secrets and variables → Actions:
| Secret | Description |
|---|---|
| `DOCKER_USERNAME` | Docker Hub username |
| `DOCKER_PASSWORD` | Docker Hub password |
| `AWS_ACCESS_KEY_ID` | AWS Access Key ID |
| `AWS_SECRET_ACCESS_KEY` | AWS Secret Access Key |
- Triggers automatically on push to the `main` branch
- Can also be triggered manually via the "Run workflow" button
- Two jobs: `build-and-push` → `deploy`
Checkout → Build & Push Image → Configure AWS → Update kubeconfig → Deploy to EKS → Print LB URL
File: .circleci/config.yml
Create these in CircleCI → Organization Settings → Contexts:
Context: docker-credentials
| Variable | Description |
|---|---|
| `DOCKER_USERNAME` | Docker Hub username |
| `DOCKER_PASSWORD` | Docker Hub password |
Context: aws-credentials
| Variable | Description |
|---|---|
| `AWS_ACCESS_KEY_ID` | AWS Access Key ID |
| `AWS_SECRET_ACCESS_KEY` | AWS Secret Access Key |
| `AWS_DEFAULT_REGION` | `us-east-1` |
- Go to circleci.com and connect your GitHub repo
- Create the contexts above
- Push to `main` to trigger the pipeline
Build & Push Image → Manual Approval → Deploy to EKS
The manual approval gate prevents accidental deployments to production.
# Delete all Kubernetes resources first, so the Service-provisioned ELB is
# removed before the VPC is destroyed; otherwise terraform destroy can hang
# on dangling network dependencies.
kubectl delete namespace landmark
# Destroy infrastructure
cd terraform
terraform destroy -auto-approve