Project Title: React Deployment Capstone Project
Application: clone the repository below and deploy it (run the application on port 80).
Repo URL: https://github.com/sriram-R-krishnan/devops-build
AWS: Launch a t2.micro instance and deploy the application. Configure the security group so that anyone can reach the application, while login to the server is allowed only from your IP address.

Write a Terraform script to launch the t2.micro instance with that security group. Create the Terraform file:
nano instance.tf

provider "aws" {
  region = "ap-south-1"
}

# Create a security group allowing inbound SSH access from your IP address only
resource "aws_security_group" "jenkins_sg" {
  name        = "jenkins_sg"
  description = "Security group for Jenkins instance"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["103.44.15.122/32"] # Restrict SSH to your IP address
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Allow inbound access on port 80 from anyone
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Launch an EC2 instance named "jenkins" with an Ubuntu AMI, t2.micro instance type, and your existing key pair
resource "aws_instance" "jenkins_instance" {
  ami           = "ami-0f58b397bc5c1f2e8"
  instance_type = "t2.micro"
  key_name      = "webserver" # Specify your existing key pair name here

  tags = {
    Name = "jenkins"
  }

  security_groups = [aws_security_group.jenkins_sg.name]
}
Then initialize, plan, and apply:

terraform init
terraform plan
terraform apply

Alternatively, create the security group from the console: go to the AWS Management Console and navigate to the EC2 service.
In the left navigation pane, under "Network & Security", select "Security Groups".
Click on the "Create security group" button.
Provide a name and description, then configure the inbound rules. The first inbound rule should open port 80 with access for anyone (0.0.0.0/0).

The second inbound rule should open port 22 and allow SSH login only from your IP address:
enter your IP address followed by /32.

Click on the "Create security group" button to create the security group.

Then go to the EC2 dashboard and click the "Launch Instance" button.
Choose an Amazon Machine Image (AMI) and instance type, configure instance details, add storage, configure security groups (select the one you just created), add tags, and configure any additional options as needed.

Review your configuration and click on the "Launch" button. Select an existing key pair or create a new one. Finally, click on the "Launch Instances" button to launch your instance.
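The same security-group setup can also be sketched with the AWS CLI instead of the console. This is only a hedged sketch: the group name capstone-sg and the IP are made up, and the aws commands are left commented because they require configured credentials; only the CIDR construction runs locally.

```shell
# Build the /32 CIDR that restricts SSH to a single address.
MY_IP="203.0.113.7"       # placeholder: substitute your public IP
SSH_CIDR="${MY_IP}/32"    # /32 means exactly one host
echo "SSH will be limited to: ${SSH_CIDR}"

# Illustrative AWS CLI calls (commented; require configured credentials):
# aws ec2 create-security-group --group-name capstone-sg --description "Capstone SG"
# aws ec2 authorize-security-group-ingress --group-name capstone-sg --protocol tcp --port 80 --cidr 0.0.0.0/0
# aws ec2 authorize-security-group-ingress --group-name capstone-sg --protocol tcp --port 22 --cidr "$SSH_CIDR"
```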
Log in to the created server and clone the repository:

git clone https://github.com/sriram-R-krishnan/devops-build

Docker:
Dockerize the application by creating a Dockerfile.
Create a docker-compose file to use the above image.
Install Docker on the machine using the docker.sh file uploaded to the repo's installation directory. Then copy all the files from the cloned repository to a directory called devops-capstone:

cp -r devops-build/build /root/devops-capstone

Create the Dockerfile:

nano dockerfile

Here is the Dockerfile content:
# Use the official Nginx base image
FROM nginx:latest
COPY . /usr/share/nginx/html/

And create a docker-compose file to use the image:
nano docker-compose.yaml

Here is the docker-compose file content:

version: '3'
services:
  app:
    image: ${IMAGE_NAME} # the image name comes from an environment variable
    ports:
      - "80:80"

Bash Scripting: Write 2 scripts:
build.sh for building Docker images.
deploy.sh for deploying the image to the server.
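One detail worth noting before the scripts: docker compose expands ${IMAGE_NAME} in docker-compose.yaml from the shell environment, so the deploy script must export it before running compose. A minimal local sketch (the image value is illustrative):

```shell
# docker compose resolves ${IMAGE_NAME} from the environment at `up` time,
# so the variable must be exported first:
export IMAGE_NAME="react:3"   # illustrative; deploy.sh exports the real tag
echo "compose would start: ${IMAGE_NAME}"
# docker compose up -d        # would now run react:3 mapped to port 80
```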
Create a build.sh file to build an image:

nano build.sh

Here is the build.sh script:
#!/bin/bash
# Incremented count for image name
IMAGE_COUNT=$(($(docker images | grep -c "^react") + 1))
# Build and tag the Docker image without using cache
docker build --no-cache -t "react:${IMAGE_COUNT}" .
# Echo the image name
echo "Built Docker image: react:${IMAGE_COUNT}"
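The numbering in build.sh simply counts the existing images whose name starts with react and adds one. A quick illustration of that logic against a made-up `docker images` listing (no Docker daemon needed):

```shell
# Simulate build.sh's count logic against a fake `docker images` listing.
listing='react    2       aaa111   2 days ago   50MB
react    1       bbb222   3 days ago   50MB
nginx    latest  ccc333   4 days ago   140MB'
IMAGE_COUNT=$(( $(printf '%s\n' "$listing" | grep -c '^react') + 1 ))
echo "next image: react:${IMAGE_COUNT}"  # two react images exist, so the next is react:3
```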
Create a deploy.sh file for deployment:

nano deploy.sh

Here is the deploy.sh file content:
#!/bin/bash
# Docker Hub username
DOCKER_USERNAME="cjayanth"
# Check the argument passed
if [[ "$1" == "devchanged" ]]; then
    echo "Tagging and pushing image to dev repository..."
    IMAGE_COUNT="$2" # Use the count passed as an argument
    docker tag "react:${IMAGE_COUNT}" "${DOCKER_USERNAME}/dev:Latest${IMAGE_COUNT}"
    docker push "${DOCKER_USERNAME}/dev:Latest${IMAGE_COUNT}"
    export IMAGE_NAME="react:${IMAGE_COUNT}" # Export before running docker-compose
    docker compose up -d
elif [[ "$1" == "devmergedmaster" ]]; then
    echo "Tagging and pushing image to prod repository..."
    IMAGE_COUNT="$2" # Use the count passed as an argument
    docker tag "react:${IMAGE_COUNT}" "${DOCKER_USERNAME}/prod:Latest${IMAGE_COUNT}"
    docker push "${DOCKER_USERNAME}/prod:Latest${IMAGE_COUNT}"
    docker push "${DOCKER_USERNAME}/dev:Latest${IMAGE_COUNT}"
    export IMAGE_NAME="react:${IMAGE_COUNT}" # Export the image name before running docker-compose
    docker compose up -d
else
    echo "Invalid argument. Please provide either 'devchanged' or 'devmergedmaster'."
    exit 1
fi
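To make the tagging concrete, here is how deploy.sh assembles the Docker Hub tag from its inputs (the username is the one used above; the count is illustrative):

```shell
# deploy.sh builds the Docker Hub tag from the username and the count
# that the pipeline passes in as "$2".
DOCKER_USERNAME="cjayanth"
IMAGE_COUNT="4"                                       # illustrative count
DEV_TAG="${DOCKER_USERNAME}/dev:Latest${IMAGE_COUNT}"
echo "would tag and push: ${DEV_TAG}"
```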
Version Control: Push the code to the dev branch on GitHub (use .dockerignore & .gitignore files). Note: Use only the CLI for the related git commands.
Clone the repository:

git clone https://github.com/jayan/final-project.git

Copy all files and directories from devops-capstone into the cloned repo, then use git commands to push the code to GitHub. Create a branch called dev and check it out:

git checkout -b dev

Create .gitignore and .dockerignore files in case you want to ignore particular files and prevent them from being pushed to the central repo, then push all the files and directories to the central dev branch:

git add .
git commit -m "committing first time"
git push origin dev

Docker Hub: Create 2 repos, "dev" and "prod", to push images. The "prod" repo must be private and the "dev" repo can be public.
To create the 2 repos in Docker Hub, go to your Docker Hub account and click "Create repository".
Create 2 repos, "dev" and "prod", to push images.
The "prod" repo must be private; the "dev" repo can be public.

Jenkins: Install and configure Jenkins build steps as needed to build, push & deploy the application. Connect Jenkins to the GitHub repo with an auto-build trigger from both the dev & master branches. If code is pushed to the dev branch, a Docker image must be built and pushed to the dev repo in Docker Hub. If dev is merged to master, the Docker image must be pushed to the prod repo in Docker Hub.
Jenkins installation: install Jenkins on the machine using the jenkins.sh file uploaded to the repo's installation directory. Run it directly:

bash jenkins.sh
systemctl enable jenkins
systemctl restart jenkins

Now access Jenkins and log in.
Install the plugins according to the requirements.
Create a Jenkinsfile on the machine, define the workflow, and push it to GitHub:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                script {
                    echo "Branch Name: ${env.BRANCH_NAME}"
                    git branch: "${env.BRANCH_NAME}", url: 'https://github.com/jayan/capstone-devops.git'
                }
            }
        }
        stage('Build and Push (Conditional)') {
            steps {
                script {
                    echo "Branch Name: ${env.BRANCH_NAME}"
                    if (env.BRANCH_NAME == 'dev') {
                        sh 'chmod +x build.sh'
                        def buildOutput = sh(script: './build.sh', returnStdout: true).trim()
                        def imageCount = buildOutput.tokenize(':').last() // Extract the image count
                        echo "Image count: ${imageCount}"
                        sh 'chmod +x deploy.sh'
                        sh "./deploy.sh devchanged ${imageCount}" // Pass only the image count
                    } else if (env.BRANCH_NAME == 'master') {
                        def mergeCommit = sh(script: "git log --merges --first-parent -1 --pretty=format:\"%H\"", returnStdout: true).trim()
                        def isMerged = sh(script: "git branch --contains ${mergeCommit}", returnStdout: true).trim()
                        if (isMerged.contains('dev')) { // dev branch contains the merge commit
                            echo "Dev branch has been merged to master, executing build and deploy..."
                            sh 'chmod +x build.sh'
                            def buildOutput = sh(script: './build.sh', returnStdout: true).trim()
                            def imageCount = buildOutput.tokenize(':').last() // Extract the image count
                            echo "Image count: ${imageCount}"
                            sh 'chmod +x deploy.sh'
                            sh "./deploy.sh devmergedmaster ${imageCount}" // Pass only the image count
                        } else {
                            echo "Dev branch has not been merged to master, skipping build and deploy."
                        }
                    } else {
                        echo "Skipping build and deploy for branch: ${env.BRANCH_NAME}"
                    }
                }
            }
        }
    }
}

Now we can create a multibranch pipeline to connect Jenkins to the GitHub repo with an auto-build trigger from both the dev & master branches.
If code is pushed to the dev branch, the Docker image is built and pushed to the dev repo in Docker Hub.
If dev is merged to master, the Docker image is pushed to the prod repo in Docker Hub.
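One detail in the Jenkinsfile: the image count is recovered from build.sh's stdout line "Built Docker image: react:N" via tokenize(':').last(). The equivalent split in shell, for reference:

```shell
# Recover the image count from build.sh's output, as the Jenkinsfile
# does with Groovy's tokenize(':').last().
buildOutput="Built Docker image: react:5"   # illustrative build.sh output
imageCount="${buildOutput##*:}"             # strip everything up to the last ':'
echo "image count: ${imageCount}"           # prints: image count: 5
```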
Go to the Jenkins dashboard, create a new job, and select "Multibranch Pipeline".

For the branch source choose Git, enter the Git URL, then click "Add" and select "Filter by name (with wildcards)":

master* dev*

Enter the path of the Jenkinsfile in your GitHub repository. To trigger a build every time a change happens in the GitHub repo, under "Scan Multibranch Pipeline Triggers" select "Scan by webhook" and enter a token, e.g.:

cloud

To create a webhook in the GitHub repository, go to the repository's settings, navigate to the "Webhooks" section, click "Add webhook", provide the Jenkins URL, configure the other settings as needed, then click "Add webhook" to save.

Payload URL*
http://3.109.56.73:8080/multibranch-webhook-trigger/invoke?token=cloud

After creating the webhook, go back to Jenkins and click save and apply.

Now builds auto-trigger from both the dev and master branches. If code is pushed to the dev branch, the Docker image is built and pushed to the dev repo in Docker Hub; in the image below you can see the dev branch build starting and the image being deployed to the dev repo.

Go to Docker Hub and check whether the image was uploaded.

Here is the image that was uploaded. Every time a build starts, a new image is created with the name react and an incrementing count: react:1, react:2, react:3, and the tag name also changes: Latest1, Latest2.

If dev is merged to master, the image is pushed to the prod repo in Docker Hub.
When dev is merged to master, the pipeline triggers automatically, runs build.sh and deploy.sh, and finally pushes the image to the prod repo in Docker Hub.
Here the tag name is Latest7 and the image name is react:7.

Monitoring: set up an open-source monitoring system to check the health status of the application; sending notifications only when the application goes down is highly appreciated.

To monitor the application, go to the Dockerfile and make some changes. This is the previous Dockerfile:
# Use the official Nginx base image
FROM nginx:latest
COPY . /usr/share/nginx/html/

After changing:

# Use the official Nginx base image
FROM nginx:latest
COPY . /usr/share/nginx/html/
COPY nginx/ /etc/nginx/
# Expose port 80 for your application
EXPOSE 80
# Expose port 8081 for your metrics
EXPOSE 8081

Upload it to the GitHub repo.

Install the Nginx Prometheus Exporter to fetch all the available metrics. It's a Go application that compiles to a single binary without external dependencies, which makes it very easy to install.
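The exporter's scrape-uri (used below) expects Nginx to serve stub_status on port 8081 at /status, which the nginx/ directory copied in the Dockerfile should provide. A minimal sketch of that server block — the file name and layout here are assumptions, not necessarily what the repo ships:

```nginx
# Hypothetical nginx/conf.d/status.conf: expose stub_status for the exporter
server {
    listen 8081;
    location /status {
        stub_status;     # basic Nginx metrics: connections, requests
        access_log off;  # no need to log scrapes
    }
}
```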
First of all, create a folder for the exporter and switch to that directory:

mkdir /opt/nginx-exporter
cd /opt/nginx-exporter

Create a dedicated user for each application you want to run. Let's call it nginx-exporter (user and group):

sudo useradd --system --no-create-home --shell /bin/false nginx-exporter

From the releases page on GitHub, find the latest version and copy the link to the appropriate archive. In my case, it's a standard amd64 platform. Use curl to download the exporter on the Ubuntu machine:

curl -L https://github.com/nginxinc/nginx-prometheus-exporter/releases/download/v0.11.0/nginx-prometheus-exporter_0.11.0_linux_amd64.tar.gz -o nginx-prometheus-exporter_0.11.0_linux_amd64.tar.gz

Extract the exporter from the archive:

tar -zxf nginx-prometheus-exporter_0.11.0_linux_amd64.tar.gz

You can remove the archive to save some space:

rm nginx-prometheus-exporter_0.11.0_linux_amd64.tar.gz

Make sure the correct binary was downloaded by checking the version of the exporter:

./nginx-prometheus-exporter --version

Optionally, update the ownership of the exporter folder:

chown -R nginx-exporter:nginx-exporter /opt/nginx-exporter

Create a systemd unit for the exporter:

nano /etc/systemd/system/nginx-exporter.service

Make sure you update the scrape-uri to the one you used in Nginx to expose the basic metrics. Also, update the Linux user and group to match yours in case you used different names.
nginx-exporter.service:

[Unit]
Description=Nginx Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=0

[Service]
User=nginx-exporter
Group=nginx-exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/opt/nginx-exporter/nginx-prometheus-exporter \
    -nginx.scrape-uri=http://3.7.66.132:8081/status

[Install]
WantedBy=multi-user.target

Enable the service to automatically start the daemon on Linux restart:

systemctl enable nginx-exporter

Then start the Nginx Prometheus exporter:

systemctl start nginx-exporter

Check the status of the service:

systemctl status nginx-exporter

Also install Node Exporter, which is used for monitoring and collecting metrics from the Linux system:
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.0/node_exporter-1.8.0.linux-amd64.tar.gz
tar -zxf node_exporter-1.8.0.linux-amd64.tar.gz
rm -rf node_exporter-1.8.0.linux-amd64.tar.gz
mv node_exporter-1.8.0.linux-amd64 /etc/node_exporter

Create the node_exporter.service file:

nano /etc/systemd/system/node_exporter.service

Enter the following in it:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/etc/node_exporter/node_exporter
Restart=always

[Install]
WantedBy=multi-user.target

systemctl enable node_exporter

Then start the node exporter:

systemctl start node_exporter

Check the status of the service:

systemctl status node_exporter

Set up the monitoring system: use a Terraform script to create an instance called prometheus. This is the Terraform script to create the instance:
provider "aws" {
  region = "ap-south-1"
}

# Create the key pair resource (aws_key_pair requires the public key material)
resource "aws_key_pair" "webserver" {
  key_name   = "webserver"
  public_key = file("~/.ssh/webserver.pub") # Path to your local public key
}

# Create security group resource
resource "aws_security_group" "prometheus_sg" {
  name        = "prometheus_sg"
  description = "Security group for Prometheus instance"

  # Allow SSH
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow HTTP
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow HTTPS
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow port 3000 (Grafana)
  ingress {
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow port 9090 (Prometheus)
  ingress {
    from_port   = 9090
    to_port     = 9090
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic (Terraform removes the default egress rule,
  # so it must be declared explicitly)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Launch EC2 instance
resource "aws_instance" "prometheus_instance" {
  ami             = "ami-0f58b397bc5c1f2e8" # Ubuntu AMI (replace with your desired Ubuntu AMI)
  instance_type   = "t2.micro"              # Instance type can be adjusted as needed
  key_name        = aws_key_pair.webserver.key_name
  security_groups = [aws_security_group.prometheus_sg.name]

  tags = {
    Name = "prometheus"
  }
}
Connect to the prometheus instance.

Install Prometheus: now let's quickly install the latest version of Prometheus on this host. Create a dedicated Linux user for Prometheus to scrape metrics from the deployed application. Check the latest version of Prometheus on the download page; you can use curl or wget to download it:

curl -L https://github.com/prometheus/prometheus/releases/download/v2.41.0/prometheus-2.41.0.linux-amd64.tar.gz -o prometheus-2.41.0.linux-amd64.tar.gz

Then extract all the Prometheus files from the archive and move them into place:

tar -xvf prometheus-2.41.0.linux-amd64.tar.gz
mv prometheus-2.41.0.linux-amd64 /etc/prometheus

Create prometheus.service:
nano /etc/systemd/system/prometheus.service
And write:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/etc/prometheus/prometheus --config.file=/etc/prometheus/prometheus.yml
Restart=always

[Install]
WantedBy=multi-user.target

Run:

systemctl daemon-reload
systemctl enable prometheus
systemctl start prometheus

nano /etc/prometheus/prometheus.yml
Add the scrape configuration in it:

scrape_configs:
  - job_name: "nginx-prometheus-exporter"
    static_configs:
      - targets: ["3.7.66.132:9113"]
  - job_name: "node-exporter"
    static_configs:
      - targets: ["3.7.66.132:9100"]

systemctl restart prometheus

Now you can go to http://<prometheus-instance-ip>:9090/ to check whether Prometheus is working.
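A fuller prometheus.yml usually also carries a global section controlling scrape and rule-evaluation intervals; a sketch with illustrative values:

```yaml
# Illustrative /etc/prometheus/prometheus.yml with a global section
global:
  scrape_interval: 15s      # how often targets are scraped
  evaluation_interval: 15s  # how often rules are evaluated

scrape_configs:
  - job_name: "nginx-prometheus-exporter"
    static_configs:
      - targets: ["3.7.66.132:9113"]  # nginx exporter's default port
  - job_name: "node-exporter"
    static_configs:
      - targets: ["3.7.66.132:9100"]  # node_exporter's default port
```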

Under the targets section, you should have a single nginx-prometheus-exporter target.
Now install Grafana on the prometheus instance as well. Grafana is used for visualizing and monitoring metrics and logs through interactive and customizable dashboards. Follow the official document to install Grafana: https://grafana.com/docs/grafana/latest/setup-grafana/installation/debian/

After the installation completes, enable the service so Grafana starts automatically after reboot:

systemctl enable grafana-server

Then start Grafana:

systemctl start grafana-server

To check the status of Grafana, run:

systemctl status grafana-server

Now you can access Grafana at http://<prometheus-instance-ip>:3000. The username is admin, and the password is admin as well.

First of all, let's add our Prometheus as a data source.
For the URL, use http://localhost:9090 and click "Save & test".

Let's create two dashboards: one for application monitoring, called nginx, and another for machine monitoring, called node exporter.

I'm going to fill out the panels using the metrics retrieved from the status page. You can find this dashboard in my GitHub repository.

This is my nginx dashboard that monitors the application.

It shows whether the application is up and running: if the dashboard shows nginx_up = 0, the application is down; if it shows nginx_up = 1, the application is up and running.

This is my node dashboard that monitors the instance.

To send a notification if the application goes down:

Go to Alert rules, click "Create alert rule", and follow the image below. Here I wrote a condition using a PromQL query: if the value drops below 1, it fires an alert.

Click "Save".

After this, go to Contact points, click "Add contact point", configure the email address, and save it.
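For reference, the same "application down" condition can also be expressed as a Prometheus-style alerting rule instead of a Grafana UI rule. This is a hedged sketch: the rule names and timings are illustrative, and this writeup configures alerting through Grafana's UI, not through such a file.

```yaml
# Illustrative Prometheus alerting rule for the nginx_up health metric
groups:
  - name: app-health
    rules:
      - alert: ApplicationDown
        expr: nginx_up < 1    # exporter sets nginx_up to 0 when Nginx is unreachable
        for: 1m               # require one minute of downtime before firing
        labels:
          severity: critical
        annotations:
          summary: "The application is down"
```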

Then go to /etc/grafana/grafana.ini and configure the SMTP settings:

[smtp]
enabled = true
host = your_smtp_server_address
port = 587
user = your_smtp_username
password = your_smtp_password

Navigate to the contact point you created and send a test notification to check whether it's working; if you receive the mail, it works.
Go to Notification policies and create a new policy to connect the alert rule with the contact point.
Now you can see the application is up and running.
If I go to the deployment machine and stop the application, I get a notification that my server is down.
Here I received the notification that my server was down.
My deployed site URL:
http://3.7.66.132/
