diff --git a/2026/day-01/learning-plan.md b/2026/day-01/learning-plan.md
new file mode 100644
index 0000000000..006a9d8521
--- /dev/null
+++ b/2026/day-01/learning-plan.md
@@ -0,0 +1,12 @@
+# My current level
+I am not a complete fresher but also not intermediate. I have basic knowledge of Linux commands, Docker and EC2 instances.
+
+# Goals for next 90 days
+Learn Python alongside this devops course, focus on building projects, and show up every day no matter how I feel.
+
+# Core devops skills I want to build
+Docker containerisation, Linux along with networking, Kubernetes.
+
+# Weekly time budget
+4-5 hours/day on weekdays, 6-7 hours/day on weekends
+
diff --git a/2026/day-02/linux-architecture.md b/2026/day-02/linux-architecture.md
new file mode 100644
index 0000000000..b700963bba
--- /dev/null
+++ b/2026/day-02/linux-architecture.md
@@ -0,0 +1,48 @@
+# Core components of linux
+
+i) Hardware layer
+ii) Shell
+iii) Kernel
+iv) System libraries / user applications
+v) System utilities (e.g., the GNU utilities)
+
+# How processes are created in linux
+
+* fork() System Call: A running "parent" process calls fork() to create a new, nearly identical "child" process.
+  The child receives a copy of the parent's memory space, open file descriptors, and other resources.
+* exec() System Call: After the fork(), the child process typically calls an exec() system call (e.g., execve()) to replace its entire memory space with a new program's code and data.
+* wait() System Call: The parent process often calls wait() to pause its own execution until its child finishes and exits,
+  allowing the parent to collect the child's exit status and prevent it from becoming a zombie process.
+
+# Process states
+
+A process transitions through several states during its lifecycle:
+
+1. Running (R): The process is either currently executing on the CPU or waiting in the run queue to be executed.
+2. 
Sleeping/Waiting (S or D): The process is waiting for some event to occur (e.g., I/O completion, a signal).
+3. Stopped (T): The process has been suspended by a job control signal (like Ctrl+Z).
+4. Zombie (Z): The process has terminated, but its parent process has not yet collected its exit status, so its entry still exists in the process table.
+
+# What systemd does
+
+1. Initializes the System: It is the first user-space process to run during boot (PID 1).
+2. Manages Services: It starts, stops, and restarts background services (daemons) efficiently using "unit files" which define how services should behave [2].
+3. Provides System Logging: It includes journald, a centralized logging management system [1].
+4. Manages Devices and Mount Points: It uses udev (as part of the suite) to manage device events and automatically handle device hot-plugging [1].
+5. Enables Parallelism: It uses socket and D-Bus activation to start services in parallel, significantly speeding up boot times [2].
+
+# Why does it matter
+
+1. Standardization: It provides a consistent, standardized framework across many different Linux distributions, making system administration and development more uniform [2].
+2. Faster Boot Times: Its design allows for aggressive parallelization during startup, which dramatically decreases the time it takes for a system to become usable [2].
+3. Modern Features: It offers robust features essential for modern computing, such as cgroup management for resource control, on-demand service activation, and better security isolation for services [1, 2].
+
+# List of 5 commands that I will be using daily
+
+1. cd
+2. ls
+3. pwd
+4. touch
+5. man
+
+
diff --git a/2026/day-03/linux-commands-cheatsheet.md b/2026/day-03/linux-commands-cheatsheet.md
new file mode 100644
index 0000000000..5dbeac887d
--- /dev/null
+++ b/2026/day-03/linux-commands-cheatsheet.md
@@ -0,0 +1,35 @@
+# Commands focused on process management
+
+1. 
ps aux (lists running processes with detailed info)
+2. top (real-time, continuously updating view of running processes)
+3. htop (an enhanced version of top where the user can scroll horizontally and vertically)
+4. kill (sends a signal to terminate a process by its process id)
+5. pkill (terminates a process by its name)
+
+# Use this command to see the linux distribution and version
+* cat /etc/os-release
+
+# Very important command to know the usage of a command
+* man [type the command you want to get details of and it will give each and every detail about it]
+
+# Commands focused on file system
+
+1. ls (list directory contents)
+2. cd (change directory)
+3. pwd (print working directory)
+4. cp (copy files or directories)
+5. rm (remove file or directory)
+6. head (display first few lines of a file)
+7. tail (display last few lines of a file)
+8. chmod (change file permissions, i.e. rwx)
+9. chown (change file ownership)
+10. find (search for files in a directory hierarchy)
+11. tar (archive files)
+12. zip/unzip (compress and extract files)
+
+# Commands focused on networking and troubleshooting
+
+1. curl (transfer data from or to a server)
+2. wget (download files from internet)
+3. ssh (secure shell to a remote server)
+4. 
ping (check connectivity to a host) diff --git a/2026/day-04/linux-practise.md b/2026/day-04/linux-practise.md new file mode 100644 index 0000000000..79c31be6f1 --- /dev/null +++ b/2026/day-04/linux-practise.md @@ -0,0 +1,79 @@ +# Outcome of ps +ps + PID TTY TIME CMD + 1181 pts/1 00:00:00 sudo + 1182 pts/1 00:00:00 su + 1183 pts/1 00:00:00 bash + 1688 pts/1 00:00:00 ps + +# Output of top +1 root 20 0 22496 13704 9480 S 0.0 1.4 0:01.55 + 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 + 3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 + 4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 + 5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 + 6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 + +# Outcome of systemctl status +systemctl status 1180 +● session-2.scope - Session 2 of User ubuntu + Loaded: loaded (/run/systemd/transient/session-2.scope; transient) + Transient: yes + Active: active (running) since Thu 2026-01-29 06:37:11 UTC; 36min ago + Tasks: 9 + Memory: 93.7M (peak: 136.4M) + CPU: 5.584s + CGroup: /user.slice/user-1000.slice/session-2.scope + ├─ 875 "sshd: ubuntu [priv]" + ├─ 990 "sshd: ubuntu@pts/0" + ├─1027 -bash + ├─1180 sudo su + ├─1181 sudo su + ├─1182 su + ├─1183 bash + ├─1798 systemctl status 1180 + └─1799 less + +# Outcome of tail +tail -5 file +ssh .. +touch +vi +vim +nano + +# Outcome of crontab -l +crontab -l +# Edit this file to introduce tasks to be run by cron. +# +# Each task to run has to be defined through a single line +# indicating with different fields when the task will be run +# and what command to run for the task +# +# To define the time you can provide concrete values for +# minute (m), hour (h), day of month (dom), month (mon), +# and day of week (dow) or use '*' in these fields (for 'any'). +# +# Notice that tasks will be started based on the cron's system +# daemon's notion of time and timezones. +# +# Output of the crontab jobs (including errors) is sent through +# email to the user the crontab file belongs to (unless redirected). 
+#
+# For example, you can run a backup of all your user accounts
+# at 5 a.m every week with:
+# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
+#
+# For more information see the manual pages of crontab(5) and cron(8)
+#
+# m h dom mon dow command
+
+0 3 * * *
+
+
+
+17 13 * * 4 echo "Weekend soon!" | mail -s "Reminder" gzeus5476@gmail.com
+
+# Outcome of journalctl
+journalctl -u google.com
+-- No entries --
diff --git a/2026/day-05/linux-troubleshooting-runbook.md b/2026/day-05/linux-troubleshooting-runbook.md
new file mode 100644
index 0000000000..8b0d70fe4a
--- /dev/null
+++ b/2026/day-05/linux-troubleshooting-runbook.md
@@ -0,0 +1,70 @@
+# uname -a
+Linux ip-172-31-21-199 6.14.0-1018-aws #18~24.04.1-Ubuntu SMP Mon Nov 24 19:46:27 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
+
+# cat /etc/os-release
+PRETTY_NAME="Ubuntu 24.04.3 LTS"
+NAME="Ubuntu"
+VERSION_ID="24.04"
+VERSION="24.04.3 LTS (Noble Numbat)"
+VERSION_CODENAME=noble
+ID=ubuntu
+ID_LIKE=debian
+HOME_URL="https://www.ubuntu.com/"
+SUPPORT_URL="https://help.ubuntu.com/"
+BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
+PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
+UBUNTU_CODENAME=noble
+LOGO=ubuntu-logo
+
+# lsb_release -a
+No LSB modules are available.
+Distributor ID: Ubuntu
+Description:    Ubuntu 24.04.3 LTS
+Release:        24.04
+Codename:       noble
+
+# ps -o pid
+    PID
+   1322
+   1323
+   1324
+   1485
+
+# free -h
+               total        used        free      shared  buff/cache   available
+Mem:           957Mi       333Mi       397Mi       888Ki       383Mi       623Mi
+Swap:             0B          0B          0B
+
+# df -h
+Filesystem      Size  Used Avail Use% Mounted on
+/dev/root        27G  2.2G   24G   9% /
+tmpfs           479M     0  479M   0% /dev/shm
+tmpfs           192M  872K  191M   1% /run
+tmpfs           5.0M     0  5.0M   0% /run/lock
+/dev/xvda16     881M   89M  730M  11% /boot
+/dev/xvda15     105M  6.2M   99M   6% /boot/efi
+tmpfs            96M   12K   96M   1% /run/user/1000
+
+# du -sh
+8.0K  .
+
+#
+ps aux
+USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
+root           1  0.0  1.3  22060 13348 ?        Ss   10:23   0:01 /sbin
+root           2  0.0  0.0      0     0 ?        
S 10:23 0:00 [kthr + + + + + + + + + + + + + + + diff --git a/2026/day-06/file-io-practise.md b/2026/day-06/file-io-practise.md new file mode 100644 index 0000000000..f3c5eca5dd --- /dev/null +++ b/2026/day-06/file-io-practise.md @@ -0,0 +1,12 @@ +# touch notes.txt +# echo "HEllo sabhi ko" > notes.txt +# echo "Hope you all are doing good" >> notes.txt +# echo "Have a nice day" | tee -a notes.txt +Have a nice day +# head -n 2 notes.txt +HEllo sabhi ko +Hope you all are doing good +# tail -n 2 notes.txt +Hope you all are doing good +Have a nice day + diff --git a/2026/day-07/day-07-linux-fs-and-scenarios.md b/2026/day-07/day-07-linux-fs-and-scenarios.md new file mode 100644 index 0000000000..3b53d333a0 --- /dev/null +++ b/2026/day-07/day-07-linux-fs-and-scenarios.md @@ -0,0 +1,22 @@ +# du -sh /var/log 2>/dev/null | sort -n | tail -n 5 +132M /var/log + +# journalctl -u nginx | tail -n 1 +Feb 03 09:28:05 ip-172-31-21-199 systemd[1]: Started nginx.service - A high performance web server and a reverse proxy server. + +# cat /etc/hostname +ip-172-31-21-199 + +# systemctl is-enabled nginx +enabled + +# systemctl list-unit-files | tail -3 +xfs_scrub_all.timer disabled enabled + +410 unit files listed. + +# ps aux --sort=-%cpu | head -3 +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +root 1 0.3 1.3 22092 13420 ? Ss :27 0:04 /sbin/init +ubuntu 1335 0.2 0.7 14996 7140 ? S 09:31 0:01 sshd: ubuntu@pts/0 + diff --git a/2026/day-08/day-08-cloud-deployment.md b/2026/day-08/day-08-cloud-deployment.md new file mode 100644 index 0000000000..969b3a7985 --- /dev/null +++ b/2026/day-08/day-08-cloud-deployment.md @@ -0,0 +1,61 @@ +# LIST OF COMMANDS THAT I USED + + 1 ls + 2 systemctl is-enabled nginx + 3 sudo apt-get install nginx + 4 systemctl is-enabled nginx + 5 systemctl status nginx + 6 cat /etc/hostname + 7 scp -i downloads/nginx.pem ubuntu@54.87.49.42:~/nginx-logs.txt . + 8 sudo scp -i downloads/nginx.pem ubuntu@54.87.49.42:~/nginx-logs.txt . 
+ 9 scp -i nginx.pem ubuntu@54.87.49.42:~/nginx-logs.txt .
+ 11 systemctl is-enabled nginx
+ 15 history
+ 427 scp -i downloads/nginx.pem ubuntu@52.91.197.11:~/nginx-logs.txt .
+ 428 chmod 600 downloads/nginx.pem
+ 429 scp -i downloads/nginx.pem ubuntu@52.91.197.11:~/nginx-logs.txt .
+ 430 ls -l downloads/nginx.pem
+ 431 sudo chmod 600 downloads/nginx.pem
+ 432 ls -l downloads/nginx.pem
+ 433 sudo chmod 600 downloads/nginx.pem
+ 434 ls -l downloads/nginx.pem
+ 435 mv downloads/nginx.pem nginxx.pem
+ 436 ls
+ 437 ls nginx.pem
+ 438 cat nginxx.pem
+ 439 ls -l
+ 440 sudo chmod 600 nginxx.pem
+ 441 ls -l
+ 442 cd ~
+ 443 ls
+ 444 mkdir -p ~/.ssh
+ 445 ls
+ 446 ls -l
+ 447 ls -a
+ 448 cp /mnt/c/Users/dell/downloads/nginx.pem ~/.ssh/nginx.pem
+ 449 cp /mnt/c/Users/dell/Downloads/nginx.pem ~/.ssh/nginx.pem
+ 450 cd /mnt/c/Users/dell
+ 451 ls
+ 452 cp /mnt/c/Users/dell/nginxx.pem ~/.ssh/nginx.pem
+ 453 cd ~
+ 454 ls
+ 455 ls -l ~/.ssh/nginxx.pem
+ 456 sudo ls -l ~/.ssh/nginxx.pem
+ 457 cd .ssh
+ 458 ls
+ 459 cd ..
+ 460 ls
+ 461 ls -l ~/.ssh/nginx.pem
+ 462 chmod 600 ~/.ssh/nginx.pem
+ 463 ls -l ~/.ssh/nginx.pem
+ 464 scp -i ~/.ssh/nginx.pem ubuntu@52.91.197.11:/var/log/nginx/access.log .
+ 465 ls
+ 466 cat access.log
+
+
+
+# PROBLEM THAT I FACED
+I was running the scp command from my SSH instance rather than my local machine, so it took me a lot of time to spot. Now it's clear.
+
+# WHAT I LEARNED
+I learned how to copy log files from another server. 
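A minimal sketch of that fix, run from the local machine. The key path below is a throwaway demo file and the server address is a placeholder, so the actual scp stays commented out:

```shell
#!/bin/sh
# Sketch of the day-08 fix: run scp from the LOCAL machine, with a key
# whose permissions ssh/scp will accept. Demo key path only.
KEY="${TMPDIR:-/tmp}/nginx-demo.pem"
[ -f "$KEY" ] || touch "$KEY"   # stand-in for the real downloaded key

# ssh/scp refuse private keys readable by group/others, so tighten to rw-------.
chmod 600 "$KEY"

# Read the octal mode back to confirm (GNU stat, with a BSD stat fallback).
perms=$(stat -c '%a' "$KEY" 2>/dev/null || stat -f '%Lp' "$KEY")
echo "key permissions: $perms"

# Then pull the log down in one step (placeholder host/path):
# scp -i "$KEY" ubuntu@<server-ip>:/var/log/nginx/access.log .
```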
+ diff --git a/2026/day-08/nginx-logs.txt b/2026/day-08/nginx-logs.txt new file mode 100644 index 0000000000..eca4c8e482 --- /dev/null +++ b/2026/day-08/nginx-logs.txt @@ -0,0 +1,15 @@ +152.58.157.30 - - [03/Feb/2026:10:06:02 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +152.58.157.30 - - [03/Feb/2026:10:06:02 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://54.87.49.42/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +27.147.191.231 - - [03/Feb/2026:10:10:01 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" +195.178.110.39 - - [03/Feb/2026:10:22:25 +0000] "\x16\x03\x01\x00\xEE\x01\x00\x00\xEA\x03\x03\x9B\xB64\xBC\xED\x1EA\x17\x94D.PChV_\x0B\xF1\x83\xEFR\xBA\xAB\x09Q{\xB4\xD0\xDA\xB3`S " 400 166 "-" "-" +13.89.125.26 - - [05/Feb/2026:05:32:31 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 zgrab/0.x" +185.16.39.146 - - [05/Feb/2026:05:32:55 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +185.16.39.146 - - [05/Feb/2026:05:39:40 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +152.58.156.224 - - [05/Feb/2026:05:40:01 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +152.58.156.224 - - [05/Feb/2026:05:40:01 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://52.91.197.11/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +185.16.39.146 - - [05/Feb/2026:05:46:38 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +202.40.178.238 - - [05/Feb/2026:05:50:11 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" +185.16.39.146 - - [05/Feb/2026:05:52:53 +0000] "GET / 
HTTP/1.1" 200 615 "-" "Wget"
+185.16.39.146 - - [05/Feb/2026:06:02:26 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget"
+204.76.203.219 - - [05/Feb/2026:06:06:53 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36 Edg/90.0.818.46"
+185.16.39.146 - - [05/Feb/2026:06:11:34 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget"
diff --git a/2026/day-09/day-09-user-management.md b/2026/day-09/day-09-user-management.md
new file mode 100644
index 0000000000..c21d72f271
--- /dev/null
+++ b/2026/day-09/day-09-user-management.md
@@ -0,0 +1,28 @@
+# Users created
+jessi, hank, nairobi, tokyo
+
+# Groups created
+developers, admins, project-team
+
+# Group Assignments
+walt: walt developers
+jessi: jessi developers admins
+hank: hank admins
+nairobi: nairobi admins project-team
+tokyo: tokyo
+
+# Directories created
+/opt/dev-project
+/opt/dev-project
+jessi
+hank
+nairobi
+tokyo
+
+# commands used
+useradd, mkdir, chgrp, chmod, groupadd, man, groups
+
+# WHAT I LEARNED
+I learned how to add groups and users while making their own directories
+Also how to assign users to different groups
+And how to get the list of users in a group
diff --git a/2026/day-10/day-10-file-permissions.md b/2026/day-10/day-10-file-permissions.md
new file mode 100644
index 0000000000..b803a0606a
--- /dev/null
+++ b/2026/day-10/day-10-file-permissions.md
@@ -0,0 +1,15 @@
+# Files Created
+devops.txt, notes.txt, script.sh
+
+# Permission changes
+filename     before  after
+devops.txt   664     444
+notes.txt    664     640
+script.sh    664     775
+
+# Commands used
+cat, vim, ls, touch, chmod, head, tail
+
+# What I learned
+1> I learned how to execute a file and how to make it executable
+2> Learned how to make a file read only
diff --git a/2026/day-11/day-11-file-ownership.md b/2026/day-11/day-11-file-ownership.md
new file mode 100644
index 0000000000..297eb47dae
--- /dev/null
+++ b/2026/day-11/day-11-file-ownership.md
@@ -0,0 +1,18 @@
+# Files and 
Directories created
+files - project-config.yml, devops-file.txt, gold.txt, strategy.conf, access-codes, blueprints.pdf, escape-plan.txt
+directories - heist-project, vault, plans, bank-heist
+
+# Ownership changes
+file                 before (owner:group)   after (owner:group)
+project-config.yml   ubuntu:ubuntu          walt:heist-team
+access-codes.txt     ubuntu:ubuntu          walt:vault-team
+blueprints.pdf       ubuntu:ubuntu          jessi:tech-team
+escape-plan          ubuntu:ubuntu          nairobi:vault-team
+
+# commands used
+mkdir, chown, chgrp, touch, ls, cd, history, pwd
+
+# What I learned
+I learned how to change file ownership (user and group)
+
+
diff --git a/2026/day-12/day-12-revision.md b/2026/day-12/day-12-revision.md
new file mode 100644
index 0000000000..9bfce5c3f3
--- /dev/null
+++ b/2026/day-12/day-12-revision.md
@@ -0,0 +1,17 @@
+# Commands that save me the most
+1> systemctl (to check if a service is running or not)
+2> ls (to see what files and directories I have created)
+3> pwd (to see which directory I am working in, as I usually forget)
+
+# To check if a service is healthy
+1> systemctl is-enabled (to see if a service is enabled to start at boot)
+2> journalctl -u (to see the logs of a service)
+3> systemctl status
+
+# To change ownership and permission of a file named file.txt
+1> sudo chown <user>:<group> file.txt
+2> sudo chmod 764 file.txt
+
+# What I will focus on next 3 days?
+Over the next 3 days, I will focus on giving more and more time to developing my skills.
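The permission part of that revision can be sanity-checked end-to-end on a scratch file. A small sketch (the scratch path is just an example; the chown line stays commented because it needs root and a real user/group):

```shell
#!/bin/sh
# Try the day-12 permission change on a throwaway file and read it back.
f="${TMPDIR:-/tmp}/file.txt"
touch "$f"

# 764 = rwx for owner (7), rw- for group (6), r-- for others (4).
chmod 764 "$f"

# Confirm the octal mode (GNU stat, with a BSD stat fallback).
mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
echo "mode: $mode"

# Ownership change would need root and an existing user/group:
# sudo chown <user>:<group> "$f"
```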
diff --git a/2026/day-13/day13-lvm.md b/2026/day-13/day13-lvm.md
new file mode 100644
index 0000000000..0177771594
--- /dev/null
+++ b/2026/day-13/day13-lvm.md
@@ -0,0 +1,13 @@
+# List of commands used
+- lsblk
+- pvcreate
+- pvs (to see the created physical volumes)
+- vgcreate
+- vgs (to see the list of volume groups)
+- lvcreate
+- mkdir
+- mkfs
+- mount
+- lvextend
+- resize2fs
+- df -h
diff --git a/2026/day-13/volume-management.jpeg b/2026/day-13/volume-management.jpeg
new file mode 100644
index 0000000000..0c2892d1d6
Binary files /dev/null and b/2026/day-13/volume-management.jpeg differ
diff --git a/2026/day-16/day-16-shell-scripting.md b/2026/day-16/day-16-shell-scripting.md
new file mode 100644
index 0000000000..ef1e27293f
--- /dev/null
+++ b/2026/day-16/day-16-shell-scripting.md
@@ -0,0 +1,78 @@
+# TASK-1(Script code)
+-#!/bin/bash
+-echo "Hello, DevOps!"
+
+# OUTPUT
+Hello, DevOps!
+
+# TASK-2(Script)
+- #!/bin/bash
+-read -p "Type your name:" name
+-read -p "Type your role:" role
+-echo "Hello my name is $name and my role is $role"
+
+# OUTPUT
+-Type your name:uttam
+-Type your role:teacher
+-Hello my name is uttam and my role is teacher
+
+# TASK-3(Script)
+-#!/bin/bash
+-read -p "Type your name:" name
+-read -p "Type your fav. tool:" tool
+-echo "Hello my name is $name and my favourite tool is $tool"
+
+# OUTPUT
+-Type your name:uttam
+-Type your fav. 
tool:docker
+-Hello my name is uttam and my favourite tool is docker
+
+# TASK-4(Script)
+-#!/bin/bash
+-read -p "Enter your number:" a
+-if [ "$a" -gt 0 ];then
+- echo "Given number is positive"
+-elif [ "$a" -eq 0 ];then
+- echo "Given number is exactly zero"
+-else
+- echo "Given number is negative"
+-fi
+
+# OUTPUT
+
+-Enter your number:0
+-Given number is exactly zero
+
+# TASK-5(Script)
+-#!/bin/bash
+
+-read -p "Enter service name:" service_name
+-read -p "Do you want to check service status(y/n)" answer
+-if [ "$answer" = "y" ];then
+- echo "service is active"
+- systemctl status "$service_name"
+-else
+- echo "Skipped"
+-fi
+
+# OUTPUT
+
+-Enter service name:nginx
+-Do you want to check service status(y/n)y
+-service is active
+-● nginx.service - A high performance web server and a reverse proxy server
+- Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: enable>
+- Active: active (running) since Fri 2026-02-13 07:09:53 UTC; 4h 38min ago
+- Docs: man:nginx(8)
+- Main PID: 1708 (nginx)
+- Tasks: 5 (limit: 2131)
+- Memory: 3.7M (peak: 8.3M)
+- CPU: 96ms
+- CGroup: /system.slice/nginx.service
+- ├─1708 "nginx: master process /usr/sbin/nginx -g daemon on; master_pro>
+- ├─1711 "nginx: worker process"
+- ├─1712 "nginx: worker process"
+- ├─1713 "nginx: worker process"
+-
+- └─1714 "nginx: worker process"
+
diff --git a/2026/day-17/day-17-scripting.md b/2026/day-17/day-17-scripting.md
new file mode 100644
index 0000000000..3811984f32
--- /dev/null
+++ b/2026/day-17/day-17-scripting.md
@@ -0,0 +1,82 @@
+# for_loop.sh
+
+#!/bin/bash
+
+# Define an array of 5 fruits
+fruits=("Apple" "Banana" "Orange" "Grape" "Mango")
+
+# Loop through each fruit in the array
+for fruit in "${fruits[@]}"; do
+  echo "Fruit: $fruit"
+done
+
+# count.sh
+
+#!/bin/bash
+
+for i in {1..10};
+do
+echo "$i"
+done
+
+# countdown.sh
+
+#!/bin/bash
+
+read -p "Enter a number:" number
+
+while [ "$number" -ge 0 ]; do
+  echo "Number is $number"
+  ((number--))
+done
+
+# greet.sh
+
+#!/bin/bash
+
+
+echo "Hello, $1"
+
+# args_demo.sh
+
+#!/bin/bash
+
+echo "$#"
+echo "$@"
+echo "$0"
+
+# install_packages.sh
+
+#!/bin/bash
+
+PACKAGES=("nginx" "curl" "wget")
+echo "Updating package lists.."
+sudo apt-get update -qq
+
+for PKG in "${PACKAGES[@]}"; do
+
+  if dpkg -s "$PKG" >/dev/null 2>&1; then
+    echo "[SKIP] $PKG is already installed"
+  else
+    echo "[MISSING] $PKG is not installed. Installing now.."
+
+    if sudo apt-get install -y "$PKG" >/dev/null 2>&1; then
+      echo "[SUCCESS] $PKG has been installed"
+    else
+      echo "[ERROR] failed to install $PKG."
+    fi
+  fi
+done
+
+# safe_script.sh
+
+#!/bin/bash
+
+
+set -e
+
+mkdir -p /tmp/devops-test || echo "Directory already exists"
+cd /tmp/devops-test
+echo "I'm in $(pwd)"
+touch empty.text
+exit
+
+
diff --git a/2026/day-18/day-18-scripting.md b/2026/day-18/day-18-scripting.md
new file mode 100644
index 0000000000..24398a2f44
--- /dev/null
+++ b/2026/day-18/day-18-scripting.md
@@ -0,0 +1,130 @@
+## function.sh for greeting user and printing sum of two numbers
+#!/bin/bash
+
+greet_user () {
+  read -p "Enter a number:" a
+  read -p "Enter another number:" b
+  sum=$((a + b))
+  echo "Hello $1!"
+  echo "Sum of the numbers is : $sum"
+}
+greet_user "$1"
+
+## disk_check.sh
+
+#!/bin/bash
+
+
+check_disk() {
+  echo "====== Root disk usage ====="
+  df -h /
+  echo
+}
+
+check_memory() {
+  echo "===== Memory usage ====="
+  free -h
+  echo
+}
+
+
+check_disk
+check_memory
+
+
+## strict_demo.sh
+
+#!/bin/bash
+
+
+set -euo pipefail
+read -p "Input which function to call (1/2/3): " input
+
+undefined_variable ()
+{
+  echo "hello learners"
+  echo "hope you are doing $well"
+}
+command_failure ()
+{
+  echo "The given command is"
+  ls /etrin
+}
+pipe_failure ()
+{
+  # deliberately broken pipeline to trigger pipefail
+  cat "PIPE" | awk #23
+  echo "The pipefail has occurred because a part of the script has failed"
+}
+
+if [ "$input" == "1" ]
+then
+  undefined_variable
+  echo "Done"
+elif [ "$input" == "2" ]
+then
+  command_failure
+  echo "Done"
+elif [ "$input" == "3" ]
+then
+  pipe_failure
+  echo "Done"
+fi
+
+# local_demo.sh
+
+#!/bin/bash
+
+
+local_variable_store () {
+  local x=10
+  echo "Local variable value inside function is : $x"
+}
+global_variable_store () {
+  y=20
+  echo "Value of global variable inside function is : $y"
+}
+local_variable_store
+echo "Value of local variable outside function is : $x"
+echo "===========LOCAL VARIABLE CAN'T BE ACCESSED OUTSIDE FUNCTION======================"
+global_variable_store
+echo "Value of global variable outside function is : $y"
+
+# system_info.sh
+
+#!/bin/bash
+
+
+set -euo pipefail
+
+hostname_info () {
+  cat /etc/os-release
+  hostname
+}
+uptime () {
+  /usr/bin/uptime -p
+}
+disk_usage () {
+  df -h | sort -h | head -n 6
+}
+memory_usage () {
+  free -h
+}
+cpu_cons_proc () {
+  ps aux --sort=-%cpu | head -n 6
+}
+main_function () {
+  echo " ========== Hostname and OS info are ============================"
+  hostname_info
+  echo "=============== Uptime of the system is ======================="
+  uptime
+  echo "==================TOP 5 DISK USAGES ==================="
+  disk_usage
+  echo "================== MEMORY USAGE ==================="
+  memory_usage
+  echo "=========================== 
TOP 5 CPU CONSUMING PROCESSES ======================="
+  cpu_cons_proc
+}
+main_function
+
+
+
diff --git a/2026/day-19/day-19-project.md b/2026/day-19/day-19-project.md
new file mode 100644
index 0000000000..e609bae849
--- /dev/null
+++ b/2026/day-19/day-19-project.md
@@ -0,0 +1,112 @@
+# TO CREATE LOG ROTATION
+
+#!/bin/bash
+
+if [ $# -ne 1 ]; then
+  echo "Usage: $0 <log_directory>"
+  exit 1
+fi
+
+LOG_DIR="$1"
+
+[ -d "$LOG_DIR" ] || { echo "Error: Directory does not exist."; exit 1; }
+
+# Count & compress .log files older than 7 days
+compressed=$(find "$LOG_DIR" -type f -name "*.log" -mtime +7 -exec gzip {} \; -printf '.' | wc -c)
+
+# Count & delete .gz files older than 30 days
+deleted=$(find "$LOG_DIR" -type f -name "*.gz" -mtime +30 -delete -printf '.' | wc -c)
+
+echo "Compressed $compressed file(s)."
+echo "Deleted $deleted old compressed file(s)."
+
+
+
+
+# TO CREATE SERVER BACKUP SCRIPT
+
+#!/bin/bash
+set -euo pipefail
+
+<< readme
+This is a script for backup
+Usage:
+./backup.sh <source_dir> <backup_dir>
+readme
+
+
+display_usage() {
+  echo "Usage:
+./backup.sh <source_dir> <backup_dir>
+"
+}
+
+if [ $# -eq 0 ]; then
+  display_usage
+  exit 1
+fi
+
+source_dir=$1
+timestamp=$(date '+%Y-%m-%d-%H-%M-%S')
+backup_dir=$2
+
+create_backup() {
+  zip -r "${backup_dir}/backup_${timestamp}.zip" "${source_dir}" >/dev/null
+  if [ $? -eq 0 ]; then
+    echo "Backup generated successfully for ${timestamp}"
+    echo "BACKUP_FILE_NAME======${backup_dir}/backup_${timestamp}.zip"
+  fi
+}
+create_backup
+
+create_delete() {
+  # Count & delete .zip backups older than 14 days
+  deleted=$(find "$backup_dir" -type f -name "*.zip" -mtime +14 -delete -printf '.' 
| wc -c)
+}
+create_delete
+
+
+# CRON_JOB
+
+0 2 * * * /home/dell-2004/bash_scripts/log_rotate2.sh >> /home/dell-2004/cron.log 2>&1
+0 3 * * 7 /home/dell-2004/bash_scripts/backup.sh >> /home/dell-2004/cron.log 2>&1
+
+
+# MAINTENANCE.SH
+
+
+#!/bin/bash
+
+
+maintenance() {
+  dt=$(date '+%Y-%m-%d-%H-%M-%S')
+  echo "$dt"
+
+  source ./backup.sh /home/dell-2004/bash_scripts /home/dell-2004/backups
+
+  if [ $? -eq 0 ]; then
+    echo "backup taken"
+  else
+    echo "backup failed"
+  fi
+
+
+  source ./log_rotate2.sh /home/dell-2004/log_practise
+
+  if [ $? -eq 0 ]; then
+    echo "log move successfully"
+  else
+    echo "logfiles didn't move"
+  fi
+} >> /var/log/maintenance.log
+
+maintenance
+
+## cat /var/log/maintenance.log
+
+Backup generated successfully for 2026-02-23-17-01-42
+BACKUP_FILE_NAME======/home/dell-2004/backups/backup_2026-02-23-17-01-42.zip
+backup taken
+Compressed 0 file(s).
+Deleted 0 old compressed file(s).
+log move successfully
+
diff --git a/2026/day-20/day-20-solution.md b/2026/day-20/day-20-solution.md
new file mode 100644
index 0000000000..838ea3afd0
--- /dev/null
+++ b/2026/day-20/day-20-solution.md
@@ -0,0 +1,61 @@
+#########
+
+####### MY BASH SCRIPT FOR LOG ANALYSER AND SAMPLE REPORT
+
+#!/bin/bash
+set -euo pipefail
+TIMESTAMP=$(date '+%Y-%m-%d-%H-%M-%S')
+error_check() {
+if [ $# -eq 0 ]; then
+  echo "NO ARGUMENTS PROVIDED" >&2
+  echo "USAGE: $0 <log_file>" >&2
+  exit 1
+fi
+
+
+LOG_FILE="$1"
+
+
+if [ ! -f "$LOG_FILE" ]; then
+  echo "Error: FILE does not exist: $LOG_FILE" >&2
+  exit 1
+fi
+TOTAL_LINES=$(wc -l < "$LOG_FILE")
+  echo "Logs found"
+}
+error_check "$@"
+
+debug_lines() {
+  awk '/DEBUG/ { print NR, $0 }' "$LOG_FILE"
+}
+
+
+critical_events() {
+  CRITICAL=$(awk '/CRITICAL/ { print NR, $0 }' "$LOG_FILE")
+  if [ -n "$CRITICAL" ]; then
+    echo "--------------CRITICAL EVENTS------------------"
+    echo "$CRITICAL"
+  fi
+}
+
+
+
+top_error() {
+  GREP=$({ grep "ERROR" "$LOG_FILE" || true; } | awk '{$1=$2=$3=""; print}' | sort | uniq -c | sort -rn | head -2)
+  echo "---------------TOP 2 ERROR MESSAGES-------------------"
+  echo "$GREP"
+}
+
+Summary_report() {
+  echo "TIMESTAMP: $TIMESTAMP"
+  echo "$LOG_FILE"
+  echo "Total lines processed: $TOTAL_LINES"
+  debug_lines
+  top_error
+  echo "----------TOP 2 ERROR MESSAGES COUNT------------"
+  top_error | wc -l
+  critical_events
+
+
+} >> "log_report_${LOG_FILE}_${TIMESTAMP}.txt"
+Summary_report
diff --git a/2026/day-20/day20.png b/2026/day-20/day20.png
new file mode 100644
index 0000000000..5f984a8b85
Binary files /dev/null and b/2026/day-20/day20.png differ
diff --git a/2026/day-22/git-commands.md b/2026/day-22/git-commands.md
new file mode 100644
index 0000000000..4fbe083a04
--- /dev/null
+++ b/2026/day-22/git-commands.md
@@ -0,0 +1,13 @@
+#### git-commands.md
+
+# LIST OF GIT COMMANDS I USED
+
+1> git init {IT INITIALIZES THE LOCAL REPO AS A GIT REPO}\
+#IF YOU WANT TO TURN A DIRECTORY INTO A GIT DIRECTORY WHERE YOU CAN ADD OR COMMIT A FILE
+2> git config --global user.name { USED TO SET THE USER NAME}
+TO SET USERNAME
+3> git config --global user.email { USED TO SET THE USER EMAIL}
+TO SET USER EMAIL-ID
+4> git add . 
{ IT MOVES YOUR FILE TO THE STAGING AREA }
+5> git commit -m "" { COMMITS YOUR FILE(S) }
+6> git log {shows commit history}
diff --git a/2026/day-23/git-commands.md b/2026/day-23/git-commands.md
new file mode 100644
index 0000000000..362e7ea104
--- /dev/null
+++ b/2026/day-23/git-commands.md
@@ -0,0 +1,26 @@
+# LIST OF GIT COMMANDS I USED
+
+1> git init {IT INITIALIZES THE LOCAL REPO AS A GIT REPO}\
+#IF YOU WANT TO TURN A DIRECTORY INTO A GIT DIRECTORY WHERE YOU CAN ADD OR COMMIT A FILE
+
+2> git config --global user.name { USED TO SET THE USER NAME}
+TO SET USERNAME
+
+3> git config --global user.email { USED TO SET THE USER EMAIL}
+TO SET USER EMAIL-ID
+
+4> git add . { IT MOVES YOUR FILE TO THE STAGING AREA }
+
+5> git commit -m "" { COMMITS YOUR FILE(S) }
+
+6> git log {shows commit history}
+
+7> git branch (lists your branches; the entry marked with * is the branch you are currently on)
+
+8> git checkout -b (makes a new branch and takes you there at the same time)
+
+9> git switch (takes you to an existing branch)
+
+10> git branch -d feature-2 (to delete a local branch)
+
+
diff --git a/2026/day-24/day-24-notes.md b/2026/day-24/day-24-notes.md
new file mode 100644
index 0000000000..a0d7d0fbde
--- /dev/null
+++ b/2026/day-24/day-24-notes.md
@@ -0,0 +1,54 @@
+## git merge
+# What is a fast forward merge?
+A merge where the target branch's history is a direct, linear continuation of the current branch, so Git simply moves the branch pointer forward; it leaves no merge commit and happens automatically when a linear path is available.
+
+# When does Git create merge commit?
+When Git needs to combine two divergent commit histories.
+
+# What is a merge conflict?
+When users on different branches change the same lines of the same file, a conflict occurs when the branches are merged.
+
+## git rebase
+# What does git rebase actually do to your commits?
+It replays your commits on top of the target branch, rewriting them to create a linear commit history.
+
+# How is the history different from a merge?
+It doesn't show where a branch was merged; the history reads as one straight line.
+
+# Why should you never rebase commits that have been pushed and shared with others? 
+Because it rewrites commit history: others who already pulled the old commits end up with a local history that no longer matches the server's version, which leads to confusing conflicts and lost work.
+
+# When would you rebase vs merge?
+Use rebase to sync your private branch with main and clean up pending commits.
+Use merge on shared branches, and to bring a finished feature into main.
+
+## git squash
+# What does squash merging do?
+It combines all the commits of a branch into one and stages the result to be committed on the target branch.
+
+# When would you use squash merge vs regular merge?
+When a branch has many small incremental commits, use squash merge.
+Use regular merge when the individual commits capture important, distinct steps.
+
+# What is the trade-off of squashing?
+Clutter: very low, as there is only one commit per feature.
+Debugging: harder, as all the changes are bundled into one commit.
+
+## git stash
+# What is the difference between "git stash pop" and "git stash apply"?
+git stash pop applies your stashed work and deletes the stash entry immediately.
+git stash apply applies a copy of the stashed changes while keeping the entry in the stash list.
+
+# When would you stash in a real-world workflow?
+When my work isn't ready to be committed but I need to switch branches or pull changes.
+
+## Cherry Picking
+# What does cherry-pick do?
+It picks a single commit and applies it to the current branch, instead of merging the whole commit history.
+
+# When would you cherry-pick in a real project?
+When only some commits are good to be merged into main.
+
+# What can go wrong with cherry-picking?
+If you later merge the two branches, the same change can appear twice as duplicate commits, which can cause conflicts.
+
diff --git a/2026/day-25/day-25-notes.md b/2026/day-25/day-25-notes.md
new file mode 100644
index 0000000000..f58b1c8faf
--- /dev/null
+++ b/2026/day-25/day-25-notes.md
@@ -0,0 +1,58 @@
+# Task-1 (Git reset--Hands on)
+
+## What is the difference between --soft, --mixed, and --hard? 
+--soft → moves HEAD/branch, keeps staging area and working directory unchanged +--mixed (default) → moves HEAD/branch, resets staging area, keeps working directory unchanged +--hard → moves HEAD/branch, resets staging area and overwrites working directory to match the target commit +## Which one is destructive and why? +--hard is destructive because it permanently discards uncommitted changes in both the staging area and working directory (overwrites files on disk). +## When would you use each one? +--soft → when you want to uncommit but keep changes staged (e.g. edit last commit, split commits) +--mixed → when you want to uncommit and unstage changes but keep the files modified in your working directory +--hard → when you want to completely throw away all uncommitted changes and go back to a clean state at a specific commit +## Should you ever use git reset on commits that are already pushed? +Almost never on shared branches — use git revert instead to avoid rewriting public history and breaking teammates' work. + +# Task 2: (Git Revert — Hands-On) + +## How is git revert different from git reset? +git revert creates a new commit that undoes changes while keeping history intact; git reset moves the branch pointer and can discard commits from history. +## Why is revert considered safer than reset for shared branches? + Revert preserves public history so collaborators can pull safely without conflicts or lost work; reset rewrites history and requires force push that breaks others' branches. +## When would you use revert vs reset? +Use revert for already-pushed/shared commits to avoid breaking history; use reset for local-only commits you haven't pushed yet or on personal branches. + +# Task 4: (Branching Strategies) + +## 1) GitFlow + +How it works: Long-lived main (prod) + develop; feature → develop, release → main, hotfix → main & develop. +Flow: main ← release ← develop ← feature ; hotfix → main + develop +Used: Enterprises with planned versioned releases. 
+Pros/Cons: Clear structure & stable releases; but heavy, slow, merge-conflict prone. + +## 2) GitHub Flow + +How it works: Single main; short feature branches → PR → merge to main → deploy. +Flow: main ← feature (PR) +Used: SaaS, CI/CD environments deploying continuously. +Pros/Cons: Simple & fast; but weak for complex release/version control. + +## 3) Trunk-Based Development + +How it works: Developers commit directly to main (trunk) or very short-lived branches; heavy CI + feature flags. +Flow: devs → main (daily merges) +Used: High-velocity teams (e.g., big tech CI-driven orgs). +Pros/Cons: Minimal merge pain & fast integration; requires strong discipline and automation. + +## Which strategy would you use for a startup shipping fast? + +Startup shipping fast: GitHub Flow or Trunk-Based (speed > structure). + +## Which strategy would you use for a large team with scheduled releases? + +Large team with scheduled releases: GitFlow (controlled release cycles). + +## Which one does your favorite open-source project use? (check any repo on GitHub) + +Open-source example: Kubernetes uses a trunk-based style with main + release branches. diff --git a/2026/day-26/day-26-notes.md b/2026/day-26/day-26-notes.md new file mode 100644 index 0000000000..20cd58e334 --- /dev/null +++ b/2026/day-26/day-26-notes.md @@ -0,0 +1,27 @@ +# Task 1: Install and Authenticate +## What authentication methods does gh support? +(HTTPS) and (SSH) + + +# Task 2: Issues +## How could you use gh issue in a script or automation? +You can use gh issue in scripts to automatically create issues when errors or failures occur in a project. +It can also be used to list or close issues automatically during CI/CD workflows on GitHub. +This helps teams track problems without manually managing issues. + +# Task 3: Pull Requests + +## What merge methods does gh pr merge support? +gh pr merge supports merge, squash, and rebase merge methods on GitHub. + +## How would you review someone else's PR using gh? 
+You can review a PR by viewing it with gh pr view, checking the changes with gh pr diff, and leaving feedback with gh pr review (approve, comment, or request changes).
+
+# Task 4: Github Actions & Workflows (Preview)
+
+## How could gh run and gh workflow be useful in a CI/CD pipeline?
+gh run and gh workflow in GitHub CLI help manage GitHub Actions from the terminal.
+
+They can be used in CI/CD pipelines to trigger workflows, monitor workflow runs, and check logs without opening the GitHub website. This helps automate builds, deployments, and debugging directly from scripts or automation tools.
+
+
diff --git a/2026/day-27/day-27-notes.md b/2026/day-27/day-27-notes.md
new file mode 100644
index 0000000000..19612c4487
--- /dev/null
+++ b/2026/day-27/day-27-notes.md
@@ -0,0 +1,2 @@
+### I forgot to take a screenshot before editing my GitHub profile, as I was editing at the same time as I was reading the .md file.
+### So I will be uploading this file and a screenshot of my current profile, which is significantly different from how it was.
diff --git a/2026/day-27/github-profile.png b/2026/day-27/github-profile.png
new file mode 100644
index 0000000000..d9467bec59
Binary files /dev/null and b/2026/day-27/github-profile.png differ
diff --git a/2026/day-28/day-28-notes.md b/2026/day-28/day-28-notes.md
new file mode 100644
index 0000000000..0a4d759e6c
--- /dev/null
+++ b/2026/day-28/day-28-notes.md
@@ -0,0 +1,4 @@
+### Git Branching (Explained for a Non-Developer)
+
+Git branching is a way to work on different versions of the same project without breaking the main one. Imagine you're writing a book: the **main branch** is the official version everyone reads.
If you want to experiment with a new chapter or change the ending, you make a **branch**, which is like a separate copy where you can try things without touching the official version.
diff --git a/2026/day-29/Dockerfile b/2026/day-29/Dockerfile
new file mode 100644
index 0000000000..2f12aa7656
--- /dev/null
+++ b/2026/day-29/Dockerfile
@@ -0,0 +1,8 @@
+
+FROM ubuntu
+
+WORKDIR /app
+
+RUN echo "HELLO DOSTO"
+
+
diff --git a/2026/day-29/day-29-notes.md b/2026/day-29/day-29-notes.md
new file mode 100644
index 0000000000..ea4a9edbb3
--- /dev/null
+++ b/2026/day-29/day-29-notes.md
@@ -0,0 +1,16 @@
+# Task 1: What is Docker?
+
+## What is a Container?
+A container packages your app + its dependencies into one isolated unit. It runs the same everywhere — no more "works on my machine" issues.
+
+## Containers vs Virtual Machines
+|  | Container | VM |
+|---|-----------|----|
+| OS | Shares host kernel | Own full OS |
+| Size | MBs | GBs |
+| Startup | Milliseconds | Minutes |
+| Isolation | Process-level | Hardware-level |
+
+Key point: VMs virtualize hardware. Containers virtualize the OS. Containers are faster and lighter.
+
+## Docker Architecture
+
+Client — Your CLI (docker run, docker build), sends commands to the daemon
+Daemon (dockerd) — Background engine that builds and runs containers
+Image — Read-only blueprint, built from a Dockerfile
+Container — A running instance of an image
+Registry — Image storage hub (e.g. Docker Hub)
diff --git a/2026/day-30/day-30-notes.md b/2026/day-30/day-30-notes.md
new file mode 100644
index 0000000000..d2b165ef52
--- /dev/null
+++ b/2026/day-30/day-30-notes.md
@@ -0,0 +1,9 @@
+# What are layers?
+
+Layers are snapshots of filesystem changes, stacked on top of each other to form a complete image. Each Dockerfile instruction that touches files creates a new layer.
+
+# Why does Docker use them?
+ +Speed — rebuild only what changed, cache the rest +Efficiency — shared layers aren't duplicated on disk +Transparency — docker image history shows exactly what built the image and how much each step costs in size diff --git a/2026/day-31/day-31-notes.md b/2026/day-31/day-31-notes.md new file mode 100644 index 0000000000..f12c90deee --- /dev/null +++ b/2026/day-31/day-31-notes.md @@ -0,0 +1,20 @@ +# CMD vs ENTRYPOINT + +## Use CMD when: + +The container can reasonably run different commands depending on context +You want a helpful default but full flexibility +Example: a base Ubuntu/Python image where users might run bash, python, or anything else + +## Use ENTRYPOINT when: + +Your container has one clear, dedicated purpose +You're shipping a tool and the container is that tool +Example: a container that wraps ffmpeg, curl, or your own app — users only pass flags/args, not a whole new command + +## Use both together when: + +You have a fixed executable but want sensible default arguments that are easy to swap +Example: ENTRYPOINT ["python", "app.py"] + CMD ["--port", "8080"] — the app always runs, but the port is overridable + + diff --git a/2026/day-31/my-first-image/.dockerignore b/2026/day-31/my-first-image/.dockerignore new file mode 100644 index 0000000000..3c65d57533 --- /dev/null +++ b/2026/day-31/my-first-image/.dockerignore @@ -0,0 +1,2 @@ + +.git diff --git a/2026/day-31/my-first-image/Dockerfile b/2026/day-31/my-first-image/Dockerfile new file mode 100644 index 0000000000..21e2933c78 --- /dev/null +++ b/2026/day-31/my-first-image/Dockerfile @@ -0,0 +1,58 @@ + +# Base image +#FROM ubuntu:latest AS builder + +# Installing curl + +#RUN apt-get update -y && apt-get install curl -y + +# Default command + +#CMD echo "Hello from my custom image!" 
+ + +# First we tell the file which base image we want to use + +#FROM ubuntu:latest AS builder + +# Executing command during build + +#RUN apt-get update -y + +# To set working directory + +#WORKDIR /app + +# To copy files from host to image + +#COPY . . + +# Any port we want to tell the user to expose + +#EXPOSE 80 + +# Default command that will run during running container +#ENTRYPOINT echo "You're getting better" + + +# Dockerfile to run , port and access through browser + +# Base image + +FROM nginx:alpine AS builder + +# Setting workdirectory +WORKDIR /home + +# Copying my file + +COPY . /usr/share/nginx/html + +# The port to expose + +EXPOSE 80 + +# Command to run + +CMD ["nginx", "-g" , "daemon off;"] + diff --git a/2026/day-31/my-first-image/index.html b/2026/day-31/my-first-image/index.html new file mode 100644 index 0000000000..7911ed0e29 --- /dev/null +++ b/2026/day-31/my-first-image/index.html @@ -0,0 +1,35 @@ + + + + + + Hello World + + + +
+

Hello, World! 👋

+

This is a simple static HTML page.

+
+ + + diff --git a/2026/day-32/data/index.html b/2026/day-32/data/index.html new file mode 100644 index 0000000000..0f3eb7f540 --- /dev/null +++ b/2026/day-32/data/index.html @@ -0,0 +1,447 @@ + + + + + + NGINX — Server Online + + + + +
+ + +
+ +
+
+ operational +
+
+ + +
+
// http server running
+

+ IT
+ WORKS. +

+

Your NGINX web server is live and serving requests. Replace this file with your own index.html located at the server root.

+
+ + +
+
+
Status
+
200
+
HTTP OK
+
+
+
Server
+
NGINX
+
latest / stable
+
+
+
Protocol
+
HTTP
+
port 80 / 443
+
+
+ + +
+
+
+
+
+
bash — nginx container
+
+
+
$docker run -d -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
+
✔ Container started successfully
+
$curl http://localhost
+
✔ 200 OK — index.html served
+
$nginx -t
+
nginx: configuration file /etc/nginx/nginx.conf test is successful
+
$
+
+
+ + +
+
+
📁
+
Server Root
+
Place your files at /usr/share/nginx/html/ to serve them. This file is index.html.
+
+
+
⚙️
+
Config Location
+
Edit your NGINX config at /etc/nginx/nginx.conf or drop files in /etc/nginx/conf.d/.
+
+
+
📋
+
View Logs
+
Access logs: /var/log/nginx/access.log
Error logs: /var/log/nginx/error.log
+
+
+
🔄
+
Reload Config
+
After editing config, run nginx -s reload inside the container — no restart needed.
+
+
+ + + + +
+
+
diff --git a/2026/day-32/day-32-volumes-networking.md b/2026/day-32/day-32-volumes-networking.md
new file mode 100644
index 0000000000..b82bd11ce6
--- /dev/null
+++ b/2026/day-32/day-32-volumes-networking.md
@@ -0,0 +1,12 @@
+# What is the difference between a named volume and a bind mount?
+
+### A Named Volume is fully managed by Docker — you just give it a name, Docker decides where to store it on the host, and it handles all the internals. You don't need to know or care about the actual folder path. It's safe, portable, and ideal for persistent data like databases.
+
+### A Bind Mount, on the other hand, maps a specific folder from your host machine directly into the container. You're in full control of the path, and any changes on either side reflect instantly. It's perfect for development when you want to edit code on your host and see changes live inside the container.
+
+# Why does custom networking allow name-based communication but the default bridge doesn't?
+
+### When we use the default bridge, Docker just connects containers to a network and doesn't provide a DNS server. So a container can only reach another container by IP, not by name.
+
+### But when we create a custom bridge, Docker automatically spins up an embedded DNS server for that network. So when we ping, names resolve as well as IPs.
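This difference is easy to see from the command line. A minimal sketch, assuming a local Docker daemon is running (the container and network names are illustrative):

```shell
# Custom bridge: Docker attaches an embedded DNS server to the network,
# so containers on it can reach each other by name.
docker network create mynet
docker run -d --name web --network mynet nginx:alpine
docker run --rm --network mynet alpine ping -c 1 web    # "web" resolves by name

# Default bridge: no embedded DNS, so the same lookup fails;
# you would have to use the container's IP address instead.
docker run -d --name web2 nginx:alpine
docker run --rm alpine ping -c 1 web2                   # fails: name not resolvable
```

Cleanup afterwards: `docker rm -f web web2 && docker network rm mynet`.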
diff --git a/2026/day-33/compose-basics/docker-compose.yml b/2026/day-33/compose-basics/docker-compose.yml new file mode 100644 index 0000000000..926d243e0b --- /dev/null +++ b/2026/day-33/compose-basics/docker-compose.yml @@ -0,0 +1,27 @@ +services: + mysql: + image: mysql:8.0 + container_name: mysql + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_DATABASE: ${MYSQL_DATABASE} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + volumes: + - mysql_data:/var/lib/mysql + wordpress: + image: wordpress:latest + ports: + - "8080:80" + environment: + WORDPRESS_DB_HOST: ${WORDPRESS_DB_HOST} + WORDPRESS_DB_NAME: ${WORDPRESS_DB_NAME} + WORDPRESS_DB_USER: ${WORDPRESS_DB_USER} + WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD} + depends_on: + - mysql + +volumes: + mysql_data: + + diff --git a/2026/day-34/app-stack/Dockerfile b/2026/day-34/app-stack/Dockerfile new file mode 100644 index 0000000000..863b5a71db --- /dev/null +++ b/2026/day-34/app-stack/Dockerfile @@ -0,0 +1,16 @@ +# Base image +FROM python:3.9 + +# Working directory +WORKDIR /app + +# Copy all files +COPY . . + +# To install requirements + +RUN pip install -r requirements.txt + +# Command to execute + +CMD ["python", "app.py"] diff --git a/2026/day-34/app-stack/README.md b/2026/day-34/app-stack/README.md new file mode 100644 index 0000000000..379e7da0a0 --- /dev/null +++ b/2026/day-34/app-stack/README.md @@ -0,0 +1,255 @@ +# 🐳 Docker Compose App Stack + +A 3-service application stack built with Docker Compose as part of a DevOps learning journey. + +## Stack + +| Service | Technology | Purpose | +|--------|------------|---------| +| `web` | Python Flask | Web application | +| `db` | MySQL 8.0 | Database | +| `cache` | Redis (Alpine) | Caching layer | + +--- + +## Project Structure + +``` +. 
+├── app.py # Flask application +├── requirements.txt # Python dependencies +├── Dockerfile # Docker image for Flask app +├── docker-compose.yml # Multi-container setup +└── .env # Environment variables (not committed) +``` + +--- + +## Files + +### app.py +Simple Flask web app that runs on port 5000. + +### Dockerfile +```dockerfile +# Base image +FROM python:3.9 + +# Working directory +WORKDIR /app + +# Copy all files +COPY . . + +# Install requirements +RUN pip install -r requirements.txt + +# Run the app +CMD ["python", "app.py"] +``` + +### .env +Create a `.env` file in the root directory with these variables: +``` +MYSQL_ROOT_PASSWORD=your_root_password +MYSQL_USER=your_user +MYSQL_PASSWORD=your_password +``` + +--- + +## docker-compose.yml + +```yaml +services: + web: + build: . + ports: + - "8080:5000" + networks: + - backend + depends_on: + db: + condition: service_healthy + cache: + condition: service_started + labels: + app: "myapp" + environment: "development" + + db: + image: mysql:8.0 + restart: on-failure + container_name: mysql + networks: + - backend + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + MYSQL_DATABASE: mysqldb + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "--password=password"] + interval: 10s + timeout: 5s + retries: 5 + start_period: 10s + volumes: + - mysql_data:/var/lib/mysql + labels: + app: "myapp" + environment: "development" + + cache: + image: redis:alpine + networks: + - backend + labels: + app: "myapp" + environment: "development" + +volumes: + mysql_data: + +networks: + backend: +``` + +--- + +## Task 2: Healthchecks & depends_on + +The `db` service has a healthcheck using `mysqladmin ping`. The `web` service uses `condition: service_healthy` so it only starts after MySQL is confirmed healthy — not just started. 
+ +| Condition | Meaning | +|-----------|---------| +| `service_healthy` | Wait for healthcheck to pass | +| `service_started` | Wait for container to just start | + +To verify healthcheck status: +```bash +docker-compose ps # Shows (healthy) next to db +docker inspect mysql | grep -A 10 Health +``` + +--- + +## Task 3: Restart Policies + +| Policy | When to use | +|--------|-------------| +| `no` | Development — don't auto restart while debugging | +| `always` | Critical production services that must run 24/7 | +| `on-failure` | Apps that should restart on error but not on manual stop | +| `unless-stopped` | Like always — but respects manual stops | + +--- + +## Task 5: Named Networks, Volumes & Labels + +### Networks +Explicit networks give you control over which services can talk to each other. Defined at the bottom of compose file and attached to each service: +```yaml +networks: + - backend +``` + +### Named Volumes +Persist data even after containers are removed: +```yaml +volumes: + - mysql_data:/var/lib/mysql +``` + +### Labels +Metadata tags for better organization — don't affect container behavior: +```yaml +labels: + app: "myapp" + environment: "development" +``` + +--- + +## Task 6: Scaling + +Scale web app to 3 replicas: +```bash +docker-compose up --scale web=3 -d +``` + +### What breaks with port mapping? +If 3 containers all try to bind to port `8080` on your machine — only one can use it. It causes a conflict. 
+ +To scale properly you need to: +- Remove `container_name` from the service +- Remove `ports` from the service +- Add a **Load Balancer** (like Nginx) in front to distribute traffic + +--- + +## Usage + +### Start the stack +```bash +docker-compose up -d +``` + +### View running services +```bash +docker-compose ps +``` + +### View logs +```bash +docker-compose logs # All services +docker-compose logs web # Specific service +``` + +### Stop without removing +```bash +docker-compose stop +``` + +### Remove everything +```bash +docker-compose down +``` + +### Rebuild after code changes +```bash +docker-compose up --build +``` + +### Scale web service +```bash +docker-compose up --scale web=3 -d +``` + +--- + +## Access + +Once running, open your browser and visit: +``` +http://localhost:8080 +``` + +--- + +## Key Concepts Learned + +- **Multi-container setup** with Docker Compose +- **Custom Dockerfile** for a Python Flask app +- **Named volumes** for data persistence +- **Environment variables** via `.env` file +- **depends_on** with healthcheck conditions +- **Restart policies** for container recovery +- **Explicit networks** for service isolation +- **Labels** for better organization +- **Scaling** and why port mapping breaks it +- **Redis** as a caching layer + +--- + +*Built as part of a DevOps learning journey* 🚀 diff --git a/2026/day-34/app-stack/app.py b/2026/day-34/app-stack/app.py new file mode 100644 index 0000000000..067e49540e --- /dev/null +++ b/2026/day-34/app-stack/app.py @@ -0,0 +1,14 @@ +from flask import Flask + + +app = Flask(__name__) + + +@app.route('/') +def home(): + return("Hello from flask") + + +if __name__ == '__main__': + app.run(host='0.0.0.0', port=5000) + diff --git a/2026/day-34/app-stack/docker-compose.yml b/2026/day-34/app-stack/docker-compose.yml new file mode 100644 index 0000000000..a1f1998586 --- /dev/null +++ b/2026/day-34/app-stack/docker-compose.yml @@ -0,0 +1,48 @@ +services: + web: + build: . 
+ networks: + - backend + depends_on: + db: + condition: service_healthy + cache: + condition: service_started + labels: + app: "myapp" + environment: "development" + + db: + image: mysql:8.0 + restart: on-failure + container_name: mysql + networks: + - backend + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + MYSQL_DATABASE: mysqldb + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "--password=password"] + interval: 10s + timeout: 5s + retries: 5 + start_period: 10s + volumes: + - mysql_data:/var/lib/mysql + labels: + app: "myapp" + environment: "development" + cache: + image: redis:alpine + networks: + - backend + labels: + app: "myapp" + environment: "development" + +volumes: + mysql_data: +networks: + backend: diff --git a/2026/day-34/app-stack/requirements.txt b/2026/day-34/app-stack/requirements.txt new file mode 100644 index 0000000000..7e1060246f --- /dev/null +++ b/2026/day-34/app-stack/requirements.txt @@ -0,0 +1 @@ +flask diff --git a/2026/day-34/day-34-notes.md b/2026/day-34/day-34-notes.md new file mode 100644 index 0000000000..05d6fac175 --- /dev/null +++ b/2026/day-34/day-34-notes.md @@ -0,0 +1,12 @@ +# When would you use each restart policy? + +### restart: no +Use during development when you want to debug why a container crashed — you don't want it auto restarting before you can see the error. +### restart: always +Use for critical production services like databases, web servers — anything that must keep running 24/7 even after a system reboot. + +### restart: on-failure +Use for background jobs or scripts that might fail due to an error but shouldn't restart if you manually stop them. + +### unless-stopped +Use when you want always behavior but with one exception — if YOU manually stopped it, don't restart it. Good for services you sometimes need to temporarily turn off. 
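As a quick reference, the four policies above drop into a Compose file like this. This is a minimal sketch with made-up service and image names; note that `no` must be quoted, because a bare `no` is YAML for `false`:

```yaml
services:
  debug-shell:
    image: ubuntu
    restart: "no"            # development: never auto-restart while debugging
  db:
    image: mysql:8.0
    restart: always          # critical 24/7 service, comes back after reboots
  batch-job:
    image: myjob:latest      # hypothetical image
    restart: on-failure      # retry on crash, stay down after a manual stop
  api:
    image: myapp:latest      # hypothetical image
    restart: unless-stopped  # like always, but respects a manual stop
```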
diff --git a/2026/day-35/day-35-multistage-hub.md b/2026/day-35/day-35-multistage-hub.md new file mode 100644 index 0000000000..e7815fb569 --- /dev/null +++ b/2026/day-35/day-35-multistage-hub.md @@ -0,0 +1,6 @@ +# Why is the multi-stage image so much smaller? + +Stage 1 (build) → has everything needed to build the app — Node.js, npm, package files, build tools. This is heavy! +Stage 2 (deployer) → only copies the final built app — no npm, no build tools, no unnecessary files + +So the final image only contains what's needed to run the app, not what was needed to build it! diff --git a/2026/day-35/node-app/Dockerfile b/2026/day-35/node-app/Dockerfile new file mode 100644 index 0000000000..0ca49f482c --- /dev/null +++ b/2026/day-35/node-app/Dockerfile @@ -0,0 +1,31 @@ +# Base image +FROM node:20.11.0-bookworm-slim AS build + +# Working directory +WORKDIR /app + +# Copy files +COPY package*.json ./ + +# Installing dependencies + +RUN npm install + +COPY . . + +RUN useradd -m appuser + +FROM gcr.io/distroless/nodejs20-debian12 AS deployer + +COPY --from=build /app /app + +WORKDIR /app + +EXPOSE 3000 + +COPY --from=build /etc/passwd /etc/passwd +USER appuser + + +CMD ["app.js"] + diff --git a/2026/day-35/node-app/app.js b/2026/day-35/node-app/app.js new file mode 100644 index 0000000000..167681aaa4 --- /dev/null +++ b/2026/day-35/node-app/app.js @@ -0,0 +1,10 @@ +const express = require('express') +const app = express() + +app.get('/', (req, res) => { + res.send('Hello from Node.js!') +}) + +app.listen(3000, () => { + console.log('Server running on port 3000') +}) diff --git a/2026/day-35/node-app/package-lock.json b/2026/day-35/node-app/package-lock.json new file mode 100644 index 0000000000..25230c193d --- /dev/null +++ b/2026/day-35/node-app/package-lock.json @@ -0,0 +1,758 @@ +{ + "name": "node-app", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "dependencies": { + "express": "^5.2.1" + } + }, + "node_modules/accepts": { + "version": 
"2.0.0", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz", + "integrity": "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==", + "dependencies": { + "mime-types": "^3.0.0", + "negotiator": "^1.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/body-parser": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.2.tgz", + "integrity": "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA==", + "dependencies": { + "bytes": "^3.1.2", + "content-type": "^1.0.5", + "debug": "^4.4.3", + "http-errors": "^2.0.0", + "iconv-lite": "^0.7.0", + "on-finished": "^2.4.1", + "qs": "^6.14.1", + "raw-body": "^3.0.1", + "type-is": "^2.0.1" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/bytes": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", + "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "dependencies": { + "es-errors": "^1.3.0", + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/call-bound": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + }, + 
"funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/content-disposition": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-1.0.1.tgz", + "integrity": "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q==", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/content-type": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", + "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie": { + "version": "0.7.2", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz", + "integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie-signature": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.2.2.tgz", + "integrity": "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==", + "engines": { + "node": ">=6.6.0" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/depd": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", + "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", + "engines": { + "node": ">= 
0.8" + } + }, + "node_modules/dunder-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "dependencies": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/ee-first": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", + "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==" + }, + "node_modules/encodeurl": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", + "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/es-define-property": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", + "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-object-atoms": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "dependencies": { + "es-errors": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/escape-html": { + "version": "1.0.3", + "resolved": 
"https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", + "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==" + }, + "node_modules/etag": { + "version": "1.8.1", + "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", + "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/express": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/express/-/express-5.2.1.tgz", + "integrity": "sha512-hIS4idWWai69NezIdRt2xFVofaF4j+6INOpJlVOLDO8zXGpUVEVzIYk12UUi2JzjEzWL3IOAxcTubgz9Po0yXw==", + "dependencies": { + "accepts": "^2.0.0", + "body-parser": "^2.2.1", + "content-disposition": "^1.0.0", + "content-type": "^1.0.5", + "cookie": "^0.7.1", + "cookie-signature": "^1.2.1", + "debug": "^4.4.0", + "depd": "^2.0.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "finalhandler": "^2.1.0", + "fresh": "^2.0.0", + "http-errors": "^2.0.0", + "merge-descriptors": "^2.0.0", + "mime-types": "^3.0.0", + "on-finished": "^2.4.1", + "once": "^1.4.0", + "parseurl": "^1.3.3", + "proxy-addr": "^2.0.7", + "qs": "^6.14.0", + "range-parser": "^1.2.1", + "router": "^2.2.0", + "send": "^1.1.0", + "serve-static": "^2.2.0", + "statuses": "^2.0.1", + "type-is": "^2.0.1", + "vary": "^1.1.2" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/finalhandler": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.1.tgz", + "integrity": "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA==", + "dependencies": { + "debug": "^4.4.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "on-finished": "^2.4.1", + "parseurl": "^1.3.3", + "statuses": "^2.0.1" + }, + "engines": { + "node": ">= 
18.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/forwarded": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", + "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz", + "integrity": "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/function-bind": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-intrinsic": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + }, + 
"engines": { + "node": ">= 0.4" + } + }, + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/http-errors": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.1.tgz", + "integrity": "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==", + "dependencies": { + "depd": "~2.0.0", + "inherits": "~2.0.4", + "setprototypeof": "~1.2.0", + "statuses": "~2.0.2", + "toidentifier": "~1.0.1" + }, + "engines": { + "node": ">= 0.8" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/iconv-lite": { + "version": "0.7.2", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.7.2.tgz", + "integrity": "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw==", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + }, + "funding": { + "type": "opencollective", + "url": 
"https://opencollective.com/express" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==" + }, + "node_modules/ipaddr.js": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", + "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/is-promise": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-4.0.0.tgz", + "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==" + }, + "node_modules/math-intrinsics": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/media-typer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-1.1.0.tgz", + "integrity": "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/merge-descriptors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-2.0.0.tgz", + "integrity": "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mime-db": { + "version": "1.54.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz", + "integrity": 
"sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/mime-types": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.2.tgz", + "integrity": "sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A==", + "dependencies": { + "mime-db": "^1.54.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==" + }, + "node_modules/negotiator": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-1.0.0.tgz", + "integrity": "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/object-inspect": { + "version": "1.13.4", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/on-finished": { + "version": "2.4.1", + "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", + "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", + "dependencies": { + "ee-first": "1.1.1" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": 
"sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/parseurl": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", + "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/path-to-regexp": { + "version": "8.3.0", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-8.3.0.tgz", + "integrity": "sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/proxy-addr": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", + "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", + "dependencies": { + "forwarded": "0.2.0", + "ipaddr.js": "1.9.1" + }, + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/qs": { + "version": "6.15.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.15.0.tgz", + "integrity": "sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==", + "dependencies": { + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/range-parser": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", + "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/raw-body": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-3.0.2.tgz", + "integrity": 
"sha512-K5zQjDllxWkf7Z5xJdV0/B0WTNqx6vxG70zJE4N0kBs4LovmEYWJzQGxC9bS9RAKu3bgM40lrd5zoLJ12MQ5BA==", + "dependencies": { + "bytes": "~3.1.2", + "http-errors": "~2.0.1", + "iconv-lite": "~0.7.0", + "unpipe": "~1.0.0" + }, + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/router": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/router/-/router-2.2.0.tgz", + "integrity": "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ==", + "dependencies": { + "debug": "^4.4.0", + "depd": "^2.0.0", + "is-promise": "^4.0.0", + "parseurl": "^1.3.3", + "path-to-regexp": "^8.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" + }, + "node_modules/send": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/send/-/send-1.2.1.tgz", + "integrity": "sha512-1gnZf7DFcoIcajTjTwjwuDjzuz4PPcY2StKPlsGAQ1+YH20IRVrBaXSWmdjowTJ6u8Rc01PoYOGHXfP1mYcZNQ==", + "dependencies": { + "debug": "^4.4.3", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "fresh": "^2.0.0", + "http-errors": "^2.0.1", + "mime-types": "^3.0.2", + "ms": "^2.1.3", + "on-finished": "^2.4.1", + "range-parser": "^1.2.1", + "statuses": "^2.0.2" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/serve-static": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.1.tgz", + "integrity": "sha512-xRXBn0pPqQTVQiC8wyQrKs2MOlX24zQ0POGaj0kultvoOCstBQM5yvOhAVSUwOMjQtTvsPWoNCHfPGwaaQJhTw==", + "dependencies": { + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "parseurl": "^1.3.3", + "send": "^1.2.0" + }, + "engines": { + "node": ">= 18" + }, + 
"funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/setprototypeof": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", + "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==" + }, + "node_modules/side-channel": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-list": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", + "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-map": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-weakmap": { + "version": "1.0.2", + "resolved": 
"https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/statuses": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.2.tgz", + "integrity": "sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/toidentifier": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", + "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", + "engines": { + "node": ">=0.6" + } + }, + "node_modules/type-is": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-2.0.1.tgz", + "integrity": "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw==", + "dependencies": { + "content-type": "^1.0.5", + "media-typer": "^1.1.0", + "mime-types": "^3.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/unpipe": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", + "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/vary": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", + "integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==", + "engines": { + "node": ">= 0.8" + } + }, + 
"node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==" + } + } +} diff --git a/2026/day-35/node-app/package.json b/2026/day-35/node-app/package.json new file mode 100644 index 0000000000..0e37769612 --- /dev/null +++ b/2026/day-35/node-app/package.json @@ -0,0 +1,16 @@ +{ + "dependencies": { + "express": "^5.2.1" + }, + "name": "app", + "version": "1.0.0", + "main": "index.js", + "devDependencies": {}, + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "keywords": [], + "author": "", + "license": "ISC", + "description": "" +} diff --git a/2026/day-36/flask-todo/Dockerfile b/2026/day-36/flask-todo/Dockerfile new file mode 100644 index 0000000000..1aef666a9d --- /dev/null +++ b/2026/day-36/flask-todo/Dockerfile @@ -0,0 +1,27 @@ + +# Base image +FROM python:3.9-slim AS builder + +# Working directory + +WORKDIR /app + +# Copying data + +COPY . . + +# Adding non-root user +RUN useradd -m appuser + +# running python command + +RUN pip install -r requirements.txt + +# Defining user +USER appuser + +# Running command +CMD ["python","app.py"] + +# Exposing port +EXPOSE 5000 diff --git a/2026/day-36/flask-todo/README.md b/2026/day-36/flask-todo/README.md new file mode 100644 index 0000000000..4296353095 --- /dev/null +++ b/2026/day-36/flask-todo/README.md @@ -0,0 +1,191 @@ +# 🐳 Dockerized Flask Todo + +A fully containerized Todo web application built with Python Flask and MySQL, orchestrated with Docker Compose. + +> Built by **Uttam Tripathi** as part of a DevOps learning journey. 
+ +--- + +## Preview + +A glassmorphism-themed Todo app where you can: +- ✅ Add todos +- ❌ Delete todos +- 💾 Data persists in MySQL even after container restarts + +--- + +## Tech Stack + +| Layer | Technology | +|-------|------------| +| Web App | Python Flask | +| Database | MySQL 8.0 | +| Containerization | Docker | +| Orchestration | Docker Compose | +| Styling | Glassmorphism CSS | + +--- + +## Project Structure + +``` +flask-todo/ +├── app.py # Flask application +├── requirements.txt # Python dependencies +├── Dockerfile # Docker image for Flask app +├── docker-compose.yml # Multi-container setup +└── .env # Environment variables (not committed) +``` + +--- + +## Getting Started + +### 1. Clone the repo +```bash +git clone +cd flask-todo +``` + +### 2. Create .env file +``` +MYSQL_ROOT_PASSWORD=your_root_password +MYSQL_USER=your_user +MYSQL_PASSWORD=your_password +MYSQL_DATABASE=flaskdb +``` + +### 3. Start the stack +```bash +docker-compose up -d +``` + +### 4. Create the todos table +```bash +docker exec -it mysql mysql -u root -p +``` + +Then inside MySQL: +```sql +USE flaskdb; +CREATE TABLE todos ( + id INT AUTO_INCREMENT PRIMARY KEY, + task VARCHAR(255) NOT NULL +); +``` + +### 5. Access the app +Open your browser at: +``` +http://localhost:8080 +``` + +--- + +## Dockerfile + +```dockerfile +# Base image +FROM python:3.9-slim + +# Working directory +WORKDIR /app + +# Copy and install dependencies +COPY requirements.txt . +RUN pip install -r requirements.txt + +# Copy app files +COPY . . + +# Run the app +CMD ["python", "app.py"] +``` + +--- + +## docker-compose.yml + +```yaml +services: + web: + build: . 
+ container_name: python-flask + ports: + - "8080:5000" + environment: + DB_HOST: db + DB_USER: ${MYSQL_USER} + DB_PASSWORD: ${MYSQL_PASSWORD} + DB_NAME: ${MYSQL_DATABASE} + networks: + - mynetwork + depends_on: + db: + condition: service_healthy + + db: + image: mysql:8.0 + container_name: mysql + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + MYSQL_DATABASE: ${MYSQL_DATABASE} + volumes: + - myvolume:/var/lib/mysql + networks: + - mynetwork + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] + interval: 10s + timeout: 5s + retries: 5 + start_period: 30s + restart: on-failure + +volumes: + myvolume: + +networks: + mynetwork: +``` + +--- + +## Docker Hub + +Pull the image directly: +```bash +docker pull uttamtripathi-p/flask-todo:v1.0 +``` + +--- + +## Key Concepts Applied + +- **Multi-container setup** with Docker Compose +- **Custom Dockerfile** for Flask app +- **Named volumes** for MySQL data persistence +- **Environment variables** via .env file +- **Healthchecks** — app waits for DB to be truly ready +- **Restart policy** — MySQL restarts on failure +- **Custom network** for service isolation + +--- + +## Commands + +```bash +docker-compose up -d # Start stack +docker-compose down # Stop and remove containers +docker-compose up --build -d # Rebuild after code changes +docker-compose logs web # View Flask logs +docker-compose ps # View running services +docker system df # Check disk usage +``` + +--- + +*Built as part of a DevOps learning journey 🚀* diff --git a/2026/day-36/flask-todo/app.py b/2026/day-36/flask-todo/app.py new file mode 100644 index 0000000000..4685ff7275 --- /dev/null +++ b/2026/day-36/flask-todo/app.py @@ -0,0 +1,157 @@ +from flask import Flask, jsonify, request, render_template_string +import mysql.connector +import os + +app = Flask(__name__) + +def get_db(): + return mysql.connector.connect( + host=os.getenv('DB_HOST', 'db'), + 
user=os.getenv('DB_USER', 'flaskuser'),
+ password=os.getenv('DB_PASSWORD', 'password'),
+ database=os.getenv('DB_NAME', 'flaskdb')
+ )
+
+
+HTML = '''
+<!DOCTYPE html>
+<html>
+<head>
+ <title>Dockerized Flask Todo</title>
+</head>
+<body>
+ <h1>🐳 Dockerized Flask Todo</h1>
+ <p>by Uttam Tripathi &nbsp;•&nbsp; Flask + MySQL + Docker Compose</p>
+ <h2>New Todo</h2>
+ <form method="POST" action="/todos">
+ <input type="text" name="task" placeholder="Add a task" required>
+ <button type="submit">Add</button>
+ </form>
+ <h2>My Todos <span>{{ todos|length }}</span></h2>
+ {% if todos %}
+ {% for todo in todos %}
+ <div>
+ <span>{{ todo[1] }}</span>
+ <form method="POST" action="/todos/{{ todo[0] }}/delete">
+ <button type="submit">Delete</button>
+ </form>
+ </div>
+ {% endfor %}
+ {% else %}
+ <p>No todos yet — add one above!</p>
+ {% endif %}
+</body>
+</html>
+'''
+@app.route('/')
+def home():
+ conn = get_db()
+ cursor = conn.cursor()
+ cursor.execute("SELECT * FROM todos")
+ todos = cursor.fetchall()
+ return render_template_string(HTML, todos=todos)
+
+@app.route('/todos', methods=['POST'])
+def add_todo():
+ task = request.form.get('task')
+ conn = get_db()
+ cursor = conn.cursor()
+ cursor.execute("INSERT INTO todos (task) VALUES (%s)", (task,))
+ conn.commit()
+ return home()
+
+@app.route('/todos/<int:id>/delete', methods=['POST'])
+def delete_todo(id):
+ conn = get_db()
+ cursor = conn.cursor()
+ cursor.execute("DELETE FROM todos WHERE id = %s", (id,))
+ conn.commit()
+ return home()
+
+if __name__ == '__main__':
+ app.run(host='0.0.0.0', port=5000) diff --git a/2026/day-36/flask-todo/docker-compose.yml b/2026/day-36/flask-todo/docker-compose.yml new file mode 100644 index 0000000000..175d244405 --- /dev/null +++ b/2026/day-36/flask-todo/docker-compose.yml @@ -0,0 +1,41 @@ +services: + web: + build: . + container_name: python-flask + environment: + DB_HOST: db + DB_USER: ${MYSQL_USER} + DB_PASSWORD: ${MYSQL_PASSWORD} + DB_NAME: ${MYSQL_DATABASE} + ports: + - "8080:5000" + networks: + - mynetwork + depends_on: + db: + condition: service_healthy + + db: + image: mysql:8.0 + container_name: mysql + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + MYSQL_DATABASE: ${MYSQL_DATABASE} + volumes: + - myvolume:/var/lib/mysql + networks: + - mynetwork + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost"] + interval: 10s + timeout: 5s + retries: 5 + start_period: 30s + restart: on-failure + +volumes: + myvolume: + +networks: + mynetwork: + diff --git a/2026/day-36/flask-todo/requirements.txt b/2026/day-36/flask-todo/requirements.txt new file mode 100644 index 0000000000..12bbdcbccf --- /dev/null +++ b/2026/day-36/flask-todo/requirements.txt @@ -0,0 +1,2 @@ +flask +mysql-connector-python diff --git
a/2026/day-37/day-37-revision.md b/2026/day-37/day-37-revision.md new file mode 100644 index 0000000000..95d705e079 --- /dev/null +++ b/2026/day-37/day-37-revision.md @@ -0,0 +1,52 @@ +## Container commands + +run= starts a container from an image +ps= lists all running containers +stop= stops the specified container +rm= removes the container +exec= lets you run a command inside a running container +logs= shows logs of the specified container (docker logs <container>) + +## Image commands — + +build= used to build an image from a Dockerfile +pull= used to pull an image from Docker Hub +push= used to push an image from your local machine to Docker Hub +tag= gives a tag to an image before pushing it to Docker Hub +ls= shows images available locally +rmi= used to remove the specified image (docker rmi <image>) + +## Volume commands — + +create= used to create a new volume +ls= shows all volumes +inspect= inspects a volume, shows info like created-on, mountpoint, etc. +rm= removes an existing volume + +## Network commands — +create= creates a new network +ls= shows all available networks +inspect= inspects a network, shows info like attached containers, created-on and config info.
+connect= connects a container to the specified network + +## Compose commands — + +up= builds images and runs containers for the services defined in the compose file +down= stops and removes all containers and networks started by compose +ps= shows all running containers started from the compose file +logs= shows logs of the compose services +build= only builds images for the services and doesn't start containers + +## Cleanup commands — + +prune= removes unused objects, whether image, container, network or volume, e.g. (docker image prune; removes dangling images); a single command to remove all unused objects is (docker system prune) +system df= displays information about the amount of disk space consumed by the Docker daemon + +## Dockerfile instructions — + +FROM= tells our Dockerfile which base image to use +RUN= a command to be executed during the image build process +WORKDIR= sets the working directory (which opens first when you enter a container) +EXPOSE= documents which port the app listens on (it doesn't actually publish the port) +COPY= copies files from the local system to the image +CMD= sets the default command that runs when the container starts, and it can be overridden at runtime (command is written as a JSON array) +ENTRYPOINT= you can write the full command, but the key difference from CMD is that ENTRYPOINT is harder to override at runtime, making it better for defining the main executable of a container diff --git a/2026/day-37/docker-cheatsheet.md b/2026/day-37/docker-cheatsheet.md new file mode 100644 index 0000000000..4b86804e8c --- /dev/null +++ b/2026/day-37/docker-cheatsheet.md @@ -0,0 +1,102 @@ +# 🐳 Docker Revision Notes + +--- + +## 🔲 Container Commands + +| Command | Description | +|--------|-------------| +| `docker run` | Starts a container from an image | +| `docker ps` | Lists all running containers | +| `docker stop <container>` | Stops the specified container | +| `docker rm <container>` | Removes the specified container | +| `docker exec` | Runs a command inside a running container | +| `docker logs <container>` | 
Shows logs of the specified container | + +--- + +## 🖼️ Image Commands + +| Command | Description | +|--------|-------------| +| `docker build` | Builds an image from a Dockerfile | +| `docker pull` | Pulls an image from Docker Hub | +| `docker push` | Pushes a local image to Docker Hub | +| `docker tag` | Tags an image before pushing to Docker Hub | +| `docker image ls` | Shows all locally available images | +| `docker rmi <image>` | Removes the specified image | + +--- + +## 💾 Volume Commands + +| Command | Description | +|--------|-------------| +| `docker volume create` | Creates a new volume | +| `docker volume ls` | Lists all volumes | +| `docker volume inspect` | Shows volume info (created-on, mountpoint, etc.) | +| `docker volume rm` | Removes an existing volume | + +--- + +## 🌐 Network Commands + +| Command | Description | +|--------|-------------| +| `docker network create` | Creates a new network | +| `docker network ls` | Lists all available networks | +| `docker network inspect` | Shows network info (attached containers, config, etc.) 
| +| `docker network connect` | Connects a container to a specified network | + +--- + +## 🧩 Compose Commands + +| Command | Description | +|--------|-------------| +| `docker compose up` | Builds images and starts containers for all services in the compose file | +| `docker compose down` | Stops and removes all compose containers and networks | +| `docker compose ps` | Lists all running containers started from the compose file | +| `docker compose logs` | Shows logs for the compose file and its services | +| `docker compose build` | Only builds images for services — does NOT start containers | + +--- + +## 🧹 Cleanup Commands + +| Command | Description | +|--------|-------------| +| `docker image prune` | Removes all unused images | +| `docker container prune` | Removes all stopped containers | +| `docker network prune` | Removes all unused networks | +| `docker volume prune` | Removes all unused volumes | +| `docker system prune` | Removes ALL unused objects (images, containers, networks, volumes) in one command | +| `docker system df` | Shows disk space consumed by the Docker daemon | + +--- + +## 📄 Dockerfile Instructions + +| Instruction | Description | +|------------|-------------| +| `FROM` | Specifies the base image to use | +| `RUN` | Executes a command during the **image build** process | +| `COPY` | Copies files from your local system into the image | +| `WORKDIR` | Sets the working directory (opened by default when you enter a container) | +| `EXPOSE` | Documents which port the app listens on — does **not** actually publish the port (use `-p` in `docker run` for that) | +| `CMD` | Sets the **default command** when the container starts — can be **overridden** at runtime — written as a JSON array e.g. 
`["node", "app.js"]` | +| `ENTRYPOINT` | Defines the **main executable** — harder to override at runtime — better for fixed entrypoints | + +### CMD vs ENTRYPOINT + +| | `CMD` | `ENTRYPOINT` | +|--|-------|--------------| +| Purpose | Default command | Main executable | +| Overridable at runtime? | ✅ Yes, easily | ❌ Only with `--entrypoint` flag | +| Often used together? | ✅ Yes | ✅ Yes | + +> **Tip:** Use `ENTRYPOINT` for the fixed command and `CMD` for default arguments that can be overridden. + +--- + +*Happy Dockering! 🚀* diff --git a/2026/day-38/day-38-yaml.md b/2026/day-38/day-38-yaml.md new file mode 100644 index 0000000000..9f2cb2f131 --- /dev/null +++ b/2026/day-38/day-38-yaml.md @@ -0,0 +1,16 @@ +# Two ways to write a list in yml + +## Block Style (multi-line) +### Each item on its own line +### Starts with a dash - and a space +### More readable, preferred for longer lists + +## Inline / Flow Style +### All items in a single line inside [ ] +### Separated by commas +### More compact, preferred for short lists + + +# When would you use | vs >? +## | (literal block) keeps line breaks, so use it when each line must stay separate, e.g. a script where every command runs on its own line. +## > (folded block) joins the lines into one, so use it for long prose like a description or summary. 
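The two block-scalar styles can be shown side by side; a small sketch (the key names are made up for illustration):

```yaml
# | (literal block) keeps the line breaks: each command stays on its own line
install_script: |
  apt-get update
  apt-get install -y nginx

# > (folded block) joins the lines with spaces: good for long prose
summary: >
  This long description
  is folded into
  a single line.
```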
diff --git a/2026/day-38/person.yml b/2026/day-38/person.yml new file mode 100644 index 0000000000..c8c5b159fe --- /dev/null +++ b/2026/day-38/person.yml @@ -0,0 +1,12 @@ +name: uttam +role: linux and devops roles +experience_years: fresher +learning: true +tools: + - docker containerization and compose + - linux + - kubernetes + - GitHub Actions CI/CD + - git/github +hobbies: [Debugging, Teaching] + diff --git a/2026/day-38/server.yml b/2026/day-38/server.yml new file mode 100644 index 0000000000..8144da1d98 --- /dev/null +++ b/2026/day-38/server.yml @@ -0,0 +1,34 @@ +server: + name: + - nginx + - apache + ip: + - 172.173.232.0 + - 182.211.132.3 + port: + - 80 + - 5000 +database: + host: + - mysql + - flask + name: + - etcd + - sqld + credentials: + user: + - uttam + - tripathi + password: + - uttam123 + - tripathi123 + +startup_script: + run: | + It will keep + every line separate + runs: > + It will fold + this text into + a single line + diff --git a/2026/day-39/cicdpipeline.png b/2026/day-39/cicdpipeline.png new file mode 100644 index 0000000000..350bdef897 Binary files /dev/null and b/2026/day-39/cicdpipeline.png differ diff --git a/2026/day-39/day-39-cicd-concepts.md b/2026/day-39/day-39-cicd-concepts.md new file mode 100644 index 0000000000..73673ba03c --- /dev/null +++ b/2026/day-39/day-39-cicd-concepts.md @@ -0,0 +1,27 @@ +# Think about a team of 5 developers all pushing code to the same repo and manually deploying to production. +## What can go wrong? +One developer's broken code can bring down the whole deployment, and simultaneous pushes can cause merge conflicts. + +## What does "it works on my machine" mean and why is it a real problem? +Each developer's machine can have different dependency versions or configuration, so code that runs locally may fail everywhere else. That is what makes it a real problem. + +## How many times a day can a team safely deploy manually? +At max, 2-3 times. 
+ +# Pipeline Anatomy +A pipeline has these parts — + +## Trigger — tells the pipeline when to start (e.g. someone pushes code or opens a pull request) +## Stage — a logical phase where build, test and deployment happen +## Job — the task/work to be executed +## Step — a single command or action inside a job +## Runner — the machine that executes the job (e.g. a virtual or local machine) +## Artifact — output produced by a job + + +## CI/CD/CD refers to three related but distinct practices in modern software development: +Continuous Integration (CI) is the practice of frequently merging developer code changes into a shared repository — often multiple times a day. Each merge triggers an automated build and test pipeline to catch integration bugs early. The goal is to detect problems as soon as they're introduced rather than at the end of a long development cycle. + +## Continuous Delivery (CD) extends CI by automatically preparing every passing build for release to a staging or production-like environment. The code is always in a deployable state, but an actual deployment to production requires a manual approval step. This gives teams control over when to release while ensuring the software is ready to release at any time. + +## Continuous Deployment (CD) goes one step further — every change that passes all automated tests is deployed to production automatically, with no human intervention. This is the most advanced practice and requires a very mature test suite and high confidence in automation. 
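The pipeline parts listed above map directly onto GitHub Actions terms; a minimal sketch (the workflow and step names are illustrative):

```yaml
name: anatomy-demo
on: push                     # Trigger: start on every push
jobs:
  build:                     # Job: a unit of work
    runs-on: ubuntu-latest   # Runner: the machine that executes the job
    steps:                   # Steps: single commands or actions inside the job
      - uses: actions/checkout@v4
      - name: Build
        run: echo "building..." > build.log
      - name: Upload artifact            # Artifact: output produced by the job
        uses: actions/upload-artifact@v4
        with:
          name: build-log
          path: build.log
```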
diff --git a/2026/day-39/pipeline-dev.jpeg b/2026/day-39/pipeline-dev.jpeg new file mode 100644 index 0000000000..850ac5e419 Binary files /dev/null and b/2026/day-39/pipeline-dev.jpeg differ diff --git a/2026/day-40/.github/workflows/hello.yml b/2026/day-40/.github/workflows/hello.yml new file mode 100644 index 0000000000..ddeb51208d --- /dev/null +++ b/2026/day-40/.github/workflows/hello.yml @@ -0,0 +1,25 @@ +name: hello +on: + push: + branches: + - master +jobs: + greet: + runs-on: ubuntu-latest + steps: + - name: checkout code + uses: actions/checkout@v4 + - name: to print hello + run: | + echo "Hello from github actions" + echo "$(date)" + echo "Branch name: ${{ github.ref_name }}" + - name: List files + run: ls -la + - name: printing operating system + run: hostnamectl + + + diff --git a/2026/day-40/day-40-first-workflow.md b/2026/day-40/day-40-first-workflow.md new file mode 100644 index 0000000000..3775b8ac77 --- /dev/null +++ b/2026/day-40/day-40-first-workflow.md @@ -0,0 +1,11 @@ +# Why a pipeline fails +## A pipeline fails when any step returns a non-zero exit code. Linux/bash convention is simple: + +## 0 = success +## anything else = failure +## When a step fails, GitHub Actions stops that job immediately and marks everything after it as skipped + +# How do you read the error? +## The mental order — always read bottom to top +## The last line tells you what failed, but the lines above tell you why. 
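The exit-code convention can be checked in any shell; a quick sketch, independent of Actions itself:

```shell
# Exit code 0 means success; any non-zero code is a failure.
true
echo "exit code of true: $?"    # prints 0

# Capture a failing command's code without aborting the script:
false || status=$?
echo "exit code of false: $status"    # prints 1

# grep exits 1 when nothing matches, which is enough to fail a CI step:
echo "hello" | grep -q "bye" || status=$?
echo "grep with no match: $status"    # prints 1
```

GitHub runs `run:` blocks with `bash -e -o pipefail` by default, so a failing command like the unguarded `grep` above would end the step immediately.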
+## Most people stare at the bottom and miss the actual reason sitting a few lines up diff --git a/2026/day-40/first-pipeline.png b/2026/day-40/first-pipeline.png new file mode 100644 index 0000000000..71aa24ffb3 Binary files /dev/null and b/2026/day-40/first-pipeline.png differ diff --git a/2026/day-40/gh-ac-key.jpeg b/2026/day-40/gh-ac-key.jpeg new file mode 100644 index 0000000000..4c814664db Binary files /dev/null and b/2026/day-40/gh-ac-key.jpeg differ diff --git a/2026/day-40/pipeline-dev.jpeg b/2026/day-40/pipeline-dev.jpeg new file mode 100644 index 0000000000..850ac5e419 Binary files /dev/null and b/2026/day-40/pipeline-dev.jpeg differ diff --git a/2026/day-41/.github/workflows/hello.yml b/2026/day-41/.github/workflows/hello.yml new file mode 100644 index 0000000000..3e08b835a0 --- /dev/null +++ b/2026/day-41/.github/workflows/hello.yml @@ -0,0 +1,23 @@ +name: hello +on: + schedule: + - cron: "0 0 * * *" +jobs: + greet: + runs-on: ubuntu-latest + steps: + - name: checkout code + uses: actions/checkout@v4 + - name: to print hello + run: | + echo "Hello from github actions" + echo "$(date)" + echo "Branch name: ${{ github.ref_name }}" + + - name: List files + run: ls -la + - name: printing operating system + run: hostnamectl + + + diff --git a/2026/day-41/.github/workflows/manual.yml b/2026/day-41/.github/workflows/manual.yml new file mode 100644 index 0000000000..244b339080 --- /dev/null +++ b/2026/day-41/.github/workflows/manual.yml @@ -0,0 +1,19 @@ +name: manual trigger + +on: + workflow_dispatch: + inputs: + environment: + description: 'Select environment to deploy to' + required: true + type: choice + options: + - staging + - production + +jobs: + deploy: + runs-on: ubuntu-latest + steps: + - name: Print selected environment + run: echo "Deploying to ${{ github.event.inputs.environment }}" diff --git a/2026/day-41/.github/workflows/matrix.yml b/2026/day-41/.github/workflows/matrix.yml new file mode 100644 index 0000000000..0652907e4d --- /dev/null +++ 
b/2026/day-41/.github/workflows/matrix.yml @@ -0,0 +1,25 @@ +name: matrix build +on: + push: + branches: [main] + +jobs: + py_matrix: + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + os: [ubuntu-latest] + python-version: ["3.10", "3.11", "3.12"] + exclude: + - os: ubuntu-latest + python-version: "3.10" + + steps: + - uses: actions/checkout@v4 + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + + diff --git a/2026/day-41/.github/workflows/pr_check.yml b/2026/day-41/.github/workflows/pr_check.yml new file mode 100644 index 0000000000..8a5279c024 --- /dev/null +++ b/2026/day-41/.github/workflows/pr_check.yml @@ -0,0 +1,16 @@ +name: pull request check +on: + pull_request: + branches: + - main + types: + - opened + - synchronize + +jobs: + pr_check: + runs-on: ubuntu-latest + steps: + - name: pr check running + run: | + echo "PR check running for branch: ${{ github.head_ref }}" diff --git a/2026/day-41/day-41-(1).png b/2026/day-41/day-41-(1).png new file mode 100644 index 0000000000..67102ce72d Binary files /dev/null and b/2026/day-41/day-41-(1).png differ diff --git a/2026/day-41/day-41-(2).png b/2026/day-41/day-41-(2).png new file mode 100644 index 0000000000..10287b77fd Binary files /dev/null and b/2026/day-41/day-41-(2).png differ diff --git a/2026/day-41/day-41-triggers.md b/2026/day-41/day-41-triggers.md new file mode 100644 index 0000000000..ecaa7a8bd7 --- /dev/null +++ b/2026/day-41/day-41-triggers.md @@ -0,0 +1,8 @@ +# What does fail-fast: true (the default) do vs false? +## fail-fast: true +### Cancels all remaining matrix jobs as soon as any one of them fails +## fail-fast: false +### Lets the remaining matrix jobs keep running even when one fails + +# What is the cron expression for every Monday at 9 AM? 
+## 0 9 * * 1 diff --git a/2026/day-42/day-42(2).png b/2026/day-42/day-42(2).png new file mode 100644 index 0000000000..40600f2dcb Binary files /dev/null and b/2026/day-42/day-42(2).png differ diff --git a/2026/day-42/day-42-runners.md b/2026/day-42/day-42-runners.md new file mode 100644 index 0000000000..d757e56dd3 --- /dev/null +++ b/2026/day-42/day-42-runners.md @@ -0,0 +1,11 @@ +# Why does it matter that runners come with tools pre-installed? +## Because the runner arrives ready with tools such as Docker and Git, a job can, for example, start a container immediately with no setup steps. + +# Why are labels useful when you have multiple self-hosted runners? +## When you have many self-hosted runners, labels let you target the right machine for the right job. + +# What is a GitHub-hosted runner? Who manages it? +## A GitHub-hosted runner is a virtual machine provided and fully managed by GitHub — including maintenance, updates, and scaling. + +# Why does it matter that runners come with tools pre-installed? +## It saves setup time since tools like Docker, Node.js, and Git are ready to use directly in your workflow without any installation steps. 
\ No newline at end of file diff --git a/2026/day-42/day-42.png b/2026/day-42/day-42.png new file mode 100644 index 0000000000..7ad2c23124 Binary files /dev/null and b/2026/day-42/day-42.png differ diff --git a/2026/day-42/self-hosted-runner-idle.png b/2026/day-42/self-hosted-runner-idle.png new file mode 100644 index 0000000000..3c983fb370 Binary files /dev/null and b/2026/day-42/self-hosted-runner-idle.png differ diff --git a/2026/day-43/day-43(2).png b/2026/day-43/day-43(2).png new file mode 100644 index 0000000000..2c01980e45 Binary files /dev/null and b/2026/day-43/day-43(2).png differ diff --git a/2026/day-43/day-43(3).png b/2026/day-43/day-43(3).png new file mode 100644 index 0000000000..1f7f90ee31 Binary files /dev/null and b/2026/day-43/day-43(3).png differ diff --git a/2026/day-43/day-43(4).png b/2026/day-43/day-43(4).png new file mode 100644 index 0000000000..814f27d25a Binary files /dev/null and b/2026/day-43/day-43(4).png differ diff --git a/2026/day-43/day-43-runners.md b/2026/day-43/day-43-runners.md new file mode 100644 index 0000000000..5d0c9e6a95 --- /dev/null +++ b/2026/day-43/day-43-runners.md @@ -0,0 +1,7 @@ +# Why would you pass outputs between jobs? +## Because jobs run in isolated environments — they cannot directly share variables or data with each other. +## So if Job A calculates something (e.g., version number, date, build status), and Job B needs that value — you must explicitly pass it via outputs. + +# A step with continue-on-error: true — what does this do? +## By default, if a step fails → the job stops and the remaining steps are skipped. +## With continue-on-error: true → the step can fail but the job keeps running normally. 
\ No newline at end of file diff --git a/2026/day-43/day-43.png b/2026/day-43/day-43.png new file mode 100644 index 0000000000..4a5fec639d Binary files /dev/null and b/2026/day-43/day-43.png differ diff --git a/2026/day-43/multi-job.yml b/2026/day-43/multi-job.yml new file mode 100644 index 0000000000..fb2a2888e3 --- /dev/null +++ b/2026/day-43/multi-job.yml @@ -0,0 +1,57 @@ +name: multi-job +on: workflow_dispatch +env: + APP_NAME: myapp +jobs: + build: + runs-on: ubuntu-latest + env: + ENVIRONMENT: staging + steps: + - name: build + env: + VERSION: 1.0.0 + run: | + echo "Building the app" + echo "The app name is $APP_NAME" + echo "The environment is $ENVIRONMENT" + echo "The version is $VERSION" + + test: + runs-on: ubuntu-latest + needs: build + steps: + - name: test + run: echo "Running tests" + - name: Run a specific script only on main branch + if: github.ref == 'refs/heads/main' + run: | + echo "Running additional tests for main branch" + deploy: + runs-on: ubuntu-latest + needs: test + steps: + - name: deploy + run: echo "Deploying" + outputs: + runs-on: ubuntu-latest + steps: + - name: set output + id: set_output + run: echo "date=$(date)" >> $GITHUB_OUTPUT + - name: read output + run: echo "The current date is ${{ steps.set_output.outputs.date}}" + - name: main step + id: main_step + run: echo "doing something" + + - name: This runs only if previous step failed + if: failure() + run: echo "Previous step failed, nothing to do" + - name: This step gets ignored if it failed and other runs normally + continue-on-error: true + run: | + echo "This step might fail but it won't affect the rest of the workflow" + - name: This step only runs on push events not on pull request + if: github.event_name == 'push' + run: echo "This runs only on push events" \ No newline at end of file diff --git a/2026/day-43/smart-pipeline.yml b/2026/day-43/smart-pipeline.yml new file mode 100644 index 0000000000..ffdc42b27e --- /dev/null +++ b/2026/day-43/smart-pipeline.yml @@ -0,0 
+1,38 @@ +name: smart pipeline for lint +on: + push: + branches: + - '**' + +jobs: + lint: + runs-on: ubuntu-latest + steps: + - name: check out source repository + uses: actions/checkout@v6 + - name: set up python environment + uses: actions/setup-python@v6 + with: + python-version: '3.10' + - name: install linter dependencies + run: | + python -m pip install --upgrade pip + pip install flake8 + - name: run linter + run: | + flake8 . + echo "Linting completed successfully" + + test: + runs-on: ubuntu-latest + steps: + - name: This is a test step + run: echo "running tests" + summary: + needs: [lint, test] + runs-on: ubuntu-latest + steps: + - name: This is a summary step + run: | + echo "This push was made on branch ${{ github.ref }}" + echo "The commit message was ${{ github.event.head_commit.message }}" + \ No newline at end of file diff --git a/2026/day-44/day-44(2).png b/2026/day-44/day-44(2).png new file mode 100644 index 0000000000..de632e1e2a Binary files /dev/null and b/2026/day-44/day-44(2).png differ diff --git a/2026/day-44/day-44-secrets-artifacts.md b/2026/day-44/day-44-secrets-artifacts.md new file mode 100644 index 0000000000..9ee2ee34a3 --- /dev/null +++ b/2026/day-44/day-44-secrets-artifacts.md @@ -0,0 +1,7 @@ +# Why should you never print secrets in CI logs? +## Secret information like passwords, API keys, tokens, etc. can get leaked if printed in logs. + +# When would you use artifacts in a real pipeline? +## In a real pipeline, artifacts are used to pass build outputs between jobs — for example, compiling code in a build job and passing the binary to a test or deploy job without rebuilding it. They're also useful for storing reports like test results, code coverage, or security scan outputs so you can download and review them after the pipeline finishes. +# What are Secrets? +## Secrets are sensitive values like passwords, API keys, tokens etc. that should never be hardcoded directly in your code or workflow files. 
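A small shell sketch of the "check, don't print" pattern for secrets (the value here is a hypothetical stand-in; in a workflow it would arrive through `env:` from the secret store):

```shell
# Hypothetical stand-in; in CI this would be: env: MY_SECRET: ${{ secrets.MY_SECRET }}
MY_SECRET="s3cr3t-value"

# BAD: this would write the raw value into the build log.
# echo "$MY_SECRET"

# SAFER: assert that the secret exists without revealing it.
if [ -n "$MY_SECRET" ]; then
  echo "secret is set"
else
  echo "secret is missing" >&2
  exit 1
fi
```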
diff --git a/2026/day-44/day-44.png b/2026/day-44/day-44.png new file mode 100644 index 0000000000..e313075fc7 Binary files /dev/null and b/2026/day-44/day-44.png differ diff --git a/2026/day-44/secrets.yml b/2026/day-44/secrets.yml new file mode 100644 index 0000000000..c08d1bcd9e --- /dev/null +++ b/2026/day-44/secrets.yml @@ -0,0 +1,47 @@ +name: secrets +on: workflow_dispatch + +jobs: + secret: + runs-on: ubuntu-latest + steps: + - name: This step tells if secret exists or not + run: | + echo "The secret is set: ${{ secrets.MY_SECRET_MESSAGE != '' }}" + - name: passing secret in an environment variable + env: + MY_SECRET: ${{ secrets.MY_SECRET_MESSAGE }} + run: | + echo $MY_SECRET + - name: Login to Docker Hub + uses: docker/login-action@v3 + with: + username: ${{ secrets.DOCKER_USERNAME }} + password: ${{ secrets.DOCKER_TOKEN }} + + artifact: + runs-on: ubuntu-latest + steps: + - name: file creation step + run: | + echo "The test report is successful" >> main.test + + - name: saving my test file + uses: actions/upload-artifact@v4 + with: + name: main-test + path: main.test + + print: + runs-on: ubuntu-latest + needs: artifact + steps: + - name: Download artifact + uses: actions/download-artifact@v4 + with: + name: main-test + path: main.test + + - name: print the artifact + run: | + cat main.test/main.test diff --git a/2026/day-44/tests_ci.yml b/2026/day-44/tests_ci.yml new file mode 100644 index 0000000000..6dc4056a4f --- /dev/null +++ b/2026/day-44/tests_ci.yml @@ -0,0 +1,34 @@ +name: real tests +on: + push: + branches: [main] + +jobs: + running_script: + runs-on: ubuntu-latest + steps: + - name: checkout code + uses: actions/checkout@v4 + + - name: Running the code + run: | + chmod +x ./disk_check.sh + ./disk_check.sh + + + caching: + runs-on: ubuntu-latest + steps: + - name: checkout code + uses: actions/checkout@v4 + - name: cache node modules + uses: actions/cache@v4 + with: + path: ~/.npm + key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') 
}} + restored-keys: + ${{ runner.os }}-node- + + - name: Install dependencies + run: npm install + diff --git a/2026/day-45/Dockerfile b/2026/day-45/Dockerfile new file mode 100644 index 0000000000..7f13fd54a0 --- /dev/null +++ b/2026/day-45/Dockerfile @@ -0,0 +1,13 @@ +FROM python:3.11-slim + +WORKDIR /app + +COPY . . + +RUN pip install -r requirements.txt + +EXPOSE 5000 + +CMD ["python","app.py"] + + diff --git a/2026/day-45/app.py b/2026/day-45/app.py new file mode 100644 index 0000000000..7a814f7774 --- /dev/null +++ b/2026/day-45/app.py @@ -0,0 +1,40 @@ +from flask import Flask, jsonify +import psutil +import platform +from datetime import datetime + +app = Flask(__name__) + +@app.route("/") +def home(): + return jsonify({ + "message": "System Stats API is running", + "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S") + }) + +@app.route("/health") +def health(): + return jsonify({"status": "healthy"}), 200 + +@app.route("/stats") +def stats(): + return jsonify({ + "cpu_percent": psutil.cpu_percent(interval=1), + "memory": { + "total_mb": round(psutil.virtual_memory().total / 1024 / 1024, 2), + "used_mb": round(psutil.virtual_memory().used / 1024 / 1024, 2), + "percent": psutil.virtual_memory().percent + }, + "disk": { + "total_gb": round(psutil.disk_usage('/').total / 1024 / 1024 / 1024, 2), + "used_gb": round(psutil.disk_usage('/').used / 1024 / 1024 / 1024, 2), + "percent": psutil.disk_usage('/').percent + }, + "platform": platform.system(), + "python_version": platform.python_version() + }) + +if __name__ == "__main__": + app.run(host="0.0.0.0", port=5000) + + diff --git a/2026/day-45/docker_build-push.yml b/2026/day-45/docker_build-push.yml new file mode 100644 index 0000000000..5e0ebaf024 --- /dev/null +++ b/2026/day-45/docker_build-push.yml @@ -0,0 +1,30 @@ +name: Docker build and push +on: + push: + branches: [main] + +jobs: + build_and_push: + runs-on: ubuntu-latest + steps: + - name: code checkout + uses: actions/checkout@v4 + + - name: Set up 
Docker + uses: docker/setup-docker-action@v5 + + - name: Logging in docker hub + uses: docker/login-action@v3 + with: + username: ${{ secrets.DOCKER_USERNAME }} + password: ${{ secrets.DOCKER_TOKEN }} + + + - name: Build and push + uses: docker/build-push-action@v6 + with: + context: . + push: ${{ github.ref == 'refs/heads/main' }} + tags: | + uttamtripathi/auto_build_push:latest + uttamtripathi/auto_build_push:${{ github.sha }} + diff --git a/2026/day-45/requirements.txt b/2026/day-45/requirements.txt new file mode 100644 index 0000000000..4a930b6379 --- /dev/null +++ b/2026/day-45/requirements.txt @@ -0,0 +1,2 @@ +flask +psutil diff --git a/2026/day-46/action.yml b/2026/day-46/action.yml new file mode 100644 index 0000000000..043660b8a1 --- /dev/null +++ b/2026/day-46/action.yml @@ -0,0 +1,40 @@ +name: Setup and Greet +description: A composite action that greets a user + +inputs: + name: + required: true + description: "Name of the person to greet" + language: + default: "en" + description: "Language of the greeting" + +outputs: + greeted: + description: "Whether the greeting was done" + value: ${{ steps.set-output.outputs.greeted }} + +runs: + using: composite + steps: + - name: Print greeting + shell: bash + run: | + if [ "${{ inputs.language }}" == "en" ]; then + echo "Hello, ${{ inputs.name }}!" + elif [ "${{ inputs.language }}" == "hi" ]; then + echo "Namaste, ${{ inputs.name }}!" + else + echo "Hey, ${{ inputs.name }}!" 
+ fi + + - name: Print date and OS + shell: bash + run: | + echo "Current date: $(date)" + echo "Runner OS: ${{ runner.os }}" + + - name: Set greeted output + id: set-output + shell: bash + run: echo "greeted=true" >> $GITHUB_OUTPUT diff --git a/2026/day-46/call-build.yml b/2026/day-46/call-build.yml new file mode 100644 index 0000000000..e63853ebd8 --- /dev/null +++ b/2026/day-46/call-build.yml @@ -0,0 +1,19 @@ +name: call build +on: + push: + branches: [main] + +jobs: + build: + uses: ./.github/workflows/reusable-build.yml + with: + app_name: "my-webapp" + environment: "production" + secrets: + docker_token: ${{ secrets.DOCKER_TOKEN }} + test: + needs: build + runs-on: ubuntu-latest + steps: + - name: Print build version + run: echo ${{ needs.build.outputs.VERSION }} diff --git a/2026/day-46/day-46-notes.md b/2026/day-46/day-46-notes.md new file mode 100644 index 0000000000..72de08078b --- /dev/null +++ b/2026/day-46/day-46-notes.md @@ -0,0 +1,12 @@ +# What is a reusable workflow? +## A workflow that can be called and reused by other workflows instead of repeating the same steps in every workflow file. + +# What is the workflow_call trigger? +## It's the trigger that marks a workflow as reusable — meaning it can be called by another workflow instead of running on its own like push or pull request. + +# How is calling a reusable workflow different from using a regular action (uses:)? +## A regular action runs a single step — like login, build, checkout. A reusable workflow runs an entire job with multiple steps inside it. Think of action as one task and reusable workflow as a full pipeline. + +# Where must a reusable workflow file live? +## It must be inside the .github/workflows/ folder. And the repository must be either public or in the same organization to be called from another workflow. 
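The difference also shows in where `uses:` sits: at step level for an action, at job level for a reusable workflow. A sketch mirroring the files in this repo (job names are illustrative):

```yaml
jobs:
  via-action:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4    # regular action: one step inside a job
  via-reusable-workflow:
    uses: ./.github/workflows/reusable-build.yml   # reusable workflow: replaces the whole job
    with:
      app_name: demo
      environment: staging
    secrets:
      docker_token: ${{ secrets.DOCKER_TOKEN }}
```

Note that a job calling a reusable workflow cannot also define `runs-on` or `steps`; the called workflow supplies those.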
+ diff --git a/2026/day-46/day-46.png b/2026/day-46/day-46.png new file mode 100644 index 0000000000..0bb0d33fa1 Binary files /dev/null and b/2026/day-46/day-46.png differ diff --git a/2026/day-46/reusable-build.yml b/2026/day-46/reusable-build.yml new file mode 100644 index 0000000000..534ca7d5d7 --- /dev/null +++ b/2026/day-46/reusable-build.yml @@ -0,0 +1,37 @@ +name: reusable workflow +on: + workflow_call: + inputs: + app_name: + description: "App name" + required: true + type: string + environment: + description: "Environment" + required: true + default: "staging" + type: string + secrets: + docker_token: + required: true + outputs: + VERSION: + description: "output" + value: ${{ jobs.reusable.outputs.VERSION }} + +jobs: + reusable: + runs-on: ubuntu-latest + outputs: + VERSION: ${{ steps.version.outputs.VERSION }} + steps: + - name: code checkout + uses: actions/checkout@v4 + - name: building app + run: | + echo "building ${{ inputs.app_name }} for ${{ inputs.environment }}" + echo "docker_token is set: ${{ secrets.docker_token != ''}}" + - name: Generate version + id: version + run: echo "VERSION=v1.0-$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT + diff --git a/2026/day-46/test-composite.yml b/2026/day-46/test-composite.yml new file mode 100644 index 0000000000..0fcea41f5f --- /dev/null +++ b/2026/day-46/test-composite.yml @@ -0,0 +1,21 @@ +name: Test Composite Action +on: + push: + branches: [main] + +jobs: + greet: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + + - name: Run composite action + id: greet + uses: ./.github/actions/setup-and-greet + with: + name: "Uttam" + language: "hi" + + - name: Print output + run: echo "Greeted status = ${{ steps.greet.outputs.greeted }}" diff --git a/2026/day-47/day-47-advanced-triggers.md b/2026/day-47/day-47-advanced-triggers.md new file mode 100644 index 0000000000..f3673272fa --- /dev/null +++ b/2026/day-47/day-47-advanced-triggers.md @@ -0,0 +1,23 @@ +# Why GitHub says 
scheduled workflows may be delayed or skipped on inactive repos? + +## Because GitHub runs scheduled workflows on shared infrastructure — millions of repos use it. +## So GitHub prioritizes active repos over inactive ones. + +# When would you use paths vs paths-ignore? +## Use paths when you want the workflow to run only when files matching a pattern change; use paths-ignore when changes to files matching a pattern should not trigger the workflow. + +# When would an external system (like a Slack bot or monitoring tool) trigger a pipeline? + +## If it hits the repo's API +## A Slack bot has a /deploy command → someone types it → bot hits GitHub API → workflow runs +## A monitoring tool detects server is down → automatically hits GitHub API → workflow runs to restart server + +# Explanation of workflow_run vs workflow_call +## workflow_run triggers one workflow when another named workflow completes (optionally only on success or failure). +## workflow_call marks a workflow as reusable so other workflows can call it like a function. + +# The cron expression for: every weekday at 9 AM IST +## '30 3 * * 1-5' (GitHub cron uses UTC; 9:00 AM IST = 03:30 UTC) + +# The cron expression for: first day of every month at midnight +## 0 0 1 * * diff --git a/2026/day-47/day-47.png b/2026/day-47/day-47.png new file mode 100644 index 0000000000..57a415eaf7 Binary files /dev/null and b/2026/day-47/day-47.png differ diff --git a/2026/day-47/deploy-after-tests.yml b/2026/day-47/deploy-after-tests.yml new file mode 100644 index 0000000000..19e3d3d426 --- /dev/null +++ b/2026/day-47/deploy-after-tests.yml @@ -0,0 +1,21 @@ +name: deploy +on: + workflow_run: + workflows: ["Run Tests"] + types: [completed] + +jobs: + condition: + runs-on: ubuntu-latest + steps: + - name: condition to fulfill + run: | + + if [ "${{ github.event.workflow_run.conclusion }}" == 'success' ]; then + echo "The workflow is running successfully" + else + echo "Trigger Workflow failed" + exit 1 + fi + - name: deploy-tests + run: echo "This message is printed when workflow is triggered after successful completion of another workflow" \ No newline at end of file diff --git 
a/2026/day-47/external-trigger.yml b/2026/day-47/external-trigger.yml new file mode 100644 index 0000000000..3c5c0a17ac --- /dev/null +++ b/2026/day-47/external-trigger.yml @@ -0,0 +1,20 @@ +name: Repository dispatch trigger +on: + repository_dispatch: + types: ["deploy-request"] + + +jobs: + external_trigger: + runs-on: ubuntu-latest + steps: + - name: Respond to event type + run: echo "${{ github.event.client_payload.environment }}" + + + + + + + + diff --git a/2026/day-47/pr-checks.yml b/2026/day-47/pr-checks.yml new file mode 100644 index 0000000000..545063fb37 --- /dev/null +++ b/2026/day-47/pr-checks.yml @@ -0,0 +1,60 @@ +name: Pr validation workflow +on: + pull_request: + branches: [main] + + +jobs: + file-size-check: + runs-on: ubuntu-latest + steps: + - name: checkout code + uses: actions/checkout@v4 + - name: quick fail if file > 1MB + run: | + echo "Checking file sizes in PR" + + for file in $(find . -type f -not -path "./.git/*");do + size=$(stat -c%s "$file") + if [ "$size" -gt 1048576 ]; then + echo "❌ $file is larger than 1MB" + exit 1 + fi + done + + echo "✅ All files are under 1MB" + + branch-name-check: + runs-on: ubuntu-latest + steps: + - name: branch name check + run: echo "${{ github.head_ref }}" + - name: failing if branch is not in recognized pattern + env: + BRANCH: "${{ github.head_ref }}" + run: | + + if [[ "$BRANCH" != feature/* && \ + "$BRANCH" != fix/* && \ + "$BRANCH" != docs/* ]]; then + echo "((FAILED)) '$BRANCH' does not match allowed patterns." 
+ echo "Allowed= feature/*, fix/*, docs/*" + exit 1 + fi + + echo "✅ branch pattern is verified" + + + pr-body-check: + runs-on: ubuntu-latest + steps: + - name: Read the PR body + run: | + if [ -z "${{ github.event.pull_request.body }}" ]; then + echo "The PR body is empty" + else + echo "${{ github.event.pull_request.body }}" + fi + + diff --git a/2026/day-47/pr-lifecycle.yml b/2026/day-47/pr-lifecycle.yml new file mode 100644 index 0000000000..341f5ba32e --- /dev/null +++ b/2026/day-47/pr-lifecycle.yml @@ -0,0 +1,20 @@ +name: pull request events +on: + pull_request: + types: [opened, synchronize, reopened, closed] + + +jobs: + log-event: + runs-on: ubuntu-latest + steps: + - name: print trigger action + run: echo "${{ github.event.action }}" + - name: print pr title + run: echo "${{ github.event.pull_request.title }}" + - name: print pull request author + run: echo "${{ github.event.pull_request.user.login }}" + - name: print source branch + run: echo "${{ github.head_ref }}" + - name: print target branch + run: echo "${{ github.base_ref }}" diff --git a/2026/day-47/scheduled-tasks.yml b/2026/day-47/scheduled-tasks.yml new file mode 100644 index 0000000000..4decbf6476 --- /dev/null +++ b/2026/day-47/scheduled-tasks.yml @@ -0,0 +1,28 @@ +name: scheduled workflows with cronjob +on: + workflow_dispatch: + schedule: + - cron: '30 2 * * 1' + - cron: '0 */6 * * *' + +jobs: + schedule_triggered: + runs-on: ubuntu-latest + steps: + - name: schedule trigger used + run: | + if [ -z "${{ github.event.schedule }}" ]; then + echo "Triggered manually via workflow_dispatch" + else + echo "Triggered by cron: ${{ github.event.schedule }}" + fi + - name: Health check + run: | + response=$(curl -s -o /dev/null -w "%{http_code}" -L https://google.com ) + if [ "$response" -ne 200 ]; then + echo "Health check failed! Response code: $response" + exit 1 + else + echo "Health check passed!" 
+ fi + diff --git a/2026/day-47/smart-triggers-ignore.yml b/2026/day-47/smart-triggers-ignore.yml new file mode 100644 index 0000000000..fff3c7ae86 --- /dev/null +++ b/2026/day-47/smart-triggers-ignore.yml @@ -0,0 +1,17 @@ +name: files to ignore +on: + push: + branches: + - main + - release/* + + paths-ignore: + + - '*.md' + - 'docs/**' +jobs: + path_to_ignore: + runs-on: ubuntu-latest + steps: + - name: branch where the workflow ran + run: echo "${{ github.ref_name }}" \ No newline at end of file diff --git a/2026/day-47/smart-triggers.yml b/2026/day-47/smart-triggers.yml new file mode 100644 index 0000000000..17cbee2916 --- /dev/null +++ b/2026/day-47/smart-triggers.yml @@ -0,0 +1,15 @@ +name: smart triggers +on: + push: + branches: + - main + - release/* + paths: + - 'src/**' + - 'app/**' +jobs: + path_to_trigger: + runs-on: ubuntu-latest + steps: + - name: branch where the workflow ran + run: echo "${{ github.ref_name }}" \ No newline at end of file diff --git a/2026/day-47/tests.yml b/2026/day-47/tests.yml new file mode 100644 index 0000000000..502843bf89 --- /dev/null +++ b/2026/day-47/tests.yml @@ -0,0 +1,11 @@ +name: Run Tests +on: + push: + branches: + - '**' + +jobs: + tests: + runs-on: ubuntu-latest + steps: + - name: Running test + run: echo "The tests are running" \ No newline at end of file diff --git a/2026/day-48/day-48(1).png b/2026/day-48/day-48(1).png new file mode 100644 index 0000000000..9f043219b0 Binary files /dev/null and b/2026/day-48/day-48(1).png differ diff --git a/2026/day-48/day-48-actions-project.md b/2026/day-48/day-48-actions-project.md new file mode 100644 index 0000000000..ae41cb0b09 --- /dev/null +++ b/2026/day-48/day-48-actions-project.md @@ -0,0 +1,270 @@ +# pipeline architecture + +# workflow files(.yml) +## main-pipeline.yml +```yaml +name: main branch pipeline + +on: + push: + branches: [master] + +jobs: + build-test: + uses: ./.github/workflows/reusable-build-test.yml + with: + run_tests: true + + prep: + runs-on: ubuntu-latest + outputs: + 
short_sha: ${{ steps.vars.outputs.short_sha }} + steps: + - id: vars + run: echo "short_sha=$(echo $GITHUB_SHA | cut -c1-7)" >> $GITHUB_OUTPUT + + build-push: + uses: ./.github/workflows/reusable-docker.yml + needs: [ prep,build-test ] + with: + image_name: ${{ github.event.repository.name }} + tag: ${{ needs.prep.outputs.short_sha }} + secrets: + docker_username: ${{ secrets.DOCKER_USERNAME }} + docker_token: ${{ secrets.DOCKER_TOKEN }} + + deploy: + runs-on: ubuntu-latest + needs: [build-push, prep] + environment: environment + steps: + - name: deploy message + run: | + echo "Deploying image: ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:${{ needs.prep.outputs.short_sha }}" + - name: environment info + run: | + echo "The environment being used is : ${{ vars.SITE }}" + - name: success message + if: success() + run: echo "SUCCESSFUL" +``` +## health-check.yml + +```yaml +name: health check +on: + workflow_dispatch: + schedule: + - cron: '0 */12 * * *' + +jobs: + pull_image: + runs-on: ubuntu-latest + outputs: + health_check_result: ${{ steps.summary.outputs.github_output }} + steps: + - name: pull image + run: | + docker pull ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:latest + + - name: run container + run: | + docker rm -f health_check_container || true + docker run -d -p 5000:5000 --name health_check_container \ + ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:latest + + - name: healthcheck after waiting 5 seconds + run: | + sleep 5 + if curl -sf http://localhost:5000/health; then + echo "Health check passed" + else + echo "Health check failed" + exit 1 + fi + + - name: cleanup + if: always() # ✅ always runs + run: | + docker rm -f health_check_container || true + + - name: summary step + id: summary + if: always() # ✅ always runs + run: | + if [ "${{ job.status }}" == "success" ]; then + STATUS="PASSED ✅" + else + STATUS="FAILED ❌" + fi + echo "## Health Check Report" >> $GITHUB_STEP_SUMMARY + echo "- Image: ${{ 
secrets.DOCKER_USERNAME }}/github-actions-capstone:latest" >> $GITHUB_STEP_SUMMARY + echo "- Status: $STATUS" >> $GITHUB_STEP_SUMMARY + echo "- Time: $(date)" >> $GITHUB_STEP_SUMMARY + echo "github_output=$STATUS" >> $GITHUB_OUTPUT +``` +## pr-pipeline.yml + +```yaml +name: pull requests pipeline +on: + pull_request: + branches: [master] + types: [ opened , synchronize] + +jobs: + pr-pipeline: + uses: ./.github/workflows/reusable-build-test.yml + with: + run_tests: true + pr-comment: + runs-on: ubuntu-latest + needs: pr-pipeline + steps: + - name: pr checks + run: | + echo "PR checks passed for branch: ${{ github.ref }}" +``` + +## reusable-build-test.yml + +```yaml +name: reusable worfklow build & test +on: + workflow_call: + inputs: + python_version: + description: "python version to use" + default: "3.13" + required: false + type: string + run_tests: + description: "Tests to run" + type: boolean + default: true + required: false + outputs: + test-result: + description: "Test value passed or failed" + value: ${{ jobs.build-and-test.outputs.test_result }} + +jobs: + build-and-test: + runs-on: ubuntu-latest + outputs: + test_result: ${{ steps.set_result.outputs.test_result }} + steps: + - name: code checkout + uses: actions/checkout@v4 + - name: setup language runtime + uses: actions/setup-python@v5 + with: + python-version: ${{ inputs.python_version }} + - name: installing dependencies + run: | + pip install -r requirements.txt + pip install -r requirements-cicd.txt + - name: run tests + id: run_tests + if: ${{ inputs.run_tests }} + run: | + flake8 app.py + - name: set output + id: set_result + if: always() + run: | + if [[ "${{ steps.run_tests.outcome }}" == "success" || "${{ steps.run_tests.outcome }}" == "skipped" ]]; then + echo "test_result=passed" >> $GITHUB_OUTPUT + else + echo "test_result=failed" >> $GITHUB_OUTPUT + fi +``` +## reuable-docker.yml +```yaml +name: reusable workflow docker build & push +on: + workflow_call: + inputs: + image_name: + 
description: "name of image" + required: true + type: string + tag: + description: "tag of the image" + required: true + type: string + outputs: + image_url: + description: "full image path" + value: ${{ jobs.build-and-push.outputs.image_url }} + + secrets: + docker_username: + description: "dockerhub username" + required: true + docker_token: + description: "dockerhub secret token" + required: true + + +jobs: + build-and-push: + runs-on: ubuntu-latest + outputs: + image_url: ${{ steps.image_url.outputs.image_url }} + steps: + - name: checkout code + uses: actions/checkout@v4 + + - name: set lowercase image name # ✅ add this step + id: image + run: | + echo "name=$(echo '${{ inputs.image_name }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT + + - name: login to docker hub + uses: docker/login-action@v3 + with: + username: ${{ secrets.docker_username }} + password: ${{ secrets.docker_token }} + + - name: build and push + uses: docker/build-push-action@v6 + with: + context: . + push: true + tags: | + ${{ secrets.docker_username }}/${{ steps.image.outputs.name }}:latest + ${{ secrets.docker_username }}/${{ steps.image.outputs.name }}:${{ inputs.tag }} + + - name: image url output + id: image_url + if: success() + run: | + echo "image_url=${{ secrets.docker_username }}/${{ steps.image.outputs.name }}:${{ inputs.tag }}" >> $GITHUB_OUTPUT +``` + +# Screenshot of a PR running the test-only pipeline + + +![alt text](day-48(1).png) + + + +# Screenshot of a main branch push running the full pipeline + + +![alt text](day-48.png) + + +# Docker Hub link to my pushed image +## https://hub.docker.com/repository/docker/uttamtripathi/github-actions-capstone + + +# What I'll improve next + +## Currently my app deploys without any manual approval gate. +## Next, I'll protect the deploy job with a GitHub environment that requires reviewer approval before it runs. + + + + diff --git a/2026/day-48/day-48.png b/2026/day-48/day-48.png new file mode 100644 index 0000000000..695de0d087 Binary files /dev/null and b/2026/day-48/day-48.png differ diff --git
a/2026/day-50/.gitignore b/2026/day-50/.gitignore new file mode 100644 index 0000000000..4680e38f01 --- /dev/null +++ b/2026/day-50/.gitignore @@ -0,0 +1,2 @@ +minikube-linux-amd64 +kubectl diff --git a/2026/day-50/day-50-k8s-setup.md b/2026/day-50/day-50-k8s-setup.md new file mode 100644 index 0000000000..0297afb0a0 --- /dev/null +++ b/2026/day-50/day-50-k8s-setup.md @@ -0,0 +1,16 @@ +# Architecture diagram +## ![alt text](k8s-arch.jpeg) + +# kubectl get nodes & kubectl get pods +## ![alt text](kubectl.png) + +# Why was Kubernetes created? What problem does it solve that Docker alone cannot? +## Kubernetes was created because Docker alone cannot manage hundreds of containers across multiple hosts, cannot auto-heal failed containers, cannot do load balancing, and cannot scale them automatically based on traffic/load. + +# Who created Kubernetes and what was it inspired by? +## Kubernetes was created by Google, inspired by their internal system called Borg (which Google used to manage their own infrastructure), and was open-sourced in 2014 so that others could also contribute. + +# What does the name "Kubernetes" mean? +## It means Helmsman or Pilot (the person who steers a ship) in Greek. That's why the Kubernetes logo is a ship's wheel. + + diff --git a/2026/day-50/k8s-arch.jpeg b/2026/day-50/k8s-arch.jpeg new file mode 100644 index 0000000000..7981482634 Binary files /dev/null and b/2026/day-50/k8s-arch.jpeg differ diff --git a/2026/day-50/kubectl.png b/2026/day-50/kubectl.png new file mode 100644 index 0000000000..aba3c6dd29 Binary files /dev/null and b/2026/day-50/kubectl.png differ diff --git a/2026/day-51/day-51-pods.md b/2026/day-51/day-51-pods.md new file mode 100644 index 0000000000..e7b2d9f1f5 --- /dev/null +++ b/2026/day-51/day-51-pods.md @@ -0,0 +1,53 @@ +# The four required fields of a Kubernetes manifest and what each does? +## apiVersion — which version of the Kubernetes API to use (e.g. apps/v1, v1) +## kind — the type of resource to create (e.g. 
Deployment, Service, Pod) +## metadata — identifying info about the object, at minimum a name +## spec — the desired state of the object; what you want Kubernetes to create/maintain + +## nginx pod: +```yml +apiVersion: v1 +kind: Pod +metadata: + name: nginx-pod +spec: + containers: + - name: nginx + image: nginx +``` + +## busybox pod: +```yml +apiVersion: v1 +kind: Pod +metadata: + name: busybox-pod +spec: + containers: + - name: busybox + image: busybox + command: ["sleep", "3600"] +``` + +## third pod (alpine): + +```yml +apiVersion: v1 +kind: Pod +metadata: + name: third-pod +spec: + containers: + - name: alpine + image: alpine + command: ["sleep", "3600"] +``` + +## Difference between imperative (kubectl run) and declarative (kubectl apply -f) +### Imperative (kubectl run) — you tell Kubernetes what to do directly via a command, quick but not reproducible. Declarative (kubectl apply -f) — you define the desired state in a YAML file and Kubernetes figures out how to get there, repeatable and version-controllable. + +## What happens when you delete a standalone Pod? +### It's gone permanently. Unlike a Pod managed by a Deployment or ReplicaSet, there's no controller watching it — so Kubernetes does not reschedule or recreate it. 
+ + + diff --git a/2026/day-52/day-52(1).png b/2026/day-52/day-52(1).png new file mode 100644 index 0000000000..f4746270e2 Binary files /dev/null and b/2026/day-52/day-52(1).png differ diff --git a/2026/day-52/day-52(2).png b/2026/day-52/day-52(2).png new file mode 100644 index 0000000000..5941f253dd Binary files /dev/null and b/2026/day-52/day-52(2).png differ diff --git a/2026/day-52/day-52(3).png b/2026/day-52/day-52(3).png new file mode 100644 index 0000000000..4c1d7206cc Binary files /dev/null and b/2026/day-52/day-52(3).png differ diff --git a/2026/day-52/day-52-namespaces-deployments.md b/2026/day-52/day-52-namespaces-deployments.md new file mode 100644 index 0000000000..f74184255b --- /dev/null +++ b/2026/day-52/day-52-namespaces-deployments.md @@ -0,0 +1,70 @@ +# Important screenshots of today's session +![alt text](day-52(3)-1.png) + +![alt text](day-52(1)-1.png) + +![alt text](day-52(2)-1.png) + +![alt text](day-52-1.png) + + +```yml +# namespace.yml +kind: Namespace +apiVersion: v1 +metadata: + name: production +``` + + + + +## Deployment manifest with explanation +```yml +# deployment.yml +kind: Deployment # what resource to create +apiVersion: apps/v1 # which API group the resource belongs to +metadata: # identity/info of the resource + name: nginx-deployment # what name to give + namespace: dev # which namespace it belongs to + labels: # labels act as identification marks + app: nginx +spec: # the Deployment's desired state: replicas and pod template + replicas: 5 + selector: + matchLabels: # find and own pods that have these labels + app: nginx # Label to use + template: # blueprint for creating each pod + metadata: # identity of each pod that gets created + labels: + app: nginx # every pod gets this label + + spec: # specification of container + containers: + - name: nginx # name of the container + image: nginx:1.24 # Image to use + ports: # port the container listens on + - containerPort: 80 +``` + +## What namespaces are and why you 
would use them? +### Namespaces are like separate environments inside the cluster — like different rooms inside a house, so a request doesn't end up in the wrong room. +### We use them to avoid name conflicts: two teams can each run a pod named 'nginx' as long as they are in different namespaces. + +## What happens when you delete a Pod managed by a Deployment vs a standalone Pod? +### A Pod deleted from a Deployment gets recreated, because the controller maintains the desired state declared in the manifest. +### A Deployment creates one (or more during updates) ReplicaSet, and the ReplicaSet maintains the desired number of Pods. +### A standalone Pod, by contrast, is not recreated. + +## How scaling works (both imperative and declarative) +### Imperative scaling → you manually run a command (like kubectl scale) to change replicas right now. No automation. +### Declarative scaling → you define desired state in YAML (replicas: 3), and Kubernetes ensures it stays that way. + +## How rolling updates and rollbacks work +### In a rolling update, old pods keep running while new pods are created; once the new pods are ready, the old pods are deleted. +### Rollback = going back to a previous working version of your Deployment. 
+### How it actually works +#### Every time you update a Deployment, Kubernetes keeps revision history ((ReplicaSets)) +#### If the new version is broken, you can revert to a previous ReplicaSet + + diff --git a/2026/day-52/day-52.png b/2026/day-52/day-52.png new file mode 100644 index 0000000000..6224a5220d Binary files /dev/null and b/2026/day-52/day-52.png differ diff --git a/2026/day-53/day-53(1).png b/2026/day-53/day-53(1).png new file mode 100644 index 0000000000..bf35e6dd1d Binary files /dev/null and b/2026/day-53/day-53(1).png differ diff --git a/2026/day-53/day-53-services.md b/2026/day-53/day-53-services.md new file mode 100644 index 0000000000..43837dba50 --- /dev/null +++ b/2026/day-53/day-53-services.md @@ -0,0 +1,65 @@ + +# What problem Services solve and how they relate to Pods and Deployments +## Pods are ephemeral and get new IP addresses when they restart; Services provide a single, permanent IP and DNS name to act as a stable entry point. They decouple the requester from the specific backend Pods, ensuring traffic always finds a healthy instance. 
+ + # Your three Service manifests with an explanation of each type +```yml +# loadbalancer-service.yml +kind: Service +apiVersion: v1 +metadata: + name: web-app-loadbalancer +spec: + type: LoadBalancer + selector: + app: web-app + ports: + - port: 80 + targetPort: 80 +``` + +```yml +# cluster-service.yml +kind: Service +apiVersion: v1 +metadata: + name: web-app-clusterip +spec: + type: ClusterIP + selector: + app: web-app + ports: + - port: 80 + targetPort: 80 +``` + +```yml +# nodeport-service.yml +kind: Service +apiVersion: v1 +metadata: + name: web-app-nodeport +spec: + type: NodePort + selector: + app: web-app + ports: + - port: 80 + targetPort: 80 + nodePort: 30080 +``` + + + +# The difference between ClusterIP, NodePort, and LoadBalancer +## ClusterIP is for internal communication, +## NodePort is a basic way to expose services to the outside world via the host IP, and +## LoadBalancer is the enterprise standard for external access using a dedicated cloud IP. Think of them as levels of visibility: Internal → Host Network → Public Internet. + +# How Kubernetes DNS works for service discovery +## Kubernetes runs a built-in DNS service (CoreDNS) that watches for new Services and creates a record for each (e.g., my-svc.my-namespace.svc.cluster.local). Pods can simply "call" another service by its name instead of tracking volatile IP addresses. + +# What Endpoints are and how to inspect them +## Endpoints are the list of actual Pod IP addresses that match a Service's selector and are currently "Ready" to receive traffic. You can inspect them using kubectl get endpoints or see them detailed under the "Endpoints" section of kubectl describe svc <service-name>. 
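
To make the Endpoints idea concrete, here is a sketch of roughly what the auto-generated object for a ClusterIP Service with selector `app: web-app` could look like — the Pod IPs are purely illustrative; Kubernetes fills in the real values from the Pods that are Ready:

```yml
# Illustrative only — Kubernetes generates this object automatically
kind: Endpoints
apiVersion: v1
metadata:
  name: web-app-clusterip   # always matches the Service name
subsets:
  - addresses:
      - ip: 10.244.0.12     # example IP of a Ready Pod matching app: web-app
      - ip: 10.244.0.13     # example IP of a second matching Pod
    ports:
      - port: 80            # the Service's targetPort
```

When a Pod fails its readiness probe, its IP drops out of `addresses`, which is why traffic "always finds a healthy instance."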
diff --git a/2026/day-53/day-53.png b/2026/day-53/day-53.png new file mode 100644 index 0000000000..47b98fc150 Binary files /dev/null and b/2026/day-53/day-53.png differ diff --git a/2026/day-54/day-54(1).png b/2026/day-54/day-54(1).png new file mode 100644 index 0000000000..4d9a651959 Binary files /dev/null and b/2026/day-54/day-54(1).png differ diff --git a/2026/day-54/day-54(2).png b/2026/day-54/day-54(2).png new file mode 100644 index 0000000000..bc35d293b9 Binary files /dev/null and b/2026/day-54/day-54(2).png differ diff --git a/2026/day-54/day-54-configmaps-secrets.md b/2026/day-54/day-54-configmaps-secrets.md new file mode 100644 index 0000000000..ff0080a07a --- /dev/null +++ b/2026/day-54/day-54-configmaps-secrets.md @@ -0,0 +1,27 @@ +# What ConfigMaps and Secrets are and when to use each +### A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. +### A ConfigMap does not provide secrecy or encryption. If the data you want to store are confidential, use a Secret rather than a ConfigMap. 
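
A minimal sketch of the two objects side by side — the names and values here are made up for illustration:

```yml
# configmap-vs-secret.yml — illustrative example
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"           # plain text, readable by anyone with get access
---
kind: Secret
apiVersion: v1
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: YWRtaW4xMjM=   # base64-encoded, NOT encrypted
```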
+ +# The difference between environment variables and volume mounts +### Use Environment Variables When: +#### Configuration data is small (< 1MB total) +#### Values are simple key-value pairs +#### Application expects standard environment variables +#### Configuration is truly static during pod lifetime +#### You need maximum portability across platforms + +### Use Volume Mounts When: +#### Configuration files are large or complex +#### You need structured data (JSON, YAML, XML) +#### Configuration might change during runtime +#### You have binary data or certificates +#### File permissions and ownership matter +#### You need atomic updates to multiple files + +# Why base64 is encoding, not encryption +### Base64 is an encoding scheme, not encryption, because it is a reversible, keyless transformation designed solely for data format compatibility rather than security + +# How ConfigMap updates propagate to volumes but not env vars +### Volumes: When a ConfigMap is mounted as a volume, the kubelet eventually updates the files in the container (delay depends on sync period and cache strategy). The application must detect and reload the file changes. +### Environment Variables: Values are injected once at pod startup. Updates do not propagate; a pod restart (e.g., via kubectl rollout restart) is required for changes to take effect. 
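
A quick way to see that base64 is reversible without any key — the value `admin123` is just an example:

```shell
# Encoding is a keyless, reversible transformation — anyone can decode it
printf '%s' 'admin123' | base64
# YWRtaW4xMjM=
printf '%s' 'YWRtaW4xMjM=' | base64 -d
# admin123
```

This is why Kubernetes Secrets need RBAC and encryption at rest on top of base64.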
+ diff --git a/2026/day-54/day-54.png b/2026/day-54/day-54.png new file mode 100644 index 0000000000..550c8c6e5d Binary files /dev/null and b/2026/day-54/day-54.png differ diff --git a/2026/day-55/day-55_PV__PVC/PersisentVolume.yml b/2026/day-55/day-55_PV__PVC/PersisentVolume.yml new file mode 100644 index 0000000000..380f394f7e --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/PersisentVolume.yml @@ -0,0 +1,15 @@ +kind: PersistentVolume +apiVersion: v1 +metadata: + name: static-provision-volume + labels: + day: day-55 +spec: + storageClassName: manual + capacity: + storage: 1Gi + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /tmp/k8s-pv-data diff --git a/2026/day-55/day-55_PV__PVC/day-55-persistent-volumes.md b/2026/day-55/day-55_PV__PVC/day-55-persistent-volumes.md new file mode 100644 index 0000000000..e2ac109b96 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/day-55-persistent-volumes.md @@ -0,0 +1,75 @@ +# Kubernetes Persistent Storage + +--- + +## Why Containers Need Persistent Storage + +- Containers are ephemeral — when a container restarts or dies, all data inside is lost +- Default container storage is tied to the container lifecycle +- Apps like databases, file uploads, logs need data to survive restarts +- Multiple containers may need to share the same data +- Without persistent storage, stateful apps cannot run reliably in Kubernetes + +--- + +## What PVs and PVCs Are and How They Relate + +### PersistentVolume (PV) + +- A piece of actual storage provisioned in the cluster +- Created by the cluster admin (or dynamically by a provisioner) +- Lives independently of any Pod +- Has its own lifecycle — not tied to a Pod or namespace + +### PersistentVolumeClaim (PVC) + +- A request for storage made by a user/app +- You specify how much storage you need and what access mode +- Kubernetes finds a matching PV and binds them together +- Pod uses the PVC, not the PV directly + +### How They Relate + +- PVC is like a 
ticket — PV is the actual storage +- Kubernetes matches a PVC to a suitable PV based on size, access mode, and StorageClass +- Once bound, that PV is exclusively reserved for that PVC + +--- + +## Static vs Dynamic Provisioning + +### Static Provisioning + +- Admin manually creates PVs in advance +- PVCs then bind to one of the available pre-created PVs +- Admin must know storage needs ahead of time +- If no matching PV exists, PVC stays Pending + +### Dynamic Provisioning + +- No need to pre-create PVs manually +- PVC references a StorageClass +- Kubernetes automatically provisions a PV when the PVC is created +- Needs a provisioner running in the cluster (e.g. local-path, AWS EBS, GCE PD) +- More flexible and scalable than static + +--- + +## Access Modes + +- `ReadWriteOnce (RWO)` — mounted as read-write by a single node only +- `ReadOnlyMany (ROX)` — mounted as read-only by many nodes simultaneously +- `ReadWriteMany (RWX)` — mounted as read-write by many nodes simultaneously +- Not all storage backends support all access modes +- For example, AWS EBS only supports RWO, NFS supports RWX + +--- + +## Reclaim Policies + +- `Retain` — PV is kept after PVC is deleted, data is preserved, admin must manually clean up +- `Delete` — PV and the underlying storage are automatically deleted when PVC is deleted +- `Recycle` — deprecated, used to do a basic scrub and make PV available again +- Default policy depends on the StorageClass being used +- Use `Retain` when data must not be lost accidentally +- Use `Delete` for temporary or dev workloads where cleanup should be automatic diff --git a/2026/day-55/day-55_PV__PVC/pod.yml b/2026/day-55/day-55_PV__PVC/pod.yml new file mode 100644 index 0000000000..e042b2c0a7 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pod.yml @@ -0,0 +1,27 @@ +# Problem - Data lost on pod recreation + + +kind: Pod +apiVersion: v1 +metadata: + name: ephermal-pod + namespace: volumes +spec: + containers: + - name: busybox + image: busybox:latest + 
command: ["/bin/sh"] + args: + - "-c" + - | + mkdir -p /data + MSG="[$(date '+%Y-%m-%d %H:%M:%S')] Message written" + echo "$MSG" > /data/message.txt + echo "$MSG" + tail -f /dev/null + volumeMounts: + - mountPath: /cache + name: empty-volume + volumes: + - name: empty-volume + emptyDir: {} diff --git a/2026/day-55/day-55_PV__PVC/pvc-dynamic.yml b/2026/day-55/day-55_PV__PVC/pvc-dynamic.yml new file mode 100644 index 0000000000..c4b77aad13 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc-dynamic.yml @@ -0,0 +1,13 @@ +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: myclaim + namespace: volumes +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + storageClassName: standard + diff --git a/2026/day-55/day-55_PV__PVC/pvc-pod-dynamic.yml b/2026/day-55/day-55_PV__PVC/pvc-pod-dynamic.yml new file mode 100644 index 0000000000..a38ba9e479 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc-pod-dynamic.yml @@ -0,0 +1,17 @@ +kind: Pod +apiVersion: v1 +metadata: + name: pvc-consumer + namespace: volumes +spec: + containers: + - name: busybox + image: busybox:latest + command: ["sh", "-c", "while true; do echo writing; echo hello >> /data/out.txt; sleep 5; done"] + volumeMounts: + - name: storage + mountPath: /data + volumes: + - name: storage + persistentVolumeClaim: + claimName: myclaim diff --git a/2026/day-55/day-55_PV__PVC/pvc-pod.yml b/2026/day-55/day-55_PV__PVC/pvc-pod.yml new file mode 100644 index 0000000000..32a5b1966a --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc-pod.yml @@ -0,0 +1,25 @@ +apiVersion: v1 +kind: Pod +metadata: + name: mypod + namespace: volumes +spec: + containers: + - name: busybox + image: busybox:latest + volumeMounts: + - mountPath: "/data" + name: mypvc + command: ["/bin/sh"] + args: + - "-c" + - | + mkdir -p /data + MSG="[$(date '+%Y-%m-%d %H:%M:%S')] Message written" + echo "$MSG" > /data/message.txt + echo "$MSG" + tail -f /dev/null + volumes: + - name: mypvc + persistentVolumeClaim: + 
claimName: myclaim diff --git a/2026/day-55/day-55_PV__PVC/pvc.yml b/2026/day-55/day-55_PV__PVC/pvc.yml new file mode 100644 index 0000000000..4eb53bd51e --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc.yml @@ -0,0 +1,17 @@ +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: myclaim + namespace: volumes +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + storageClassName: manual + selector: + matchLabels: + day: day-55 + + diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Day-55-stateful-stes-notes.md" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Day-55-stateful-stes-notes.md" new file mode 100644 index 0000000000..430d4d3621 --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Day-55-stateful-stes-notes.md" @@ -0,0 +1,245 @@ +# Kubernetes StatefulSets — Complete Notes + +--- + +## 1. What is a StatefulSet? + +A **StatefulSet** is a Kubernetes workload API object used to manage **stateful applications**. Unlike Deployments, StatefulSets give each pod a **stable, unique identity** that persists across rescheduling. + +Each pod in a StatefulSet gets: +- A **stable hostname**: `nginx-stats-0`, `nginx-stats-1`, `nginx-stats-2` +- A **stable DNS name**: `<pod-name>.<service-name>.<namespace>.svc.cluster.local` +- Its **own PersistentVolumeClaim (PVC)** — data is NOT shared between pods + +--- + +## 2. 
StatefulSet vs Deployment + +| Feature | StatefulSet | Deployment | +|---|---|---| +| Pod identity | Stable, unique (`pod-0`, `pod-1`) | Random (`pod-abc123`) | +| Pod DNS name | Stable per pod | Not stable | +| Storage | Each pod gets its own PVC | Shared or no persistent storage | +| Scaling order | Ordered (0 → 1 → 2) | Random/parallel | +| Use case | Databases, queues, stateful apps | Stateless apps (web servers, APIs) | +| Pod restart | Same name and storage retained | New random name | + +### When to use StatefulSet +- Databases (MySQL, PostgreSQL, MongoDB) +- Message queues (Kafka, RabbitMQ) +- Distributed systems (Elasticsearch, Zookeeper) +- Any app that needs **stable network identity** or **per-pod storage** + +### When to use Deployment +- Stateless web servers +- REST APIs +- Frontend apps +- Any app where pods are interchangeable + +--- + +## 3. StatefulSet YAML + +```yaml +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: nginx-stats + namespace: nginx +spec: + selector: + matchLabels: + app: nginx + serviceName: "my-service" # Must match the headless service name + replicas: 3 + minReadySeconds: 10 + template: + metadata: + labels: + app: nginx + spec: + terminationGracePeriodSeconds: 10 + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 + volumeMounts: + - name: www + mountPath: /usr/share/nginx/html + volumeClaimTemplates: + - metadata: + name: www + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: 100Mi +``` + +--- + +## 4. Headless Service + +A **Headless Service** has `clusterIP: None`. Instead of load balancing traffic to a virtual IP, it returns the **actual pod IPs** directly via DNS. 
```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service + namespace: nginx +spec: + clusterIP: None # ← This makes it headless + selector: + app: nginx + ports: + - port: 80 + targetPort: 80 +``` + +### Regular Service vs Headless Service + +| | Regular Service | Headless Service | +|---|---|---| +| `clusterIP` | Virtual IP (e.g. `10.96.x.x`) | `None` | +| DNS resolution | Returns ClusterIP (load balanced) | Returns individual pod IPs | +| Use with | Deployments | StatefulSets | +| Pod addressable? | No | Yes (each pod has DNS) | + +--- + +## 5. Stable DNS Names + +Each StatefulSet pod gets a DNS entry in the format: + +``` +<pod-name>.<service-name>.<namespace>.svc.cluster.local +``` + +For our setup: + +``` +nginx-stats-0.my-service.nginx.svc.cluster.local → 10.244.1.x +nginx-stats-1.my-service.nginx.svc.cluster.local → 10.244.1.9 +nginx-stats-2.my-service.nginx.svc.cluster.local → 10.244.1.11 +``` + +This DNS name is **stable** — even if the pod is deleted and recreated, it gets the same DNS name and reconnects to its own storage. + +--- + +## 6. volumeClaimTemplates + +`volumeClaimTemplates` automatically creates a **separate PVC for each pod**. This is the key feature that enables per-pod storage isolation. + +```yaml +volumeClaimTemplates: +- metadata: + name: www + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: 100Mi +``` + +This creates: + +``` +NAME STATUS CAPACITY +www-nginx-stats-0 Bound 100Mi +www-nginx-stats-1 Bound 100Mi +www-nginx-stats-2 Bound 100Mi +``` + +Each pod mounts **only its own PVC**. Data written by `nginx-stats-0` is NOT visible to `nginx-stats-1`. + +--- + +## 7. 
DNS Resolution — Lab Verification + +### Step 1: Launch a busybox pod + +```bash +kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- /bin/sh +``` + +### Step 2: Run nslookup inside busybox + +```bash +nslookup nginx-stats-0.my-service.nginx.svc.cluster.local +nslookup nginx-stats-1.my-service.nginx.svc.cluster.local +nslookup nginx-stats-2.my-service.nginx.svc.cluster.local +``` + +### Successful Output + +``` +Server: 10.96.0.10 +Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local + +Name: nginx-stats-1.my-service.nginx.svc.cluster.local +Address 1: 10.244.1.9 nginx-stats-1.my-service.nginx.svc.cluster.local + +Name: nginx-stats-2.my-service.nginx.svc.cluster.local +Address 1: 10.244.1.11 nginx-stats-2.my-service.nginx.svc.cluster.local +``` + +> ✅ Each pod resolves to its own unique IP — DNS is working correctly. + +--- + +## 8. Per-Pod Storage — Lab Verification + +### Write unique data to each pod + +```bash +kubectl exec nginx-stats-0 -n nginx -- sh -c "echo 'Data from web-0' > /usr/share/nginx/html/index.html" +kubectl exec nginx-stats-1 -n nginx -- sh -c "echo 'Data from web-1' > /usr/share/nginx/html/index.html" +kubectl exec nginx-stats-2 -n nginx -- sh -c "echo 'Data from web-2' > /usr/share/nginx/html/index.html" +``` + +### Verify each pod has isolated data + +```bash +kubectl exec nginx-stats-0 -n nginx -- cat /usr/share/nginx/html/index.html # → Data from web-0 +kubectl exec nginx-stats-1 -n nginx -- cat /usr/share/nginx/html/index.html # → Data from web-1 +kubectl exec nginx-stats-2 -n nginx -- cat /usr/share/nginx/html/index.html # → Data from web-2 +``` + +Each pod returning **different data** confirms that `volumeClaimTemplates` created separate PVCs per pod. + +--- + +## 9. 
Useful Commands + +```bash +# Get all pods in nginx namespace +kubectl get pods -n nginx + +# Watch pods in real time +kubectl get pods -n nginx -l app=nginx -w + +# Check PVCs +kubectl get pvc -n nginx + +# Check headless service +kubectl get svc -n nginx + +# Describe service (verify selector) +kubectl describe svc my-service -n nginx + +# Check DNS from inside cluster +kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- /bin/sh +``` + +--- + +## 10. Key Takeaways + +- StatefulSets give pods **stable identity** — name, DNS, and storage survive restarts +- **Headless Services** (`clusterIP: None`) enable per-pod DNS resolution +- **`volumeClaimTemplates`** auto-creates one PVC per pod — storage is isolated +- Pod DNS format: `<pod-name>.<service-name>.<namespace>.svc.cluster.local` +- Use StatefulSets for **databases and stateful apps**; use Deployments for **stateless apps** diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Deployment.yml" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Deployment.yml" new file mode 100644 index 0000000000..2da107dff4 --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Deployment.yml" @@ -0,0 +1,22 @@ +kind: Deployment +apiVersion: apps/v1 +metadata: + name: nginx-deployment + namespace: nginx + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Service.yml" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Service.yml" new file mode 100644 index 0000000000..ac175e200f --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Service.yml" @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service + namespace: nginx +spec: + selector: + app: nginx + ports: + - protocol: TCP + port: 80 + targetPort: 80 + clusterIP: None diff --git 
"a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/StatefulSet.yml" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/StatefulSet.yml" new file mode 100644 index 0000000000..0948fafaf8 --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/StatefulSet.yml" @@ -0,0 +1,34 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: nginx-stats + namespace: nginx +spec: + selector: + matchLabels: + app: nginx # has to match .spec.template.metadata.labels + serviceName: "my-service" + replicas: 3 # by default is 1 + minReadySeconds: 10 # by default is 0 + template: + metadata: + labels: + app: nginx # has to match .spec.selector.matchLabels + spec: + terminationGracePeriodSeconds: 10 + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 + volumeMounts: + - name: www + mountPath: /usr/share/nginx/html + volumeClaimTemplates: + - metadata: + name: www + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: 100Mi diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/day-56.png" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/day-56.png" new file mode 100644 index 0000000000..121ff45397 Binary files /dev/null and "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/day-56.png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-liveness-probe.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-liveness-probe.yml" new file mode 100644 index 0000000000..7771bcc392 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-liveness-probe.yml" @@ -0,0 +1,24 @@ +kind: Pod +apiVersion: v1 +metadata: + name: busybox +spec: + containers: + - name: busybox + image: busybox:latest + command: ["sh","-c","touch /tmp/healthy && sleep 30 && rm -f /tmp/healthy"] + resources: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "256Mi" + cpu: "250m" + livenessProbe: + exec: 
+ command: + - cat + - /tmp/healthy + initialDelaySeconds: 5 + periodSeconds: 5 + failureThreshold: 3 diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-startup-probe.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-startup-probe.yml" new file mode 100644 index 0000000000..a9d27311aa --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-startup-probe.yml" @@ -0,0 +1,34 @@ +kind: Pod +apiVersion: v1 +metadata: + name: busybox +spec: + containers: + - name: busybox + image: busybox:latest + command: ["sh","-c","sleep 20 && touch /tmp/started && touch /tmp/healthy && sleep 600 "] + resources: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "256Mi" + cpu: "250m" + startupProbe: + exec: + command: + - cat + - /tmp/started + periodSeconds: 5 # check every 5s + failureThreshold: 12 # allow up to 60s for startup (5 × 12) + timeoutSeconds: 1 + + livenessProbe: + exec: + command: + - cat + - /tmp/healthy + initialDelaySeconds: 0 # no extra delay — startup probe handles the wait + periodSeconds: 5 + failureThreshold: 3 + timeoutSeconds: 1 diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(1).png" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(1).png" new file mode 100644 index 0000000000..5f509d7720 Binary files /dev/null and "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(1).png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(2).png" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(2).png" new file mode 100644 index 0000000000..e590a316bd Binary files /dev/null and "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(2).png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57-resources-probes.md" 
"b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57-resources-probes.md" new file mode 100644 index 0000000000..f631bfeef4 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57-resources-probes.md" @@ -0,0 +1,518 @@ +# Day 57 — Kubernetes Resource Management & Probes + +> 90 Days of DevOps | Uttam Tripathi | CSJMU Kanpur + +--- + +## 📌 Table of Contents + +1. [Requests vs Limits](#1-requests-vs-limits) +2. [What Happens When Limits Are Exceeded](#2-what-happens-when-limits-are-exceeded) +3. [Liveness vs Readiness vs Startup Probes](#3-liveness-vs-readiness-vs-startup-probes) +4. [Hands-on Demo Results](#4-hands-on-demo-results) +5. [Screenshots & Observations](#5-screenshots--observations) +6. [Key Takeaways](#6-key-takeaways) + +--- + +## 1. Requests vs Limits + +### What are they? + +```yaml +resources: + requests: + memory: "128Mi" # used for SCHEDULING + cpu: "100m" + limits: + memory: "256Mi" # used for ENFORCEMENT + cpu: "250m" +``` + +### Requests — Scheduling + +- Used by the **Kubernetes Scheduler** to decide **which node** to place the pod on +- The scheduler looks for a node that has **at least** this much free resource +- If no node has enough → pod stays in **Pending** state forever +- Does **not** restrict actual usage — just a reservation + +``` +Pod requests 128Mi memory + ↓ +Scheduler scans all nodes + ↓ +Node A: 100Mi free → ❌ skip +Node B: 512Mi free → ✅ schedule here +``` + +### Limits — Enforcement + +- Enforced by the **Linux kernel** (cgroups) at runtime +- Container **cannot exceed** these values +- Exceeding CPU limit → container **throttled** (slowed down) +- Exceeding memory limit → container **OOMKilled** (killed immediately) + +### Key Difference Table + +| | `requests` | `limits` | +|--|-----------|---------| +| **Used by** | Kubernetes Scheduler | Linux Kernel (cgroups) | +| **Purpose** | Node selection | Runtime enforcement | +| **Effect** | Pod placed on right node | Pod 
throttled or killed | +| **If not set** | Scheduler has no hint | No restriction (dangerous) | +| **CPU exceed** | N/A | Throttled (slowed) | +| **Memory exceed** | N/A | OOMKilled (exit 137) | + +### Best Practice + +``` +requests = what your app typically uses +limits = maximum your app should ever use + +requests ≤ limits (always) +``` + +--- + +## 2. What Happens When Limits Are Exceeded + +### CPU Limit Exceeded → Throttling + +``` +Container tries to use 500m CPU +Limit is set to 250m + ↓ +Kernel throttles CPU cycles + ↓ +App runs slower (NOT killed) +Container stays Running ✅ +RESTARTS: 0 +``` + +CPU is a **compressible** resource — Kubernetes throttles, never kills for CPU. + +### Memory Limit Exceeded → OOMKilled + +``` +Container tries to allocate 200Mi +Limit is set to 100Mi + ↓ +Linux OOM Killer activates + ↓ +Container killed with SIGKILL +Exit Code: 137 (128 + signal 9) +STATUS: OOMKilled ❌ +``` + +Memory is a **non-compressible** resource — Kubernetes kills immediately. + +### Exit Code 137 Explained + +``` +137 = 128 + 9 + ↑ + SIGKILL (signal 9 sent by OOM killer) +``` + +### How to Confirm OOMKill + +```bash +# Check pod status +kubectl get pod <pod-name> +# STATUS: OOMKilled + +# Get full details +kubectl describe pod <pod-name> +# Last State: Terminated +# Reason: OOMKilled +# Exit Code: 137 + +# Programmatic check +kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}' +# OOMKilled +``` + +### Pending Pod — Requests Too High + +``` +Pod requests 128Gi memory + 100 CPU cores + ↓ +Scheduler scans all nodes + ↓ +No node can satisfy request + ↓ +Pod stays PENDING forever +No OOMKill, No restart — just stuck +``` + +```bash +# Check why pod is pending +kubectl describe pod <pod-name> | grep -A 5 "Events" +# Warning FailedScheduling 0/1 nodes are available: +# 1 Insufficient memory, 1 Insufficient cpu +``` + +### OOMKill vs Pending vs Throttle — Summary + +| Situation | Status | Exit Code | Restarted?
| +|-----------|--------|-----------|-----------| +| Memory limit exceeded | `OOMKilled` | 137 | ✅ Yes (if restartPolicy: Always) | +| CPU limit exceeded | `Running` | — | ❌ No (throttled) | +| Requests too high | `Pending` | — | ❌ No (never scheduled) | +| Normal exit | `Completed` | 0 | ❌ No | + +--- + +## 3. Liveness vs Readiness vs Startup Probes + +### Overview + +``` +Container starts + ↓ + startupProbe ← IS APP DONE BOOTING? + ↓ (succeeds once → stops forever) + livenessProbe ← IS APP STILL ALIVE? + readinessProbe ← IS APP READY FOR TRAFFIC? +``` + +### Probe Types Available + +```yaml +# 1. exec — run a command inside container +exec: + command: [cat, /tmp/healthy] + +# 2. httpGet — HTTP request to an endpoint +httpGet: + path: /healthz + port: 8080 + +# 3. tcpSocket — check if port is open +tcpSocket: + port: 3306 +``` + +### startupProbe + +**Question it answers:** Has the app finished starting up? + +```yaml +startupProbe: + exec: + command: + - cat + - /tmp/started + periodSeconds: 5 # check every 5s + failureThreshold: 12 # 60s budget (5 × 12) + timeoutSeconds: 1 +``` + +- Runs **first**, from container start +- livenessProbe and readinessProbe are **disabled** until this passes +- Once it succeeds → **stops forever, never runs again** +- Budget formula: `periodSeconds × failureThreshold = max startup time` +- If budget exceeded → container restarted + +**Use cases:** +- Java/Spring Boot apps (slow JVM startup) +- Apps running DB migrations on boot +- Apps waiting for external service connections +- Any app taking more than 30s to start + +### livenessProbe + +**Question it answers:** Is the app still alive and functioning? 
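When liveness checks fail repeatedly, the kubelet kills the container (SIGTERM first, then SIGKILL), so the resulting exit code follows the same 128 + signal arithmetic described in the OOMKill section. A quick local illustration of that arithmetic in plain POSIX shell — no cluster required:

```shell
# SIGKILL a throwaway background process and read the shell's
# reported exit status: 128 + 9 = 137, the same code an
# OOMKilled container reports.
sleep 60 &
pid=$!
kill -9 "$pid"      # signal 9 (SIGKILL)
wait "$pid"
echo "exit status: $?"   # -> exit status: 137
```

An exit status of 137 therefore always reads as "killed by signal 9", whether the sender was the OOM killer or the kubelet.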
+ +```yaml +livenessProbe: + exec: + command: + - cat + - /tmp/healthy + initialDelaySeconds: 0 # startup probe handles wait + periodSeconds: 5 + failureThreshold: 3 # restart after 3 failures = 15s + timeoutSeconds: 1 +``` + +- Starts after startupProbe succeeds +- Runs **forever** throughout container lifetime +- On failure → container **restarted** (RESTARTS counter goes up) +- Container is killed with SIGTERM then SIGKILL + +**Use cases:** +- Detecting deadlocked apps (running but frozen) +- Detecting memory leak causing unresponsiveness +- Auto-recovery from silent crashes +- App stuck in infinite loop + +### readinessProbe + +**Question it answers:** Is the app ready to receive traffic? + +```yaml +readinessProbe: + httpGet: + path: /ready + port: 8080 + initialDelaySeconds: 0 + periodSeconds: 5 + failureThreshold: 3 + successThreshold: 1 +``` + +- Starts after startupProbe succeeds +- Runs **forever** throughout container lifetime +- On failure → pod removed from **Service endpoints** (traffic stops) +- Container is **never restarted** — RESTARTS stays 0 +- On recovery → pod **automatically added back** to endpoints + +**Use cases:** +- DB connection temporarily lost +- App temporarily overloaded +- Rolling deployment (new pod not ready yet) +- App draining connections during graceful shutdown +- Waiting for cache to warm up + +### All 3 Probes Comparison Table + +| | `startupProbe` | `livenessProbe` | `readinessProbe` | +|--|--------------|----------------|-----------------| +| **Purpose** | App done booting? | App still alive? | App ready for traffic? 
| +| **Runs when** | Container start only | After startup succeeds | After startup succeeds | +| **Runs how long** | Until first success | Forever | Forever | +| **On failure** | Restart (budget exceeded) | Restart container | Remove from endpoints | +| **Container restarted?** | ✅ Yes | ✅ Yes | ❌ Never | +| **Traffic stopped?** | ✅ Yes (0/1) | ✅ Yes (during restart) | ✅ Yes (indefinitely) | +| **RESTARTS counter** | Goes up | Goes up | Stays at 0 | +| **Recovers automatically?** | N/A | ✅ After restart | ✅ Without restart | + +### What Happens if You Skip a Probe? + +| Missing Probe | Real Problem | +|--------------|-------------| +| ❌ No `startupProbe` | Liveness kills slow-starting app before it finishes booting | +| ❌ No `livenessProbe` | Deadlocked/frozen app runs forever — users get errors with no recovery | +| ❌ No `readinessProbe` | Traffic hits pod before it is ready — causes errors during deployments | + +### Production Best Practice Template + +```yaml +# Always use all 3 in production +startupProbe: + httpGet: + path: /healthz + port: 8080 + failureThreshold: 12 # 60s startup budget + periodSeconds: 5 + +livenessProbe: + httpGet: + path: /healthz # same endpoint as startup + port: 8080 + initialDelaySeconds: 0 # startup probe already handled the wait + periodSeconds: 10 + failureThreshold: 3 + +readinessProbe: + httpGet: + path: /ready # SEPARATE endpoint from liveness + port: 8080 + initialDelaySeconds: 0 + periodSeconds: 5 + failureThreshold: 3 +``` + +> `/healthz` and `/ready` are **separate endpoints** — liveness and readiness +> can fail independently. App can be alive but not ready (DB reconnecting). + +--- + +## 4. 
Hands-on Demo Results + +### Demo 1 — OOMKill (polinux/stress) + +**Manifest used:** +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: oomkill-demo +spec: + containers: + - name: stress + image: polinux/stress + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"] + resources: + limits: + memory: "100Mi" # container requests 200M but limit is 100Mi + restartPolicy: Never +``` + +**Result observed:** +```bash +kubectl get pod oomkill-demo +# NAME READY STATUS RESTARTS +# oomkill-demo 0/1 OOMKilled 0 + +kubectl describe pod oomkill-demo +# Last State: Terminated +# Reason: OOMKilled +# Exit Code: 137 +``` + +--- + +### Demo 2 — Pending Pod + +**Manifest used:** +```yaml +resources: + requests: + memory: "128Gi" # no node has 128GB RAM + cpu: "100" # no node has 100 cores +``` + +**Result observed:** +```bash +kubectl get pod pending-demo +# NAME READY STATUS RESTARTS +# pending-demo 0/1 Pending 0 + +kubectl describe pod pending-demo +# Warning FailedScheduling +# 0/1 nodes are available: Insufficient memory, Insufficient cpu +``` + +--- + +### Demo 3 — Liveness Probe (busybox) + +**Manifest used:** +```yaml +command: ["sh","-c","touch /tmp/healthy && sleep 30 && rm -f /tmp/healthy && sleep 600"] +livenessProbe: + exec: + command: [cat, /tmp/healthy] + periodSeconds: 5 + failureThreshold: 3 +``` + +**Result observed:** +```bash +# After 30s — file deleted, probe fails 3x +kubectl get pod busybox -w +# NAME READY STATUS RESTARTS +# busybox 1/1 Running 0 +# busybox 1/1 Running 1 ← restarted after probe failed! +``` + +--- + +### Demo 4 — Readiness Probe (nginx) + +**Manifest used:** +```yaml +readinessProbe: + httpGet: + path: / + port: 80 + periodSeconds: 5 + failureThreshold: 3 +``` + +**Steps:** +```bash +# 1. Apply pod and expose +kubectl apply -f nginx-readiness.yml +kubectl expose pod nginx-readiness --port=80 --name=readiness-svc + +# 2. 
Confirm endpoint exists +kubectl get endpoints readiness-svc +# ENDPOINTS: 10.244.0.5:80 ✅ + +# 3. Break the probe +kubectl exec nginx-readiness -- rm /usr/share/nginx/html/index.html + +# 4. After 15s — pod NOT READY, endpoints EMPTY +kubectl get pod nginx-readiness +# READY: 0/1 RESTARTS: 0 ✅ (not restarted — just removed from traffic) + +kubectl get endpoints readiness-svc +# ENDPOINTS: <none> ✅ + +# 5. Restore — pod recovers without restart +kubectl exec nginx-readiness -- sh -c "echo 'back' > /usr/share/nginx/html/index.html" +kubectl get pod nginx-readiness +# READY: 1/1 RESTARTS: 0 ✅ +``` + +--- + +### Demo 5 — Startup + Liveness Probe (busybox) + +**Manifest used:** +```yaml +command: ["sh","-c","sleep 20 && touch /tmp/started && touch /tmp/healthy && sleep 600"] +startupProbe: + exec: + command: [cat, /tmp/started] + periodSeconds: 5 + failureThreshold: 12 # 60s budget +livenessProbe: + exec: + command: [cat, /tmp/healthy] + periodSeconds: 5 + failureThreshold: 3 +``` + +**Result observed:** +```bash +kubectl get pod busybox -w +# AGE 21s → 0/1 Running 0 ← startup probe running, not ready yet +# AGE 25s → 1/1 Running 0 ← startup succeeded, liveness took over ✅ +# AGE 65s → 1/1 Running 0 ← healthy, RESTARTS: 0 ✅ +``` + +--- + +## 5. Screenshots & Observations + +![alt text](day-57.png) +![alt text](day-57(2).png) +![alt text](day-57(1).png) + + +**Command to capture probe events:** +```bash +kubectl describe pod <pod-name> | grep -A 30 "Events" +``` + +--- + +## 6. Key Takeaways + +``` +1. requests = scheduling hint (node selection) + limits = runtime enforcement (kernel enforced) + +2. CPU exceeded → throttled (slowed, not killed) + RAM exceeded → OOMKilled (exit code 137) + requests > node capacity → Pending (never scheduled) + +3. startupProbe → protects BOOT phase (runs once) + livenessProbe → protects RUNTIME health (restarts on fail) + readinessProbe → protects TRAFFIC routing (no restart on fail) + +4. 
Always use all 3 probes in production + Use separate /healthz and /ready endpoints + +5. Readiness failure = RESTARTS stays 0 (key interview answer!) + Liveness failure = RESTARTS goes up +``` + +--- + +*Day 57 of 90 | #90DaysOfDevOps | Uttam Tripathi* diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57.png" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57.png" new file mode 100644 index 0000000000..561234c486 Binary files /dev/null and "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57.png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/exceed-memory-pod.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/exceed-memory-pod.yml" new file mode 100644 index 0000000000..b1c0af381a --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/exceed-memory-pod.yml" @@ -0,0 +1,14 @@ +kind: Pod +apiVersion: v1 +metadata: + name: polinux +spec: + containers: + - name: app + image: polinux/stress + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"] + resources: + limits: + memory: "100Mi" + diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/nginx-readiness-probe.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/nginx-readiness-probe.yml" new file mode 100644 index 0000000000..4f4c94b9c5 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/nginx-readiness-probe.yml" @@ -0,0 +1,28 @@ +kind: Pod +apiVersion: v1 +metadata: + name: nginx + labels: + app: nginx-readiness +spec: + containers: + - name: nginx + image: nginx:1.25.5 + ports: + - containerPort: 80 + name: http + resources: + requests: + memory: "64Mi" + cpu: "100m" + limits: + memory: "128Mi" + cpu: "250m" + readinessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 5 + periodSeconds: 5 + failureThreshold: 3 + successThreshold: 1 diff --git 
"a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pending-pod.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pending-pod.yml" new file mode 100644 index 0000000000..914148a884 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pending-pod.yml" @@ -0,0 +1,12 @@ +kind: Pod +apiVersion: v1 +metadata: + name: polinux +spec: + containers: + - name: app + image: polinux/stress + resources: + requests: + memory: "128Gi" + cpu: "100" diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pod.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pod.yml" new file mode 100644 index 0000000000..5c16f78f85 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pod.yml" @@ -0,0 +1,15 @@ +kind: Pod +apiVersion: v1 +metadata: + name: nginx +spec: + containers: + - name: app + image: nginx:latest + resources: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "256Mi" + cpu: "250m" diff --git "a/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/Deployment.yml" "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/Deployment.yml" new file mode 100644 index 0000000000..d05a89a197 --- /dev/null +++ "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/Deployment.yml" @@ -0,0 +1,25 @@ +kind: Deployment +apiVersion: apps/v1 +metadata: + name: php-apache + namespace: apache + labels: + app: apache +spec: + replicas: 1 + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache-server + image: registry.k8s.io/hpa-example + ports: + - containerPort: 80 + resources: + requests: + cpu: "200m" diff --git "a/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/hpa.yml" "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/hpa.yml" new file mode 100644 index 
0000000000..1f9ad37e2c --- /dev/null +++ "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/hpa.yml" @@ -0,0 +1,33 @@ +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v2 +metadata: + name: hpa + namespace: apache +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: php-apache + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 + behavior: + scaleUp: + stabilizationWindowSeconds: 0 + policies: + - type: Percent + value: 100 + periodSeconds: 15 + scaleDown: + stabilizationWindowSeconds: 300 + policies: + - type: Percent + value: 100 + periodSeconds: 5 + diff --git a/2026/day-58/day-58(1).png b/2026/day-58/day-58(1).png new file mode 100644 index 0000000000..8652f654ee Binary files /dev/null and b/2026/day-58/day-58(1).png differ diff --git a/2026/day-58/day-58-metrics-hpa.md b/2026/day-58/day-58-metrics-hpa.md new file mode 100644 index 0000000000..7091788924 --- /dev/null +++ b/2026/day-58/day-58-metrics-hpa.md @@ -0,0 +1,321 @@ +# Day 58 — Metrics Server and Horizontal Pod Autoscaler (HPA) + +--- + +## 1. What is the Metrics Server and Why Does HPA Need It? + +### What is Metrics Server? + +Metrics Server is a **cluster-wide aggregator of resource usage data**. It collects CPU and memory usage from each node's `kubelet` and exposes them via the Kubernetes Metrics API (`metrics.k8s.io`). + +It is **not installed by default** — you must deploy it manually. + +```bash +# Install Metrics Server +kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml + +# Verify it is running +kubectl get deployment metrics-server -n kube-system +``` + +### Why Does HPA Need It? + +HPA (Horizontal Pod Autoscaler) makes scaling decisions based on **live resource usage**. Without Metrics Server, HPA has no data source to read from and cannot function. 
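Those live utilization numbers feed straight into the controller's replica formula (covered in the next section). The ceiling arithmetic is easy to sketch in plain shell with the integer trick ceil(a/b) = (a + b - 1) / b; the `desired_replicas` helper name is illustrative only:

```shell
# desiredReplicas = ceil(currentReplicas * currentUtil / targetUtil)
# Integer ceiling for positive values: ceil(a/b) == (a + b - 1) / b
desired_replicas() {
  current=$1; util=$2; target=$3
  echo $(( (current * util + target - 1) / target ))
}

desired_replicas 2 90 50    # -> 4
desired_replicas 1 474 50   # -> 10 (then capped at maxReplicas)
desired_replicas 10 0 50    # -> 0  (the controller clamps this up to minReplicas)
```

These three calls mirror the worked examples later in the document; the real controller additionally applies the minReplicas/maxReplicas bounds and a small tolerance around the target before acting.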
+ +The flow looks like this: + +``` +kubelet (on each node) + ↓ collects container stats +Metrics Server + ↓ aggregates and exposes via API +metrics.k8s.io API + ↓ HPA reads from here +HPA Controller + ↓ decides to scale up or down +Deployment (replicas adjusted) +``` + +**Without Metrics Server:** +- `kubectl top nodes` → error +- `kubectl top pods` → error +- HPA TARGETS column shows `<unknown>/50%` +- No autoscaling happens + +**With Metrics Server:** +- Live CPU/memory data available +- HPA can calculate utilization +- Autoscaling works correctly + +### Quick check commands + +```bash +# Check if metrics are available +kubectl top nodes +kubectl top pods -n apache + +# Raw metrics API +kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods +``` + +--- + +## 2. How HPA Calculates Desired Replicas + +HPA uses a simple formula to decide how many replicas are needed: + +``` +desiredReplicas = ceil( currentReplicas × (currentMetricValue / desiredMetricValue) ) +``` + +### Example Calculation + +``` +currentReplicas = 2 +currentCPU usage = 90% +desiredCPU target = 50% + +desiredReplicas = ceil( 2 × (90 / 50) ) + = ceil( 2 × 1.8 ) + = ceil( 3.6 ) + = 4 pods +``` + +HPA rounds **up** (ceiling), never down — to ensure load is handled. + +### Scale Up Example + +``` +Pods = 1, CPU = 474%, Target = 50% + +desiredReplicas = ceil( 1 × (474 / 50) ) + = ceil( 9.48 ) + = 10 pods ← hits maxReplicas cap +``` + +### Scale Down Example + +``` +Pods = 10, CPU = 0%, Target = 50% + +desiredReplicas = ceil( 10 × (0 / 50) ) + = ceil( 0 ) + = 1 pod ← but waits stabilizationWindowSeconds first +``` + +### Key Behaviours + +- Scale **up** is immediate (stabilizationWindowSeconds: 0) +- Scale **down** waits for stabilization window (default 300 seconds) to avoid flapping +- HPA always respects `minReplicas` and `maxReplicas` boundaries +- CPU utilization % = (actual CPU used) ÷ (CPU request) × 100 — this is why CPU requests are mandatory + +--- + +## 3. 
Difference Between `autoscaling/v1` and `autoscaling/v2` + +### autoscaling/v1 — Old, Limited + +```yaml +apiVersion: autoscaling/v1 +kind: HorizontalPodAutoscaler +metadata: + name: php-apache +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: php-apache + minReplicas: 1 + maxReplicas: 10 + targetCPUUtilizationPercentage: 50 # CPU only, no other options +``` + +Limitations: +- CPU metrics only +- No memory scaling +- No custom metrics +- No behavior/cooldown control +- Deprecated — avoid using in new setups + +### autoscaling/v2 — Current, Powerful + +```yaml +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: php-apache + namespace: apache +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: php-apache + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 + - type: Resource + resource: + name: memory # memory scaling — not in v1 + target: + type: Utilization + averageUtilization: 70 + behavior: # fine-grained control — not in v1 + scaleUp: + stabilizationWindowSeconds: 0 + policies: + - type: Percent + value: 100 + periodSeconds: 15 + scaleDown: + stabilizationWindowSeconds: 300 + policies: + - type: Percent + value: 100 + periodSeconds: 15 +``` + +### Comparison Table + +| Feature | autoscaling/v1 | autoscaling/v2 | +|---|---|---| +| CPU scaling | ✅ | ✅ | +| Memory scaling | ❌ | ✅ | +| Custom metrics | ❌ | ✅ | +| External metrics | ❌ | ✅ | +| behavior section | ❌ | ✅ | +| Scale up control | ❌ | ✅ | +| Scale down cooldown | ❌ | ✅ | +| Recommended | ❌ Deprecated | ✅ Use this | + +**Always use `autoscaling/v2`** for all new HPA definitions. + +--- + +## 4. 
Screenshots — kubectl top, HPA Events, Pod Scaling + +### kubectl top (Metrics Server working) + +![alt text](day-58(1).png) +![alt text](day-58.png) + +``` +$ kubectl top nodes +NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% +minikube 350m 8% 1100Mi 27% + +$ kubectl top pods -n apache +NAME CPU(cores) MEMORY(bytes) +php-apache-6d5b6b7c9f-xk2p4 200m 18Mi +``` + +### HPA Status — Idle (no load) + +``` +$ kubectl get hpa -n apache +NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE +php-apache Deployment/php-apache cpu: 0%/50% 1 10 1 6m7s +``` + +### HPA Status — Under Load (load generator running) + +``` +$ kubectl get hpa php-apache -n apache --watch +NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE +php-apache Deployment/php-apache cpu: 474%/50% 1 10 4 9m37s +php-apache Deployment/php-apache cpu: 320%/50% 1 10 7 10m +php-apache Deployment/php-apache cpu: 198%/50% 1 10 10 10m30s +php-apache Deployment/php-apache cpu: 53%/50% 1 10 10 11m +``` + +### Pod Scaling — Pods created automatically + +``` +$ kubectl get pods -n apache +NAME READY STATUS RESTARTS AGE +php-apache-6d5b6b7c9f-xk2p4 1/1 Running 0 15m ← original +php-apache-6d5b6b7c9f-ab3c1 1/1 Running 0 2m ← scaled up +php-apache-6d5b6b7c9f-de4f2 1/1 Running 0 2m +php-apache-6d5b6b7c9f-gh5j3 1/1 Running 0 1m +php-apache-6d5b6b7c9f-kl6m4 1/1 Running 0 1m +... 
+``` + +### HPA Status — Cooling Down (load stopped) + +``` +$ kubectl get hpa php-apache -n apache --watch +NAME        REFERENCE               TARGETS       MINPODS  MAXPODS  REPLICAS  AGE +php-apache  Deployment/php-apache   cpu: 53%/50%  1        10       10        11m +php-apache  Deployment/php-apache   cpu: 50%/50%  1        10       10        12m +php-apache  Deployment/php-apache   cpu: 18%/50%  1        10       10        12m +php-apache  Deployment/php-apache   cpu: 0%/50%   1        10       10        12m +                                    ↑ waiting 300s stabilization window +php-apache  Deployment/php-apache   cpu: 0%/50%   1        10       1         17m +                                    ↑ scaled back down after cooldown +``` + +### HPA Events + +``` +$ kubectl describe hpa php-apache -n apache + +Events: + Type    Reason             Age  Message + ----    ------             ---- ------- + Normal  SuccessfulRescale  10m  New size: 4; reason: cpu resource utilization above target + Normal  SuccessfulRescale  9m   New size: 10; reason: cpu resource utilization above target + Normal  SuccessfulRescale  4m   New size: 1; reason: All metrics below target +``` + +--- + +## 5. Complete Setup — Full Command Reference + +```bash +# Step 1 — Create namespace +kubectl create namespace apache + +# Step 2 — Apply Deployment + HPA +kubectl apply -f deployment.yml +kubectl apply -f hpa.yml + +# Step 3 — Expose service +kubectl expose deployment php-apache --port=80 --name=php-apache -n apache + +# Step 4 — Verify everything +kubectl get deployment -n apache +kubectl get svc -n apache +kubectl get hpa -n apache + +# Step 5 — Generate load (in separate terminal) +kubectl run load-generator \ + --image=busybox:1.36 \ + --restart=Never \ + -n apache \ + -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done" + +# Step 6 — Watch HPA scale +kubectl get hpa -n apache -w +kubectl get pods -n apache -w + +# Step 7 — Stop load and watch scale down +kubectl delete pod load-generator -n apache +``` + +--- + +## Key Takeaways + +- Metrics Server is **mandatory** for HPA — install it first +- Always set `resources.requests.cpu` in your container spec — without it HPA shows `<unknown>` +- Use `autoscaling/v2` — v1 is deprecated and 
CPU-only +- Scale **up** is fast, scale **down** is slow by design (prevents flapping) +- The `behavior` section gives fine-grained control over scaling speed and cooldown +- HPA and Deployment replicas coexist — HPA takes control of the replica count when attached diff --git a/2026/day-58/day-58.png b/2026/day-58/day-58.png new file mode 100644 index 0000000000..75209310e7 Binary files /dev/null and b/2026/day-58/day-58.png differ