diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
new file mode 100644
index 0000000000..aecdf963b3
--- /dev/null
+++ b/.github/workflows/stale.yml
@@ -0,0 +1,27 @@
+# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
+#
+# You can adjust the behavior by modifying this file.
+# For more information, see:
+# https://github.com/actions/stale
+name: Mark stale issues and pull requests
+
+on:
+  schedule:
+    - cron: '20 7 * * *'
+
+jobs:
+  stale:
+
+    runs-on: ubuntu-latest
+    permissions:
+      issues: write
+      pull-requests: write
+
+    steps:
+      - uses: actions/stale@v5
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          stale-issue-message: 'Stale issue message'
+          stale-pr-message: 'Stale pull request message'
+          stale-issue-label: 'no-issue-activity'
+          stale-pr-label: 'no-pr-activity'
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000000..269d0f1e86
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,42 @@
+# OS
+.DS_Store
+Thumbs.db
+
+# Logs
+*.log
+*.pid
+
+# Environment
+.env
+.env.*
+
+# Python
+__pycache__/
+*.pyc
+.venv/
+
+# Node
+node_modules/
+
+# Build
+build/
+dist/
+
+# Terraform
+.terraform/
+*.tfstate
+*.tfstate.*
+crash.log
+
+# Kubernetes
+.kube/
+
+# IDE
+.vscode/
+.idea/
+
+# Caches
+.cache/
+.pytest_cache/
+coverage/
+CLAUDE.md
diff --git a/2023/day1/tasks.md b/2023/day01/README.md
similarity index 69%
rename from 2023/day1/tasks.md
rename to 2023/day01/README.md
index ade0bae42f..362e4be12e 100644
--- a/2023/day1/tasks.md
+++ b/2023/day01/README.md
@@ -5,7 +5,8 @@ This is the day you have to Take this challenge and start your #90DaysOfDevOps w
 - Fork this Repo.
 - Start with a DevOps Roadmap[https://youtu.be/iOE9NTAG35g]
 - Write a LinkedIn post or a small article about your understanding of DevOps
- - What is DevOps
- - What is Automation, Scaling, Infrastructure
- - Why DevOps is Important, etc
-
\ No newline at end of file
+- What is DevOps
+- What is Automation, Scaling, Infrastructure
+- Why DevOps is Important, etc
+
+[Next Day →](../day02/README.md)
diff --git a/2023/day01/devops.txt b/2023/day01/devops.txt
new file mode 100644
index 0000000000..19244fa356
--- /dev/null
+++ b/2023/day01/devops.txt
@@ -0,0 +1,8 @@
+DevOps is a methodology that involves practices to bridge the gap between the Dev and Ops teams by using open-source automation and build tools.
+These are the articles which I referred to,
+
+
+Formal definition: "DevOps is the union of people, process, and products to enable continuous delivery of value to our end users."
+
+The main goal of DevOps is to shorten cycle time. Start with the release pipeline. How long does it take to deploy a change of one line of code or configuration?
+
diff --git a/2023/day02/README.md b/2023/day02/README.md
new file mode 100644
index 0000000000..342ac39c32
--- /dev/null
+++ b/2023/day02/README.md
@@ -0,0 +1,21 @@
+Day 2 Task: Basic Linux Commands
+
+Task: What is the Linux command to (a sketch of answers follows the list)
+
+1. Check your present working directory.
+2. List all the files or directories, including hidden files.
+3. Create a nested directory A/B/C/D/E
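+
+A minimal sketch of commands that would satisfy these tasks (assuming a bash shell):
+
+```
+pwd                  # 1. check the present working directory
+ls -a                # 2. list everything, including hidden files
+mkdir -p A/B/C/D/E   # 3. create the nested directory tree in one shot
+```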
+
+Note: [Check this file for reference](basic_linux_commands.md)
+
+Check the basic_linux_commands.md file in the same directory (day02).
+
+[← Previous Day](../day01/README.md) | [Next Day →](../day03/README.md)
diff --git a/2023/day02/basic_linux_commands.md b/2023/day02/basic_linux_commands.md
new file mode 100644
index 0000000000..24bb7fe1e3
--- /dev/null
+++ b/2023/day02/basic_linux_commands.md
@@ -0,0 +1,41 @@
+## Basic Linux commands
+
+### Listing commands
+```ls option_flag arguments ```--> list the sub-directories and files available in the present directory
+
+Examples:
+
+- ``` ls -l ```--> list the files and directories in long list format with extra information
+- ```ls -a ```--> list all, including hidden files and directories
+- ```ls *.sh``` --> list all the files having a .sh extension
+
+- ```ls -i ``` --> list the files and directories with their index numbers (inodes)
+- ``` ls -d */``` --> list only directories (we can also specify a pattern)
+
+### Directory commands
+- ```pwd``` --> print working directory. Gives the present working directory.
+
+- ```cd path_to_directory``` --> change directory to the provided path
+
+- ```cd ~ ``` or just ```cd ``` --> change directory to the home directory
+
+- ``` cd - ``` --> go to the last working directory
+
+- ``` cd ..``` --> change directory one level back
+
+- ``` cd ../..``` --> change directory two levels back
+
+- ``` mkdir directoryName``` --> make a directory in a specific location
+
+Examples:
+```
+mkdir newFolder              # make a new folder 'newFolder'
+
+mkdir .NewFolder             # make a hidden directory (a . prefix also hides files)
+
+mkdir A B C D                # make multiple directories at the same time
+
+mkdir /home/user/Mydirectory # make a new folder in a specific location
+
+mkdir -p A/B/C/D             # make a nested directory
+```
diff --git a/2023/day02/solution.md b/2023/day02/solution.md
new file mode 100644
index 0000000000..188408e6b3
--- /dev/null
+++ b/2023/day02/solution.md
@@ -0,0 +1,100 @@
+
+## Basic Linux commands
+
+- ``ls`` --> The ls command is used to list files or directories in Linux and other Unix-based operating systems.
+
+![ls](https://user-images.githubusercontent.com/76457594/210222403-35776fbc-e509-4c9c-a0ad-e5975599ffab.png)
+
+- ``` ls -l ```--> Type the ls -l command to list the contents of the directory in a table format with columns including:
+
+  - content permissions
+  - number of links to the content
+  - owner of the content
+  - group owner of the content
+  - size of the content in bytes
+  - last modified date / time of the content
+  - file or directory name
+
+- ```ls -a ```--> Type the ls -a command to list files or directories including hidden files or directories. In Linux, anything that begins with a . is considered a hidden file.
+
+![ls -a](https://user-images.githubusercontent.com/76457594/210223013-9353abf0-159c-4797-a19f-3b78a8d4ef00.png)
+
+- ```ls *.sh``` --> List all the files having a .sh extension.
+
+![ls * sh](https://user-images.githubusercontent.com/76457594/210223067-f5c3a5bf-09b4-4525-90e7-9ce61186ae2e.png)
+
+- ```ls -i ``` --> List the files and directories with their index numbers (inodes).
+
+![ls-i](https://user-images.githubusercontent.com/76457594/210225502-946551c7-fd81-402b-b8ce-091792e24c44.png)
+
+- ``` ls -d */``` --> Type the ls -d */ command to list only directories.
+
+![ls-d*](https://user-images.githubusercontent.com/76457594/210223178-f7097a96-31b1-4c98-8b81-2e5b0e3a7bb7.png)
+
+## Directory commands
+- ```pwd``` --> Print working directory. Gives the present working directory.
+
+![pwd](https://user-images.githubusercontent.com/76457594/210223234-e5f3a48c-1b08-4bce-943e-4fed50a12700.png)
+
+- ```cd path_to_directory``` --> Change directory to the provided path.
+
+![cd](https://user-images.githubusercontent.com/76457594/210223291-355b8eb1-d1b5-41a4-a1b3-07fe7d786794.png)
+
+- ```cd ~ ``` or just ```cd ``` --> Change directory to the home directory.
+
+![cd ~](https://user-images.githubusercontent.com/76457594/210223377-845975d3-344d-49d3-946e-05f2d2170ac4.png)
+
+- ``` cd - ``` --> Go to the last working directory.
+
+![cd -](https://user-images.githubusercontent.com/76457594/210223414-d6333b9c-21cb-4053-abb9-871bbca5db08.png)
+
+- ``` cd ..``` --> Change directory one level back.
+
+![cd](https://user-images.githubusercontent.com/76457594/210223531-956598ad-301c-486a-b02e-6e69c4104adb.png)
+
+- ``` cd ../..``` --> Change directory two levels back (similarly, ls ../.. lists contents two levels up).
+
+![cd bs](https://user-images.githubusercontent.com/76457594/210223634-2f37f616-5857-4f31-a9a6-796b0f0ab1e5.png)
+
+- ``` mkdir directoryName``` --> Used to make a directory in a specific location.
+
+![mkdir ](https://user-images.githubusercontent.com/76457594/210224037-9ba396ad-77a8-48d4-8d28-2fa513c2b06a.png)
+
+- ``` mkdir .NewFolder ``` --> Make a hidden directory (a . prefix also hides files).
+
+![mkdir ](https://user-images.githubusercontent.com/76457594/210224230-89db3d98-f04a-4edd-998f-0f9a0219f06e.png)
+
+- ```mkdir A B C D ``` --> Make multiple directories at the same time.
+
+![mkdir A B C](https://user-images.githubusercontent.com/76457594/210224267-6d14de9a-2c05-4ea9-853f-ddb44dda8f23.png)
+
+- ```mkdir /home/user/Mydirectory ``` --> Make a new folder in a specific location.
+
+![mkdir inside](https://user-images.githubusercontent.com/76457594/210224331-dc7a2916-a64c-40ed-8951-7e2677df4957.png)
+
+- ```mkdir -p A/B/C/D ``` --> Make a nested directory.
+
+![mkdir-p](https://user-images.githubusercontent.com/76457594/210224365-78ec406e-0a2e-4666-a30d-ac406f0dd695.png)
+
+
+
+
diff --git a/2023/day03/README.md b/2023/day03/README.md
new file mode 100644
index 0000000000..c3d1d17563
--- /dev/null
+++ b/2023/day03/README.md
@@ -0,0 +1,31 @@
+Day 3 Task: Basic Linux Commands
+
+Task: What is the Linux command to (a sketch of answers follows the list)
+
+1. View what's written in a file.
+2. Change the access permissions of files.
+3. Check which commands you have run till now.
+4. Remove a directory/folder.
+5. Create a fruits.txt file and view its content.
+6. Add content in fruits.txt (one per line) - Apple, Mango, Banana, Cherry, Kiwi, Orange, Guava.
+7. Show only the top three fruits from the file.
+8. Show only the bottom three fruits from the file.
+9. Create another file Colors.txt and view its content.
+10. Add content in Colors.txt (one per line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey.
+11. Find the difference between the fruits.txt and Colors.txt files.
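+
+A sketch of commands for these tasks (assuming bash and GNU coreutils; file names as given above):
+
+```
+cat fruits.txt              # view what's written in a file
+chmod 644 fruits.txt        # change access permissions
+history                     # commands run till now
+rm -r mydir                 # remove a directory/folder
+head -3 fruits.txt          # top three fruits
+tail -3 fruits.txt          # bottom three fruits
+diff fruits.txt Colors.txt  # difference between the two files
+```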
+
+Reference: https://www.linkedin.com/pulse/linux-commands-devops-used-day-to-day-activit-chetan-/
+
+[← Previous Day](../day02/README.md) | [Next Day →](../day04/README.md)
diff --git a/2023/day03/solution.md b/2023/day03/solution.md
new file mode 100644
index 0000000000..49698301df
--- /dev/null
+++ b/2023/day03/solution.md
@@ -0,0 +1,38 @@
+
+# Basic Linux Commands
+
+1. To view what's written in a file.
+   - ``` cat filename ```
+
+![3 1](https://user-images.githubusercontent.com/76457594/210305889-d19f82d5-dbb1-46fc-99e2-b217146b6e8a.png)
+
+2. To change the access permissions of files.
+
+   - ``` chmod 777 foldername ```
+
+3. To check which commands you have run till now.
+
+   - ``` history ```
+
+4. To remove a directory/folder.
+
+   - ``` rm filename ```
+
+![3 4](https://user-images.githubusercontent.com/76457594/210308917-7281e0eb-6fcb-4554-8ffe-835cf0b961d1.png)
+
+   - ``` rmdir foldername ```
+
+![3 4b](https://user-images.githubusercontent.com/76457594/210309299-367e6253-7e11-4ead-a19c-6eb3922780d1.png)
+
+5. To create a fruits.txt file and view the content.
+   - ``` vim fruits.txt ```
+   - ``` cat fruits.txt ```
+
+![3 5](https://user-images.githubusercontent.com/76457594/210311435-e6f8aa0c-dc0c-44a6-84e7-6e4c91e4ea87.png)
+
+
diff --git a/2023/day04/README.md b/2023/day04/README.md
new file mode 100644
index 0000000000..2ffe27d9a9
--- /dev/null
+++ b/2023/day04/README.md
@@ -0,0 +1,48 @@
+# Day 4 Task: Basic Linux Shell Scripting for DevOps Engineers.
+
+## What is Kernel
+
+The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system.
+
+## What is Shell
+
+A shell is a special user program that provides an interface for the user to use operating system services. The shell accepts human-readable commands from the user and converts them into something the kernel can understand. It is a command language interpreter that executes commands read from input devices such as keyboards or from files. The shell gets started when the user logs in or starts the terminal.
+
+## What is Linux Shell Scripting?
+
+A shell script is a computer program designed to be run by a Linux shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text.
+
+**Tasks** (a sketch follows the list)
+
+- Explain in your own words and examples, what is Shell Scripting for DevOps.
+- What is `#!/bin/bash?` can we write `#!/bin/sh` as well?
+- Write a Shell Script which prints `I will complete #90DaysOfDevOps challenge`
+- Write a Shell Script to take user input, input from arguments and print the variables.
+- Write an Example of If else in Shell Scripting by comparing 2 numbers
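+
+A minimal sketch of what these scripts could look like (assuming bash; the values compared are illustrative):
+
+```
+#!/bin/bash
+echo "I will complete #90DaysOfDevOps challenge"
+
+read -p "Enter your name: " name         # input from the user
+echo "Hello $name, first argument: $1"   # input from arguments
+
+a=10; b=20                               # two numbers to compare
+if [ "$a" -gt "$b" ]; then
+    echo "$a is greater than $b"
+else
+    echo "$a is not greater than $b"
+fi
+```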
+
+Was it difficult?
+
+- Post about it on LinkedIn and let me know :)
+
+Article Reference: [Click here to read basic Linux Shell Scripting](https://devopscube.com/linux-shell-scripting-for-devops/)
+
+YouTube Video: [EASIEST Shell Scripting Tutorial for DevOps Engineers](https://www.youtube.com/watch?v=_-D6gkRj7xc&list=PLlfy9GnSVerQr-Se9JRE_tZJk3OUoHCkh&index=3)
+
+[← Previous Day](../day03/README.md) | [Next Day →](../day05/README.md)
diff --git a/2023/day05/README.md b/2023/day05/README.md
new file mode 100644
index 0000000000..d894468fd3
--- /dev/null
+++ b/2023/day05/README.md
@@ -0,0 +1,65 @@
+# Day 5 Task: Advanced Linux Shell Scripting for DevOps Engineers with User management
+
+You may have noticed that there are a total of 90 sub-directories in the directory '2023' of this repository. How do you think I created them? Manually one by one, or using a script or a command?
+
+All 90 directories were created within seconds using a simple command:
+
+` mkdir day{1..90}`
+
+### Tasks
+
+1. You have to do the same using a Shell Script, i.e. using either loops or a command, with start-day and end-day variables passed as arguments -
+
+Write a bash script createDirectories.sh so that, when the script is executed with three given arguments (the directory name, the start number of directories, and the end number of directories), it creates the specified number of directories with a dynamic directory name (a sketch follows below).
+
+Example 1: When the script is executed as
+
+`./createDirectories.sh day 1 90`
+
+then it creates 90 directories as `day1 day2 day3 .... day90`
+
+Example 2: When the script is executed as
+
+`./createDirectories.sh Movie 20 50`
+then it creates 31 directories as `Movie20 Movie21 Movie22 ... Movie50`
+
+Notes:
+You may need to use loops or commands (or both), based on your preference. [Check out this reference: https://www.geeksforgeeks.org/bash-scripting-for-loop/](https://www.geeksforgeeks.org/bash-scripting-for-loop/)
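+
+One possible `createDirectories.sh`, as a sketch (assuming bash and the argument order described above):
+
+```
+#!/bin/bash
+# usage: ./createDirectories.sh <name> <start> <end>, e.g. ./createDirectories.sh day 1 90
+# $1 = directory name, $2 = start number, $3 = end number
+for (( i=$2; i<=$3; i++ ))
+do
+    mkdir "$1$i"
+done
+```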
+
+2. Create a Script to backup all your work done till now.
+
+Backups are an important part of a DevOps Engineer's day-to-day activities.
+The video in References will help you understand how a DevOps Engineer takes backups (it can feel a bit difficult, but keep trying; nothing is impossible.)
+Watch [this video](https://youtu.be/aolKiws4Joc)
+
+In case of doubts, post it in the [Discord Channel for #90DaysOfDevOps](https://discord.gg/hs3Pmc5F)
+
+3. Read about Cron and Crontab, to automate the backup script.
+
+Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit or delete entries to cron. A crontab file is a user file that holds the scheduling information.
+
+Watch this video as a reference for Tasks 2 and 3: [https://youtu.be/aolKiws4Joc](https://youtu.be/aolKiws4Joc)
+
+4. Read about User Management and let me know on LinkedIn if you're ready for Day 6.
+
+A user is an entity, in a Linux operating system, that can manipulate files and perform several other operations. Each user is assigned an ID that is unique within the operating system. After installation of the operating system, the ID 0 is assigned to the root user and the IDs 1 to 999 (both inclusive) are assigned to the system users; hence the IDs for local users begin from 1000 onwards.
+
+5. Create 2 users and just display their usernames.
+
+[Check out this reference: https://www.geeksforgeeks.org/user-management-in-linux/](https://www.geeksforgeeks.org/user-management-in-linux/)
+
+Post your daily work on LinkedIn and let [me](https://www.linkedin.com/in/shubhamlondhe1996/) know; writing an article is the best :)
+
+[← Previous Day](../day04/README.md) | [Next Day →](../day06/README.md)
diff --git a/2023/day06/README.md b/2023/day06/README.md
new file mode 100644
index 0000000000..76c9d09ab3
--- /dev/null
+++ b/2023/day06/README.md
@@ -0,0 +1,31 @@
+# Day 6 Task: File Permissions and Access Control Lists
+
+### Today is more about reading, learning, and implementing file permissions
+
+The concept of Linux file permissions and ownership is important in Linux.
+Here, we will be working on Linux permissions and ownership and will do tasks on
+both of them.
+Let us start with the permissions.
+
+1. Create a simple file and do `ls -ltr` to see the details of the files [refer to Notes](https://github.com/LondheShubham153/90DaysOfDevOps/tree/master/2023/day06/notes)
+
+Each of the three permissions is assigned to three defined categories of users. The categories are:
+
+- owner — The owner of the file or application.
+- "chown" is used to change the ownership of a file or directory.
+- group — The group that owns the file or application.
+- "chgrp" is used to change the group of a file or directory.
+- others — All users with access to the system (users outside the file's group).
+- "chmod" is used to change the permissions of a file or directory for other users.
+
+  As a task, change the user permissions of the file and note the changes after `ls -ltr`
+
+2. Write an article about File Permissions based on your understanding from the notes.
+
+3. Read about ACL and try out the commands `getfacl` and `setfacl`
+
+In case of any doubts, post it on the [Discord Community](https://discord.gg/hs3Pmc5F)
+
+Happy Learning
+
+[← Previous Day](../day05/README.md) | [Next Day →](../day07/README.md)
diff --git a/2023/day06/notes/Linux_Basic_&_FilePermissions.docx b/2023/day06/notes/Linux_Basic_&_FilePermissions.docx
new file mode 100644
index 0000000000..ad1c207882
Binary files /dev/null and b/2023/day06/notes/Linux_Basic_&_FilePermissions.docx differ
diff --git a/2023/day07/README.md b/2023/day07/README.md
new file mode 100644
index 0000000000..d942492d95
--- /dev/null
+++ b/2023/day07/README.md
@@ -0,0 +1,52 @@
+# Day 7 Task: Understanding package manager and systemctl
+
+### What is a package manager in Linux?
+
+In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure and manage software packages on an operating system. The package manager can be a graphical application like a software center or a command line tool like apt-get or pacman.
+
+You'll often find me using the term 'package' in tutorials and articles. To understand package managers, you must understand what a package is.
+
+### What is a package?
+
+A package usually refers to an application, but it could be a GUI application, command line tool or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration files and sometimes information about the dependencies.
+
+### Different kinds of package managers
+
+Package managers differ based on the packaging system, but the same packaging system may have more than one package manager.
+
+For example, RPM has the Yum and DNF package managers. For DEB, you have apt-get and aptitude as command line package managers.
+
+## Tasks
+
+1. You have to install docker and jenkins on your system from your terminal using package managers
+
+2. Write a small blog or article about installing these tools using package managers on Ubuntu and CentOS
+
+### systemctl and systemd
+
+systemctl is used to examine and control the state of the "systemd" system and service manager. systemd is a system and service manager for Unix-like operating systems (most of the distributions, not all).
+
+## Tasks
+
+1. Check the status of the docker service on your system (make sure you completed the above tasks, else docker won't be installed)
+
+2. Stop the jenkins service and post before and after screenshots
+
+3. Read about the commands systemctl vs service (see the sketch below)
+
+eg. `systemctl status docker` vs `service docker status`
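+
+A quick sketch of the comparison (assuming a systemd-based distro with Docker and Jenkins already installed):
+
+```
+systemctl status docker      # native systemd status check
+service docker status        # older SysV-style wrapper; on systemd distros it redirects to systemctl
+sudo systemctl stop jenkins  # stop the Jenkins service
+sudo systemctl start jenkins # start it again
+```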
+
+For Reference, read [this](https://www.howtogeek.com/devops/how-to-check-if-the-docker-daemon-or-a-container-is-running/#:~:text=Checking%20With%20Systemctl&text=Check%20what%27s%20displayed%20under%20%E2%80%9CActive,running%20sudo%20systemctl%20start%20docker%20.)
+
+#### Post about this and bring your friends to this #90DaysOfDevOps challenge.
+
+[← Previous Day](../day06/README.md) | [Next Day →](../day08/README.md)
diff --git a/2023/day08/README.md b/2023/day08/README.md
new file mode 100644
index 0000000000..cbaed0f8b3
--- /dev/null
+++ b/2023/day08/README.md
@@ -0,0 +1,62 @@
+# Day 8 Task: Basic Git & GitHub for DevOps Engineers.
+
+## What is Git?
+
+Git is a version control system that allows you to track changes to files and coordinate work on those files among multiple people. It is commonly used for software development, but it can be used to track changes to any set of files.
+
+With Git, you can keep a record of who made changes to what part of a file, and you can revert back to earlier versions of the file if needed. Git also makes it easy to collaborate with others, as you can share changes and merge the changes made by different people into a single version of a file.
+
+## What is GitHub?
+
+GitHub is a web-based platform that provides hosting for version control using Git. It is a subsidiary of Microsoft, and it offers all of the distributed version control and source code management (SCM) functionality of Git as well as adding its own features. GitHub is a very popular platform for developers to share and collaborate on projects, and it is also used for hosting open-source projects.
+
+## What is Version Control? How many types of version control do we have?
+
+Version control is a system that tracks changes to a file or set of files over time so that you can recall specific versions later. It allows you to revert files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more.
+
+There are two main types of version control systems: centralized version control systems and distributed version control systems.
+
+1. A centralized version control system (CVCS) uses a central server to store all the versions of a project's files. Developers "check out" files from the central server, make changes, and then "check in" the updated files. Examples of CVCS include Subversion and Perforce.
+
+2. A distributed version control system (DVCS) allows developers to "clone" an entire repository, including the entire version history of the project. This means that they have a complete local copy of the repository, including all branches and past versions. Developers can work independently and then later merge their changes back into the main repository. Examples of DVCS include Git, Mercurial, and Darcs.
+
+## Why do we use distributed version control over centralized version control?
+
+1. Better collaboration: In a DVCS, every developer has a full copy of the repository, including the entire history of all changes. This makes it easier for developers to work together, as they don't have to constantly communicate with a central server to commit their changes or to see the changes made by others.
+
+2. Improved speed: Because developers have a local copy of the repository, they can commit their changes and perform other version control actions faster, as they don't have to communicate with a central server.
+
+3. Greater flexibility: With a DVCS, developers can work offline and commit their changes later when they do have an internet connection. They can also choose to share their changes with only a subset of the team, rather than pushing all of their changes to a central server.
+
+4. Enhanced security: In a DVCS, the repository history is stored on multiple servers and computers, which makes it more resistant to data loss. If the central server in a CVCS goes down or the repository becomes corrupted, it can be difficult to recover the lost data.
+
+Overall, the decentralized nature of a DVCS allows for greater collaboration, flexibility, and security, making it a popular choice for many teams.
+
+## Task:
+
+- Install Git on your computer (if it is not already installed). You can download it from the official website at https://git-scm.com/downloads
+- Create a free account on GitHub (if you don't already have one). You can sign up at https://github.com/
+- Learn the basics of Git by watching the [video](https://youtu.be/AT1uxOLsCdk). This will give you an understanding of what Git is, how it works, and how to use it to track changes to files.
+
+## Exercises (a sketch of the commands follows the list):
+
+1. Create a new repository on GitHub and clone it to your local machine
+2. Make some changes to a file in the repository and commit them to the repository using Git
+3. Push the changes back to the repository on GitHub
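+
+A sketch of the Git commands behind these exercises (the repository URL and file name are placeholders):
+
+```
+git clone https://github.com/<your-username>/<your-repo>.git
+cd <your-repo>
+echo "hello" >> demo.txt        # make a change
+git add demo.txt
+git commit -m "Update demo.txt"
+git push origin main            # push back to GitHub (the branch may be master)
+```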
+
+Ref: https://youtu.be/AT1uxOLsCdk
+
+Post your daily work on LinkedIn and let me know; writing an article is the best :)
+
+[← Previous Day](../day07/README.md) | [Next Day →](../day09/README.md)
diff --git a/2023/day09/README.md b/2023/day09/README.md
new file mode 100644
index 0000000000..fd9e178d58
--- /dev/null
+++ b/2023/day09/README.md
@@ -0,0 +1,28 @@
+# Day 9 Task: Deep Dive in Git & GitHub for DevOps Engineers.
+
+## Answer the questions below in your own words (don't copy directly from the internet; use hand-made diagrams) and write a blog on them.
+
+1. What is Git and why is it important?
+2. What is the difference between the main branch and the master branch?
+3. Can you explain the difference between Git and GitHub?
+4. How do you create a new repository on GitHub?
+5. What is the difference between a local & a remote repository? How to connect local to remote?
+
+## Tasks
+
+task-1:
+
+- Set your user name and email address, which will be associated with your commits.
+
+task-2:
+
+- Create a repository named "Devops" on GitHub
+- Connect your local repository to the repository on GitHub.
+- Create a new file in Devops/Git/Day-02.txt & add some content to it
+- Push your local commits to the repository on GitHub
+
+Ref: https://youtu.be/AT1uxOLsCdk
+
+Note: These steps assume that you have already installed Git on your computer and have created a GitHub account. If you need help with these prerequisites, you can refer to [day-08](https://github.com/LondheShubham153/90DaysOfDevOps/blob/ee7c53f276edb02a85a97282027028295be17c04/2023/day08/README.md)
+
+[← Previous Day](../day08/README.md) | [Next Day →](../day10/README.md)
diff --git a/2023/day10/README.md b/2023/day10/README.md
new file mode 100644
index 0000000000..71250e5259
--- /dev/null
+++ b/2023/day10/README.md
@@ -0,0 +1,81 @@
+# Day 10 Task: Advance Git & GitHub for DevOps Engineers.
+
+## Git Branching
+
+Use a branch to isolate development work without affecting other branches in the repository. Each repository has one default branch, and can have multiple other branches. You can merge a branch into another branch using a pull request.
+
+Branches allow you to develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository.
+
+## Git Revert and Reset
+
+Two commonly used tools that Git users will encounter are git reset and git revert. The benefit of both of these commands is that you can use them to remove or edit changes you've made in the code in previous commits.
+
+## Git Rebase and Merge
+
+### What Is Git Rebase?
+
+Git rebase is a command that lets users integrate changes from one branch to another, and the logs are modified once the action is complete. Git rebase was developed to overcome merging's shortcomings, specifically regarding logs.
+
+### What Is Git Merge?
+
+Git merge is a command that allows developers to merge Git branches while the logs of commits on branches remain intact.
+
+The merge wording can be confusing because we have two methods of merging branches, and one of those ways is actually called "merge," even though both procedures do essentially the same thing.
+
+Refer to this article for a better understanding of Git Rebase and Merge [Read here](https://www.simplilearn.com/git-rebase-vs-merge-article)
+
+## Task 1:
+
+Add a text file called version01.txt inside Devops/Git/ with "This is first feature of our application" written inside.
+This should be in a branch coming from `master`
+[hint: try `git checkout -b dev`],
+switch to the `dev` branch (make sure your commit message reflects "Added new feature").
+[Hint: use your knowledge of creating branches and the Git commit command]
+
+- version01.txt should reflect at the local repo first, followed by the remote repo for review.
+  [Hint: use your knowledge of the git push and git pull commands here]
+
+Add a new commit in the `dev` branch after adding the below mentioned content in Devops/Git/version01.txt:
+While writing the file make sure you write these lines
+
+- 1st line>> This is the bug fix in development branch
+- Commit this with message "Added feature2 in development branch"
+
+- 2nd line>> This is gadbad code
+- Commit this with message "Added feature3 in development branch"
+
+- 3rd line>> This feature will gadbad everything from now.
+- Commit with message "Added feature4 in development branch"
+
+Restore the file to a previous version where the content should be "This is the bug fix in development branch"
+[Hint: use git revert or reset according to your knowledge]
+
+## Task 2 (a sketch of the flow follows the list):
+
+- Demonstrate the concept of branches with 2 or more branches with screenshots.
+- Add some changes to the `dev` branch and merge that branch into `master`
+- As a practice, try git rebase too, and see what difference you get.
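+
+A sketch of the overall flow for these tasks (assuming the Devops/Git folder from Task 1 already exists; commit messages shortened):
+
+```
+git checkout -b dev                    # branch off master
+echo "This is first feature of our application" > Devops/Git/version01.txt
+git add Devops/Git/version01.txt
+git commit -m "Added new feature"
+git push origin dev                    # local repo first, then remote
+git revert HEAD                        # undo the latest commit with a new commit
+git checkout master && git merge dev   # merge dev back into master
+```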
+
+## Note:
+
+We should learn and follow the [best practices](https://www.flagship.io/git-branching-strategies/) that the industry follows for branching.
+
+Simple reference on branching: [video](https://youtu.be/NzjK9beT_CY)
+
+Advanced reference on branching: [video](https://youtu.be/7xhkEQS3dXw)
+
+You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
+
+[← Previous Day](../day09/README.md) | [Next Day →](../day11/README.md)
diff --git a/2023/day10/tasks.md b/2023/day10/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day11/README.md b/2023/day11/README.md
new file mode 100644
index 0000000000..08249b0568
--- /dev/null
+++ b/2023/day11/README.md
@@ -0,0 +1,66 @@
+# Day 11 Task: Advance Git & GitHub for DevOps Engineers: Part-2
+
+## Git Stash:
+
+Git stash is a command that allows you to temporarily save changes you have made in your working directory, without committing them. This is useful when you need to switch to a different branch to work on something else, but you don't want to commit the changes you've made in your current branch yet.
+
+To use Git stash, you first create a new branch and make some changes to it. Then you can use the command git stash to save those changes. This will remove the changes from your working directory and record them in a new stash. You can apply these changes later. The git stash list command shows the list of stashed changes.
+
+You can also use git stash drop to delete a stash and git stash clear to delete all the stashes.
+
+## Cherry-pick:
+
+Git cherry-pick is a command that allows you to select specific commits from one branch and apply them to another. This can be useful when you want to selectively apply changes that were made in one branch to another.
+
+To use git cherry-pick, you first create two new branches and make some commits to them. Then you use the git cherry-pick command to select the specific commits from one branch and apply them to the other.
+
+## Resolving Conflicts:
+
+Conflicts can occur when you merge or rebase branches that have diverged, and you need to manually resolve the conflicts before git can proceed with the merge/rebase.
+The git status command shows the files that have conflicts, the git diff command shows the difference between the conflicting versions, and the git add command is used to add the resolved files.
+
+# Task-01
+
+- Create a new branch and make some changes to it.
+- Use git stash to save the changes without committing them.
+- Switch to a different branch, make some changes and commit them.
+- Use git stash pop to bring the changes back and apply them on top of the new commits.
+
+# Task-02
+
+- In version01.txt of the development branch, add the below lines after "This is the bug fix in development branch" that you added in Day 10 and reverted to this commit.
+- Line2>> After bug fixing, this is the new feature with minor alteration"
+
+  Commit this with message "Added feature2.1 in development branch"
+
+- Line3>> This is the advancement of previous feature
+
+  Commit this with message "Added feature2.2 in development branch"
+
+- Line4>> Feature 2 is completed and ready for release
+
+  Commit this with message "Feature2 completed"
+
+- All these commit messages should be reflected in the Production branch too, which will come out from the Master branch (Hint: try rebase).
+
+# Task-03 (a sketch of the mechanics follows the list)
+
+- In the Production branch, cherry-pick the commit "Added feature2.2 in development branch" and add the below lines to it:
+- Line to be added after Line3>> This is the advancement of previous feature
+- Line4>> Added few more changes to make it more optimized.
+- Commit: Optimized the feature
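+
+A sketch of the stash and cherry-pick mechanics used above (the commit hash is a placeholder):
+
+```
+git stash                       # shelve uncommitted changes
+git stash list                  # inspect the stash stack
+git stash pop                   # re-apply the latest stash on top of new commits
+git cherry-pick <commit-hash>   # copy one commit onto the current branch
+```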
+
+## Reference [video](https://youtu.be/apGV9Kg7ics)
+
+You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
+
+[← Previous Day](../day10/README.md) | [Next Day →](../day12/README.md)
diff --git a/2023/day11/tasks.md b/2023/day11/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day12/README.md b/2023/day12/README.md
new file mode 100644
index 0000000000..456bfe2feb
--- /dev/null
+++ b/2023/day12/README.md
@@ -0,0 +1,17 @@
+## Finally!! 🎉
+
+You have completed the Linux & Git-GitHub hands-on and I hope you have learned something interesting from it.🙌
+
+Now why not make an interesting 😉 assignment, which will help not only you in the future but also the DevOps community!
+
+Let's make a well-articulated and documented **"cheat-sheet"** with all the commands you learned so far in Linux and Git-GitHub, with brief info about their usage.
+
+Let's see your knowledge mixed with your creativity😎
+
+_I have added a [cheatsheet](https://education.github.com/git-cheat-sheet-education.pdf) for your reference. Make sure every cheatsheet is UNIQUE_
+
+Post it on LinkedIn and spread the knowledge.😃
+
+**Happy Learning :)**
+
+[← Previous Day](../day11/README.md) | [Next Day →](../day13/README.md)
diff --git a/2023/day12/tasks.md b/2023/day12/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day13/README.md b/2023/day13/README.md
new file mode 100644
index 0000000000..f366710009
--- /dev/null
+++ b/2023/day13/README.md
@@ -0,0 +1,29 @@
+Hello friends 😎
+
+Let's start with the basics of Python, as this is also important for a DevOps Engineer to build logic and programs.
+
+**What is Python?**
+
+- Python is an open-source, general-purpose, high-level, object-oriented programming language.
+- It was created by **Guido van Rossum**
+- Python has vast libraries and various frameworks like Django, Tensorflow, Flask, Pandas, Keras, etc.
+
+**How to Install Python?**
+
+You can install Python on your system, whether it is Windows, macOS, Ubuntu, CentOS, etc. Below are the links for installation:
+
+- [Windows Installation](https://www.python.org/downloads/)
+- Ubuntu: apt-get install python3.6
+
+Task1:
+
+1. Install Python in your respective OS, and check the version.
+2. Read about different Data Types in Python.
+
+You can get the complete playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM)🙌
+
+Don't forget to share your journey on LinkedIn. Let the community know that you have started another chapter of your journey.
+
+Happy Learning, keep crushing it😃
+
+[← Previous Day](../day12/README.md) | [Next Day →](../day14/README.md)
diff --git a/2023/day13/tasks.md b/2023/day13/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day14/README.md b/2023/day14/README.md
new file mode 100644
index 0000000000..88dbb3a46c
--- /dev/null
+++ b/2023/day14/README.md
@@ -0,0 +1,60 @@
+## Day 14 Task: Python Data Types and Data Structures for DevOps
+
+### New day, New Topic.... Let's learn along 😉
+
+### Data Types
+
+- Data types are the classification or categorization of data items. They represent the kind of value, which tells what operations can be performed on a particular piece of data.
+- Since everything is an object in Python programming, data types are actually classes and variables are instances (objects) of these classes.
+- Python has the following data types built-in by default: Numeric (integer, complex, float), Sequential (string, lists, tuples), Boolean, Set, Dictionaries, etc.
+
+To check the data type of a variable, we can simply write:
+`your_variable=100`
+`type(your_variable)`
+
+### Data Structures
+
+Data Structures are a way of organizing data so that it can be accessed more efficiently depending upon the situation. Data Structures are the fundamentals of any programming language, around which a program is built. Python helps to learn the fundamentals of these data structures in a simpler way as compared to other programming languages.
+
+- Lists
+  Python Lists are just like the arrays declared in other languages, an ordered collection of data. They are very flexible, as the items in a list do not need to be of the same type.
+
+- Tuple
+  A Python Tuple is a collection of Python objects much like a list, but Tuples are immutable in nature, i.e. the elements in the tuple cannot be added or removed once created. Just like a List, a Tuple can also contain elements of various types.
+
+- Dictionary
+  A Python dictionary is like the hash tables in other languages, with a time complexity of O(1). It is an unordered collection of data values, used to store data values like a map. Unlike other data types that hold only a single value as an element, a Dictionary holds key:value pairs, which makes it more optimized.
+
+## Tasks
+
+1. Give the differences between List, Tuple and Set. Do a hands-on and put screenshots as per your understanding.
+2. Create the below Dictionary and use Dictionary methods to print your favourite tool just by using the keys of the Dictionary.
+
+```
+fav_tools = {
+  1:"Linux",
+  2:"Git",
+  3:"Docker",
+  4:"Kubernetes",
+  5:"Terraform",
+  6:"Ansible",
+  7:"Chef"
+}
+```
+
+3. Create a List of cloud service providers
+   eg.
+
+```
+cloud_providers = ["AWS","GCP","Azure"]
+```
+
+Write a program to add `Digital Ocean` to the list of cloud_providers and sort the list in alphabetical order.
+
+[Hint: Use the built-in functions for Lists]
+
+If you want to deep dive further, watch [Python](https://youtu.be/abPgj_3hzVY)
+
+You can share the learning with everyone on LinkedIn and tag us along 😃
+
+[← Previous Day](../day13/README.md) | [Next Day →](../day15/README.md)
diff --git a/2023/day14/tasks.md b/2023/day14/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day15/README.md b/2023/day15/README.md
new file mode 100644
index 0000000000..decf2b5ed9
--- /dev/null
+++ b/2023/day15/README.md
@@ -0,0 +1,29 @@
+## Day 15 Task: Python Libraries for DevOps
+
+### Reading JSON and YAML in Python
+
+- As a DevOps Engineer you should be able to parse files, be it txt, json, yaml, etc.
+- You should know which libraries one should use in Python for DevOps.
+- Python has numerous libraries like `os`, `sys`, `json`, `yaml`, etc. that a DevOps Engineer uses in day-to-day tasks.
+
+## Tasks
+
+1. Create a Dictionary in Python and write it to a json File.
+
+2. Read a json file `services.json` kept in this folder and print the service names of every cloud service provider.
+
+```
+output
+
+aws : ec2
+azure : VM
+gcp : compute engine
+
+```
+
+3. Read a YAML file using Python, file `services.yaml`, and read the contents to convert the YAML to JSON
+
+Python project for your practice:
+https://youtube.com/playlist?list=PLlfy9GnSVerSzFmQ8JqP9v0XHHOAeWbjo
+
+[← Previous Day](../day14/README.md) | [Next Day →](../day16/README.md)
diff --git a/2023/day15/parser.py b/2023/day15/parser.py
new file mode 100644
index 0000000000..93d41bbb4a
--- /dev/null
+++ b/2023/day15/parser.py
@@ -0,0 +1,21 @@
+import json
+import yaml
+
+json_file = "services.json"
+yaml_file = "services.yaml"
+
+# Parse the JSON file into a Python dict
+with open(json_file, 'r', encoding='utf-8') as f:
+    json_data = json.loads(f.read())
+
+print("JSON:\n", json_data)
+
+# Parse the YAML file safely into a Python dict
+with open(yaml_file, "r") as stream:
+    try:
+        yaml_data = yaml.safe_load(stream)
+    except yaml.YAMLError as exc:
+        print(exc)
+
+print("YAML:\n", yaml_data)
\ No newline at end of file
diff --git a/2023/day15/services.json b/2023/day15/services.json
new file mode 100644
index 0000000000..1bc8a04f89
--- /dev/null
+++ b/2023/day15/services.json
@@ -0,0 +1,23 @@
+{
+    "services": {
+      "debug": "on",
+      "aws": {
+        "name": "EC2",
+        "type": "pay per hour",
+        "instances": 500,
+        "count": 500
+      },
+      "azure": {
+        "name": "VM",
+        "type": "pay per hour",
+        "instances": 500,
+        "count": 500
+      },
+      "gcp": {
+        "name": "Compute Engine",
+        "type": "pay per hour",
+        "instances": 500,
+        "count": 500
+      }
+    }
+  }
\ No newline at end of file
diff --git a/2023/day15/services.yaml b/2023/day15/services.yaml
new file mode 100644
index 0000000000..0b367bc23e
--- /dev/null
+++ b/2023/day15/services.yaml
@@ -0,0 +1,18 @@
+---
+services:
+  debug: 'on'
+  aws:
+    name: EC2
+    type: pay per hour
+    instances: 500
+    count: 500
+  azure:
+    name: VM
+    type: pay per hour
+    instances: 500
+    count: 500
+  gcp:
+    name: Compute Engine
+    type: pay per hour
+    instances: 500
+    count: 500
diff --git a/2023/day15/tasks.md b/2023/day15/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day16/README.md b/2023/day16/README.md
new file mode 100644
index 0000000000..981c2dc916
--- /dev/null
+++ b/2023/day16/README.md
@@ -0,0 +1,44 @@
+## Day 16 Task: Docker for DevOps Engineers.
+
+### Docker
+
+Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
+
+# Tasks
+
+As you have already installed docker in the previous days' tasks, now is the time to run Docker commands.
+
+- Use the `docker run` command to start a new container and interact with it through the command line. [Hint: docker run hello-world]
+
+- Use the `docker inspect` command to view detailed information about a container or image.
+
+- Use the `docker port` command to list the port mappings for a container.
+
+- Use the `docker stats` command to view resource usage statistics for one or more containers.
+
+- Use the `docker top` command to view the processes running inside a container.
+
+- Use the `docker save` command to save an image to a tar archive.
+
+- Use the `docker load` command to load an image from a tar archive.
+
+These tasks involve simple operations that can be used to manage images and containers, as shown in the sketch below.
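+
+A sketch of those commands in sequence (container and archive names are illustrative):
+
+```
+docker run hello-world                 # start a new container
+docker inspect hello-world             # detailed info about an image or container
+docker port <container-id>             # list port mappings
+docker stats                           # live resource usage statistics
+docker top <container-id>              # processes running inside a container
+docker save -o hello.tar hello-world   # save an image to a tar archive
+docker load -i hello.tar               # load an image from a tar archive
+```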
+
+For reference you can watch this video:
+https://youtu.be/Tevxhn6Odc8
+
+You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
+
+[← Previous Day](../day15/README.md) | [Next Day →](../day17/README.md)
diff --git a/2023/day16/tasks.md b/2023/day16/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day17/README.md b/2023/day17/README.md
new file mode 100644
index 0000000000..430ddb1154
--- /dev/null
+++ b/2023/day17/README.md
@@ -0,0 +1,41 @@
+## Day 17 Task: Docker Project for DevOps Engineers.
+
+### You are all doing just amazing in **#90daysofdevops**. Today's challenge is extra special, because you are going to do a DevOps project with Docker. Are you excited? 😍
+
+# Dockerfile
+
+Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile.
+
+A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts.
+
+For more about Dockerfile visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes)
+
+Task (a sketch of the commands follows the list):
+
+- Create a Dockerfile for a simple web application (e.g. a Node.js or Python app)
+
+- Build the image using the Dockerfile and run the container
+
+- Verify that the application is working as expected by accessing it in a web browser
+
+- Push the image to a public or private repository (e.g. Docker Hub)
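+
+A sketch of the build-run-push cycle (the image name, port, and Docker Hub user are placeholders):
+
+```
+docker build -t <dockerhub-user>/simple-web-app .    # build from the Dockerfile
+docker run -d -p 8000:8000 <dockerhub-user>/simple-web-app
+curl http://localhost:8000                           # verify the app responds
+docker login
+docker push <dockerhub-user>/simple-web-app          # publish the image
+```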
+
+For a reference project visit [here](https://youtu.be/Tevxhn6Odc8)
+
+If you want to dive further, watch the [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u)
+
+You can share the learning with everyone on LinkedIn and tag us along 😃
+
+Happy Learning:)
+
+[← Previous Day](../day16/README.md) | [Next Day →](../day18/README.md)
diff --git a/2023/day17/tasks.md b/2023/day17/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day18/README.md b/2023/day18/README.md
new file mode 100644
index 0000000000..b57f22cf0b
--- /dev/null
+++ b/2023/day18/README.md
@@ -0,0 +1,54 @@
+# Day 18 Task: Docker for DevOps Engineers
+
+Till now you have created a Dockerfile and pushed it to the repository. Let's move forward and dig deeper into other Docker concepts.
+Let's study Docker Compose a bit today 😃
+
+## Docker Compose
+
+- Docker Compose is a tool that was developed to help define and share multi-container applications.
+- With Compose, we can create a YAML file to define the services, and with a single command we can spin everything up or tear it all down.
+- Learn more about docker-compose [visit here](https://tecadmin.net/tutorial/docker/docker-compose/)
+
+## What is YAML?
+
+- YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for yet another markup language or YAML ain't markup language (a recursive acronym), which emphasizes that YAML is for data, not documents.
+- YAML is popular because it is human-readable and easy to understand.
+- YAML files use a .yml or .yaml extension.
+- Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml)
+
+## Task-1
+
+Learn how to use the docker-compose.yml file, to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file.
+
+[Sample docker-compose.yaml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml)
+
+## Task-2 (a sketch of the lifecycle follows below)
+
+- Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint: use the `usermod` command to give the user permission to docker). Make sure you reboot the instance after giving permission to the user.
+- Inspect the container's running processes and exposed ports using the docker inspect command.
+- Use the docker logs command to view the container's log output.
+- Use the docker stop and docker start commands to stop and start the container.
+- Use the docker rm command to remove the container when you're done.
+
+## How to run Docker commands without sudo?
+
+- Make sure docker is installed and the system is updated (this has already been completed as a part of previous tasks):
+- sudo usermod -a -G docker $USER
+- Reboot the machine.
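+
+A sketch of Task-2's container lifecycle (nginx is used purely as an example image; after the usermod and reboot above, sudo should not be needed):
+
+```
+docker pull nginx
+docker run -d --name web nginx
+docker inspect web            # running processes, ports, config
+docker logs web               # log output
+docker stop web && docker start web
+docker rm -f web              # clean up when done
+```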
+
+For reference you can watch this [video](https://youtu.be/Tevxhn6Odc8)
+
+You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
+
+[← Previous Day](../day17/README.md) | [Next Day →](../day19/README.md)
diff --git a/2023/day18/docker-compose.yaml b/2023/day18/docker-compose.yaml
new file mode 100644
index 0000000000..b11a5f4a43
--- /dev/null
+++ b/2023/day18/docker-compose.yaml
@@ -0,0 +1,12 @@
+version : "3.3"
+services:
+  web:
+    image: nginx:latest
+    ports:
+      - "80:80"
+  db:
+    image: mysql
+    ports:
+      - "3306:3306"
+    environment:
+      - "MYSQL_ROOT_PASSWORD=test@123"
diff --git a/2023/day18/tasks.md b/2023/day18/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day19/README.md b/2023/day19/README.md
new file mode 100644
index 0000000000..6ad763f8e1
--- /dev/null
+++ b/2023/day19/README.md
@@ -0,0 +1,50 @@
+# Day 19 Task: Docker for DevOps Engineers
+
+**Till now you have learned how to create a docker-compose.yml file and push it to the repository. Let's move forward and dig deeper into other docker-compose.yml concepts.**
+**Let's study Docker Volume & Docker Network a bit today** 😃
+
+# Docker Volume
+
+Docker allows you to create something called volumes. Volumes are like separate storage areas that can be accessed by containers. They allow you to store data, like a database, outside the container, so it doesn't get deleted when the container is deleted.
+You can also mount the same volume when creating more containers, so they share the same data.
+[reference](https://docs.docker.com/storage/volumes/)
+
+# Docker Network
+
+Docker allows you to create virtual spaces called networks, where you can connect multiple containers (small packages that hold all the necessary files for a specific application to run) together. This way, the containers can communicate with each other and with the host machine (the computer on which Docker is installed).
+When we run a container, it has its own storage space that is only accessible by that specific container. If we want to share that storage space with other containers, we can't do that by default.
+[reference](https://docs.docker.com/network/)
+
+## Task-1
+
+- Create a multi-container docker-compose file which will bring _UP_ and bring _DOWN_ containers in a single shot (Example - Create an application and a database container)
+
+_hints:_
+
+- Use the `docker-compose up` command with the `-d` flag to start a multi-container application in detached mode.
+- Use the `docker-compose scale` command to increase or decrease the number of replicas for a specific service. You can also add [`replicas`](https://stackoverflow.com/questions/63408708/how-to-scale-from-within-docker-compose-file) in the deployment file for _auto-scaling_.
+- Use the `docker-compose ps` command to view the status of all containers, and `docker-compose logs` to view the logs of a specific service.
+- Use the `docker-compose down` command to stop and remove all containers, networks, and volumes associated with the application
+
+## Task-2 (a sketch of the experiment follows the list)
+
+- Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers.
+- Create two or more containers that read and write data to the same volume using the `docker run --mount` command.
+- Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.
+- Use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you're done.
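+
+A sketch of the shared-volume experiment in Task-2 (volume and container names are illustrative):
+
+```
+docker volume create shared_data
+docker run -d --name app1 --mount source=shared_data,target=/data nginx
+docker run -d --name app2 --mount source=shared_data,target=/data nginx
+docker exec app1 sh -c 'echo hello > /data/test.txt'
+docker exec app2 cat /data/test.txt   # the same data is visible in both
+docker volume ls                      # list volumes; docker volume rm removes one
+```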
+
+## You can use this task as a _Project_ to add to your resume.
+
+You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
+
+[← Previous Day](../day18/README.md) | [Next Day →](../day20/README.md)
diff --git a/2023/day19/sample_project_deployment.yaml b/2023/day19/sample_project_deployment.yaml
new file mode 100644
index 0000000000..821be80f2b
--- /dev/null
+++ b/2023/day19/sample_project_deployment.yaml
@@ -0,0 +1,20 @@
+version : "3.3"
+services:
+  web:
+    image: varsha0108/local_django:latest
+    deploy:
+      replicas: 2
+    ports:
+      - "8001-8005:8001"
+    volumes:
+      - my_django_volume:/app
+  db:
+    image: mysql
+    ports:
+      - "3306:3306"
+    environment:
+      - "MYSQL_ROOT_PASSWORD=test@123"
+volumes:
+  my_django_volume:
+    external: true
+
diff --git a/2023/day19/tasks.md b/2023/day19/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day2/tasks.md b/2023/day2/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day20/README.md b/2023/day20/README.md
new file mode 100644
index 0000000000..e9c4b59ba9
--- /dev/null
+++ b/2023/day20/README.md
@@ -0,0 +1,16 @@
+## Finally!! 🎉
+
+You have completed✅ the Docker hands-on and I hope you have learned something interesting from it.🙌
+
+Now it's time to take your Docker skills to the next level by creating a comprehensive cheat-sheet of all the commands you've learned so far. This cheat-sheet should include commands for both Docker and Docker Compose, as well as brief explanations of their usage.
+This cheat-sheet will not only help you in the future but also contribute to the DevOps community by providing a useful resource for others.😊🙌
+
+So, put your knowledge and creativity to the test and create a cheat-sheet that truly stands out! 🚀
+
+_I have added a [cheatsheet](https://cdn.hashnode.com/res/hashnode/image/upload/v1670863735841/r6xdXpsap.png?auto=compress,format&format=webp) for your reference. Make sure every cheatsheet is UNIQUE_
+
+Post it on LinkedIn and spread the knowledge.😃
+
+**Happy Learning :)**
+
+[← Previous Day](../day19/README.md) | [Next Day →](../day21/README.md)
diff --git a/2023/day20/tasks.md b/2023/day20/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day21/README.md b/2023/day21/README.md
new file mode 100644
index 0000000000..efb2cfc646
--- /dev/null
+++ b/2023/day21/README.md
@@ -0,0 +1,40 @@
+## Day 21 Task: Docker Important interview Questions.
+
+## Docker Interview
+
+Docker is a good topic to ask about in DevOps Engineer interviews, mostly for freshers.
+One must surely try these questions in order to get better at Docker.
+
+## Questions
+
+- What is the difference between an Image, a Container and an Engine?
+- What is the difference between the Docker commands COPY vs ADD?
+- What is the difference between the Docker commands CMD vs RUN?
+- How will you reduce the size of a Docker image?
+- Why and when should you use Docker?
+- Explain the Docker components and how they interact with each other.
+- Explain the terminology: Docker Compose, Dockerfile, Docker Image, Docker Container?
+- In what real scenarios have you used Docker?
+- Docker vs Hypervisor?
+- What are the advantages and disadvantages of using Docker?
+- What is a Docker namespace?
+- What is a Docker registry?
+- What is an entry point?
+- How to implement CI/CD in Docker?
+- Will data on the container be lost when the docker container exits?
+- What is a Docker swarm?
+- What are the docker commands for the following:
+  - view running containers
+  - command to run the container under a specific name
+  - command to export a docker image
+  - command to import an already existing docker image
+  - commands to delete a container
+  - command to remove all stopped containers, unused networks, build caches, and dangling images?
+- What are the common docker practices to reduce the size of a Docker image?
+
+These questions will help you in your next DevOps interview.
+_Write a blog and share it on LinkedIn._
+
+**Happy Learning :)**
+
+[← Previous Day](../day20/README.md) | [Next Day →](../day22/README.md)
diff --git a/2023/day21/tasks.md b/2023/day21/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day22/README.md b/2023/day22/README.md
new file mode 100644
index 0000000000..5117b18ce1
--- /dev/null
+++ b/2023/day22/README.md
@@ -0,0 +1,30 @@
+# Day-22 : Getting Started with Jenkins 😃
+
+**Linux, Git-GitHub, and Docker are done, so let's learn the CI/CD tool used to deploy them:**
+
+## What is Jenkins?
+
+- Jenkins is an open-source continuous integration-continuous delivery and deployment (CI/CD) automation software DevOps tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.
+
+- Jenkins is a tool that is used for automation; it is an open-source server that allows developers to build, test and deploy software. It runs on Java, as it is written in Java. By using Jenkins we can make a continuous integration of projects (jobs), or end-to-end automation.
+
+- Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for that tool. For example: Git, Maven 2 project, Amazon EC2, HTML publisher, etc.
For example: Git, the Maven 2 project, Amazon EC2, HTML Publisher, etc.

**Let us discuss why this tool is needed before going ahead to the installation steps:**

- Nowadays we expect automation everywhere: even with digital screens and one-click buttons in front of us, we still don't want to babysit processes.

- Here, I'm referring to the kind of automation where we don't have to watch a process (here called a job) until it completes before starting the next one. For that, we have Jenkins.

Note: By now Jenkins should be installed on your machine (it was part of the previous tasks; if not, follow the [Installation Guide](https://youtu.be/OkVtBKqMt7I))

## Tasks:

**1. Write a small article in your own words about what you understood of Jenkins (don't copy directly from the internet).**

**2. Create a freestyle pipeline to print "Hello World!!"**
Hint: Use this [Article](https://www.geeksforgeeks.org/what-is-jenkins) as a reference.

Don't forget to post your progress on LinkedIn. Till then, happy learning :)

[← Previous Day](../day21/README.md) | [Next Day →](../day23/README.md)
diff --git a/2023/day22/tasks.md b/2023/day22/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day23/README.md b/2023/day23/README.md new file mode 100644 index 0000000000..1fc2135053 --- /dev/null +++ b/2023/day23/README.md @@ -0,0 +1,40 @@

# Day 23 Task: Jenkins Freestyle Project for DevOps Engineers.

The Community is absolutely crushing it in the #90daysofdevops journey. Today's challenge is particularly exciting as it entails creating a Jenkins Freestyle Project, an opportunity for DevOps engineers to showcase their skills and push their limits. Who's ready to dive in and make it happen? 😍

## What is CI/CD?

- CI or Continuous Integration is the practice of automating the integration of code changes from multiple developers into a single codebase. It is a software development practice where the developers commit their work frequently into the central code repository (GitHub or Stash). Automated tools then build the newly committed code and do a code review, etc. as required upon integration.
  The key goals of Continuous Integration are to find and address bugs quicker, make integrating code across a team of developers easier, improve software quality and reduce the time it takes to release new feature updates.

- CD or Continuous Delivery is carried out after Continuous Integration to make sure that we can release new changes to our customers quickly in an error-free way. This includes running integration and regression tests in the staging area (similar to the production environment) so that the final release is not broken in production. It automates the release process so that we have a release-ready product at all times and can deploy our application at any point in time.

## What Is a Build Job?

A Jenkins build job contains the configuration for automating a specific task or step in the application building process. These tasks include gathering dependencies, compiling, archiving, or transforming code, and testing and deploying code in different environments.

Jenkins supports several types of build jobs, such as freestyle projects, pipelines, multi-configuration projects, folders, multibranch pipelines, and organization folders.

## What is a Freestyle Project? 🤔
A freestyle project in Jenkins is a type of project that lets you build, test, and deploy software using a variety of options and configurations. Here are a few tasks that you could complete when working with a freestyle project in Jenkins:

# Task-01

- Create an agent for your app (the one you deployed with Docker in an earlier task).
- Create a new Jenkins freestyle project for your app.
- In the "Build" section of the project, add a build step to run the "docker build" command to build the image for the container.
- Add a second step to run the "docker run" command to start a container using the image created in the previous step.

# Task-02

- Create a Jenkins project to run the "docker-compose up -d" command to start the multiple containers defined in the compose file (Hint: use the Day-19 application & database docker-compose file).
- Set up a cleanup step in the Jenkins project to run the "docker-compose down" command to stop and remove the containers defined in the compose file.

For reference on Jenkins freestyle projects, visit [here](https://youtu.be/wwNWgG5htxs)

You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps Challenge.

Happy Learning:)

[← Previous Day](../day22/README.md) | [Next Day →](../day24/README.md)
diff --git a/2023/day23/tasks.md b/2023/day23/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day24/README.md b/2023/day24/README.md new file mode 100644 index 0000000000..b611db8316 --- /dev/null +++ b/2023/day24/README.md @@ -0,0 +1,29 @@

# Day 24 Task: Complete Jenkins CI/CD Project

Let's make a beautiful CI/CD pipeline for your Node.js application 😍

## Did you finish Day 23?

- Day 23 was all about Jenkins CI/CD; make sure you have done it and understood the concepts, as today you will be doing one project end to end and adding it to your resume :)
- As you have worked with Docker and Docker Compose, it will be good to use them in a live project.

# Task-01

- Fork [this](https://github.com/LondheShubham153/node-todo-cicd.git) repository.
- Create a connection between your Jenkins job and your GitHub repository via GitHub Integration.
- Read about [GitHub WebHooks](https://betterprogramming.pub/how-too-add-github-webhook-to-a-jenkins-pipeline-62b0be84e006) and make sure you have a CI/CD setup.
- Refer to [this](https://youtu.be/nplH3BzKHPk) video for the entire project.

# Task-02

- In the Execute shell step, run the application using Docker Compose.
- You will have to make a Docker Compose file for this project (can be a good open-source contribution).
- Run the project and give yourself a treat :)

For reference and the entire hands-on project, visit [here](https://youtu.be/nplH3BzKHPk)

You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps Challenge.

Happy Learning:)

[← Previous Day](../day23/README.md) | [Next Day →](../day25/README.md)
diff --git a/2023/day24/tasks.md b/2023/day24/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day25/README.md b/2023/day25/README.md new file mode 100644 index 0000000000..dabbc9b07e --- /dev/null +++ b/2023/day25/README.md @@ -0,0 +1,31 @@

# Day 25 Task: Complete Jenkins CI/CD Project - Continued with Documentation

I can imagine catching up will be tough, so take a small breather today: complete the Jenkins CI/CD project from Day 24 and add documentation.

## Did you finish Day 24?
- Day 24 will give you an end-to-end project, and adding it to your resume will be a cherry on top.

- Take more time, finish the project, add documentation, add it to your resume, and post about it today.

# Task-01

- Document the process from cloning the repository to adding webhooks, deployment, etc. as a README; go through [this example](https://github.com/LondheShubham153/fynd-my-movie/blob/master/README.md)

- A well-written README file will help others understand your project, and you will understand how to use the project again without any problems.

# Task-02

- It's also important to keep smaller goals; as this is a small task, think of a small goal you can accomplish.

- Write about it using [this template](https://www.linkedin.com/posts/shubhamlondhe1996_taking-resolutions-and-having-goals-for-an-activity-7023858409762373632-s2J8?utm_source=share&utm_medium=member_desktop)

- Have small goals and strategies to achieve them, and also have a small reward for yourself.

For reference and the entire hands-on project, visit [here](https://youtu.be/nplH3BzKHPk)

You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps Challenge.

Happy Learning:)

[← Previous Day](../day24/README.md) | [Next Day →](../day26/README.md)
diff --git a/2023/day25/tasks.md b/2023/day25/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day26/README.md b/2023/day26/README.md new file mode 100644 index 0000000000..b0d65accb6 --- /dev/null +++ b/2023/day26/README.md @@ -0,0 +1,59 @@

# Day 26 Task: Jenkins Declarative Pipeline

One of the most important parts of your DevOps and CI/CD journey is the Declarative Pipeline syntax of Jenkins.

## Some terms for your Knowledge

**What is Pipeline -** A pipeline is a collection of steps or jobs interlinked in a sequence.

**Declarative:** Declarative is a more recent and advanced implementation of a pipeline as code.

**Scripted:** Scripted was the first and most traditional implementation of the pipeline as code in Jenkins. It was designed as a general-purpose DSL (Domain Specific Language) built with Groovy.

# Why you should have a Pipeline

The definition of a Jenkins Pipeline is written into a text file (called a [`Jenkinsfile`](https://www.jenkins.io/doc/book/pipeline/jenkinsfile)) which in turn can be committed to a project's source control repository.
This is the foundation of "Pipeline-as-code": treating the CD pipeline as part of the application, to be versioned and reviewed like any other code.

**Creating a `Jenkinsfile` and committing it to source control provides a number of immediate benefits:**

- Automatically creates a Pipeline build process for all branches and pull requests.
- Code review/iteration on the Pipeline (along with the remaining source code).

# Pipeline syntax

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                //
            }
        }
        stage('Test') {
            steps {
                //
            }
        }
        stage('Deploy') {
            steps {
                //
            }
        }
    }
}
```

# Task-01

- Create a New Job; this time select Pipeline instead of Freestyle Project.
- Follow the official Jenkins [Hello world example](https://www.jenkins.io/doc/pipeline/tour/hello-world/)
- Complete the example using the Declarative pipeline
- In case of any issues, feel free to post in any of the groups: [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)

You can post your progress on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps Challenge.

Happy Learning:)

[← Previous Day](../day25/README.md) | [Next Day →](../day27/README.md)
diff --git a/2023/day26/tasks.md b/2023/day26/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day27/README.md b/2023/day27/README.md new file mode 100644 index 0000000000..277a2db069 --- /dev/null +++ b/2023/day27/README.md @@ -0,0 +1,43 @@

# Day 27 Task: Jenkins Declarative Pipeline with Docker

Day 26 was all about the Declarative pipeline; now it's time to level things up. Let's integrate Docker with your Jenkins declarative pipeline.

## Use your Docker Build and Run Knowledge

**docker build -** you can use `sh 'docker build . -t <image-name>'` in your pipeline stage block to run the docker build command. (Make sure you have Docker installed with the correct permissions.)

**docker run:** you can use `sh 'docker run -d <image-name>'` in your pipeline stage block to start the container.

**How will the stages look**

```groovy
stages {
    stage('Build') {
        steps {
            // '.' is the build context; without it, docker build fails
            sh 'docker build . -t trainwithshubham/django-app:latest'
        }
    }
}
```

# Task-01

- Create a docker-integrated Jenkins declarative pipeline
- Use the above-given syntax with `sh` inside the stage block
- You will face errors if you run the job twice, as the docker container will already have been created; Task-02 addresses that.

# Task-02

- Create a docker-integrated Jenkins declarative pipeline using the `docker` Groovy syntax inside the stage block.
- This way you won't face those errors; you can follow [this documentation](https://tempora-mutantur.github.io/jenkins.io/github_pages_test/doc/book/pipeline/docker/)

- Complete your previous projects using this declarative pipeline approach

- In case of any issues, feel free to post in any of the groups: [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)

Are you enjoying the #90DaysOfDevOps Challenge?
Let me know how you are feeling after 4 weeks of DevOps learning.

Happy Learning:)

[← Previous Day](../day26/README.md) | [Next Day →](../day28/README.md)
diff --git a/2023/day27/tasks.md b/2023/day27/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day28/README.md b/2023/day28/README.md new file mode 100644 index 0000000000..1c388c0d38 --- /dev/null +++ b/2023/day28/README.md @@ -0,0 +1,49 @@

# Day 28 Task: Jenkins Agents

# Jenkins Master (Server)

Jenkins's server or master node holds all key configurations. The Jenkins master server is like a control server that orchestrates all the workflow defined in the pipelines, for example scheduling a job, monitoring the jobs, etc.

# Jenkins Agent

An agent is typically a machine or container that connects to the Jenkins master, and it is the agent that actually executes all the steps mentioned in a job. When you create a Jenkins job, you have to assign an agent to it. Every agent has a label as a unique identifier.

When you trigger a Jenkins job from the master, the actual execution happens on the agent node that is configured in the job.
A single, monolithic Jenkins installation can work great for a small team with a relatively small number of projects. As your needs grow, however, it often becomes necessary to scale up. Jenkins provides a way to do this called "master to agent connection." Instead of serving the Jenkins UI and running build jobs all on a single system, you can provide Jenkins with agents to handle the execution of jobs while the master serves the Jenkins UI and acts as a control node.
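To connect this to the tasks below: once an agent is registered under a label, a declarative pipeline selects it with an `agent { label ... }` block. A minimal sketch (the label `dev-agent` is an example name, not something pre-configured for you):

```groovy
pipeline {
    // Run the whole pipeline on any agent registered with the label 'dev-agent'
    agent { label 'dev-agent' }
    stages {
        stage('Build') {
            steps {
                // This step executes on the labelled agent, not on the master
                sh 'echo "Running on agent: $(hostname)"'
            }
        }
    }
}
```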
## Pre-requisites

Let's say we're starting with a fresh Ubuntu 22.04 Linux installation. To get an agent working, make sure you install Java (the same version as on the Jenkins master server) and Docker on it.

`Note: While creating an agent, be sure to set up separate rights, permissions, and ownership for the jenkins user.`

# Task-01

- Create an agent by setting up a node on Jenkins

- Create a new AWS EC2 instance and connect it to the master (where Jenkins is installed)

- The connection of master and agent requires SSH and a public-private key pair exchange.
- Verify its status under the "Nodes" section.

- You can follow [this article](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7017885886461698048-os5f?utm_source=share&utm_medium=member_android) for the same

# Task-02

- Run your previous jobs (which you built on Day 26 and Day 27) on the new agent

- Use labels for the agent; your master server should trigger builds on the agent server.

- In case of any issues, feel free to post in any of the groups: [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)

Are you enjoying the #90DaysOfDevOps Challenge?

Let me know how you are feeling after 4 weeks of DevOps learning.

Happy Learning:)

[← Previous Day](../day27/README.md) | [Next Day →](../day29/README.md)
diff --git a/2023/day28/tasks.md b/2023/day28/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day29/README.md b/2023/day29/README.md new file mode 100644 index 0000000000..6563b7637e --- /dev/null +++ b/2023/day29/README.md @@ -0,0 +1,33 @@

## Day 29 Task: Jenkins Important interview Questions.
## Jenkins Interview

Here are some Jenkins-specific questions that one can expect during a DevOps Engineer interview:

## Questions

1. What's the difference between continuous integration, continuous delivery, and continuous deployment?
2. What are the benefits of CI/CD?
3. What is meant by CI/CD?
4. What is a Jenkins Pipeline?
5. How do you configure a job in Jenkins?
6. Where do you find errors in Jenkins?
7. In Jenkins, how can you find log files?
8. Explain the Jenkins workflow and write a script for it.
9. How do you create continuous deployment in Jenkins?
10. How do you build a job in Jenkins?
11. Why do we use pipelines in Jenkins?
12. Is Jenkins alone enough for automation?
13. How will you handle secrets?
14. Explain the different stages in a CI/CD setup.
15. Name some of the plugins in Jenkins.

These questions will help you in your next DevOps Interview.
Write a Blog and share it on LinkedIn.

_Happy Learning :)_

[← Previous Day](../day28/README.md) | [Next Day →](../day30/README.md)
diff --git a/2023/day29/tasks.md b/2023/day29/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day3/tasks.md b/2023/day3/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day30/README.md b/2023/day30/README.md new file mode 100644 index 0000000000..af4d37aa2f --- /dev/null +++ b/2023/day30/README.md @@ -0,0 +1,29 @@

## Day 30 Task: Kubernetes Architecture
+ +## Kubernetes Overview + +With the widespread adoption of [containers](https://cloud.google.com/containers) among organizations, Kubernetes, the container-centric management software, has become a standard to deploy and operate containerized applications and is one of the most important parts of DevOps. + +Originally developed at Google and released as open-source in 2014. Kubernetes builds on 15 years of running Google's containerized workloads and the valuable contributions from the open-source community. Inspired by Google’s internal cluster management system, [Borg](https://research.google.com/pubs/pub43438.html), + +## Tasks + +1. What is Kubernetes? Write in your own words and why do we call it k8s? + +2. What are the benefits of using k8s? + +3. Explain the architecture of Kubernetes, refer to [this video](https://youtu.be/FqfoDUhzyDo) + +4. What is Control Plane? + +5. Write the difference between kubectl and kubelets. + +6. Explain the role of the API server. + +Kubernetes architecture is important, so make sure you spend a day understanding it. [This video](https://youtu.be/FqfoDUhzyDo) will surely help you. + +_Happy Learning :)_ + +[← Previous Day](../day29/README.md) | [Next Day →](../day31/README.md) diff --git a/2023/day30/tasks.md b/2023/day30/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day31/README.md b/2023/day31/README.md new file mode 100644 index 0000000000..5b2a6b79e5 --- /dev/null +++ b/2023/day31/README.md @@ -0,0 +1,65 @@ +## Day 31 Task: Launching your First Kubernetes Cluster with Nginx running + +### Awesome! You learned the architecture of one of the top most important tool "Kubernetes" in your previous task. + +## What about doing some hands-on now? + +Let's read about minikube and implement _k8s_ in our local machine + +1. **What is minikube?** + +_Ans_:- Minikube is a tool which quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It can deploy as a VM, a container, or on bare-metal. + +Minikube is a pared-down version of Kubernetes that gives you all the benefits of Kubernetes with a lot less effort. + +This makes it an interesting option for users who are new to containers, and also for projects in the world of edge computing and the Internet of Things. + +2. **Features of minikube** + +_Ans_ :- + +(a) Supports the latest Kubernetes release (+6 previous minor versions) + +(b) Cross-platform (Linux, macOS, Windows) + +(c) Deploy as a VM, a container, or on bare-metal + +(d) Multiple container runtimes (CRI-O, containerd, docker) + +(e) Direct API endpoint for blazing fast image load and build + +(f) Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy + +(g) Addons for easily installed Kubernetes applications + +(h) Supports common CI environments + +## Task-01: + +## Install minikube on your local + +For installation, you can Visit [this page](https://minikube.sigs.k8s.io/docs/start/). + +If you want to try an alternative way, you can check [this](https://k8s-docs.netlify.app/en/docs/tasks/tools/install-minikube/). + +## Let's understand the concept **pod** + +_Ans:-_ + +Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. + +A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. 
A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. + +You can read more about pod from [here](https://kubernetes.io/docs/concepts/workloads/pods/) . + +## Task-02: + +## Create your first pod on Kubernetes through minikube. + +We are suggesting you make an nginx pod, but you can always show your creativity and do it on your own. + +**Having an issue? Don't worry, adding a sample yaml file for pod creation, you can always refer that.** + +_Happy Learning :)_ + +[← Previous Day](../day30/README.md) | [Next Day →](../day32/README.md) diff --git a/2023/day31/pod.yml b/2023/day31/pod.yml new file mode 100644 index 0000000000..cfc02a372d --- /dev/null +++ b/2023/day31/pod.yml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + + +# After creating this file , run below command: +# kubectl apply -f diff --git a/2023/day31/tasks.md b/2023/day31/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day32/Deployment.yml b/2023/day32/Deployment.yml new file mode 100644 index 0000000000..8f3814196b --- /dev/null +++ b/2023/day32/Deployment.yml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: todo-app + labels: + app: todo +spec: + replicas: 2 + selector: + matchLabels: + app: todo + template: + metadata: + labels: + app: todo + spec: + containers: + - name: todo + image: rishikeshops/todo-app + ports: + - containerPort: 3000 diff --git a/2023/day32/README.md b/2023/day32/README.md new file mode 100644 index 0000000000..eb2ee9c304 --- /dev/null +++ b/2023/day32/README.md @@ -0,0 +1,27 @@ +## Day 32 Task: Launching your Kubernetes Cluster with Deployment + +### Congratulation ! on your learning on K8s on Day-31 + +## What is Deployment in k8s + +A Deployment provides a configuration for updates for Pods and ReplicaSets. + +You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new replicas for scaling, or to remove existing Deployments and adopt all their resources with new Deployments. + +## Today's task let's keep it very simple. + +## Task-1: + +**Create one Deployment file to deploy a sample todo-app on K8s using "Auto-healing" and "Auto-Scaling" feature** + +- add a deployment.yml file (sample is kept in the folder for your reference) +- apply the deployment to your k8s (minikube) cluster by command + `kubectl apply -f deployment.yml` + +Let's make your resume shine with one more project ;) + +**Having an issue? Don't worry, adding a sample deployment file , you can always refer that or wathch [this video](https://youtu.be/ONrbWFJXLLk)** + +Happy Learning :) + +[← Previous Day](../day31/README.md) | [Next Day →](../day33/README.md) diff --git a/2023/day32/tasks.md b/2023/day32/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day33/README.md b/2023/day33/README.md new file mode 100644 index 0000000000..984842c527 --- /dev/null +++ b/2023/day33/README.md @@ -0,0 +1,34 @@ +# Day 33 Task: Working with Namespaces and Services in Kubernetes + +### Congrats🎊🎉 on updating your Deployment yesterday💥🙌 + +## What are Namespaces and Services in k8s + +In Kubernetes, Namespaces are used to create isolated environments for resources. Each Namespace is like a separate cluster within the same physical cluster. 
Services are used to expose your Pods and Deployments to the network. Read more about Namespace [Here](https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/) + +# Today's task: + +## Task 1: + +- Create a Namespace for your Deployment + +- Use the command `kubectl create namespace ` to create a Namespace + +- Update the deployment.yml file to include the Namespace + +- Apply the updated deployment using the command: + `kubectl apply -f deployment.yml -n ` + +- Verify that the Namespace has been created by checking the status of the Namespaces in your cluster. + +## Task 2: + +- Read about Services, Load Balancing, and Networking in Kubernetes. Refer official documentation of kubernetes [Link](https://kubernetes.io/docs/concepts/services-networking/) + +Need help with Namespaces? Check out this [video](https://youtu.be/K3jNo4z5Jx8) for assistance. + +Keep growing your Kubernetes knowledge💥🙌 + +Happy Learning! :) + +[← Previous Day](../day32/README.md) | [Next Day →](../day34/README.md) diff --git a/2023/day33/tasks.md b/2023/day33/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day34/README.md b/2023/day34/README.md new file mode 100644 index 0000000000..9753f7ff1f --- /dev/null +++ b/2023/day34/README.md @@ -0,0 +1,36 @@ +# Day 34 Task: Working with Services in Kubernetes + +### Congratulation🎊 on your learning on Deployments in K8s on Day-33 + +## What are Services in K8s + +In Kubernetes, Services are objects that provide stable network identities to Pods and abstract away the details of Pod IP addresses. Services allow Pods to receive traffic from other Pods, Services, and external clients. + +## Task-1: + +- Create a Service for your todo-app Deployment from Day-32 +- Create a Service definition for your todo-app Deployment in a YAML file. +- Apply the Service definition to your K8s (minikube) cluster using the `kubectl apply -f service.yml -n ` command. +- Verify that the Service is working by accessing the todo-app using the Service's IP and Port in your Namespace. + +## Task-2: + +- Create a ClusterIP Service for accessing the todo-app from within the cluster +- Create a ClusterIP Service definition for your todo-app Deployment in a YAML file. +- Apply the ClusterIP Service definition to your K8s (minikube) cluster using the `kubectl apply -f cluster-ip-service.yml -n ` command. +- Verify that the ClusterIP Service is working by accessing the todo-app from another Pod in the cluster in your Namespace. + +## Task-3: + +- Create a LoadBalancer Service for accessing the todo-app from outside the cluster +- Create a LoadBalancer Service definition for your todo-app Deployment in a YAML file. +- Apply the LoadBalancer Service definition to your K8s (minikube) cluster using the `kubectl apply -f load-balancer-service.yml -n ` command. +- Verify that the LoadBalancer Service is working by accessing the todo-app from outside the cluster in your Namespace. + +Struggling with Services? Take a look at this video for a step-by-step [guide](https://youtu.be/OJths_RojFA). + +Need help with Services in Kubernetes? Check out the Kubernetes [documentation](https://kubernetes.io/docs/concepts/services-networking/service/) for assistance. 
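If you are unsure where to start with Task-1, here is a minimal `service.yml` sketch. It assumes the Day-32 Deployment's labels (`app: todo`) and container port 3000; adjust the selector and ports to whatever your Deployment actually uses:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: todo-service
spec:
  type: ClusterIP        # the default type; reachable only from inside the cluster
  selector:
    app: todo            # must match the Pod labels from your Deployment
  ports:
    - port: 80           # port the Service exposes
      targetPort: 3000   # containerPort of the todo-app Pods
```

Apply it with `kubectl apply -f service.yml -n <namespace>` and check it with `kubectl get svc -n <namespace>`.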
+ +Happy Learning :) + +[← Previous Day](../day33/README.md) | [Next Day →](../day35/README.md) diff --git a/2023/day34/tasks.md b/2023/day34/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day35/README.md b/2023/day35/README.md new file mode 100644 index 0000000000..160e0030b2 --- /dev/null +++ b/2023/day35/README.md @@ -0,0 +1,37 @@ +# Day 35: Mastering ConfigMaps and Secrets in Kubernetes🔒🔑🛡️ + +### 👏🎉 Yay! Yesterday we conquered Namespaces and Services 💪💻🔗🚀 + +## What are ConfigMaps and Secrets in k8s + +In Kubernetes, ConfigMaps and Secrets are used to store configuration data and secrets, respectively. ConfigMaps store configuration data as key-value pairs, while Secrets store sensitive data in an encrypted form. + +- _Example :- Imagine you're in charge of a big spaceship (Kubernetes cluster) with lots of different parts (containers) that need information to function properly. + ConfigMaps are like a file cabinet where you store all the information each part needs in simple, labeled folders (key-value pairs). + Secrets, on the other hand, are like a safe where you keep the important, sensitive information that shouldn't be accessible to just anyone (encrypted data). + So, using ConfigMaps and Secrets, you can ensure each part of your spaceship (Kubernetes cluster) has the information it needs to work properly and keep sensitive information secure! 🚀_ +- Read more about [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) & [Secret](https://kubernetes.io/docs/concepts/configuration/secret/). + +## Today's task: + +## Task 1: + +- Create a ConfigMap for your Deployment +- Create a ConfigMap for your Deployment using a file or the command line +- Update the deployment.yml file to include the ConfigMap +- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n ` +- Verify that the ConfigMap has been created by checking the status of the ConfigMaps in your Namespace. + +## Task 2: + +- Create a Secret for your Deployment +- Create a Secret for your Deployment using a file or the command line +- Update the deployment.yml file to include the Secret +- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n ` +- Verify that the Secret has been created by checking the status of the Secrets in your Namespace. + +Need help with ConfigMaps and Secrets? Check out this [video](https://youtu.be/FAnQTgr04mU) for assistance. 
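As a starting point for both tasks, the command-line route might look like this (the names `todo-config` and `todo-secret` and the key-value pairs are placeholders; pick your own):

```bash
# Create a ConfigMap from literal key-value pairs
kubectl create configmap todo-config \
  --from-literal=APP_ENV=production -n <namespace>

# Create a Secret; Kubernetes stores the value base64-encoded
kubectl create secret generic todo-secret \
  --from-literal=DB_PASSWORD='change-me' -n <namespace>

# Verify both objects exist in your Namespace
kubectl get configmaps,secrets -n <namespace>
```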
+ +Keep learning and expanding your knowledge of Kubernetes💥🙌 + +[← Previous Day](../day34/README.md) | [Next Day →](../day36/README.md) diff --git a/2023/day35/tasks.md b/2023/day35/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day36/Deployment.yml b/2023/day36/Deployment.yml new file mode 100644 index 0000000000..3c9c1c7cbc --- /dev/null +++ b/2023/day36/Deployment.yml @@ -0,0 +1,26 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: todo-app-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: todo-app + template: + metadata: + labels: + app: todo-app + spec: + containers: + - name: todo-app + image: rishikeshops/todo-app + ports: + - containerPort: 8000 + volumeMounts: + - name: todo-app-data + mountPath: /app + volumes: + - name: todo-app-data + persistentVolumeClaim: + claimName: pvc-todo-app diff --git a/2023/day36/README.md b/2023/day36/README.md new file mode 100644 index 0000000000..2079e66d65 --- /dev/null +++ b/2023/day36/README.md @@ -0,0 +1,51 @@ +# Day 36 Task: Managing Persistent Volumes in Your Deployment 💥 + +🙌 Kudos to you for conquering ConfigMaps and Secrets in Kubernetes yesterday. + +🔥 You're on fire! 🔥 + +## What are Persistent Volumes in k8s + +In Kubernetes, a Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. A Persistent Volume Claim (PVC) is a request for storage by a user. The PVC references the PV, and the PV is bound to a specific node. Read official documentation of [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). + +⏰ Wait, wait, wait! 📣 Attention all #90daysofDevOps Challengers. 💪 + +Before diving into today's task, don't forget to share your thoughts on the #90daysofDevOps challenge 💪 Fill out our feedback form (https://lnkd.in/gcgvrq8b) to help us improve and provide the best experience 🌟 Your participation and support is greatly appreciated 🙏 Let's continue to grow together 🌱 + +## Today's tasks: + +### Task 1: + +Add a Persistent Volume to your Deployment todo app. + +- Create a Persistent Volume using a file on your node. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pv.yml) + +- Create a Persistent Volume Claim that references the Persistent Volume. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pvc.yml) + +- Update your deployment.yml file to include the Persistent Volume Claim. After Applying pv.yml pvc.yml your deployment file look like this [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/Deployment.yml) + +- Apply the updated deployment using the command: `kubectl apply -f deployment.yml` + +- Verify that the Persistent Volume has been added to your Deployment by checking the status of the Pods and Persistent Volumes in your cluster. Use this commands `kubectl get pods` , + +`kubectl get pv` + +⚠️ Don't forget: To apply changes or create files in your Kubernetes deployments, each file must be applied separately. ⚠️ + +### Task 2: + +Accessing data in the Persistent Volume, + +- Connect to a Pod in your Deployment using command : `kubectl exec -it -- /bin/bash + +` + +- Verify that you can access the data stored in the Persistent Volume from within the Pod + +Need help with Persistent Volumes? Check out this [video](https://youtu.be/U0_N3v7vJys) for assistance. 
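For Task 2, one way to convince yourself the volume really persists is to write a file, delete the Pod, and read the file back from the replacement Pod. This sketch assumes the `/app` mount path from the sample Deployment:

```bash
# Write a marker file into the mounted volume
kubectl exec -it <pod-name> -- /bin/bash -c 'echo "hello from the PV" > /app/marker.txt'

# Delete the Pod; the Deployment controller will recreate it
kubectl delete pod <pod-name>

# Read the file back from the new Pod - it should survive the restart
kubectl exec -it <new-pod-name> -- cat /app/marker.txt
```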
Keep up the excellent work🙌💥

Happy Learning :)

[← Previous Day](../day35/README.md) | [Next Day →](../day37/README.md)
diff --git a/2023/day36/pv.yml b/2023/day36/pv.yml new file mode 100644 index 0000000000..9546aba56a --- /dev/null +++ b/2023/day36/pv.yml @@ -0,0 +1,12 @@

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-todo-app
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tmp/data"

diff --git a/2023/day36/pvc.yml b/2023/day36/pvc.yml new file mode 100644 index 0000000000..3d9dce14d8 --- /dev/null +++ b/2023/day36/pvc.yml @@ -0,0 +1,10 @@

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-todo-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

diff --git a/2023/day36/tasks.md b/2023/day36/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day37/README.md b/2023/day37/README.md new file mode 100644 index 0000000000..1300e335ae --- /dev/null +++ b/2023/day37/README.md @@ -0,0 +1,43 @@

## Day 37 Task: Kubernetes Important interview Questions.

## Questions

1. What is Kubernetes and why is it important?

2. What is the difference between Docker Swarm and Kubernetes?

3. How does Kubernetes handle network communication between containers?

4. How does Kubernetes handle scaling of applications?

5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?

6. Can you explain the concept of rolling updates in Kubernetes?

7. How does Kubernetes handle network security and access control?

8. Can you give an example of how Kubernetes can be used to deploy a highly available application?

9. What is a namespace in Kubernetes? Which namespace does a pod use if we don't specify one?

10. How does ingress help in Kubernetes?

11. Explain the different types of services in Kubernetes.

12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?

13. How does Kubernetes handle storage management for containers?

14. How does the NodePort service work?

15. What are a multi-node cluster and a single-node cluster in Kubernetes?

16. What is the difference between create and apply in Kubernetes?

## These questions will help you in your next DevOps Interview.

_Write a Blog and share it on LinkedIn._

**_Happy Learning :)_**

[← Previous Day](../day36/README.md) | [Next Day →](../day38/README.md)
diff --git a/2023/day37/tasks.md b/2023/day37/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day38/README.md b/2023/day38/README.md new file mode 100644 index 0000000000..8f51187e87 --- /dev/null +++ b/2023/day38/README.md @@ -0,0 +1,30 @@

# Day 38 Getting Started with AWS Basics☁

![AWS](https://user-images.githubusercontent.com/115981550/217238286-6c6bc6e7-a1ac-4d12-98f3-f95ff5bf53fc.png)

Congratulations!!!! You have come so far. Don't let your excuses break your consistency. Let's begin our new journey with Cloud☁. By this time you have created multiple EC2 instances; if not, let's begin the journey:

## AWS:

Amazon Web Services is one of the most popular cloud providers, and it has a free tier for students and cloud enthusiasts to get hands-on practice while learning (create your free account today to explore more).

Read from [here](https://aws.amazon.com/what-is-aws/)

## IAM:

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.
With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)

Get to know IAM more deeply [Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)

### Task1:

Create an IAM user with a username of your choice and grant it EC2 access. Launch a Linux instance through that IAM user, and install Jenkins and Docker on the machine via a single shell script.

### Task2:

In this task you need to prepare a DevOps team of Avengers. Create 3 IAM users of Avengers and assign them to a devops group with an IAM policy.

Post your progress on LinkedIn. Till then, Happy Learning :)

[← Previous Day](../day37/README.md) | [Next Day →](../day39/README.md)
diff --git a/2023/day38/tasks.md b/2023/day38/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day39/README.md b/2023/day39/README.md new file mode 100644 index 0000000000..9a7e3e934f --- /dev/null +++ b/2023/day39/README.md @@ -0,0 +1,41 @@

# Day 39 AWS and IAM Basics☁

![AWS](https://miro.medium.com/max/1400/0*dIzXLQn6aBClm1TJ.png)

By this time you have created multiple EC2 instances and, after launch, manually installed applications like Jenkins, Docker, etc.
Now let's switch to a little automation. Sounds interesting??🤯

## AWS:

Amazon Web Services is one of the most popular cloud providers, and it has a free tier for students and cloud enthusiasts to get hands-on practice while learning (create your free account today to explore more).

Read from [here](https://aws.amazon.com/what-is-aws/)

## User Data in AWS:

- When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
- You can also pass this data into the launch instance wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
- This will save time and manual effort every time you launch an instance and want to install an application on it, like Apache, Docker, Jenkins, etc.

Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html)

## IAM:

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)

Get to know IAM more deeply🏊[Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)

### Task1:

- Launch an EC2 instance with Jenkins pre-installed via user data. Once the server shows up in the console, hit the IP address in the browser and your Jenkins page should be visible.
- Take a screenshot of the user data and the Jenkins page; this will verify the task completion.

### Task2:

- Read more on IAM Roles and explain IAM Users, Groups and Roles in your own terms.
- Create three Roles named: DevOps-User, Test-User and Admin.

Post your progress on LinkedIn.
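Stuck on the user data part of Task1? Here is a minimal sketch for an Ubuntu instance. Treat it as an assumption-laden starting point: the package names and the Jenkins repository key URL can change, so double-check them against the current Jenkins installation docs:

```bash
#!/bin/bash
# Runs once as root on first boot when passed as EC2 user data
apt-get update -y
apt-get install -y openjdk-17-jre docker.io

# Add the Jenkins apt repository (verify the current key URL on jenkins.io)
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key \
  -o /usr/share/keyrings/jenkins-keyring.asc
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" \
  > /etc/apt/sources.list.d/jenkins.list

apt-get update -y
apt-get install -y jenkins
systemctl enable --now docker jenkins
```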
Till then, Happy Learning :)

[← Previous Day](../day38/README.md) | [Next Day →](../day40/README.md)
diff --git a/2023/day39/tasks.md b/2023/day39/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day4/tasks.md b/2023/day4/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day40/README.md b/2023/day40/README.md new file mode 100644 index 0000000000..ce2dbcfda3 --- /dev/null +++ b/2023/day40/README.md @@ -0,0 +1,49 @@

# Day 40 AWS EC2 Automation ☁

![AWS](https://www.eginnovations.com/blog/wp-content/uploads/2021/09/Amazon-AWS-Cloud-Topimage-1.jpg)

I hope your journey with AWS cloud and automation is going well 😍

## Automation in EC2:

Amazon EC2, or Amazon Elastic Compute Cloud, can give you secure, reliable, high-performance, and cost-effective computing infrastructure to meet demanding business needs.

Also, if you know a few things, you can automate many things.

Read from [here](https://aws.amazon.com/ec2/)

## Launch template in AWS EC2:

- You can make a launch template with the configuration information you need to start an instance. You can save launch parameters in launch templates so you don't have to type them in every time you start a new instance.
- For example, a launch template can have the AMI ID, instance type, and network settings that you usually use to launch instances.
- You can tell the Amazon EC2 console to use a certain launch template when you start an instance.

Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)

## Instance Types:

Amazon EC2 has a large number of instance types that are optimised for different uses. The different combinations of CPU, memory, storage and networking capacity in instance types give you the freedom to choose the right mix of resources for your apps. Each instance type comes with one or more instance sizes, so you can adjust your resources to meet the needs of the workload you want to run.

Read from [here](https://aws.amazon.com/ec2/instance-types/?trk=32f4fbd0-ffda-4695-a60c-8857fab7d0dd&sc_channel=ps&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types&ef_id=CjwKCAiA0JKfBhBIEiwAPhZXD_O1-3qZkRa-KScynbwjvHd3l4UHSTfKuigd5ZPukXoDXu-v3MtC7hoCafEQAvD_BwE:G:s&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types)

## AMI:

An Amazon Machine Image (AMI) is an image that AWS supports and keeps up to date. It contains the information needed to start an instance. When you launch an instance, you must choose an AMI. When you need multiple instances with the same configuration, you can launch them from a single AMI.

### Task1:

- Create a launch template with the Amazon Linux 2 AMI and the t2.micro instance type, with Jenkins and Docker set up (you can use the Day 39 user data script for installing the required tools).

- Create 3 instances using the launch template; there is an option that sets the number of instances to be launched, can you find it? :)

- You can go one step ahead and create an auto-scaling group. Sounds tough?

Check [this](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#create-launch-template-for-auto-scaling) out

Post your progress on LinkedIn.
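The console is enough for Task1, but for reference, the same flow from the AWS CLI might look like this sketch (the AMI ID and the `user-data.sh` file are placeholders you must supply yourself):

```bash
# Create a launch template that bakes in the AMI, instance type and user data
aws ec2 create-launch-template \
  --launch-template-name jenkins-docker-template \
  --launch-template-data "{
    \"ImageId\": \"ami-xxxxxxxxxxxx\",
    \"InstanceType\": \"t2.micro\",
    \"UserData\": \"$(base64 -w0 user-data.sh)\"
  }"

# Launch 3 instances from the template in one call
aws ec2 run-instances \
  --launch-template LaunchTemplateName=jenkins-docker-template,Version=1 \
  --count 3
```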
Happy Learning :)

[← Previous Day](../day39/README.md) | [Next Day →](../day41/README.md)
diff --git a/2023/day40/tasks.md b/2023/day40/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day41/README.md b/2023/day41/README.md new file mode 100644 index 0000000000..0a1488f068 --- /dev/null +++ b/2023/day41/README.md @@ -0,0 +1,53 @@

# Day 41: Setting up an Application Load Balancer with AWS EC2 🚀 ☁

![LB2](https://user-images.githubusercontent.com/115981550/218145297-d55fe812-32b7-4242-a4f8-eb66312caa2c.png)

### Hi, I hope you had a great day yesterday learning about the launch template and instances in EC2. Today, we are going to dive into one of the most important concepts in EC2: Load Balancing.

## What is Load Balancing?

Load balancing is the distribution of workloads across multiple servers to ensure consistent and optimal resource utilization. It is an essential aspect of any large-scale and scalable computing system, as it helps you improve the reliability and performance of your applications.

## Elastic Load Balancing:

**Elastic Load Balancing (ELB)** is a service provided by Amazon Web Services (AWS) that automatically distributes incoming traffic across multiple EC2 instances. ELB provides three types of load balancers:

Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)

1. **Application Load Balancer (ALB)** - _operates at layer 7 of the OSI model and is ideal for applications that require advanced routing and microservices._

- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)

2. **Network Load Balancer (NLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require high throughput and low latency._

- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)

3. **Classic Load Balancer (CLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require basic load balancing features._

- Read more [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)

## 🎯 Today's Tasks:

### Task 1:

- Launch 2 EC2 instances with an Ubuntu AMI and use user data to install the Apache web server.
- Modify the index.html file to include your name, so that when the Apache server is hosted it displays your name; on the second instance, make it display "TrainWithShubham Community is Super Awesome :)".
- Copy the public IP addresses of your EC2 instances.
- Open a web browser and paste a public IP address into the address bar.
- You should see the web page served by Apache with your custom content.

### Task 2:

- Create an Application Load Balancer (ALB) in EC2 using the AWS Management Console.
- Add the EC2 instances you launched in Task 1 to the ALB as a target group.
- Verify that the ALB is working properly by checking the health status of the target instances and testing the load-balancing capabilities.

![LoadBalancer](https://user-images.githubusercontent.com/115981550/218143557-26ec33ce-99a7-4db6-a46f-1cf48ed77ae0.png)

Need help with the task? Check out this [Blog for assistance](https://rushikesh-mashidkar.hashnode.dev/create-an-application-load-balancer-elastic-load-balancing-using-aws-ec2-instance).

Don't forget to share your progress on LinkedIn and have a great day🙌💥

Happy Learning!
😃 + +[← Previous Day](../day40/README.md) | [Next Day →](../day42/README.md) diff --git a/2023/day41/tasks.md b/2023/day41/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day42/README.md b/2023/day42/README.md new file mode 100644 index 0000000000..5f8a37ff09 --- /dev/null +++ b/2023/day42/README.md @@ -0,0 +1,28 @@ +# Day 42: IAM Programmatic access and AWS CLI 🚀 ☁ + +Today is more of a reading excercise and getting some programmatic access for your AWS account + +## IAM Programmatic access + +In order to access your AWS account from a terminal or system, you can use AWS Access keys and AWS Secret Access keys +Watch [this video](https://youtu.be/XYKqL5GFI-I) for more details. + +## AWS CLI + +The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. + +The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS IAM Identity Center (successor to AWS SSO), and various interactive features. + +## Task-01 + +- Create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from AWS Console. + +## Task-02 + +- Setup and install AWS CLI and configure your account credentials + +Let me know if you have any issues while doing the task. + +Happy Learning :) + +[← Previous Day](../day41/README.md) | [Next Day →](../day43/README.md) diff --git a/2023/day42/tasks.md b/2023/day42/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day43/README.md b/2023/day43/README.md new file mode 100644 index 0000000000..b838d01544 --- /dev/null +++ b/2023/day43/README.md @@ -0,0 +1,32 @@ +# Day 43: S3 Programmatic access with AWS-CLI 💻 📁 + +Hi, I hope you had a great day yesterday. Today as part of the #90DaysofDevOps Challenge we will be exploring most commonly used service in AWS i.e S3. + +![s3](https://user-images.githubusercontent.com/115981550/218308379-a2e841cf-6b77-4d02-bfbe-20d1bae09b20.png) + +# S3 + +Amazon Simple Storage Service (Amazon S3) is an object storage service that provides a secure and scalable way to store and access data on the cloud. It is designed for storing any kind of data, such as text files, images, videos, backups, and more. +Read more [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) + +## Task-01 + +- Launch an EC2 instance using the AWS Management Console and connect to it using Secure Shell (SSH). +- Create an S3 bucket and upload a file to it using the AWS Management Console. +- Access the file from the EC2 instance using the AWS Command Line Interface (AWS CLI). + +Read more about S3 using aws-cli [here](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) + +## Task-02 + +- Create a snapshot of the EC2 instance and use it to launch a new EC2 instance. +- Download a file from the S3 bucket using the AWS CLI. +- Verify that the contents of the file are the same on both EC2 instances. + +Added Some Useful commands to complete the task. 
[Click here for commands](https://github.com/LondheShubham153/90DaysOfDevOps/blob/833a67ac4ec17b992934cd6878875dccc4274f56/2023/day43/aws-cli.md) + +Let me know if you have any questions or face any issues while doing the tasks.🚀 + +Happy Learning :) + +[← Previous Day](../day42/README.md) | [Next Day →](../day44/README.md) diff --git a/2023/day43/aws-cli.md b/2023/day43/aws-cli.md new file mode 100644 index 0000000000..8c0f23fe2f --- /dev/null +++ b/2023/day43/aws-cli.md @@ -0,0 +1,21 @@ +Here are some commonly used AWS CLI commands for Amazon S3: + +`aws s3 ls` - This command lists all of the S3 buckets in your AWS account. + +`aws s3 mb s3://bucket-name` - This command creates a new S3 bucket with the specified name. + +`aws s3 rb s3://bucket-name` - This command deletes the specified S3 bucket. + +`aws s3 cp file.txt s3://bucket-name` - This command uploads a file to an S3 bucket. + +`aws s3 cp s3://bucket-name/file.txt .` - This command downloads a file from an S3 bucket to your local file system. + +`aws s3 sync local-folder s3://bucket-name` - This command syncs the contents of a local folder with an S3 bucket. + +`aws s3 ls s3://bucket-name` - This command lists the objects in an S3 bucket. + +`aws s3 rm s3://bucket-name/file.txt` - This command deletes an object from an S3 bucket. + +`aws s3 presign s3://bucket-name/file.txt` - This command generates a pre-signed URL for an S3 object, which can be used to grant temporary access to the object. + +`aws s3api list-buckets` - This command retrieves a list of all S3 buckets in your AWS account, using the S3 API. diff --git a/2023/day43/tasks.md b/2023/day43/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day44/README.md b/2023/day44/README.md new file mode 100644 index 0000000000..c836c86b29 --- /dev/null +++ b/2023/day44/README.md @@ -0,0 +1,23 @@ +# Day 44: Relational Database Service in AWS + +Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud + +## Task-01 + +- Create a Free tier RDS instance of MySQL +- Create an EC2 instance +- Create an IAM role with RDS access +- Assign the role to EC2 so that your EC2 Instance can connect with RDS +- Once the RDS instance is up and running, get the credentials and connect your EC2 instance using a MySQL client. + +Hint: + +You should install mysql client on EC2, and connect the Host and Port of RDS with this client. + +Post the screenshots once your EC2 instance can connect a MySQL server, that will be a small win for you. + +Watch [this video](https://youtu.be/MrA6Rk1Y82E) for reference. + +Happy Learning + +[← Previous Day](../day43/README.md) | [Next Day →](../day45/README.md) diff --git a/2023/day44/tasks.md b/2023/day44/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day45/README.md b/2023/day45/README.md new file mode 100644 index 0000000000..c2c11a93b2 --- /dev/null +++ b/2023/day45/README.md @@ -0,0 +1,18 @@ +# Day 45: Deploy Wordpress website on AWS + +Over 30% of all websites on the internet use WordPress as their content management system (CMS). It is most often used to run blogs, but it can also be used to run e-commerce sites, message boards, and many other popular things. This guide will show you how to set up a WordPress blog site. 
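Carrying over the Day 44 hint, it helps to first confirm your EC2 instance can reach the database at all before wiring up WordPress. A quick check might look like this (the endpoint and the `admin` user are placeholders from your own RDS instance):

```bash
# Install the MySQL client on the EC2 instance (Ubuntu)
sudo apt-get update && sudo apt-get install -y mysql-client

# Connect to the RDS instance; you will be prompted for the password
mysql -h <your-rds-endpoint>.rds.amazonaws.com -P 3306 -u admin -p
```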
## Task-01

- As WordPress requires a MySQL database to store its data, create an RDS instance as you did on Day 44

To configure this WordPress site, you will create the following resources in AWS:

- An Amazon EC2 instance to install and host the WordPress application.
- An Amazon RDS for MySQL database to store your WordPress data.
- Set up the server and publish your new WordPress app.

Read [this](https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/) for a detailed explanation
Happy Learning :)

[← Previous Day](../day44/README.md) | [Next Day →](../day46/README.md)
diff --git a/2023/day45/tasks.md b/2023/day45/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day46/README.md b/2023/day46/README.md new file mode 100644 index 0000000000..a44ae2f101 --- /dev/null +++ b/2023/day46/README.md @@ -0,0 +1,35 @@

# Day-46: Set up CloudWatch alarms and SNS topic in AWS

Hey learners, you have been using AWS services for at least the last 45 days. Have you ever wondered what happens if a service keeps charging you and you don't notice until all your pocket money is gone?

Hahahaha😁 Well! We, as a responsible community, always try to keep things under the free tier, but it's good to know how to set up something that will inform you whenever your bill touches a threshold.

## What is Amazon CloudWatch?

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.

Read more about CloudWatch in the official documentation [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)

## What is Amazon SNS?

Amazon Simple Notification Service is a notification service provided as part of Amazon Web Services since 2010. It provides a low-cost infrastructure for mass delivery of messages, predominantly to mobile users.

Read more about it [here](https://docs.aws.amazon.com/sns/latest/dg/welcome.html)

## Task:

- Create a CloudWatch alarm that monitors your billing and sends an email to you when it reaches $2.

(You can keep it for your future use)

- Delete the billing alarm that you just created.

(Now you know how to delete one as well.)

Need help with CloudWatch? Check out this [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) for assistance.

Keep growing your AWS knowledge💥🙌

Happy Learning! :)

[← Previous Day](../day45/README.md) | [Next Day →](../day47/README.md)
diff --git a/2023/day46/tasks.md b/2023/day46/tasks.md deleted file mode 100644 index e69de29bb2..0000000000
diff --git a/2023/day47/README.md b/2023/day47/README.md new file mode 100644 index 0000000000..7d3dc37e37 --- /dev/null +++ b/2023/day47/README.md @@ -0,0 +1,64 @@

# Day 47: AWS Elastic Beanstalk
Today, we explore a new AWS service: Elastic Beanstalk. We'll also cover deploying a small web application (a game) on this platform.

## What is AWS Elastic Beanstalk?
![image](https://github.com/Simbaa815/90DaysOfDevOps/assets/112085387/75f69087-d769-4586-b4a7-99a87feaec92)

- AWS Elastic Beanstalk is a service for deploying and scaling web applications.
- It supports multiple programming languages and runtime environments such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.
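For the 2048-game deployment task later in this page, the EB CLI route might look like this sketch (the application name `my-2048-game`, the environment name, the `docker` platform and the region are assumptions; pick what matches your setup):

```bash
# Install the Elastic Beanstalk CLI (one option; see the EB CLI docs for others)
pip install awsebcli

# Inside your application folder (e.g. a clone of the 2048-game repo):
eb init -p docker my-2048-game --region us-east-1  # register the app and platform
eb create my-2048-env                              # provision the environment
eb open                                            # open the deployed app in a browser
```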
Keep growing your AWS knowledge 💥🙌

Happy Learning! :)

[← Previous Day](../day45/README.md) | [Next Day →](../day47/README.md)

diff --git a/2023/day46/tasks.md b/2023/day46/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day47/README.md b/2023/day47/README.md
new file mode 100644
index 0000000000..7d3dc37e37
--- /dev/null
+++ b/2023/day47/README.md
@@ -0,0 +1,64 @@
# Day 47: AWS Elastic Beanstalk

Today, we explore a new AWS service: Elastic Beanstalk. We'll also cover deploying a small web application (a game) on this platform.

## What is AWS Elastic Beanstalk?
![image](https://github.com/Simbaa815/90DaysOfDevOps/assets/112085387/75f69087-d769-4586-b4a7-99a87feaec92)

- AWS Elastic Beanstalk is a service used to deploy and scale web applications developed by developers.
- It supports multiple programming languages and runtime environments such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.

## Why do we need AWS Elastic Beanstalk?
- Previously, developers faced challenges in sharing software modules across geographically separated teams.
- AWS Elastic Beanstalk solves this problem by providing a service to easily share applications across different devices.

## Advantages of AWS Elastic Beanstalk
- Highly scalable
- Fast and simple to get started with
- Quick deployment
- Supports multi-tenant architecture
- Simplifies operations
- Cost efficient

## Components of AWS Elastic Beanstalk
- Application Version: Represents a specific iteration or release of an application's codebase.
- Environment Tier: Defines the infrastructure resources allocated for an environment (e.g., web server environment, worker environment).
- Environment: Represents a collection of AWS resources running an application version.
- Configuration Template: Defines the settings for an environment, including instance types, scaling options, and more.

## Elastic Beanstalk Environment
There are two types of environments: web server and worker.

- Web server environments are front-end facing, accessed directly by clients using a URL.

- Worker environments support backend applications or micro apps.

## Task-01
Deploy the [2048-game](https://github.com/Simbaa815/2048-game) using AWS Elastic Beanstalk.

If you ever find yourself facing a challenge, feel free to refer to this helpful [blog](https://devxblog.hashnode.dev/aws-elastic-beanstalk-deploying-the-2048-game) post for guidance and support.

---

# Additional work

## Test your knowledge on AWS 💻 📈
Today, we will test your knowledge of AWS services as part of the 90 Days of DevOps Challenge.

## Task-01

- Launch an EC2 instance using the AWS Management Console and connect to it using SSH.
- Install a web server on the EC2 instance and deploy a simple web application.
- Monitor the EC2 instance using Amazon CloudWatch and troubleshoot any issues that arise.

## Task-02
- Create an Auto Scaling group using the AWS Management Console and configure it to launch EC2 instances in response to changes in demand.
- Use Amazon CloudWatch to monitor the performance of the Auto Scaling group and the EC2 instances and troubleshoot any issues that arise.
- Use the AWS CLI to view the state of the Auto Scaling group and the EC2 instances and verify that the correct number of instances are running (see the CLI sketch below).
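For that last item, commands along these lines can verify the group's state (the group name is a placeholder):

```
# Inspect the Auto Scaling group's desired/min/max capacity and instance list
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-demo-asg

# Cross-check the instances' states via the tag the ASG applies to them
aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=my-demo-asg" \
  --query "Reservations[].Instances[].[InstanceId,State.Name]" --output table
```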
We hope these tasks will give you hands-on experience with AWS services and help you understand how they work together. If you have any questions or face any issues while doing the tasks, please let us know.

Happy Learning :)

[← Previous Day](../day46/README.md) | [Next Day →](../day48/README.md)

diff --git a/2023/day47/tasks.md b/2023/day47/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day48/README.md b/2023/day48/README.md
new file mode 100644
index 0000000000..01836eac4e
--- /dev/null
+++ b/2023/day48/README.md
@@ -0,0 +1,40 @@
# Day-48 - ECS

Today will be a great learning day for sure. Many of you may not know the term "ECS". As you know, the 90 Days of DevOps Challenge is mostly about learning something new, so let's learn ;)

## What is ECS?

- ECS (Elastic Container Service) is a fully managed container orchestration service provided by Amazon Web Services (AWS). It allows you to run and manage Docker containers on a cluster of virtual machines (EC2 instances) without having to manage the underlying infrastructure.

With ECS, you can easily deploy, manage, and scale your containerized applications using the AWS Management Console, the AWS CLI, or the API. ECS supports both "Fargate" and "EC2" launch types, which means you can run your containers on AWS-managed infrastructure or on your own EC2 instances.

ECS also integrates with other AWS services, such as Elastic Load Balancing, Auto Scaling, and Amazon VPC, allowing you to build scalable and highly available applications. Additionally, ECS tooling supports Docker Compose files, making it easier to adopt existing container workflows (if you need Kubernetes itself, AWS offers EKS).

Overall, ECS is a powerful and flexible container orchestration service that can help simplify the deployment and management of containerized applications in AWS.

## Difference between EKS and ECS?

- EKS (Elastic Kubernetes Service) and ECS (Elastic Container Service) are both container orchestration platforms provided by Amazon Web Services (AWS). While both platforms allow you to run containerized applications in the AWS cloud, there are some differences between the two.

**Architecture**:
ECS is based on a centralized architecture, where a control plane manages the scheduling of containers on EC2 instances. EKS, on the other hand, is based on a distributed architecture, where the Kubernetes control plane is distributed across multiple EC2 instances.

**Kubernetes Support**:
EKS is a fully managed Kubernetes service, meaning that it supports Kubernetes natively and allows you to run your Kubernetes workloads on AWS without having to manage the Kubernetes control plane. ECS, on the other hand, has its own orchestration engine and does not support Kubernetes natively.

**Scaling**:
EKS is designed to automatically scale your Kubernetes cluster based on demand, whereas ECS requires you to configure scaling policies for your tasks and services.

**Flexibility**:
EKS provides more flexibility than ECS in terms of container orchestration, as it allows you to customize and configure Kubernetes to meet your specific requirements. ECS is more restrictive in terms of the options available for container orchestration.

**Community**:
Kubernetes has a large and active open-source community, which means that EKS benefits from a wide range of community-driven development and support. ECS has a smaller community and is largely driven by AWS itself.

In summary, EKS is a good choice if you want to use Kubernetes to manage your containerized workloads on AWS, while ECS is a good choice if you want a simpler, more managed platform for running your containerized applications.

# Task :

Set up ECS (Elastic Container Service) by setting up Nginx on ECS. (A CLI sketch follows below.)
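One possible CLI walkthrough, assuming the Fargate launch type; the cluster name, task-definition file, subnet, and security-group IDs are all placeholders:

```
# Create a cluster
aws ecs create-cluster --cluster-name nginx-demo

# Register a task definition that points at the public nginx image
# (nginx-task.json is a hypothetical file you write, using the awsvpc network mode)
aws ecs register-task-definition --cli-input-json file://nginx-task.json

# Run the task as a service on Fargate in a public subnet
aws ecs create-service --cluster nginx-demo --service-name nginx-svc \
  --task-definition nginx-demo-task --desired-count 1 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"
```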
[← Previous Day](../day47/README.md) | [Next Day →](../day49/README.md)

diff --git a/2023/day48/tasks.md b/2023/day48/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day49/README.md b/2023/day49/README.md
new file mode 100644
index 0000000000..ecc603177a
--- /dev/null
+++ b/2023/day49/README.md
@@ -0,0 +1,25 @@
# Day 49 - INTERVIEW QUESTIONS ON AWS

Hey people, we have listened to your suggestions, and we look forward to getting more! Since you have asked for more interview-based questions as part of the daily tasks, here it is :)

## INTERVIEW QUESTIONS:

- Name 5 AWS services you have used and their use cases.
- What are the tools used to send logs to the cloud environment?
- What are IAM roles? How do you create and manage them?
- How do you upgrade or downgrade a system with zero downtime?
- What is Infrastructure as Code and how do you use it?
- What is a load balancer? Give scenarios for each kind of balancer based on your experience.
- What is CloudFormation and what is it used for?
- What is the difference between AWS CloudFormation and AWS Elastic Beanstalk?
- What kinds of security attacks can occur on the cloud, and how can we minimize them?
- Can we recover an EC2 instance when we have lost the key?
- What is a gateway?
- What is the difference between Amazon RDS, DynamoDB, and Redshift?
- Would you prefer to host a website on S3? What's the reason, whether your answer is yes or no?

Share your answers on LinkedIn in the best possible way, as if you were sitting at an interview table.
Happy Learning !! :)

[← Previous Day](../day48/README.md) | [Next Day →](../day50/README.md)

diff --git a/2023/day49/tasks.md b/2023/day49/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day5/tasks.md b/2023/day5/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day50/README.md b/2023/day50/README.md
new file mode 100644
index 0000000000..0340a36b09
--- /dev/null
+++ b/2023/day50/README.md
@@ -0,0 +1,30 @@
# Day 50: Your CI/CD pipeline on AWS - Part-1 🚀 ☁

What if I told you that in the next 4 days you'll build a CI/CD pipeline on AWS with these tools?

- CodeCommit
- CodeBuild
- CodeDeploy
- CodePipeline
- S3

## What is CodeCommit?

- CodeCommit is a managed source control service by AWS that allows users to store, manage, and version their source code and artifacts securely and at scale. It supports Git, integrates with other AWS services, enables collaboration through branch and merge workflows, and provides audit logs and compliance reports to meet regulatory requirements and track changes. Overall, CodeCommit provides developers with a reliable and efficient way to manage their codebase and set up a CI/CD pipeline for their software development projects.

# Task-01 :

- Set up a code repository on CodeCommit and clone it locally.
- Set up Git credentials in AWS IAM.
- Use those credentials locally to clone the repository from CodeCommit.

# Task-02 :

- Add a new file locally and commit it to your local branch.
- Push the local changes to the CodeCommit repository.

For more details watch [this](https://youtu.be/p5i3cMCQ760) video.

Happy Learning :)

[← Previous Day](../day49/README.md) | [Next Day →](../day51/README.md)

diff --git a/2023/day50/tasks.md b/2023/day50/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day51/README.md b/2023/day51/README.md
new file mode 100644
index 0000000000..01f0b70262
--- /dev/null
+++ b/2023/day51/README.md
@@ -0,0 +1,30 @@
# Day 51: Your CI/CD pipeline on AWS - Part 2 🚀 ☁

On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit.

Over the next few days you'll learn these tools/services:

- CodeBuild
- CodeDeploy
- CodePipeline
- S3

## What is CodeBuild?

- AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers.
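Before the tasks, here is one possible shape of a buildspec file, so you know what you're aiming for (the phases and paths are assumptions; adapt them to your build image):

```
version: 0.2

phases:
  install:
    commands:
      - apt-get update -y
      - apt-get install -y nginx        # assumes a Debian/Ubuntu-based build image
  build:
    commands:
      - cp index.html /var/www/html/index.html

artifacts:
  files:
    - '**/*'
```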
# Task-01 :

- Read about the buildspec file for CodeBuild (see the sketch above).
- Create a simple index.html file in the CodeCommit repository.
- Build and serve the index.html using an Nginx server.

# Task-02 :

- Add a buildspec.yaml file to the CodeCommit repository and complete the build process.

For more details watch [this](https://youtu.be/p5i3cMCQ760) video.

Happy Learning :)

[← Previous Day](../day50/README.md) | [Next Day →](../day52/README.md)

diff --git a/2023/day51/tasks.md b/2023/day51/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day52/README.md b/2023/day52/README.md
new file mode 100644
index 0000000000..52dffd62ae
--- /dev/null
+++ b/2023/day52/README.md
@@ -0,0 +1,31 @@
# Day 52: Your CI/CD pipeline on AWS - Part 3 🚀 ☁

On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit & CodeBuild.

Over the next few days you'll learn these tools/services:

- CodeDeploy
- CodePipeline
- S3

## What is CodeDeploy?

- AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy.

# Task-01 :

- Read about the appspec.yaml file for CodeDeploy.
- Deploy an index.html file on an EC2 machine using Nginx.
- Set up the CodeDeploy agent in order to deploy code on EC2.

# Task-02 :

- Add an appspec.yaml file to the CodeCommit repository and complete the deployment process (a sketch follows below).
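For orientation, a minimal appspec.yml for an EC2/on-premises deployment could look like this sketch (the hook script is hypothetical; write your own under scripts/):

```
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: scripts/restart_nginx.sh   # hypothetical helper that restarts nginx
      timeout: 300
      runas: root
```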
To name a few examples of infrastructure: networks, virtual computers, and load balancers. Using an IaC model always results in the same setting. + +Throughout the lifecycle of a product, Configuration Management (CM) ensures that the performance, functional and physical inputs, requirements, design, and operations of that product remain consistent. + +# Task-01 + +- Read more about IaC and Config. Management Tools +- Give differences on both with suitable examples +- What are most commont IaC and Config management Tools? + +Write a blog on this topic in the most creative way and post it on linkedIn :) + +happy learning... + +[← Previous Day](../day53/README.md) | [Next Day →](../day55/README.md) diff --git a/2023/day54/tasks.md b/2023/day54/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day55/README.md b/2023/day55/README.md new file mode 100644 index 0000000000..5df87b107a --- /dev/null +++ b/2023/day55/README.md @@ -0,0 +1,28 @@ +# Day 55: Understanding Configuration Management with Ansible + +## What's this Ansible? + +Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning + +# Task-01 + +- Installation of Ansible on AWS EC2 (Master Node) + `sudo apt-add-repository ppa:ansible/ansible` `sudo apt update` + `sudo apt install ansible` + +# Task-02 + +- read more about Hosts file + `sudo nano /etc/ansible/hosts ansible-inventory --list -y` + +# Task-03 + +- Setup 2 more EC2 instances with same Private keys as the previous instance (Node) +- Copy the private key to master server where Ansible is setup +- Try a ping command using ansible to the Nodes. + +Write a blog on this topic with screenshots in the most creative way and post it on linkedIn :) + +happy learning... + +[← Previous Day](../day54/README.md) | [Next Day →](../day56/README.md) diff --git a/2023/day55/tasks.md b/2023/day55/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day56/README.md b/2023/day56/README.md new file mode 100644 index 0000000000..853372bae2 --- /dev/null +++ b/2023/day56/README.md @@ -0,0 +1,18 @@ +# Day 56: Understanding Ad-hoc commands in Ansible + +Ansible ad hoc commands are one-liners designed to achieve a very specific task they are like quick snippets and your compact swiss army knife when you want to do a quick task across multiple machines. + +To put simply, Ansible ad hoc commands are one-liner Linux shell commands and playbooks are like a shell script, a collective of many commands with logic. + +Ansible ad hoc commands come handy when you want to perform a quick task. + +# Task-01 + +- write an ansible ad hoc ping command to ping 3 servers from inventory file +- Write an ansible ad hoc command to check uptime + +- You can refer to [this](https://www.middlewareinventory.com/blog/ansible-ad-hoc-commands/) blog to understand the different examples of ad-hoc commands and try out them, post the screenshots in a blog with an explanation. + +happy Learning :) + +[← Previous Day](../day55/README.md) | [Next Day →](../day57/README.md) diff --git a/2023/day56/tasks.md b/2023/day56/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day57/README.md b/2023/day57/README.md new file mode 100644 index 0000000000..4866eecf58 --- /dev/null +++ b/2023/day57/README.md @@ -0,0 +1,13 @@ +# Day 57: Ansible Hands-on with video + +Ansible is fun, you saw in last few days how easy it is. 
- You can refer to [this](https://www.middlewareinventory.com/blog/ansible-ad-hoc-commands/) blog to understand the different examples of ad hoc commands; try them out and post the screenshots in a blog with an explanation.

happy Learning :)

[← Previous Day](../day55/README.md) | [Next Day →](../day57/README.md)

diff --git a/2023/day56/tasks.md b/2023/day56/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day57/README.md b/2023/day57/README.md
new file mode 100644
index 0000000000..4866eecf58
--- /dev/null
+++ b/2023/day57/README.md
@@ -0,0 +1,13 @@
# Day 57: Ansible Hands-on with video

Ansible is fun; you saw over the last few days how easy it is.

Let's make it fun now by using a video explanation for Ansible.

# Task-01

- Write a blog explanation for the [Ansible video](https://youtu.be/SGB7EdiP39E)

happy Learning :)

[← Previous Day](../day56/README.md) | [Next Day →](../day58/README.md)

diff --git a/2023/day57/tasks.md b/2023/day57/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day58/README.md b/2023/day58/README.md
new file mode 100644
index 0000000000..f8facae4b7
--- /dev/null
+++ b/2023/day58/README.md
@@ -0,0 +1,23 @@
# Day 58: Ansible Playbooks

Ansible playbooks run multiple tasks, assign roles, and define configurations, deployment steps, and variables. If you’re using multiple servers, Ansible playbooks organize the steps between the assembled machines or servers and get them organized and running in the way the users need them to. Consider playbooks as the equivalent of instruction manuals.

# Task-01

- Write an Ansible playbook to create a file on a different server.

- Write an Ansible playbook to create a new user.

- Write an Ansible playbook to install Docker on a group of servers.

Watch [this](https://youtu.be/089mRKoJTzo) video to learn about Ansible playbooks.

# Task-02

- Write a blog about writing Ansible playbooks with the best practices.

Let me or anyone in the community know if you face any challenges.

happy Learning :)

[← Previous Day](../day57/README.md) | [Next Day →](../day59/README.md)

diff --git a/2023/day58/tasks.md b/2023/day58/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day59/README.md b/2023/day59/README.md
new file mode 100644
index 0000000000..f8bf4d0908
--- /dev/null
+++ b/2023/day59/README.md
@@ -0,0 +1,26 @@
# Day 59: Ansible Project 🔥

Ansible playbooks are amazing, as you learned yesterday.
What if you deployed a simple web app using Ansible? Sounds like a good project, right?

# Task-01

- Create 3 EC2 instances; make sure all three are created with the same key pair.

- Install Ansible on the host server.

- Copy the private key from local to the host server (Ansible_host) at /home/ubuntu/.ssh.

- Access the inventory file using sudo vim /etc/ansible/hosts.

- Create a playbook to install Nginx (a sketch follows below).

- Deploy a sample webpage using the Ansible playbook.
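One minimal sketch of such a playbook, assuming Ubuntu nodes and an inventory group named `webservers`:

```
---
- name: Install Nginx and deploy a sample page
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Deploy a sample index page
      copy:
        content: "<h1>Deployed with Ansible</h1>"
        dest: /var/www/html/index.html

    - name: Ensure nginx is started and enabled
      service:
        name: nginx
        state: started
        enabled: yes
```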
Read [this](https://medium.com/@sandeep010498/learn-ansible-with-real-time-project-cf6a0a512d45) blog by [Sandeep Singh](https://medium.com/@sandeep010498) to clear all your doubts.

Let me or anyone in the community know if you face any challenges.

happy Learning :)

[← Previous Day](../day58/README.md) | [Next Day →](../day60/README.md)

diff --git a/2023/day59/tasks.md b/2023/day59/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day6/tasks.md b/2023/day6/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day60/README.md b/2023/day60/README.md
new file mode 100644
index 0000000000..ecae296195
--- /dev/null
+++ b/2023/day60/README.md
@@ -0,0 +1,31 @@
# Day 60 - Terraform🔥

Hello learners, you have been doing every task (mostly) by creating an EC2 instance. Today, let’s automate this process. How? Terraform is the answer.

## What is Terraform?

Terraform is an infrastructure as code (IaC) tool that allows you to create, manage, and update infrastructure resources such as virtual machines, networks, and storage in a repeatable, scalable, and automated way.

## Task 1:

Install Terraform on your system.
Refer to this [link](https://phoenixnap.com/kb/how-to-install-terraform) for installation.

## Task 2: Answer the questions below

- Why do we use Terraform?
- What is Infrastructure as Code (IaC)?
- What is a resource?
- What is a provider?
- What is the state file in Terraform? Why is it important?
- What are the desired and current states?

You can prepare for tomorrow's task from [here](https://www.youtube.com/live/965CaSveIEI?feature=share)🚀🚀

We hope these tasks will help you understand how to write a basic Terraform configuration file and the basic Terraform commands.

Don’t forget to post it on LinkedIn.
Happy Learning:)

[← Previous Day](../day59/README.md) | [Next Day →](../day61/README.md)

diff --git a/2023/day60/tasks.md b/2023/day60/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day61/README.md b/2023/day61/README.md
new file mode 100644
index 0000000000..9d518b70db
--- /dev/null
+++ b/2023/day61/README.md
@@ -0,0 +1,37 @@
# Day 61- Terraform🔥

Hope you've already got the gist of what working with Terraform is like. Let's begin with day 2 of Terraform!

## Task 1:

Find the purpose of these basic Terraform commands, which you'll use often:

1. `terraform init`

2. `terraform init -upgrade`

3. `terraform plan`

4. `terraform apply`

5. `terraform validate`

6. `terraform fmt`

7. `terraform destroy`

Along with these tasks, it's important to know about Terraform in general.
Who are Terraform's main competitors? The main ones are:

- Ansible
- Packer
- Cloud Foundry
- Kubernetes

Want a free video course for Terraform? Click [here](https://bit.ly/tws-terraform)

Don't forget to share your learnings on LinkedIn! Happy Learning :)

[← Previous Day](../day60/README.md) | [Next Day →](../day62/README.md)

diff --git a/2023/day61/tasks.md b/2023/day61/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day62/README.md b/2023/day62/README.md
new file mode 100644
index 0000000000..76f61b708a
--- /dev/null
+++ b/2023/day62/README.md
@@ -0,0 +1,79 @@
# Day 62 - Terraform and Docker 🔥

Terraform needs to be told which provider to use in the automation, so we need to give the provider name with its source and version.
For Docker, we can use this block of code in your main.tf:

## Blocks and Resources in Terraform

## Terraform block

## Task-01

- Create a Terraform script with blocks and resources.

```
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.21.0"
    }
  }
}
```

### Note: kreuzwerker/docker is shorthand for registry.terraform.io/kreuzwerker/docker.

## Provider Block

The provider block configures the specified provider, in this case, docker. A provider is a plugin that Terraform uses to create and manage your resources.

```
provider "docker" {}
```

## Resource

Use resource blocks to define components of your infrastructure. A resource might be a physical or virtual component such as a Docker container, or it can be a logical resource such as a Heroku application.

Resource blocks have two strings before the block: the resource type and the resource name. In this example, the first resource type is docker_image and the name is "nginx".
## Task-02

- Create a resource block for an nginx Docker image.

Hint:

```
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}
```

- Create a resource block for running a Docker container for nginx.

```
resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "tutorial"
  ports {
    internal = 80
    external = 80
  }
}
```

Note: in case Docker is not installed:

```
sudo apt-get install docker.io
sudo docker ps
sudo chown $USER /var/run/docker.sock
```

# Video Course

I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform)

Happy Learning :)

[← Previous Day](../day61/README.md) | [Next Day →](../day63/README.md)

diff --git a/2023/day62/tasks.md b/2023/day62/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day63/README.md b/2023/day63/README.md
new file mode 100644
index 0000000000..e4338fb906
--- /dev/null
+++ b/2023/day63/README.md
@@ -0,0 +1,62 @@
# Day 63 - Terraform Variables

Variables in Terraform are quite important, as you need to hold values such as instance names, configs, etc.

We can create a variables.tf file, which will hold all the variables.

```
variable "filename" {
  default = "/home/ubuntu/terrform-tutorials/terraform-variables/demo-var.txt"
}
```

```
variable "content" {
  default = "This is coming from a variable which was updated"
}
```

These variables can be accessed through the var object in main.tf.

## Task-01

- Create a local file using Terraform.
  Hint:

```
resource "local_file" "devops" {
  filename = var.filename
  content  = var.content
}
```

## Data Types in Terraform

## Map

```
variable "file_contents" {
  type = map(string)
  default = {
    "statement1" = "this is cool"
    "statement2" = "this is cooler"
  }
}
```

## Task-02

- Use Terraform to demonstrate the usage of the List, Set, and Object data types (a sketch follows below).
- Put proper screenshots of the outputs.
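A sketch of what that demonstration could look like (all names and values here are illustrative):

```
variable "server_names" {
  type    = list(string)
  default = ["web1", "web2", "web1"]   # lists keep order and allow duplicates
}

variable "allowed_ports" {
  type    = set(number)
  default = [22, 80, 443]              # sets are unordered and deduplicated
}

variable "server_config" {
  type = object({
    instance_type = string
    monitoring    = bool
  })
  default = {
    instance_type = "t2.micro"
    monitoring    = false
  }
}

output "first_server" {
  value = var.server_names[0]
}
```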
Use `terraform refresh`

to refresh the state against your configuration file; it reloads the variables.

# Video Course

I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform)

Happy Learning :)

[← Previous Day](../day62/README.md) | [Next Day →](../day64/README.md)

diff --git a/2023/day63/tasks.md b/2023/day63/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day64/README.md b/2023/day64/README.md
new file mode 100644
index 0000000000..d30e1048d9
--- /dev/null
+++ b/2023/day64/README.md
@@ -0,0 +1,67 @@
# Day 64 - Terraform with AWS

Provisioning on AWS is quite easy and straightforward with Terraform.

## Prerequisites

### AWS CLI installed

The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

### AWS IAM user

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

_In order to connect your AWS account and Terraform, you need the access key and secret access key exported to your machine._

```
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
```

### Install required providers

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}
```

Add the region where you want your instances to be:

```
provider "aws" {
  region = "us-east-1"
}
```

## Task-01

- Provision an AWS EC2 instance using Terraform.

Hint:

```
resource "aws_instance" "aws_ec2_test" {
  count         = 4
  ami           = "ami-08c40ec9ead489470"
  instance_type = "t2.micro"
  tags = {
    Name = "TerraformTestServerInstance"
  }
}
```

# Video Course

I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform)

Happy Learning :)

[← Previous Day](../day63/README.md) | [Next Day →](../day65/README.md)

diff --git a/2023/day64/tasks.md b/2023/day64/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day65/README.md b/2023/day65/README.md
new file mode 100644
index 0000000000..904c6c1158
--- /dev/null
+++ b/2023/day65/README.md
@@ -0,0 +1,67 @@
# Day 65 - Working with Terraform Resources 🚀

Yesterday, we saw how to create a Terraform script with blocks and resources. Today, we will dive deeper into Terraform resources.

## Understanding Terraform Resources

A resource in Terraform represents a component of your infrastructure, such as a physical server, a virtual machine, a DNS record, or an S3 bucket. Resources have attributes that define their properties and behaviors, such as the size and location of a virtual machine or the domain name of a DNS record.

When you define a resource in Terraform, you specify the type of resource, a unique name for the resource, and the attributes that define the resource. Terraform uses the resource block to define resources in your Terraform configuration.

## Task 1: Create a security group

To allow traffic to the EC2 instance, you need to create a security group. Follow these steps:

In your main.tf file, add the following code to create a security group:

```
resource "aws_security_group" "web_server" {
  name_prefix = "web-server-sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

- Run terraform init to initialize the Terraform project.

- Run terraform apply to create the security group.

## Task 2: Create an EC2 instance

- Now you can create an EC2 instance with Terraform. Follow these steps:

- In your main.tf file, add the following code to create an EC2 instance:

```
resource "aws_instance" "web_server" {
  ami           = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"
  key_name      = "my-key-pair"
  security_groups = [
    aws_security_group.web_server.name
  ]

  user_data = <<-EOF
              #!/bin/bash
              echo "<h1>Welcome to my website!</h1>" > index.html
              # python3's http.server replaces the old Python 2 SimpleHTTPServer
              nohup python3 -m http.server 80 &
              EOF
}
```

Note: Replace the ami and key_name values with your own. You can find a list of available AMIs in the AWS documentation.

Run terraform apply to create the EC2 instance.

## Task 3: Access your website

- Now that your EC2 instance is up and running, you can access the website you just hosted on it: open http://<instance-public-ip> in your browser.

Happy Terraforming!

[← Previous Day](../day64/README.md) | [Next Day →](../day66/README.md)

diff --git a/2023/day65/tasks.md b/2023/day65/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day66/README.md b/2023/day66/README.md
new file mode 100644
index 0000000000..630837a5ff
--- /dev/null
+++ b/2023/day66/README.md
@@ -0,0 +1,26 @@
# Day 66 - Terraform Hands-on Project - Build Your Own AWS Infrastructure with Ease using Infrastructure as Code (IaC) Techniques (Interview Questions) ☁️

Welcome back to your Terraform journey.

In the previous tasks, you have learned about the basics of Terraform, its configuration file, and creating an EC2 instance using Terraform. Today, we will explore more about Terraform and create multiple resources.

## Task:

- Create a VPC (Virtual Private Cloud) with CIDR block 10.0.0.0/16.
- Create a public subnet with CIDR block 10.0.1.0/24 in the above VPC.
- Create a private subnet with CIDR block 10.0.2.0/24 in the above VPC.
- Create an Internet Gateway (IGW) and attach it to the VPC.
- Create a route table for the public subnet and associate it with the public subnet. This route table should have a route to the Internet Gateway.
- Launch an EC2 instance in the public subnet with the following details:
- AMI: ami-0557a15b87f6559cf
- Instance type: t2.micro
- Security group: allow SSH access from anywhere
- User data: use a shell script to install Apache and host a simple website
- Create an Elastic IP and associate it with the EC2 instance.
- Open the website URL in a browser to verify that the website is hosted successfully.

#### This Terraform hands-on task is designed to test your proficiency in using Terraform for Infrastructure as Code (IaC) on AWS. You will be tasked with creating a VPC, subnets, an internet gateway, and launching an EC2 instance with a web server running on it. This task will showcase your skills in automating infrastructure deployment using Terraform. It's a popular interview question for companies looking for candidates with hands-on experience in Terraform. That's it for today.

Happy Terraforming:)

[← Previous Day](../day65/README.md) | [Next Day →](../day67/README.md)

diff --git a/2023/day66/tasks.md b/2023/day66/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day67/README.md b/2023/day67/README.md
new file mode 100644
index 0000000000..62e6f35476
--- /dev/null
+++ b/2023/day67/README.md
@@ -0,0 +1,22 @@
# Day 67: AWS S3 Bucket Creation and Management

## AWS S3 Bucket

Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It can be used for a variety of use cases, such as storing and retrieving data, hosting static websites, and more.

In this task, you will learn how to create and manage S3 buckets in AWS.

## Task

- Create an S3 bucket using Terraform.
- Configure the bucket to allow public read access.
- Create an S3 bucket policy that allows read-only access to a specific IAM user or role.
- Enable versioning on the S3 bucket. (A starter sketch follows below.)
## Resources

[Terraform S3 bucket resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket)

Good luck and happy learning!

[← Previous Day](../day66/README.md) | [Next Day →](../day68/README.md)

diff --git a/2023/day67/tasks.md b/2023/day67/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day68/README.md b/2023/day68/README.md
new file mode 100644
index 0000000000..4185d8a5dd
--- /dev/null
+++ b/2023/day68/README.md
@@ -0,0 +1,66 @@
# Day 68 - Scaling with Terraform 🚀

Yesterday, we learned how to create an AWS S3 bucket with Terraform. Today, we will see how to scale our infrastructure with Terraform.

## Understanding Scaling

Scaling is the process of adding or removing resources to match the changing demands of your application. As your application grows, you will need to add more resources to handle the increased load. And as the load decreases, you can remove the extra resources to save costs.

Terraform makes it easy to scale your infrastructure by providing a declarative way to define your resources. You define the number of resources you need, and Terraform automatically creates or destroys resources as needed.

## Task 1: Create an Auto Scaling Group

Auto Scaling Groups are used to automatically add or remove EC2 instances based on the current demand. Follow these steps to create an Auto Scaling Group:

- In your main.tf file, add the following code to create an Auto Scaling Group:

```
resource "aws_launch_configuration" "web_server_as" {
  image_id        = "ami-005f9685cb30f234b"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.web_server.name]

  user_data = <<-EOF
              #!/bin/bash
              echo "<h1>You're doing really Great</h1>" > index.html
              # python3's http.server replaces the old Python 2 SimpleHTTPServer
              nohup python3 -m http.server 80 &
              EOF
}

resource "aws_autoscaling_group" "web_server_asg" {
  name = "web-server-asg"
  # reference fixed to match the launch configuration above (was "web_server_lc")
  launch_configuration = aws_launch_configuration.web_server_as.name
  min_size             = 1
  max_size             = 3
  desired_capacity     = 2
  health_check_type    = "EC2"
  # assumes aws_elb.web_server_lb and the two public subnets are defined elsewhere
  load_balancers      = [aws_elb.web_server_lb.name]
  vpc_zone_identifier = [aws_subnet.public_subnet_1a.id, aws_subnet.public_subnet_1b.id]
}
```

- Run terraform apply to create the Auto Scaling Group.

## Task 2: Test Scaling

- Go to the AWS Management Console and select the Auto Scaling Groups service.

- Select the Auto Scaling Group you just created and click on the "Edit" button.

- Increase the "Desired Capacity" to 3 and click on the "Save" button.

- Wait a few minutes for the new instances to be launched.

- Go to the EC2 Instances service and verify that the new instances have been launched.

- Decrease the "Desired Capacity" to 1 and wait a few minutes for the extra instances to be terminated.

- Go to the EC2 Instances service and verify that the extra instances have been terminated.

Congratulations 🎊🎉 You have successfully scaled your infrastructure with Terraform.

Happy Learning :)

[← Previous Day](../day67/README.md) | [Next Day →](../day69/README.md)

diff --git a/2023/day68/tasks.md b/2023/day68/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day69/README.md b/2023/day69/README.md
new file mode 100644
index 0000000000..570803dbdd
--- /dev/null
+++ b/2023/day69/README.md
@@ -0,0 +1,182 @@
# Day 69 - Meta-Arguments in Terraform

When you define a resource block in Terraform, by default this specifies one resource that will be created. To manage several of the same resource, you can use either count or for_each, which removes the need to write a separate block of code for each one. Using these options reduces overhead and makes your code neater.

count is what is known as a ‘meta-argument’ defined by the Terraform language. Meta-arguments help achieve certain requirements within the resource block.

## Count

The count meta-argument accepts a whole number and creates that number of instances of the resource.

When each instance is created, it has its own distinct infrastructure object associated with it, so each can be managed separately. When the configuration is applied, each object can be created, destroyed, or updated as appropriate.

For example:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "server" {
  count = 4

  ami           = "ami-08c40ec9ead489470"
  instance_type = "t2.micro"

  tags = {
    Name = "Server ${count.index}"
  }
}
```

## for_each

Like the count argument, the for_each meta-argument creates multiple instances of a module or resource block. However, instead of specifying the number of resources, the for_each meta-argument accepts a map or a set of strings. This is useful when multiple resources are required that have different values. Consider our Active Directory groups example, with each group requiring a different owner.
```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

locals {
  ami_ids = toset([
    "ami-0b0dcb5067f052a63",
    "ami-08c40ec9ead489470",
  ])
}

resource "aws_instance" "server" {
  for_each = local.ami_ids

  ami           = each.key
  instance_type = "t2.micro"

  tags = {
    Name = "Server ${each.key}"
  }
}
```

Multiple key-value iteration:

```
locals {
  ami_ids = {
    "linux"  : "ami-0b0dcb5067f052a63",
    "ubuntu" : "ami-08c40ec9ead489470",
  }
}

resource "aws_instance" "server" {
  for_each = local.ami_ids

  ami           = each.value
  instance_type = "t2.micro"

  tags = {
    Name = "Server ${each.key}"
  }
}
```

## Task-01

- Create the above infrastructure as code and demonstrate the use of count and for_each.
- Write about meta-arguments and their use in Terraform.

Happy learning :)

[← Previous Day](../day68/README.md) | [Next Day →](../day70/README.md)

diff --git a/2023/day69/tasks.md b/2023/day69/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day7/tasks.md b/2023/day7/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day70/README.md b/2023/day70/README.md
new file mode 100644
index 0000000000..4a42230590
--- /dev/null
+++ b/2023/day70/README.md
@@ -0,0 +1,80 @@
# Day 70 - Terraform Modules

- Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
- A module can call other modules, which lets you include the child module's resources in the configuration in a concise way.
- Modules can also be called multiple times, either within the same configuration or in separate configurations, allowing resource configurations to be packaged and re-used.

### Below is the format of how to use modules:

```
# Creating an AWS EC2 instance
resource "aws_instance" "server-instance" {
  # Define the number of instances
  # (count is the meta-argument; aws_instance has no "instance_count" argument)
  count = var.number_of_instances

  # Instance configuration
  ami                    = var.ami
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  vpc_security_group_ids = var.security_group

  # Instance tags
  tags = {
    Name = "${var.instance_name}"
  }
}
```

```
# Server module variables
variable "number_of_instances" {
  description = "Number of Instances to Create"
  type        = number
  default     = 1
}

variable "instance_name" {
  description = "Instance Name"
}

variable "ami" {
  description = "AMI ID"
  default     = "ami-xxxx"
}

variable "instance_type" {
  description = "Instance Type"
}

variable "subnet_id" {
  description = "Subnet ID"
}

variable "security_group" {
  description = "Security Group"
  type        = list(any)
}
```

```
# Server module output
output "server_id" {
  description = "Server IDs"
  # a splat expression is needed because the resource uses count
  value       = aws_instance.server-instance[*].id
}
```
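A root module could then call this child module along these lines (the source path and all values are assumptions):

```
module "web_server" {
  source = "./modules/server"   # hypothetical path holding the files above

  number_of_instances = 2
  instance_name       = "demo-server"
  instance_type       = "t2.micro"
  subnet_id           = "subnet-0123456789abcdef0"   # placeholder
  security_group      = ["sg-0123456789abcdef0"]     # placeholder
}

output "server_ids" {
  value = module.web_server.server_id
}
```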
Well Done Everyone🎉 + +Thode mehnat aur krni hai bas to lge rho tab tak.....Happy learning :) + +[← Previous Day](../day69/README.md) | [Next Day →](../day71/README.md) diff --git a/2023/day70/tasks.md b/2023/day70/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day71/README.md b/2023/day71/README.md new file mode 100644 index 0000000000..7bcb7bb3e1 --- /dev/null +++ b/2023/day71/README.md @@ -0,0 +1,41 @@ +# Day 71 - Let's prepare for some interview questions of Terraform 🔥 + +### 1. What is Terraform and how it is different from other IaaC tools? + +### 2. How do you call a main.tf module? + +### 3. What exactly is Sentinel? Can you provide few examples where we can use for Sentinel policies? + +### 4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this? + +### 5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (\*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this? + +A. Set the environment variable TF_LOG=TRACE + +B. Set verbose logging for each provider in your Terraform configuration + +C. Set the environment variable TF_VAR_log=TRACE + +D. Set the environment variable TF_LOG_PATH + +### 6. Below command will destroy everything that is being created in the infrastructure. Tell us how would you save any particular resource while destroying the complete infrastructure. + +``` +terraform destroy +``` + +### 7. Which module is used to store .tfstate file in S3? + +### 8. How do you manage sensitive data in Terraform, such as API keys or passwords? + +### 9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them? + +### 10. Who maintains Terraform providers? + +### 11. How can we export data from one module to another? + +# + +Waiting for your responses😉.....Till then Happy learning :) + +[← Previous Day](../day70/README.md) | [Next Day →](../day72/README.md) diff --git a/2023/day71/tasks.md b/2023/day71/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day72/README.md b/2023/day72/README.md new file mode 100644 index 0000000000..a283b10e39 --- /dev/null +++ b/2023/day72/README.md @@ -0,0 +1,16 @@ +Day 72 - Grafana🔥 + +Hello Learners , you guys are doing really a good job. You will not be there 24\*7 to monitor your resources. So, Today let’s monitor the resources in a smart way with - Grafana 🎉 + +## Task 1: + +> What is Grafana? What are the features of Grafana? +> Why Grafana? +> What type of monitoring can be done via Grafana? +> What databases work with Grafana? +> What are metrics and visualizations in Grafana? +> What is the difference between Grafana vs Prometheus? + +--- + +[← Previous Day](../day71/README.md) | [Next Day →](../day73/README.md) diff --git a/2023/day72/tasks.md b/2023/day72/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day73/README.md b/2023/day73/README.md new file mode 100644 index 0000000000..a1af9d7dc9 --- /dev/null +++ b/2023/day73/README.md @@ -0,0 +1,16 @@ +Day 73 - Grafana 🔥 +Hope you are now clear with the basics of grafana, like why we use, where we use, what can we do with this and so on. 
Now, let's do some practical stuff.

---

Task:

> Set up Grafana on an AWS EC2 instance in your environment.

---

Ref: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7042518379030556672-ZZA-?utm_source=share&utm_medium=member_desktop

[← Previous Day](../day72/README.md) | [Next Day →](../day74/README.md)

diff --git a/2023/day73/tasks.md b/2023/day73/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day74/README.md b/2023/day74/README.md
new file mode 100644
index 0000000000..2877eeebd4
--- /dev/null
+++ b/2023/day74/README.md
@@ -0,0 +1,19 @@
# Day 74 - Connecting EC2 with Grafana

You guys did an amazing job yesterday setting up Grafana 🔥.

Now, let's go one step further.

---

Task:

Connect one Linux and one Windows EC2 instance to Grafana and monitor the different components of the servers.

---

Don't forget to share this amazing work on LinkedIn and tag us.

## Happy Learning :)

[← Previous Day](../day73/README.md) | [Next Day →](../day75/README.md)

diff --git a/2023/day74/tasks.md b/2023/day74/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day75/README.md b/2023/day75/README.md
new file mode 100644
index 0000000000..3c75d41caa
--- /dev/null
+++ b/2023/day75/README.md
@@ -0,0 +1,30 @@
# Day 75 - Sending Docker Logs to Grafana

We have monitored 😉 that you are understanding and doing amazing work with the monitoring tool. 👌

Today, let's make it a little more complex but interesting 😍 and add one more **Project** 🔥 to your resume.

---

## Task:

- Install _Docker_ and start the Docker service on a Linux EC2 instance through [USER DATA](../day39/README.md).
- Create 2 Docker containers and run any basic application on those containers (a simple todo app will work).
- Now integrate the Docker containers and send their real-time logs to Grafana (your instance should be connected to Grafana, and the Docker plugin should be enabled in Grafana).
- Check the logs and the Docker container names in the Grafana UI.

---

You can use [this video](https://youtu.be/y3SGHbixmJw) for reference. But it's always better to find your own way of doing things. 😊

## Bonus :

- As you have done this amazing task, here is one bonus link. ❤️

You can use this [reference video](https://youtu.be/CCi957AnSfc) to integrate _Prometheus_ with _Grafana_ and monitor Docker containers. Seems interesting?

Don't forget to share this amazing work on LinkedIn and tag us.

## Happy Learning :)

[← Previous Day](../day74/README.md) | [Next Day →](../day76/README.md)

diff --git a/2023/day75/tasks.md b/2023/day75/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day76/README.md b/2023/day76/README.md
new file mode 100644
index 0000000000..7c3fbb0bd1
--- /dev/null
+++ b/2023/day76/README.md
@@ -0,0 +1,33 @@
# Day 76 Build a Grafana dashboard

A dashboard gives you an at-a-glance view of your data and lets you track metrics through different visualizations.

Dashboards consist of panels, each representing a part of the story you want your dashboard to tell.

Every panel consists of a query and a visualization. The query defines what data you want to display, whereas the visualization defines how the data is displayed.

## Task 01

- In the sidebar, hover your cursor over the Create (plus sign) icon and then click Dashboard.

- Click Add a new panel.
- In the Query editor below the graph, enter the query from earlier and then press Shift + Enter:

`sum(rate(tns_request_duration_seconds_count[5m])) by(route)`

- In the Legend field, enter {{route}} to rename the time series in the legend. The graph legend updates when you click outside the field.

- In the Panel editor on the right, under Settings, change the panel title to “Traffic”.

- Click Apply in the top-right corner to save the panel and go back to the dashboard view.

- Click the Save dashboard (disk) icon at the top of the dashboard to save your dashboard.

- Enter a name in the Dashboard name field and then click Save.

Read [this](https://grafana.com/tutorials/grafana-fundamentals/) in case you have any questions.

Do share some amazing dashboards with the community.

[← Previous Day](../day75/README.md) | [Next Day →](../day77/README.md)

diff --git a/2023/day76/tasks.md b/2023/day76/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day77/README.md b/2023/day77/README.md
new file mode 100644
index 0000000000..7acf545be9
--- /dev/null
+++ b/2023/day77/README.md
@@ -0,0 +1,14 @@
# Day 77 Alerting

Grafana Alerting allows you to learn about problems in your systems moments after they occur. Create, manage, and take action on your alerts in a single, consolidated view, and improve your team’s ability to identify and resolve issues quickly.

Grafana Alerting is available for Grafana OSS, Grafana Enterprise, or Grafana Cloud. With Mimir and Loki alert rules, you can run alert expressions closer to your data and at massive scale, all managed by the Grafana UI you are already familiar with.

## Task-01

- Set up [Grafana Cloud](https://grafana.com/products/cloud/)
- Set up sample alerting

Check out [this blog](https://grafana.com/docs/grafana/latest/alerting/) for more details.

[← Previous Day](../day76/README.md) | [Next Day →](../day78/README.md)

diff --git a/2023/day77/tasks.md b/2023/day77/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day78/README.md b/2023/day78/README.md
new file mode 100644
index 0000000000..631894de55
--- /dev/null
+++ b/2023/day78/README.md
@@ -0,0 +1,14 @@
# Day 78 (Grafana Cloud)

---

Task - 01

1. Set up alerts for an EC2 instance.
2. Set up AWS billing alerts.

---

For reference: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7044695663913148416-LfvD?utm_source=share&utm_medium=member_desktop

[← Previous Day](../day77/README.md) | [Next Day →](../day79/README.md)

diff --git a/2023/day78/tasks.md b/2023/day78/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day79/README.md b/2023/day79/README.md
new file mode 100644
index 0000000000..4eb87c4c49
--- /dev/null
+++ b/2023/day79/README.md
@@ -0,0 +1,20 @@
# Day 79 - Prometheus 🔥

Now, the next step is to learn about Prometheus.
It's an open-source system for monitoring services and alerting, based on a time-series data model. Prometheus collects data and metrics from different services and stores them according to a unique identifier (the metric name) and a timestamp.

Tasks:

---

1. What is the architecture of Prometheus monitoring?
2. What are the features of Prometheus?
3. What are the components of Prometheus?
4. What database is used by Prometheus?
5. What is the default data retention period in Prometheus?
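While answering these, it helps to see how small a working configuration can be. A minimal prometheus.yml sketch (the node_exporter target is an assumption):

```
global:
  scrape_interval: 15s              # how often Prometheus pulls metrics

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"] # Prometheus scraping itself

  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"] # hypothetical node_exporter instance
```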
---

Ref: https://www.devopsschool.com/blog/top-50-prometheus-interview-questions-and-answers/

[← Previous Day](../day78/README.md) | [Next Day →](../day80/README.md)

diff --git a/2023/day79/tasks.md b/2023/day79/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day8/tasks.md b/2023/day8/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day80/README.md b/2023/day80/README.md
new file mode 100644
index 0000000000..edbc3ec561
--- /dev/null
+++ b/2023/day80/README.md
@@ -0,0 +1,15 @@
# Project-1

=========

# Project Description

The project aims to automate the building, testing, and deployment process of a web application using Jenkins and GitHub. The Jenkins pipeline will be triggered automatically by a GitHub webhook when changes are made to the code repository. The pipeline will include stages such as building, testing, and deploying the application, with notifications and alerts for failed builds or deployments.

## Task-01

Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7011367641952993281-DHn5?utm_source=share&utm_medium=member_desktop)

Happy Learning :)

[← Previous Day](../day79/README.md) | [Next Day →](../day81/README.md)

diff --git a/2023/day80/tasks.md b/2023/day80/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day81/README.md b/2023/day81/README.md
new file mode 100644
index 0000000000..a10675fa1c
--- /dev/null
+++ b/2023/day81/README.md
@@ -0,0 +1,15 @@
# Project-2

=========

# Project Description

The project is about automating the deployment process of a web application using Jenkins and its declarative syntax. The pipeline includes stages like building, testing, and deploying to a staging environment. It also includes running acceptance tests and deploying to production if all tests pass.

## Task-01

Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7014971330496212992-6Q2m?utm_source=share&utm_medium=member_desktop)

Happy Learning :)

[← Previous Day](../day80/README.md) | [Next Day →](../day82/README.md)

diff --git a/2023/day81/tasks.md b/2023/day81/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day82/README.md b/2023/day82/README.md
new file mode 100644
index 0000000000..a17acccd92
--- /dev/null
+++ b/2023/day82/README.md
@@ -0,0 +1,15 @@
# Project-3

=========

# Project Description

The project involves hosting a static website using an AWS S3 bucket. Amazon S3 is an object storage service that provides a simple web services interface to store and retrieve any amount of data. The website files will be uploaded to an S3 bucket and configured to function as a static website. The bucket will be configured with the appropriate permissions and a unique domain name, making the website publicly accessible. Overall, the project aims to leverage the benefits of AWS S3 to host and scale a static website in a cost-effective and scalable manner.
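The core of the S3 static-site setup can be sketched with the AWS CLI like this (the bucket name and local folder are placeholders; you still need to allow public access and attach a public-read bucket policy):

```
# Create the bucket and enable static website hosting
aws s3 mb s3://my-static-site-demo
aws s3 website s3://my-static-site-demo --index-document index.html --error-document error.html

# Upload the site files
aws s3 sync ./site s3://my-static-site-demo
```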
## Task-01

Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_aws-project-devopsjobs-activity-7016427742300663808-JAQd?utm_source=share&utm_medium=member_desktop)

Happy Learning :)

[← Previous Day](../day81/README.md) | [Next Day →](../day83/README.md)

diff --git a/2023/day82/tasks.md b/2023/day82/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day83/README.md b/2023/day83/README.md
new file mode 100644
index 0000000000..dc80aefc33
--- /dev/null
+++ b/2023/day83/README.md
@@ -0,0 +1,15 @@
# Project-4

=========

# Project Description

The project aims to deploy a web application using Docker Swarm, a container orchestration tool that allows for easy management and scaling of containerized applications. The project will utilize Docker Swarm's production-ready features, such as load balancing, rolling updates, and service discovery, to ensure high availability and reliability of the web application. It will involve creating a Dockerfile to package the application into a container and then deploying it onto a Swarm cluster. The Swarm cluster will be configured to provide automated failover, load balancing, and horizontal scaling to the application. The goal of the project is to demonstrate the benefits of Docker Swarm for deploying and managing containerized applications in production environments.

## Task-01

Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop)

Happy Learning :)

[← Previous Day](../day82/README.md) | [Next Day →](../day84/README.md)

diff --git a/2023/day83/tasks.md b/2023/day83/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000

diff --git a/2023/day84/README.md b/2023/day84/README.md
new file mode 100644
index 0000000000..be78b29c8b
--- /dev/null
+++ b/2023/day84/README.md
@@ -0,0 +1,15 @@
# Project-5

=========

# Project Description

The project involves deploying a Netflix clone web application on a Kubernetes cluster, a popular container orchestration platform that simplifies the deployment and management of containerized applications. The project will require creating Docker images of the web application and its dependencies and deploying them onto the Kubernetes cluster using Kubernetes manifests. The Kubernetes cluster will provide benefits such as high availability, scalability, and automatic failover of the application. Additionally, the project will utilize Kubernetes tools such as the Kubernetes Dashboard and kubectl to monitor and manage the deployed application. Overall, the project aims to demonstrate the power and benefits of Kubernetes for deploying and managing containerized applications at scale.
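For orientation, the core manifest for such a deployment could be sketched like this (the name and image are placeholders):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netflix-clone              # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: netflix-clone
  template:
    metadata:
      labels:
        app: netflix-clone
    spec:
      containers:
        - name: web
          image: your-dockerhub-user/netflix-clone:latest   # placeholder image
          ports:
            - containerPort: 80
```

Expose it with a NodePort or LoadBalancer Service and apply both manifests with kubectl apply -f.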
+

## Task-01

Get a Netflix clone from [GitHub](https://github.com/devandres-tech/Netflix-Clone), read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop), and follow the Reddit clone steps to deploy the Netflix clone in the same way

Happy Learning :)

[← Previous Day](../day83/README.md) | [Next Day →](../day85/README.md) diff --git a/2023/day84/tasks.md b/2023/day84/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day85/README.md b/2023/day85/README.md new file mode 100644 index 0000000000..0cd64c996b --- /dev/null +++ b/2023/day85/README.md @@ -0,0 +1,26 @@ +# Project-6 + +========= + +# Project Description + +The project involves deploying a Node.js app on AWS ECS Fargate and AWS ECR. +Read more about the tech stack [here](https://faun.pub/what-is-amazon-ecs-and-ecr-how-does-they-work-with-an-example-4acbf9be8415) + +## Task-01 + +- Get a Node.js application from [GitHub](https://github.com/LondheShubham153/node-todo-cicd). + +- Build the Dockerfile present in the repo + +- Set up the AWS CLI and AWS login in order to tag and push to ECR + +- Set up an ECS cluster + +- Create a Task Definition for the Node.js project with the ECR image + +- Run the Project and share it on LinkedIn :) + +Happy Learning :) + +[← Previous Day](../day84/README.md) | [Next Day →](../day86/README.md) diff --git a/2023/day85/tasks.md b/2023/day85/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day86/README.md b/2023/day86/README.md new file mode 100644 index 0000000000..c8f809df7d --- /dev/null +++ b/2023/day86/README.md @@ -0,0 +1,24 @@ +# Project-7 + +========= + +# Project Description + +The project involves deploying a Portfolio app on AWS S3 using GitHub Actions. +GitHub Actions lets you perform CI/CD integrated with your GitHub repository. + +## Task-01 + +- Get a Portfolio application from [GitHub](https://github.com/LondheShubham153/tws-portfolio). + +- Build the GitHub Actions Workflow + +- Set up the AWS CLI and AWS login in order to sync the website to S3 (to be done as a part of the YAML) + +- Follow this [video]() to understand it better + +- Run the Project and share it on LinkedIn :) + +Happy Learning :) + +[← Previous Day](../day85/README.md) | [Next Day →](../day87/README.md) diff --git a/2023/day86/tasks.md b/2023/day86/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day87/README.md b/2023/day87/README.md new file mode 100644 index 0000000000..fa123ea638 --- /dev/null +++ b/2023/day87/README.md @@ -0,0 +1,24 @@ +# Project-8 + +========= + +# Project Description + +The project involves deploying a React application on AWS Elastic Beanstalk using GitHub Actions. +GitHub Actions lets you perform CI/CD integrated with your GitHub repository. + +## Task-01 + +- Get the source code from [GitHub](https://github.com/sitchatt/AWS_Elastic_BeanStalk_On_EC2.git). 
+

- Set up AWS Elastic Beanstalk

- Build the GitHub Actions Workflow

- Follow this [blog](https://www.linkedin.com/posts/sitabja-chatterjee_effortless-deployment-of-react-app-to-aws-activity-7053579065487687680-wZI8?utm_source=share&utm_medium=member_desktop) to understand it better

- Run the Project and share it on LinkedIn :)

Happy Learning :)

[← Previous Day](../day86/README.md) | [Next Day →](../day88/README.md) diff --git a/2023/day87/tasks.md b/2023/day87/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day88/README.md b/2023/day88/README.md new file mode 100644 index 0000000000..3668934da1 --- /dev/null +++ b/2023/day88/README.md @@ -0,0 +1,23 @@ +# Project-9 + +========= + +# Project Description + +The project involves deploying a Django Todo app on AWS EC2 using a Kubeadm Kubernetes cluster. + +A Kubernetes cluster helps with auto-scaling and auto-healing of your application. + +## Task-01 + +- Get a Django full-stack application from [GitHub](https://github.com/LondheShubham153/django-todo-cicd). + +- Set up the Kubernetes cluster using [this script](https://github.com/RishikeshOps/Scripts/blob/main/k8sss.sh) + +- Set up a Deployment and a Service for the application on Kubernetes. + +- Run the Project and share it on LinkedIn :) + +Happy Learning :) + +[← Previous Day](../day87/README.md) | [Next Day →](../day89/README.md) diff --git a/2023/day88/tasks.md b/2023/day88/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day89/README.md b/2023/day89/README.md new file mode 100644 index 0000000000..45ee46628d --- /dev/null +++ b/2023/day89/README.md @@ -0,0 +1,19 @@ +# Project-10 + +========= + +# Project Description + +The project involves mounting an AWS S3 bucket on Amazon EC2 Linux using S3FS. + +This is an AWS mini project that will teach you AWS, S3, EC2, and S3FS. + +## Task-01 + +- Create an IAM user and set policies for the project resources using this [blog](https://medium.com/@chetxn/project-8-devops-implementation-8300b9ed1f2). +- Make the best use of the aws-cli +- Run the Project and share it on LinkedIn :) + +Happy Learning :) + +[← Previous Day](../day88/README.md) | [Next Day →](../day90/README.md) diff --git a/2023/day89/tasks.md b/2023/day89/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day9/tasks.md b/2023/day9/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day90/README.md b/2023/day90/README.md new file mode 100644 index 0000000000..d28985c060 --- /dev/null +++ b/2023/day90/README.md @@ -0,0 +1,29 @@ +# Day 90: The Awesome Finale! 🎉 🎉 + +🚀 Can you believe it? You've hit the jackpot – Day 90, the grand finale of our DevOps bonanza. Time to give yourself a virtual high-five! + +### What's Next? + +While this marks the end of the official 90-day journey, remember that your learning journey in DevOps is far from over. There's always something new to explore, tools to master, and techniques to refine. We're continuing to curate more content, challenges, and resources to help you advance your DevOps expertise. + +### Share Your Achievement + +Share your journey with the world! Post about your accomplishments on social media using the hashtag #90DaysOfDevOps. Inspire others to join the DevOps movement and take charge of their learning path. + +### Keep the Momentum Going! + +The knowledge and skills you've gained during these 90 days are just the beginning. Keep practicing, experimenting, and collaborating. 
DevOps is a continuous journey of improvement and innovation.

### Star the Repository

If you've found value in this repository and the DevOps content we've curated, consider showing your appreciation by starring this repository. Your support motivates us to keep creating high-quality content and resources for the community.

**[🌟 Star this repository](https://github.com/LondheShubham153/90DaysOfDevOps)**

Thank you for being part of the "90 Days of DevOps" adventure.
Keep coding, automating, deploying, and innovating! 🎈

With gratitude,
@TrainWithShubham

[← Previous Day](../day89/README.md) diff --git a/2023/day90/tasks.md b/2023/day90/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2024/day01/README.md b/2024/day01/README.md new file mode 100644 index 0000000000..33caf1a759 --- /dev/null +++ b/2024/day01/README.md @@ -0,0 +1,27 @@ +# Introduction - Day 1

Welcome to the #90DaysOfDevOps Challenge with the #TrainWithShubham Community! Today, we begin our journey into the world of DevOps. Here’s what you need to do:

1. **Fork this Repository:**
   - Go to the repository on GitHub and fork it to your own account. This will allow you to track your progress and contribute.

2. **Start with a DevOps Roadmap:**
   - Watch the introductory video on DevOps: [DevOps Roadmap](https://youtu.be/g_QHuGq3E2Y?si=fR9K56-JevZTfrBK)

3. **Write a LinkedIn Post or a Small Article:**
   - Share your understanding of DevOps based on the video and your research. Cover the following points:

   - **What is DevOps:**


   - **What is Automation, Scaling, and Infrastructure:**


   - **Why DevOps is Important:**



4. **Engage with the Community:**
   - Share your LinkedIn post or article link in the community forum or on social media using the hashtags #90DaysOfDevOps and #TrainWithShubham.
   - Read and comment on posts from other participants to foster a collaborative learning environment.

diff --git a/2024/day02/readme.md b/2024/day02/readme.md new file mode 100644 index 0000000000..24bb7fe1e3 --- /dev/null +++ b/2024/day02/readme.md @@ -0,0 +1,41 @@ +## Basic Linux commands

### Listing commands
```ls option_flag arguments ```--> list the subdirectories and files available in the present directory

Examples:

- ``` ls -l ```--> list the files and directories in long list format with extra information
- ```ls -a ```--> list all, including hidden files and directories
- ```ls *.sh``` --> list all the files having a .sh extension.

- ```ls -i ``` --> list the files and directories with their index numbers (inodes)
- ``` ls -d */``` --> list only directories (we can also specify a pattern)

### Directory commands
- ```pwd``` --> print working directory. Gives the present working directory.

- ```cd path_to_directory``` --> change directory to the provided path

- ```cd ~ ``` or just ```cd ``` --> change directory to the home directory

- ``` cd - ``` --> go to the last working directory.

- ``` cd ..``` --> change directory one level back.

- ``` cd ../..``` --> change directory two levels back.

- ``` mkdir directoryName``` --> make a directory in a specific location

Examples:
```
mkdir newFolder # make a new folder 'newFolder'

mkdir .NewFolder # make a hidden directory (also . 
before a file to make it hidden) + +mkdir A B C D #make multiple directories at the same time + +mkdir /home/user/Mydirectory # make a new folder in a specific location + +mkdir -p A/B/C/D # make a nested directory +``` diff --git a/2024/day03/README.md b/2024/day03/README.md new file mode 100644 index 0000000000..3fc984d91b --- /dev/null +++ b/2024/day03/README.md @@ -0,0 +1,20 @@ +# Day 3 Task: Basic Linux Commands with a Twist + +Task: What are the Linux commands to + +1. View the content of a file and display line numbers. +2. Change the access permissions of files to make them readable, writable, and executable by the owner only. +3. Check the last 10 commands you have run. +4. Remove a directory and all its contents. +5. Create a `fruits.txt` file, add content (one fruit per line), and display the content. +6. Add content in `devops.txt` (one in each line) - Apple, Mango, Banana, Cherry, Kiwi, Orange, Guava. Then, append "Pineapple" to the end of the file. +7. Show the first three fruits from the file in reverse order. +8. Show the bottom three fruits from the file, and then sort them alphabetically. +9. Create another file `Colors.txt`, add content (one color per line), and display the content. +10. Add content in `Colors.txt` (one in each line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey. Then, prepend "Yellow" to the beginning of the file. +11. Find and display the lines that are common between `fruits.txt` and `Colors.txt`. +12. Count the number of lines, words, and characters in both `fruits.txt` and `Colors.txt`. + +Reference: [Linux Commands for DevOps Used Day-to-Day](https://www.linkedin.com/pulse/linux-commands-devops-used-day-to-day-activit-chetan-/) + +[← Previous Day](../day02/README.md) | [Next Day →](../day04/README.md) diff --git a/2024/day03/image/task 1.png b/2024/day03/image/task 1.png new file mode 100644 index 0000000000..6d43acbead Binary files /dev/null and b/2024/day03/image/task 1.png differ diff --git a/2024/day03/image/task 10.png b/2024/day03/image/task 10.png new file mode 100644 index 0000000000..bd1ad3ce03 Binary files /dev/null and b/2024/day03/image/task 10.png differ diff --git a/2024/day03/image/task 11.png b/2024/day03/image/task 11.png new file mode 100644 index 0000000000..92f1a020bf Binary files /dev/null and b/2024/day03/image/task 11.png differ diff --git a/2024/day03/image/task 12.png b/2024/day03/image/task 12.png new file mode 100644 index 0000000000..40cf2f5d66 Binary files /dev/null and b/2024/day03/image/task 12.png differ diff --git a/2024/day03/image/task 2.png b/2024/day03/image/task 2.png new file mode 100644 index 0000000000..321719e413 Binary files /dev/null and b/2024/day03/image/task 2.png differ diff --git a/2024/day03/image/task 3.png b/2024/day03/image/task 3.png new file mode 100644 index 0000000000..8264548702 Binary files /dev/null and b/2024/day03/image/task 3.png differ diff --git a/2024/day03/image/task 4.png b/2024/day03/image/task 4.png new file mode 100644 index 0000000000..f5f90b8a58 Binary files /dev/null and b/2024/day03/image/task 4.png differ diff --git a/2024/day03/image/task 5.png b/2024/day03/image/task 5.png new file mode 100644 index 0000000000..68966372f2 Binary files /dev/null and b/2024/day03/image/task 5.png differ diff --git a/2024/day03/image/task 6.png b/2024/day03/image/task 6.png new file mode 100644 index 0000000000..2ddfdbab26 Binary files /dev/null and b/2024/day03/image/task 6.png differ diff --git a/2024/day03/image/task 66.png b/2024/day03/image/task 66.png new file mode 100644 index 
0000000000..5360649b4f Binary files /dev/null and b/2024/day03/image/task 66.png differ diff --git a/2024/day03/image/task 7.png b/2024/day03/image/task 7.png new file mode 100644 index 0000000000..e16aa39374 Binary files /dev/null and b/2024/day03/image/task 7.png differ diff --git a/2024/day03/image/task 8.png b/2024/day03/image/task 8.png new file mode 100644 index 0000000000..48cd782dfb Binary files /dev/null and b/2024/day03/image/task 8.png differ diff --git a/2024/day03/image/task 9.png b/2024/day03/image/task 9.png new file mode 100644 index 0000000000..8013d510c7 Binary files /dev/null and b/2024/day03/image/task 9.png differ diff --git a/2024/day03/solution.md b/2024/day03/solution.md new file mode 100644 index 0000000000..3f094c9649 --- /dev/null +++ b/2024/day03/solution.md @@ -0,0 +1,51 @@ + +# Basic Linux Commands - Day 3 + +Task 1: View the content of a file and display line numbers. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%201.png) + +Task 2: Change the access permissions of files to make them readable, writable, and executable by the owner only. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%202.png) + +Task 3: Check the last 10 commands you have run. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%203.png) + +Task 4: Remove a directory and all its contents. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%204.png) + +Task 5: Create a `fruits.txt` file, add content (one fruit per line), and display the content. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%205.png) + +Task 6: Add content in `devops.txt` (one in each line) - Apple, Mango, Banana, Cherry, Kiwi, Orange, Guava. Then, append "Pineapple" to the end of the file. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%206.png) +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2066.png) + +Task 7: Show the first three fruits from the file in reverse order. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%207.png) + +Task 8: Show the bottom three fruits from the file, and then sort them alphabetically. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%208.png) + +Task 9: Create another file `Colors.txt`, add content (one color per line), and display the content. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%209.png) + +Task 10: Add content in `Colors.txt` (one in each line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey. Then, prepend "Yellow" to the beginning of the file. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2010.png) + +Task 11: Find and display the lines that are common between `fruits.txt` and `Colors.txt`. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2011.png) + +Task 12: Count the number of lines, words, and characters in both `fruits.txt` and `Colors.txt`. 
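
If you would rather try Tasks 11 and 12 yourself before looking at the screenshots, the core commands are roughly these (assuming both files exist in the current directory):

```bash
# Task 11: lines common to both files (-F fixed strings, -x whole-line match, -f patterns from file)
grep -Fxf fruits.txt Colors.txt

# Task 12: line, word, and character counts for both files
wc -l fruits.txt Colors.txt   # lines
wc -w fruits.txt Colors.txt   # words
wc -m fruits.txt Colors.txt   # characters
```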
+

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2012.png) diff --git a/2024/day04/README.md b/2024/day04/README.md new file mode 100644 index 0000000000..1eca473867 --- /dev/null +++ b/2024/day04/README.md @@ -0,0 +1,31 @@ +# Day 4 Task: Basic Linux Shell Scripting for DevOps Engineers

## What is a Kernel?

The kernel is a computer program that is the core of a computer’s operating system, with complete control over everything in the system.

## What is a Shell?

A shell is a special user program that provides an interface for users to interact with operating system services. It accepts human-readable commands from users and converts them into instructions that the kernel can understand. The shell is a command language interpreter that executes commands read from input devices such as keyboards or from files. It starts when the user logs in or opens a terminal.

## What is Linux Shell Scripting?

Linux shell scripting involves writing programs (scripts) that can be run by a Linux shell, such as bash (Bourne Again Shell). These scripts automate repetitive work, handle system administration tasks, and facilitate the interaction between users and the operating system.

**Tasks:**

- Explain in your own words and with examples what Shell Scripting means for DevOps.
- What is `#!/bin/bash`? Can we write `#!/bin/sh` as well?
- Write a Shell Script that prints `I will complete #90DaysOfDevOps challenge`.
- Write a Shell Script that takes user input, input from arguments, and prints the variables.
- Provide an example of an If-Else statement in Shell Scripting by comparing two numbers.

**Were the tasks challenging?**

These tasks are designed to introduce you to basic concepts of Linux shell scripting for DevOps. Share your experience and solutions on LinkedIn and let me know how it went!
:) + +**Article Reference:** [Click here to read basic Linux Shell Scripting](https://devopscube.com/linux-shell-scripting-for-devops/) + +**YouTube Video:** [EASIEST Shell Scripting Tutorial for DevOps Engineers](https://www.youtube.com/watch?v=_-D6gkRj7xc&list=PLlfy9GnSVerQr-Se9JRE_tZJk3OUoHCkh&index=3) + +[← Previous Day](../day03/README.md) | [Next Day →](../day05/README.md) diff --git a/2024/day04/image/task 1.png b/2024/day04/image/task 1.png new file mode 100644 index 0000000000..ffc9913f6e Binary files /dev/null and b/2024/day04/image/task 1.png differ diff --git a/2024/day04/image/task 11.png b/2024/day04/image/task 11.png new file mode 100644 index 0000000000..d4402482e1 Binary files /dev/null and b/2024/day04/image/task 11.png differ diff --git a/2024/day04/image/task 2.png b/2024/day04/image/task 2.png new file mode 100644 index 0000000000..4f7a735bc3 Binary files /dev/null and b/2024/day04/image/task 2.png differ diff --git a/2024/day04/image/task 3.png b/2024/day04/image/task 3.png new file mode 100644 index 0000000000..5baeb479a0 Binary files /dev/null and b/2024/day04/image/task 3.png differ diff --git a/2024/day04/image/task 4.png b/2024/day04/image/task 4.png new file mode 100644 index 0000000000..ea366a253a Binary files /dev/null and b/2024/day04/image/task 4.png differ diff --git a/2024/day04/image/task 5.png b/2024/day04/image/task 5.png new file mode 100644 index 0000000000..9ab2dc3eef Binary files /dev/null and b/2024/day04/image/task 5.png differ diff --git a/2024/day04/solution.md b/2024/day04/solution.md new file mode 100644 index 0000000000..b9020734eb --- /dev/null +++ b/2024/day04/solution.md @@ -0,0 +1,28 @@ + +# Day 4 Answers: Basic Linux Shell Scripting for DevOps Engineers + +Task 1: Explain in your own words and with examples what Shell Scripting means for DevOps. +- 'Shell Scripting is writing a series of commands in a script file to automate tasks in the Unix/Linux shell. For DevOps, shell scripting is crucial for automating repetitive tasks, managing system configurations, deploying applications, and integrating various tools and processes in a CI/CD pipeline. It enhances efficiency, reduces errors, and saves time.' + +Example: Automating server setup +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%201.png) +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%2011.png) + +Task 2: What is `#!/bin/bash`? Can we write `#!/bin/sh` as well? +- `#!/bin/bash` is called a "shebang" line. It indicates that the script should be run using the Bash shell. + - `#!/bin/bash`: Uses Bash as the interpreter. It supports advanced features like arrays, associative arrays, and functions. + - `#!/bin/sh`: Uses the Bourne shell. It’s more POSIX-compliant and is generally compatible with different Unix shells. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%202.png) + +Task 3: Write a Shell Script that prints `I will complete #90DaysOfDevOps challenge`. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%203.png) + +Task 4: Write a Shell Script that takes user input, input from arguments, and prints the variables. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%204.png) + +Task 5: Provide an example of an If-Else statement in Shell Scripting by comparing two numbers. 
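
Before checking the screenshot, a minimal version of such a script could look like this (two numbers passed as command-line arguments, with no input validation):

```bash
#!/bin/bash
# Compare two numbers passed as arguments
a=$1
b=$2

if [ "$a" -gt "$b" ]; then
    echo "$a is greater than $b"
elif [ "$a" -lt "$b" ]; then
    echo "$a is less than $b"
else
    echo "$a and $b are equal"
fi
```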
+

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%205.png) diff --git a/2024/day05/README.md b/2024/day05/README.md new file mode 100644 index 0000000000..471ab91986 --- /dev/null +++ b/2024/day05/README.md @@ -0,0 +1,40 @@ +# Day 5 Task: Advanced Linux Shell Scripting for DevOps Engineers with User Management

If you noticed that there are a total of 90 sub-directories in the '2023' directory of this repository, did you ever wonder how they were created? Manually one by one, with a script, or with a single command?

All 90 directories were created within seconds using a simple command:

`mkdir day{1..90}`

### Tasks

1. **Create Directories Using Shell Script:**
   - Write a bash script `createDirectories.sh` that, when executed with three arguments (directory name, start number of directories, and end number of directories), creates a specified number of directories with a dynamic directory name.
   - Example 1: When executed as `./createDirectories.sh day 1 90`, it creates 90 directories as `day1 day2 day3 ... day90`.
   - Example 2: When executed as `./createDirectories.sh Movie 20 50`, it creates 31 directories as `Movie20 Movie21 Movie22 ... Movie50`.

   Notes: You may need to use loops or commands (or both), based on your preference; a sketch of one possible approach appears at the end of this page. [Check out this reference: Bash Scripting For Loop](https://www.geeksforgeeks.org/bash-scripting-for-loop/)

2. **Create a Script to Backup All Your Work:**
   - Backups are an important part of a DevOps Engineer's day-to-day activities. The video in the references will help you understand how a DevOps Engineer takes backups (it can feel a bit difficult but keep trying, nothing is impossible).
   - Watch [this video](https://youtu.be/aolKiws4Joc) for guidance.

   In case of doubts, post them in the [Discord Channel for #90DaysOfDevOps](https://discord.gg/hs3Pmc5F).

3. **Read About Cron and Crontab to Automate the Backup Script:**
   - Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit, or delete entries to cron. A crontab file is a user file that holds the scheduling information.
   - Watch this video for reference: [Cron and Crontab](https://youtu.be/aolKiws4Joc).

4. **Read About User Management:**
   - A user is an entity in a Linux operating system that can manipulate files and perform several other operations. Each user is assigned an ID that is unique within the system. IDs 0 to 999 are assigned to system users, and local user IDs start from 1000 onwards.
   - Create 2 users and display their usernames.
   - [Check out this reference: User Management in Linux](https://www.geeksforgeeks.org/user-management-in-linux/).

5. **Post Your Progress:**
   - Post your daily work on LinkedIn and let me know how it went! Writing an article about your experience is highly encouraged.

**Were the tasks challenging?**

These tasks are designed to push your skills and introduce you to advanced concepts in Linux shell scripting and user management. Share your experience and solutions on LinkedIn and let me know how it went!
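
As promised above, here is one possible shape for `createDirectories.sh`, shown as a sketch rather than the official solution; it uses a C-style loop, though `seq` or brace expansion would work just as well:

```bash
#!/bin/bash
# Usage: ./createDirectories.sh <name> <start> <end>
name=$1
start=$2
end=$3

# Create one directory per number in the range, e.g. day1 .. day90
for (( i = start; i <= end; i++ )); do
    mkdir -p "${name}${i}"
done

echo "Created directories ${name}${start} to ${name}${end}"
```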
+ +[← Previous Day](../day04/README.md) | [Next Day →](../day06/README.md) diff --git a/2024/day05/image/task 1-2.png b/2024/day05/image/task 1-2.png new file mode 100644 index 0000000000..66d467cf1d Binary files /dev/null and b/2024/day05/image/task 1-2.png differ diff --git a/2024/day05/image/task 1-3.png b/2024/day05/image/task 1-3.png new file mode 100644 index 0000000000..d5b2699043 Binary files /dev/null and b/2024/day05/image/task 1-3.png differ diff --git a/2024/day05/image/task 1.png b/2024/day05/image/task 1.png new file mode 100644 index 0000000000..1ac28abb7c Binary files /dev/null and b/2024/day05/image/task 1.png differ diff --git a/2024/day05/image/task 2-1.png b/2024/day05/image/task 2-1.png new file mode 100644 index 0000000000..f62a8a053a Binary files /dev/null and b/2024/day05/image/task 2-1.png differ diff --git a/2024/day05/image/task 2.png b/2024/day05/image/task 2.png new file mode 100644 index 0000000000..32fa6a6a33 Binary files /dev/null and b/2024/day05/image/task 2.png differ diff --git a/2024/day05/image/task 3-1.png b/2024/day05/image/task 3-1.png new file mode 100644 index 0000000000..14086027a7 Binary files /dev/null and b/2024/day05/image/task 3-1.png differ diff --git a/2024/day05/image/task 3.png b/2024/day05/image/task 3.png new file mode 100644 index 0000000000..0c6bf33d1d Binary files /dev/null and b/2024/day05/image/task 3.png differ diff --git a/2024/day05/image/task 4.png b/2024/day05/image/task 4.png new file mode 100644 index 0000000000..8145a7ab0a Binary files /dev/null and b/2024/day05/image/task 4.png differ diff --git a/2024/day05/solution.md b/2024/day05/solution.md new file mode 100644 index 0000000000..aea6029950 --- /dev/null +++ b/2024/day05/solution.md @@ -0,0 +1,41 @@ + +# Day 5 Answers: Advanced Linux Shell Scripting for DevOps Engineers with User Management + +### Tasks + +1. **Create Directories Using Shell Script:** + - Write a bash script `createDirectories.sh` that, when executed with three arguments (directory name, start number of directories, and end number of directories), creates a specified number of directories with a dynamic directory name. + - Example 1: When executed as `./createDirectories.sh day 1 90`, it creates 90 directories as `day1 day2 day3 ... day90`. + - Example 2: When executed as `./createDirectories.sh Movie 20 50`, it creates 31 directories as `Movie20 Movie21 Movie22 ... Movie50`. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%201.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%201-2.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%201-3.png) + +2. **Create a Script to Backup All Your Work:** + - Backups are an important part of a DevOps Engineer's day-to-day activities. The video in the references will help you understand how a DevOps Engineer takes backups (it can feel a bit difficult but keep trying, nothing is impossible). + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%202.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%202-1.png) + +3. **Read About Cron and Crontab to Automate the Backup Script:** + - Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit, or delete entries to cron. A crontab file is a user file that holds the scheduling information. 
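
For instance, a crontab entry that runs a backup script every day at 1:00 AM could look like this (the script and directory paths are placeholders):

```bash
# minute hour day-of-month month day-of-week  command
0 1 * * * /bin/bash /home/user/backup.sh /home/user/work /home/user/backups
```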
+ + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%203.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%203-1.png) + +4. **Read About User Management:** + - A user is an entity in a Linux operating system that can manipulate files and perform several other operations. Each user is assigned an ID that is unique within the system. IDs 0 to 999 are assigned to system users, and local user IDs start from 1000 onwards. + - Create 2 users and display their usernames. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%204.png) + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/). diff --git a/2024/day06/README.md b/2024/day06/README.md new file mode 100644 index 0000000000..f6e64a178d --- /dev/null +++ b/2024/day06/README.md @@ -0,0 +1,43 @@ +# Day 6 Task: File Permissions and Access Control Lists + +### Today is more on Reading, Learning, and Implementing File Permissions + +The concept of Linux file permission and ownership is important in Linux. Today, we will work on Linux permissions and ownership, and perform tasks related to both. + +## Tasks + +1. **Understanding File Permissions:** + - Create a simple file and run `ls -ltr` to see the details of the files. [Refer to Notes](https://github.com/LondheShubham153/90DaysOfDevOps/tree/master/2023/day06/notes) + - Each of the three permissions are assigned to three defined categories of users. The categories are: + - **Owner:** The owner of the file or application. + - Use `chown` to change the ownership permission of a file or directory. + - **Group:** The group that owns the file or application. + - Use `chgrp` to change the group permission of a file or directory. + - **Others:** All users with access to the system (outside the users in a group). + - Use `chmod` to change the other users' permissions of a file or directory. + - Task: Change the user permissions of the file and note the changes after running `ls -ltr`. + +2. **Writing an Article:** + - Write an article about file permissions based on your understanding from the notes. + +3. **Access Control Lists (ACL):** + - Read about ACL and try out the commands `getfacl` and `setfacl`. + - Task: Create a directory and set specific ACL permissions for different users and groups. Verify the permissions using `getfacl`. + +4. **Additional Tasks:** + - **Task:** Create a script that changes the permissions of multiple files in a directory based on user input. + - **Task:** Write a script that sets ACL permissions for a user on a given file, based on user input. + +5. **Understanding Sticky Bit, SUID, and SGID:** + - Read about sticky bit, SUID, and SGID. + - Task: Create examples demonstrating the use of sticky bit, SUID, and SGID, and explain their significance. + +6. **Backup and Restore Permissions:** + - Task: Create a script that backs up the current permissions of files in a directory to a file. + - Task: Create another script that restores the permissions from the backup file. + +In case of any doubts, post them on the [Discord Community](https://discord.gg/hs3Pmc5F). 
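
As a quick reference for the ACL task above, the core commands look roughly like this; the user, group, and directory names are placeholders:

```bash
mkdir shared-dir

# Grant a specific user full access and a specific group read+execute
setfacl -m u:alice:rwx shared-dir
setfacl -m g:devteam:rx shared-dir

# Verify the ACL entries
getfacl shared-dir
```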
+ +**Happy Learning!** + +[← Previous Day](../day05/README.md) | [Next Day →](../day07/README.md) diff --git a/2024/day06/image/task1.png b/2024/day06/image/task1.png new file mode 100644 index 0000000000..9c1d5b2bb6 Binary files /dev/null and b/2024/day06/image/task1.png differ diff --git a/2024/day06/image/task3.png b/2024/day06/image/task3.png new file mode 100644 index 0000000000..0e49d81490 Binary files /dev/null and b/2024/day06/image/task3.png differ diff --git a/2024/day06/image/task4-1.png b/2024/day06/image/task4-1.png new file mode 100644 index 0000000000..36cc2d3eec Binary files /dev/null and b/2024/day06/image/task4-1.png differ diff --git a/2024/day06/image/task4.png b/2024/day06/image/task4.png new file mode 100644 index 0000000000..cc4c72d08d Binary files /dev/null and b/2024/day06/image/task4.png differ diff --git a/2024/day06/image/task5-1.png b/2024/day06/image/task5-1.png new file mode 100644 index 0000000000..57e7b02381 Binary files /dev/null and b/2024/day06/image/task5-1.png differ diff --git a/2024/day06/image/task5-2.png b/2024/day06/image/task5-2.png new file mode 100644 index 0000000000..4a8805dc46 Binary files /dev/null and b/2024/day06/image/task5-2.png differ diff --git a/2024/day06/image/task5.png b/2024/day06/image/task5.png new file mode 100644 index 0000000000..238145ecda Binary files /dev/null and b/2024/day06/image/task5.png differ diff --git a/2024/day06/image/task6-1.png b/2024/day06/image/task6-1.png new file mode 100644 index 0000000000..2669695bfe Binary files /dev/null and b/2024/day06/image/task6-1.png differ diff --git a/2024/day06/image/task6.png b/2024/day06/image/task6.png new file mode 100644 index 0000000000..f4c5cfc449 Binary files /dev/null and b/2024/day06/image/task6.png differ diff --git a/2024/day06/solution.md b/2024/day06/solution.md new file mode 100644 index 0000000000..2a6dea82c8 --- /dev/null +++ b/2024/day06/solution.md @@ -0,0 +1,94 @@ +# Day 6 Answers: File Permissions and Access Control Lists + +### Tasks + +1. **Understanding File Permissions:** + - Create a simple file and run `ls -ltr` to see the details of the files. + - Each of the three permissions are assigned to three defined categories of users. The categories are: + - **Owner:** The owner of the file or application. + - Use `chown` to change the ownership permission of a file or directory. + - **Group:** The group that owns the file or application. + - Use `chgrp` to change the group permission of a file or directory. + - **Others:** All users with access to the system (outside the users in a group). + - Use `chmod` to change the other users' permissions of a file or directory. + - Task: Change the user permissions of the file and note the changes after running `ls -ltr`. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task1.png) + +2. **Writing an Article:** + - Write an article about file permissions based on your understanding from the notes. + + **Answer** + + - **Understanding File Permissions in Linux** + - File permissions in Linux are critical for maintaining security and proper access control. They define who can read, write, and execute a file or directory. Here, we explore the concepts and commands related to file permissions. + + - **Basic Permissions** + - Permissions in Linux are represented by a three-digit number, where each digit represents a different set of users: owner, group, and others. 
+

       - **Highest Permission (per digit):** `7` (4+2+1)
       - **Maximum Permission:** `777`; new files, however, are created with at most `666` for security reasons, meaning files never get execute permission by default.
       - **Default File Permission:** `644` (with the common umask value of `022`)
       - **Default Directory Permission:** `755`; directories keep the execute bit so they can be entered for navigation.
       - **Lowest Permission:** `000` (not recommended)

     - **Categories of Users**
       - Each of the three permissions is assigned to three defined categories of users:

       - **Owner**: The owner of the file or application.
         - Command: `chown` is used to change the ownership of a file or directory.
       - **Group**: The group that owns the file or application.
         - Command: `chgrp` is used to change the group ownership of a file or directory.
       - **Others**: All users with access to the system.
         - Command: `chmod` is used to change the permissions for other users.

     - **Special Permissions**
       - **SUID (Set User ID)**: If SUID is set on an executable file and a normal user executes it, the process runs with the rights of the file's owner instead of the normal user (e.g., the `passwd` command).
       - **SGID (Set Group ID)**: If SGID is set on a directory, all subdirectories and files created inside it inherit the group ownership of the main directory, regardless of who creates them.
       - **Sticky Bit**: Used on folders to prevent deletion of the folder and its contents by other users even though they have write permissions. Only the owner and the root user can delete other users' data in a folder where the sticky bit is set.

3. **Access Control Lists (ACL):**
   - Read about ACL and try out the commands `getfacl` and `setfacl`.
   - Task: Create a directory and set specific ACL permissions for different users and groups. Verify the permissions using `getfacl`.

   **Answer**
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task3.png)

4. **Additional Tasks:**
   - **Task:** Create a script that changes the permissions of multiple files in a directory based on user input.

   **Answer**
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task4.png)

   - **Task:** Write a script that sets ACL permissions for a user on a given file, based on user input.

   **Answer**
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task4-1.png)

5. **Understanding Sticky Bit, SUID, and SGID:**
   - Read about sticky bit, SUID, and SGID.
     - Sticky bit: Used on directories to prevent users from deleting files they do not own.
     - SUID (Set User ID): Allows users to run an executable with the permissions of the executable's owner.
     - SGID (Set Group ID): Allows users to run an executable with the permissions of the executable's group.
   - Task: Create examples demonstrating the use of sticky bit, SUID, and SGID, and explain their significance.

   **Answer**
   - Sticky bit:
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task5.png)
   - SUID:
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task5-1.png)
   - SGID:
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task5-2.png)

6. **Backup and Restore Permissions:**
   - Task: Create a script that backs up the current permissions of files in a directory to a file. 
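
Note that `getfacl` and `setfacl` already do most of the heavy lifting for both parts of this task. A minimal sketch, assuming the directory name is a placeholder (run the restore from the same working directory, since the dump records relative paths):

```bash
#!/bin/bash
# Back up the permissions (including ACLs) of everything under a directory
getfacl -R my-directory > permissions_backup.acl

# Restore them later from the same working directory with:
# setfacl --restore=permissions_backup.acl
```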
+ + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task6.png) + + - Task: Create another script that restores the permissions from the backup file. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task6-1.png) diff --git a/2024/day07/README.md b/2024/day07/README.md new file mode 100644 index 0000000000..9e9a2f36c1 --- /dev/null +++ b/2024/day07/README.md @@ -0,0 +1,58 @@ +# Day 7 Task: Understanding Package Manager and Systemctl + +### What is a Package Manager in Linux? + +In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure, and manage software packages on an operating system. The package manager can be a graphical application like a software center or a command line tool like apt-get or pacman. + +You’ll often find me using the term ‘package’ in tutorials and articles. To understand a package manager, you must understand what a package is. + +### What is a Package? + +A package is usually referred to as an application but it could be a GUI application, command line tool, or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration file, and sometimes information about the dependencies. + +### Different Kinds of Package Managers + +Package managers differ based on the packaging system but the same packaging system may have more than one package manager. + +For example, RPM has Yum and DNF package managers. For DEB, you have apt-get, aptitude command line-based package managers. + +## Tasks + +1. **Install Docker and Jenkins:** + - Install Docker and Jenkins on your system from your terminal using package managers. + +2. **Write a Blog or Article:** + - Write a small blog or article on how to install these tools using package managers on Ubuntu and CentOS. + +### Systemctl and Systemd + +Systemctl is used to examine and control the state of the “systemd” system and service manager. Systemd is a system and service manager for Unix-like operating systems (most distributions, but not all). + +## Tasks + +1. **Check Docker Service Status:** + - Check the status of the Docker service on your system (ensure you have completed the installation tasks above). + +2. **Manage Jenkins Service:** + - Stop the Jenkins service and post before and after screenshots. + +3. **Read About Systemctl vs. Service:** + - Read about the differences between the `systemctl` and `service` commands. + - Example: `systemctl status docker` vs. `service docker status`. + + For reference, read [this article](https://www.howtogeek.com/devops/how-to-check-if-the-docker-daemon-or-a-container-is-running/#:~:text=Checking%20With%20Systemctl&text=Check%20what%27s%20displayed%20under%20%E2%80%9CActive,running%20sudo%20systemctl%20start%20docker%20). + +### Additional Tasks + +4. **Automate Service Management:** + - Write a script to automate the starting and stopping of Docker and Jenkins services. + +5. **Enable and Disable Services:** + - Use systemctl to enable Docker to start on boot and disable Jenkins from starting on boot. + +6. **Analyze Logs:** + - Use journalctl to analyze the logs of the Docker and Jenkins services. Post your findings. + +#### Post about your progress and invite your friends to join the #90DaysOfDevOps challenge. 
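
For item 4 of the additional tasks, a minimal sketch of such a service-management script might look like this (it assumes a systemd-based system and sudo privileges):

```bash
#!/bin/bash
# Start or stop Docker and Jenkins together
# Usage: ./manage_services.sh start|stop
action=$1

if [ "$action" != "start" ] && [ "$action" != "stop" ]; then
    echo "Usage: $0 start|stop"
    exit 1
fi

for svc in docker jenkins; do
    sudo systemctl "$action" "$svc"
    # Print a one-word state (active/inactive) as confirmation
    systemctl is-active "$svc"
done
```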
+

[← Previous Day](../day06/README.md) | [Next Day →](../day08/README.md) diff --git a/2024/day07/image/task1-2.png b/2024/day07/image/task1-2.png new file mode 100644 index 0000000000..973ed75f93 Binary files /dev/null and b/2024/day07/image/task1-2.png differ diff --git a/2024/day07/image/task1.png b/2024/day07/image/task1.png new file mode 100644 index 0000000000..36c1c2e283 Binary files /dev/null and b/2024/day07/image/task1.png differ diff --git a/2024/day07/image/task4.png b/2024/day07/image/task4.png new file mode 100644 index 0000000000..2d0936df22 Binary files /dev/null and b/2024/day07/image/task4.png differ diff --git a/2024/day07/image/task5-1.png b/2024/day07/image/task5-1.png new file mode 100644 index 0000000000..7381edc4bd Binary files /dev/null and b/2024/day07/image/task5-1.png differ diff --git a/2024/day07/image/task5.png b/2024/day07/image/task5.png new file mode 100644 index 0000000000..eec8472230 Binary files /dev/null and b/2024/day07/image/task5.png differ diff --git a/2024/day07/image/task6-1.png b/2024/day07/image/task6-1.png new file mode 100644 index 0000000000..dec83cdce9 Binary files /dev/null and b/2024/day07/image/task6-1.png differ diff --git a/2024/day07/image/task6.png b/2024/day07/image/task6.png new file mode 100644 index 0000000000..9290cfc617 Binary files /dev/null and b/2024/day07/image/task6.png differ diff --git a/2024/day07/image/taskj2.png b/2024/day07/image/taskj2.png new file mode 100644 index 0000000000..3cfd509f89 Binary files /dev/null and b/2024/day07/image/taskj2.png differ diff --git a/2024/day07/solution.md b/2024/day07/solution.md new file mode 100644 index 0000000000..6ef7028b70 --- /dev/null +++ b/2024/day07/solution.md @@ -0,0 +1,167 @@ +# Day 7 Answers: Understanding Package Manager and Systemctl

## Tasks

1. **Install Docker and Jenkins:**
   - Install Docker and Jenkins on your system from your terminal using package managers.

   **Answer**
   - **First: Installing Docker**
   - Update the package list and install required packages:
   ```bash
   sudo apt update
   sudo apt install apt-transport-https ca-certificates curl software-properties-common
   ```
   - Add Docker’s official GPG key:
   ```bash
   curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
   ```
   - Add the Docker APT repository:
   ```bash
   sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
   ```
   - Update the package list again:
   ```bash
   sudo apt update
   ```
   - Install Docker:
   ```bash
   sudo apt install docker-ce
   ```
   - Check the Docker installation:
   ```bash
   sudo systemctl status docker
   ```

   - **Installing Jenkins**
   - Add the Jenkins repository key to the system:
   ```bash
   curl -fsSL https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
   ```
   - Add the Jenkins repository:
   ```bash
   sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
   ```
   - Update the package list:
   ```bash
   sudo apt update
   ```
   - Install Jenkins:
   ```bash
   sudo apt install jenkins
   ```
   - Start Jenkins:
   ```bash
   sudo systemctl start jenkins
   ```
   - Note:
     - First, check whether Java is installed or not:
     ```bash
     java -version
     ```
     - If it is not installed:
     ```bash
     sudo apt install default-jre
     ```

   Output
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task1.png)

   Output (Jenkins-UI)
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task1-2.png)

2. 
**Write a Blog or Article:**
   - Write a small blog or article on how to install these tools using package managers on Ubuntu and CentOS.

   **Answer**
   1. Introduction:
      - Briefly introduce Docker and Jenkins.
      - Mention the operating systems (Ubuntu and CentOS) covered.
   2. Installing Docker on Ubuntu:
      - List the steps as detailed above.
   3. Installing Docker on CentOS:
      - Provide similar steps adjusted for CentOS.
   4. Installing Jenkins on Ubuntu:
      - List the steps as detailed above.
   5. Installing Jenkins on CentOS:
      - Provide similar steps adjusted for CentOS.

### Systemctl and Systemd

Systemctl is used to examine and control the state of the “systemd” system and service manager. Systemd is a system and service manager for Unix-like operating systems (most distributions, but not all).

## Tasks

1. **Check Docker Service Status:**
   - Check the status of the Docker service on your system (ensure you have completed the installation tasks above).

   **Answer**
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task5.png)

2. **Manage Jenkins Service:**
   - Stop the Jenkins service and post before and after screenshots.

   **Answer**
   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/taskj2.png)

3. **Read About Systemctl vs. Service:**
   - Read about the differences between the `systemctl` and `service` commands.
   - Example: `systemctl status docker` vs. `service docker status`.

   **Answer**
   - Understanding the `systemctl` and `service` Commands
   - Both `systemctl` and `service` commands are used to manage system services in Linux, but they differ in terms of usage, functionality, and the system architectures they support.
   - **`systemctl` Command**
     - `systemctl` is a command used to introspect and control the state of the `systemd` system and service manager. It is more modern and is used in systems that use `systemd` as their init system, which is common in many contemporary Linux distributions.
     - Examples:
       - Check the status of the Docker service:
       ```bash
       sudo systemctl status docker
       ```
       - Start the Jenkins service:
       ```bash
       sudo systemctl start jenkins
       ```
       - Stop the Docker service:
       ```bash
       sudo systemctl stop docker
       ```
       - Enable the Jenkins service to start at boot:
       ```bash
       sudo systemctl enable jenkins
       ```

   - **`service` Command**
     - `service` is a command that works with the older init systems (like SysVinit). It provides a way to start, stop, and check the status of services. While it is still available on systems using `systemd` for backward compatibility, its usage is generally discouraged in favor of `systemctl`.
     - Examples:
       - Check the status of the Docker service:
       ```bash
       sudo service docker status
       ```
       - Start the Jenkins service:
       ```bash
       sudo service jenkins start
       ```
       - Stop the Docker service:
       ```bash
       sudo service docker stop
       ```

   - **Key Differences**
     - 1 System Architecture:
       - `systemctl` works with `systemd`.
       - `service` works with SysVinit and is compatible with `systemd` for backward compatibility.
     - 2 Functionality:
       - `systemctl` offers more functionality and control over services, including management of the service's state (start, stop, restart, reload), enabling/disabling services at boot, and querying detailed service status.
       - `service` provides basic functionality for managing services, such as starting, stopping, and checking the status of services. 
+ - 3 Syntax and Usage: + - `systemctl` uses a more unified syntax for managing services. + - `service` has a simpler and more traditional syntax. + +### Additional Tasks + +4. **Automate Service Management:** + - Write a script to automate the starting and stopping of Docker and Jenkins services. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task4.png) + +5. **Enable and Disable Services:** + - Use systemctl to enable Docker to start on boot and disable Jenkins from starting on boot. + + **Answer** + - Enable Docker to start on boot: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task5.png) + + - Disable Jenkins from starting on boot: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task5-1.png) + +6. **Analyze Logs:** + - Use journalctl to analyze the logs of the Docker and Jenkins services. Post your findings. + + **Answer** + - Docker Logs: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task6.png) + + - Jenkins Logs: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task6-1.png) \ No newline at end of file diff --git a/2024/day08/README.md b/2024/day08/README.md new file mode 100644 index 0000000000..0f7c48f506 --- /dev/null +++ b/2024/day08/README.md @@ -0,0 +1,29 @@ +# Day 8 Task: Shell Scripting Challenge + +### Task 1: Comments +In bash scripts, comments are used to add explanatory notes or disable certain lines of code. Your task is to create a bash script with comments explaining what the script does. + +### Task 2: Echo +The echo command is used to display messages on the terminal. Your task is to create a bash script that uses echo to print a message of your choice. + +### Task 3: Variables +Variables in bash are used to store data and can be referenced by their name. Your task is to create a bash script that declares variables and assigns values to them. + +### Task 4: Using Variables +Now that you have declared variables, let's use them to perform a simple task. Create a bash script that takes two variables (numbers) as input and prints their sum using those variables. + +### Task 5: Using Built-in Variables +Bash provides several built-in variables that hold useful information. Your task is to create a bash script that utilizes at least three different built-in variables to display relevant information. + +### Task 6: Wildcards +Wildcards are special characters used to perform pattern matching when working with files. Your task is to create a bash script that utilizes wildcards to list all the files with a specific extension in a directory. + +## Submission Instructions: +- Create a single bash script that completes all the tasks mentioned above. +- Add comments at appropriate places to explain what each part of the script does. +- Ensure that your script is well-documented and easy to understand. +- To submit your entry, create a GitHub repository and commit your script to it. + +**Good luck with Day 8 of the Bash Scripting Challenge! Tomorrow, the difficulty will increase as we move on to more advanced concepts. 
Happy scripting!** + +[← Previous Day](../day07/README.md) | [Next Day →](../day09/README.md) diff --git a/2024/day08/image/task1.png b/2024/day08/image/task1.png new file mode 100644 index 0000000000..c5bbcb006b Binary files /dev/null and b/2024/day08/image/task1.png differ diff --git a/2024/day08/image/task2.png b/2024/day08/image/task2.png new file mode 100644 index 0000000000..a2b9968c52 Binary files /dev/null and b/2024/day08/image/task2.png differ diff --git a/2024/day08/image/task3.png b/2024/day08/image/task3.png new file mode 100644 index 0000000000..b3ca5d7638 Binary files /dev/null and b/2024/day08/image/task3.png differ diff --git a/2024/day08/image/task4.png b/2024/day08/image/task4.png new file mode 100644 index 0000000000..451315a0b4 Binary files /dev/null and b/2024/day08/image/task4.png differ diff --git a/2024/day08/image/task5.png b/2024/day08/image/task5.png new file mode 100644 index 0000000000..6e27850692 Binary files /dev/null and b/2024/day08/image/task5.png differ diff --git a/2024/day08/image/task6.png b/2024/day08/image/task6.png new file mode 100644 index 0000000000..2c987608db Binary files /dev/null and b/2024/day08/image/task6.png differ diff --git a/2024/day08/solution.md b/2024/day08/solution.md new file mode 100644 index 0000000000..3890e5a171 --- /dev/null +++ b/2024/day08/solution.md @@ -0,0 +1,47 @@ +# Day 8 Answers: Shell Scripting Challenge + +## Tasks + +1. **Comments** + - In bash scripts, comments are used to add explanatory notes or disable certain lines of code. Your task is to create a bash script with comments explaining what the script does. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task1.png) + +2. **Echo** + - The echo command is used to display messages on the terminal. Your task is to create a bash script that uses echo to print a message of your choice. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task2.png) + +3. **Variables** + - Variables in bash are used to store data and can be referenced by their name. Your task is to create a bash script that declares variables and assigns values to them. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task3.png) + +4. **Using Variables** + - Now that you have declared variables, let's use them to perform a simple task. Create a bash script that takes two variables (numbers) as input and prints their sum using those variables. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task4.png) + +5. **Using Built-in Variables** + - Bash provides several built-in variables that hold useful information. Your task is to create a bash script that utilizes at least three different built-in variables to display relevant information. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task5.png) + +6. **Wildcards** + - Wildcards are special characters used to perform pattern matching when working with files. Your task is to create a bash script that utilizes wildcards to list all the files with a specific extension in a directory. 
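
One possible shape for such a script, before peeking at the screenshot (directory and extension are taken as optional arguments with defaults):

```bash
#!/bin/bash
# List all files with a given extension in a directory
# Usage: ./list_by_ext.sh [directory] [extension]
dir=${1:-.}
ext=${2:-sh}

# The * wildcard matches any file name ending in the given extension
ls "$dir"/*."$ext" 2>/dev/null || echo "No .$ext files found in $dir"
```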
+

   **Answer**

   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task6.png)

[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day09/README.md b/2024/day09/README.md new file mode 100644 index 0000000000..33140c0997 --- /dev/null +++ b/2024/day09/README.md @@ -0,0 +1,80 @@ +# Day 9 Task: Shell Scripting Challenge - Directory Backup with Rotation


## Challenge Description

Your task is to create a bash script that takes a directory path as a command-line argument and performs a backup of the directory. The script should create timestamped backup folders and copy all the files from the specified directory into the backup folder.

Additionally, the script should implement a rotation mechanism to keep only the last 3 backups. This means that if there are more than 3 backup folders, the oldest backup folders should be removed to ensure only the most recent backups are retained.

> The script will create a timestamped backup folder inside the specified directory and copy all the files into it. It will also check for existing backup folders and remove the oldest backups to keep only the last 3 backups.

## Example Usage

Assume the script is named `backup_with_rotation.sh`. Here's an example of how it will look, assuming the script is executed with the following commands on different dates:

1. First Execution (2023-07-30):

```
$ ./backup_with_rotation.sh /home/user/documents
```

Output:

```
Backup created: /home/user/documents/backup_2023-07-30_12-30-45
Backup created: /home/user/documents/backup_2023-07-30_15-20-10
Backup created: /home/user/documents/backup_2023-07-30_18-40-55
```

After this execution, the /home/user/documents directory will contain the following items:

```
backup_2023-07-30_12-30-45
backup_2023-07-30_15-20-10
backup_2023-07-30_18-40-55
file1.txt
file2.txt
...
```

2. Second Execution (2023-08-01):

```
$ ./backup_with_rotation.sh /home/user/documents
```

Output:

```
Backup created: /home/user/documents/backup_2023-08-01_09-15-30
```

After this execution, the /home/user/documents directory will contain the following items:

```
backup_2023-07-30_15-20-10
backup_2023-07-30_18-40-55
backup_2023-08-01_09-15-30
file1.txt
file2.txt
...
```

In this example, the script creates backup folders with timestamped names and retains only the last 3 backups while removing the older backups.

## Submission Instructions

- Create a bash script named `backup_with_rotation.sh` that implements the Directory Backup with Rotation as described in the challenge.
- Add comments in the script to explain the purpose and logic of each part.
- Submit your entry by pushing the script to your GitHub repository.

Congratulations on completing Day 9 of the Bash Scripting Challenge! The challenge focuses on creating a backup script with rotation capabilities to manage multiple backups efficiently. Happy scripting and backing up!

Happy Learning

[← Previous Day](../day08/README.md) | [Next Day →](../day10/README.md)
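
Before the solution file below, here is a minimal sketch of one possible `backup_with_rotation.sh`, assuming GNU `date`, `find`, and `xargs` are available; treat it as a starting point rather than the official answer:

```bash
#!/bin/bash
# Usage: ./backup_with_rotation.sh <directory>
src=$1
[ -d "$src" ] || { echo "Usage: $0 <directory>"; exit 1; }

# Create a timestamped backup folder, e.g. backup_2023-07-30_12-30-45
backup_dir="$src/backup_$(date +%F_%H-%M-%S)"
mkdir -p "$backup_dir"

# Copy only the regular files at the top level (skips older backup folders)
find "$src" -maxdepth 1 -type f -exec cp {} "$backup_dir" \;
echo "Backup created: $backup_dir"

# Rotation: keep the 3 newest backup folders, delete the rest
ls -dt "$src"/backup_* | tail -n +4 | xargs -r rm -rf
```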
diff --git a/2024/day09/image/bash1.png b/2024/day09/image/bash1.png new file mode 100644 index 0000000000..480cb95551 Binary files /dev/null and b/2024/day09/image/bash1.png differ diff --git a/2024/day09/image/task1-2.png b/2024/day09/image/task1-2.png new file mode 100644 index 0000000000..7dec90a14d Binary files /dev/null and b/2024/day09/image/task1-2.png differ diff --git a/2024/day09/image/task11.png b/2024/day09/image/task11.png new file mode 100644 index 0000000000..1cc450b1b5 Binary files /dev/null and b/2024/day09/image/task11.png differ diff --git a/2024/day09/image/task2.png b/2024/day09/image/task2.png new file mode 100644 index 0000000000..e0c46c457f Binary files /dev/null and b/2024/day09/image/task2.png differ diff --git a/2024/day09/image/task3.png b/2024/day09/image/task3.png new file mode 100644 index 0000000000..2100b8ad70 Binary files /dev/null and b/2024/day09/image/task3.png differ diff --git a/2024/day09/solution.md b/2024/day09/solution.md new file mode 100644 index 0000000000..d03651b86c --- /dev/null +++ b/2024/day09/solution.md @@ -0,0 +1,45 @@ +# Day 9 Answers: Shell Scripting Challenge Directory Backup with Rotation + +## Tasks + +1. **Challenge Description** + + Your task is to create a bash script that takes a directory path as a command-line argument and performs a backup of the directory. The script should create timestamped backup folders and copy all the files from the specified directory into the backup folder. + + Additionally, the script should implement a rotation mechanism to keep only the last 3 backups. This means that if there are more than 3 backup folders, the oldest backup folders should be removed to ensure only the most recent backups are retained. + + > The script will create a timestamped backup folder inside the specified directory and copy all the files into it. It will also check for existing backup folders and remove the oldest backups to keep only the last 3 backups. + + **Answer** + + **Create a Folder And Make Some File** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/task11.png) + + - Note: + - First, check whether zip is installed or not. + ```bash + zip + - If you have not installed + ```bash + sudo apt install zip + + **Crontab Job Scheduling:** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/task2.png) + - Auto scheduling through `crontab job scheduling`: + ```bash + * 1 * * * bash /root/backup.sh /root/datafile /root/backup + + **It will take a backup every hour, and the oldest backups will be deleted, leaving only the latest three backups visible:** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/task3.png) + + **Bash Script:** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/bash1.png) + + **Reference** + [TrainWithShubham - Production Backup Rotation | Shell Scripting For DevOps Engineer](https://youtu.be/PZYJ33bMXAw?si=Zb50P67x_F32ikeO) + + [LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day10/README.md b/2024/day10/README.md new file mode 100644 index 0000000000..36a0b90a5e --- /dev/null +++ b/2024/day10/README.md @@ -0,0 +1,55 @@ +# Day 10 Task: Log Analyzer and Report Generator + +## Challenge Title: Log Analyzer and Report Generator + +## Scenario + +You are a system administrator responsible for managing a network of servers. Every day, a log file is generated on each server containing important system events and error messages. 
As part of your daily tasks, you need to analyze these log files, identify specific events, and generate a summary report. + +## Task + +Write a Bash script that automates the process of analyzing log files and generating a daily summary report. The script should perform the following steps: + +1. **Input:** The script should take the path to the log file as a command-line argument. + +2. **Error Count:** Analyze the log file and count the number of error messages. An error message can be identified by a specific keyword (e.g., "ERROR" or "Failed"). Print the total error count. + +3. **Critical Events:** Search for lines containing the keyword "CRITICAL" and print those lines along with the line number. + +4. **Top Error Messages:** Identify the top 5 most common error messages and display them along with their occurrence count. + +5. **Summary Report:** Generate a summary report in a separate text file. The report should include: + - Date of analysis + - Log file name + - Total lines processed + - Total error count + - Top 5 error messages with their occurrence count + - List of critical events with line numbers + +6. **Optional Enhancement:** Add a feature to automatically archive or move processed log files to a designated directory after analysis. + +## Tips + +- Use `grep`, `awk`, and other command-line tools to process the log file. +- Utilize arrays or associative arrays to keep track of error messages and their counts. +- Use appropriate error handling to handle cases where the log file doesn't exist or other issues arise. + +## Sample Log File + +A sample log file named `sample_log.log` has been provided in the same directory as this challenge file. You can use this file to test your script or use [this](https://github.com/logpai/loghub/blob/master/Zookeeper/Zookeeper_2k.log) + +## How to Participate + +1. Clone this repository or download the challenge file from the provided link. +2. Write your Bash script to complete the log analyzer and report generator task. +3. Use the provided `sample_log.log` or create your own log files for testing. +4. Test your script with various log files and scenarios to ensure accuracy. +5. Submit your completed script by the end of Day 10 of the 90-day DevOps challenge. + +## Submission + +Submit your completed script by [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request) or sending the script file to the challenge organizer. + +Good luck and happy scripting! + +[← Previous Day](../day09/README.md) | [Next Day →](../day11/README.md) diff --git a/2024/day10/image/output.png b/2024/day10/image/output.png new file mode 100644 index 0000000000..9cc079f6ab Binary files /dev/null and b/2024/day10/image/output.png differ diff --git a/2024/day10/image/task1.png b/2024/day10/image/task1.png new file mode 100644 index 0000000000..c4e888729e Binary files /dev/null and b/2024/day10/image/task1.png differ diff --git a/2024/day10/image/task2.png b/2024/day10/image/task2.png new file mode 100644 index 0000000000..24d646220b Binary files /dev/null and b/2024/day10/image/task2.png differ diff --git a/2024/day10/solution.md b/2024/day10/solution.md new file mode 100644 index 0000000000..803b46c7d7 --- /dev/null +++ b/2024/day10/solution.md @@ -0,0 +1,53 @@ +# Day 10 Answers: Log Analyzer and Report Generator + +## Scenario + +You are a system administrator responsible for managing a network of servers. 
Every day, a log file is generated on each server containing important system events and error messages. As part of your daily tasks, you need to analyze these log files, identify specific events, and generate a summary report. + +## Task + +Write a Bash script that automates the process of analyzing log files and generating a daily summary report. The script should perform the following steps: + +1. **Input:** The script should take the path to the log file as a command-line argument. + +2. **Error Count:** Analyze the log file and count the number of error messages. An error message can be identified by a specific keyword (e.g., "ERROR" or "Failed"). Print the total error count. + +3. **Critical Events:** Search for lines containing the keyword "CRITICAL" and print those lines along with the line number. + +4. **Top Error Messages:** Identify the top 5 most common error messages and display them along with their occurrence count. + +5. **Summary Report:** Generate a summary report in a separate text file. The report should include: + - Date of analysis + - Log file name + - Total lines processed + - Total error count + - Top 5 error messages with their occurrence count + - List of critical events with line numbers + +
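Before the screenshots, a compact sketch of what such an analyzer could look like (the keywords, file names, and report layout are illustrative, not the exact script shown below):

```bash
#!/bin/bash
# Usage: ./log_analyzer.sh <path-to-logfile>
logfile="$1"
[ -f "$logfile" ] || { echo "Error: log file '$logfile' not found" >&2; exit 1; }

report="summary_$(date +%F).txt"

{
  echo "Date of analysis : $(date)"
  echo "Log file         : $logfile"
  echo "Total lines      : $(wc -l < "$logfile")"
  echo "Total errors     : $(grep -cE 'ERROR|Failed' "$logfile")"
  echo
  echo "Top 5 error messages:"
  grep -E 'ERROR|Failed' "$logfile" | sort | uniq -c | sort -rn | head -5
  echo
  echo "Critical events (with line numbers):"
  grep -n 'CRITICAL' "$logfile"
} | tee "$report"
```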

**Answer**

- **First created a folder and then a log file.**

  ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day10/image/task1.png)

- **Bash Code for Reference.**

  ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day10/image/task2.png)

- **Output**
+ + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day10/image/output.png) + +6. **Optional Enhancement:** Add a feature to automatically archive or move processed log files to a designated directory after analysis. + +## Tips + +- Use `grep`, `awk`, and other command-line tools to process the log file. +- Utilize arrays or associative arrays to keep track of error messages and their counts. +- Use appropriate error handling to handle cases where the log file doesn't exist or other issues arise. + +## Sample Log File + +A sample log file named `sample_log.log` has been provided in the same directory as this challenge file. You can use this file to test your script or use [this](https://github.com/logpai/loghub/blob/master/Zookeeper/Zookeeper_2k.log) + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day11/README.md b/2024/day11/README.md new file mode 100644 index 0000000000..192cbbb6c8 --- /dev/null +++ b/2024/day11/README.md @@ -0,0 +1,68 @@ +# Day 11 Task: Error Handling in Shell Scripting + +## Learning Objectives +Understanding how to handle errors in shell scripts is crucial for creating robust and reliable scripts. Today, you'll learn how to use various techniques to handle errors effectively in your bash scripts. + +## Topics to Cover +1. **Understanding Exit Status**: Every command returns an exit status (0 for success and non-zero for failure). Learn how to check and use exit statuses. +2. **Using `if` Statements for Error Checking**: Learn how to use `if` statements to handle errors. +3. **Using `trap` for Cleanup**: Understand how to use the `trap` command to handle unexpected errors and perform cleanup. +4. **Redirecting Errors**: Learn how to redirect errors to a file or `/dev/null`. +5. **Creating Custom Error Messages**: Understand how to create meaningful error messages for debugging and user information. + +## Tasks + +### Task 1: Checking Exit Status +- Write a script that attempts to create a directory and checks if the command was successful. If not, print an error message. + +### Task 2: Using `if` Statements for Error Checking +- Modify the script from Task 1 to include more commands (e.g., creating a file inside the directory) and use `if` statements to handle errors at each step. + +### Task 3: Using `trap` for Cleanup +- Write a script that creates a temporary file and sets a `trap` to delete the file if the script exits unexpectedly. + +### Task 4: Redirecting Errors +- Write a script that tries to read a non-existent file and redirects the error message to a file called `error.log`. + +### Task 5: Creating Custom Error Messages +- Modify one of the previous scripts to include custom error messages that provide more context about what went wrong. + +## Example Scripts + +### Example 1: Checking Exit Status +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Failed to create directory /tmp/mydir" +fi +``` + +### Example 2: Trap +```bash +#!/bin/bash +tempfile=$(mktemp) +trap "rm -f $tempfile" EXIT + +echo "This is a temporary file." > $tempfile +cat $tempfile +# Simulate an error +exit 1 +``` + +### Example 3: Redirecting Errors +```bash +#!/bin/bash +cat non_existent_file.txt 2> error.log +``` + +### Example 4: Custom Error Messages +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Error: Directory /tmp/mydir could not be created. Check if you have the necessary permissions." 
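    # $? expands to the exit status of the most recent command (0 means success),
    # so this branch runs only when mkdir failed.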
+fi +``` + +[← Previous Day](../day10/README.md) | [Next Day →](../day12/README.md) diff --git a/2024/day11/image/task1.png b/2024/day11/image/task1.png new file mode 100644 index 0000000000..5b22fef75d Binary files /dev/null and b/2024/day11/image/task1.png differ diff --git a/2024/day11/image/task2.png b/2024/day11/image/task2.png new file mode 100644 index 0000000000..568e60540a Binary files /dev/null and b/2024/day11/image/task2.png differ diff --git a/2024/day11/image/task3.png b/2024/day11/image/task3.png new file mode 100644 index 0000000000..a79ed5cdb4 Binary files /dev/null and b/2024/day11/image/task3.png differ diff --git a/2024/day11/image/task4.png b/2024/day11/image/task4.png new file mode 100644 index 0000000000..60f6fbe28b Binary files /dev/null and b/2024/day11/image/task4.png differ diff --git a/2024/day11/image/task5.png b/2024/day11/image/task5.png new file mode 100644 index 0000000000..8a1b45091d Binary files /dev/null and b/2024/day11/image/task5.png differ diff --git a/2024/day11/image/task5ka1.png b/2024/day11/image/task5ka1.png new file mode 100644 index 0000000000..ea7a3b5d21 Binary files /dev/null and b/2024/day11/image/task5ka1.png differ diff --git a/2024/day11/solution.md b/2024/day11/solution.md new file mode 100644 index 0000000000..55dd5dfd94 --- /dev/null +++ b/2024/day11/solution.md @@ -0,0 +1,92 @@ +# Day 11 Answers: Error Handling in Shell Scripting + +## Learning Objectives +Understanding how to handle errors in shell scripts is crucial for creating robust and reliable scripts. Today, you'll learn how to use various techniques to handle errors effectively in your bash scripts. + +## Topics to Cover +1. **Understanding Exit Status**: Every command returns an exit status (0 for success and non-zero for failure). Learn how to check and use exit statuses. +2. **Using `if` Statements for Error Checking**: Learn how to use `if` statements to handle errors. +3. **Using `trap` for Cleanup**: Understand how to use the `trap` command to handle unexpected errors and perform cleanup. +4. **Redirecting Errors**: Learn how to redirect errors to a file or `/dev/null`. +5. **Creating Custom Error Messages**: Understand how to create meaningful error messages for debugging and user information. + +## Tasks with Answers + +### Task 1: Checking Exit Status +- Write a script that attempts to create a directory and checks if the command was successful. If not, print an error message. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task1.png) + +### Task 2: Using `if` Statements for Error Checking +- Modify the script from Task 1 to include more commands (e.g., creating a file inside the directory) and use `if` statements to handle errors at each step. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task2.png) + +### Task 3: Using `trap` for Cleanup +- Write a script that creates a temporary file and sets a `trap` to delete the file if the script exits unexpectedly. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task3.png) + +### Task 4: Redirecting Errors +- Write a script that tries to read a non-existent file and redirects the error message to a file called `error.log`. 
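A sketch for this task, with a final check that makes the captured output visible (file names are illustrative):

```bash
#!/bin/bash
# Attempt to read a file that does not exist; stderr is redirected to error.log.
cat missing_file.txt 2> error.log

# Show what was captured. Using 2>> instead would append rather than overwrite.
echo "Captured error:"
cat error.log
```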
+ +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task4.png) + +### Task 5: Creating Custom Error Messages +- Modify one of the previous scripts to include custom error messages that provide more context about what went wrong. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task5.png) + + - **I also intentionally created an error by not creating the file, so it showed me this error. I did this for reference.** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task5ka1.png) + +## Example Scripts + +### Example 1: Checking Exit Status +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Failed to create directory /tmp/mydir" +fi +``` + +### Example 2: Trap +```bash +#!/bin/bash +tempfile=$(mktemp) +trap "rm -f $tempfile" EXIT + +echo "This is a temporary file." > $tempfile +cat $tempfile +# Simulate an error +exit 1 +``` + +### Example 3: Redirecting Errors +```bash +#!/bin/bash +cat non_existent_file.txt 2> error.log +``` + +### Example 4: Custom Error Messages +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Error: Directory /tmp/mydir could not be created. Check if you have the necessary permissions." +fi +``` + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day12/README.md b/2024/day12/README.md new file mode 100644 index 0000000000..342e048218 --- /dev/null +++ b/2024/day12/README.md @@ -0,0 +1,26 @@ +# Day 12 Task: Deep Dive in Git & GitHub for DevOps Engineers + +## Find the answers by your understandings (Shouldn't be copied from the internet & use hand-made diagrams) of the questions below and write a blog on it. + +1. What is Git and why is it important? +2. What is the difference between Main Branch and Master Branch? +3. Can you explain the difference between Git and GitHub? +4. How do you create a new repository on GitHub? +5. What is the difference between a local & remote repository? How to connect local to remote? + +## Tasks + +### Task 1: +- Set your user name and email address, which will be associated with your commits. + +### Task 2: +- Create a repository named "DevOps" on GitHub. +- Connect your local repository to the repository on GitHub. +- Create a new file in Devops/Git/Day-02.txt & add some content to it. +- Push your local commits to the repository on GitHub. + +Reference: [YouTube Video](https://youtu.be/AT1uxOLsCdk) + +Note: These steps assume that you have already installed Git on your computer and have created a GitHub account. If you need help with these prerequisites, you can refer to the [guide](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). 
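Taken together, the two tasks boil down to a handful of commands; a sketch (the name, email, and repository URL are placeholders):

```bash
# Task 1: set the identity that will be attached to your commits
# (the values below are placeholders).
git config --global user.name  "Your Name"
git config --global user.email "your.email@example.com"
git config --global --list     # verify the settings

# Task 2: connect a local repository to GitHub and push
# (the URL is a placeholder for your own repository).
git init
git remote add origin https://github.com/<your-username>/DevOps.git
mkdir -p Devops/Git
echo "some content" > Devops/Git/Day-02.txt
git add Devops/Git/Day-02.txt
git commit -m "Add Day-02 notes"
git push -u origin main        # use 'master' if that is your default branch
```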
+ +[← Previous Day](../day11/README.md) | [Next Day →](../day13/README.md) diff --git a/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png b/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png new file mode 100644 index 0000000000..a1718a646f Binary files /dev/null and b/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png differ diff --git a/2024/day12/image/create_a_new_file.png b/2024/day12/image/create_a_new_file.png new file mode 100644 index 0000000000..41f14eeebc Binary files /dev/null and b/2024/day12/image/create_a_new_file.png differ diff --git a/2024/day12/image/create_a_new_repository.png b/2024/day12/image/create_a_new_repository.png new file mode 100644 index 0000000000..922ce166a9 Binary files /dev/null and b/2024/day12/image/create_a_new_repository.png differ diff --git a/2024/day12/image/gitui.png b/2024/day12/image/gitui.png new file mode 100644 index 0000000000..26479aee99 Binary files /dev/null and b/2024/day12/image/gitui.png differ diff --git a/2024/day12/image/gitui1.png b/2024/day12/image/gitui1.png new file mode 100644 index 0000000000..6c7b26718a Binary files /dev/null and b/2024/day12/image/gitui1.png differ diff --git a/2024/day12/image/gitui2.png b/2024/day12/image/gitui2.png new file mode 100644 index 0000000000..ab87db3a81 Binary files /dev/null and b/2024/day12/image/gitui2.png differ diff --git a/2024/day12/image/push_repository.png b/2024/day12/image/push_repository.png new file mode 100644 index 0000000000..070391b49a Binary files /dev/null and b/2024/day12/image/push_repository.png differ diff --git a/2024/day12/image/set_user_name_and_email_address.png b/2024/day12/image/set_user_name_and_email_address.png new file mode 100644 index 0000000000..e01f65ad95 Binary files /dev/null and b/2024/day12/image/set_user_name_and_email_address.png differ diff --git a/2024/day12/solution.md b/2024/day12/solution.md new file mode 100644 index 0000000000..5d6c2884df --- /dev/null +++ b/2024/day12/solution.md @@ -0,0 +1,94 @@ +# Day 12 Answers: Deep Dive in Git & GitHub for DevOps Engineers + +## Find the answers by your understandings (Shouldn't be copied from the internet & use hand-made diagrams) of the questions below and write a blog on it. + +1. What is Git and why is it important? + - **Git** is a distributed version control system that allows multiple developers to work on a project simultaneously without overwriting each other's changes. It helps track changes in source code during software development, enabling collaboration, version control, and efficient management of code changes. + + **Importance of Git:** + - **Version Control:** Keeps track of changes, allowing you to revert to previous versions if needed. + - **Collaboration:** Multiple developers can work on the same project simultaneously. + - **Branching:** Allows you to work on different features or fixes in isolation. + - **Backup::** Acts as a backup of your codebase. + +2. What is the difference between Main Branch and Master Branch? + - Traditionally, **master** was the default branch name in Git repositories. However, many communities have moved to using **main** as the default branch name to be more inclusive and avoid potentially offensive terminology. + + - Main Branch vs. Master Branch: + - **Main Branch:** The new default branch name used in many modern repositories. + - **Master Branch:** The traditional default branch name used in older repositories. 
+ + The traditional default branch name used in older repositories. + + +3. Can you explain the difference between Git and GitHub? + - **Git** is a version control system, while **GitHub** is a web-based platform that uses Git for version control and adds collaboration features like pull requests, issue tracking, and project management. + - Git: + - Command-line tool. + - Manages local repositories. + - GitHub: + - Hosting service for Git repositories. + - Adds collaboration tools and user interfaces. + +4. How do you create a new repository on GitHub? + 1. Go to GitHub. + 2. Click on the + icon in the top right corner. + 3. Select New repository. + 4. Enter a repository name (e.g., "DevOps"). + 5. Click Create repository. + +5. What is the difference between a local & remote repository? How to connect local to remote? + - Local Repository: + - Stored on your local machine. + - Contains your working directory and Git database. + - Remote Repository: + - Hosted on a server (e.g., GitHub). + - Allows collaboration with other developers. + - Connecting Local to Remote: + 1. Initialize a local repository: `git init` + 2. Add a remote: `git remote add origin ` + +## Tasks with Answers + +### Task 1: +- Set your user name and email address, which will be associated with your commits. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/set_user_name_and_email_address.png) + +### Task 2: +- Create a repository named "DevOps" on GitHub. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/create_a_new_repository.png) + +- Connect your local repository to the repository on GitHub. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png) + +- Create a new file in Devops/Git/Day-12.txt & add some content to it. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/create_a_new_file.png) + +- Push your local commits to the repository on GitHub. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/push_repository.png) + +**After that if you check it on GitHub then it's output will look like this** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/gitui.png) + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/gitui1.png) + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/gitui2.png) + + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day13/README.md b/2024/day13/README.md new file mode 100644 index 0000000000..01595118d0 --- /dev/null +++ b/2024/day13/README.md @@ -0,0 +1,99 @@ +# Day 13 Task: Advance Git & GitHub for DevOps Engineers + +## Git Branching + +Branches are a core concept in Git that allow you to isolate development work without affecting other parts of your repository. Each repository has one default branch, and can have multiple other branches. You can merge a branch into another branch using a pull request. + +Branches let you develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository. + +## Git Revert and Reset + +Git reset and git revert are two commonly used commands that allow you to remove or edit changes you’ve made in the code in previous commits. Both commands can be very useful in different scenarios. 
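In short, `git revert` undoes a commit by adding a new one on top, while `git reset` moves the branch pointer itself. A quick sketch (the SHA is a placeholder):

```bash
# Revert: create a new commit that undoes <sha>; history stays intact,
# which makes it safe on shared branches.
git revert <sha>

# Reset: move the branch pointer back one commit, rewriting history.
git reset --soft HEAD~1   # keep the undone changes staged
git reset --hard HEAD~1   # discard the undone changes entirely
```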
+ +## Git Rebase and Merge + +### What Is Git Rebase? + +Git rebase is a command that lets users integrate changes from one branch to another, and the commit history is modified once the action is complete. Git rebase helps keep a clean project history. + +### What Is Git Merge? + +Git merge is a command that allows developers to merge Git branches while keeping the logs of commits on branches intact. Even though merging and rebasing do similar things, they handle commit logs differently. + +For a better understanding of Git Rebase and Merge, check out this [article](https://www.simplilearn.com/git-rebase-vs-merge-article). + +## Tasks + +### Task 1: Feature Development with Branches + +1. **Create a Branch and Add a Feature:** + - Add a text file called `version01.txt` inside the `Devops/Git/` directory with “This is the first feature of our application” written inside. + - Create a new branch from `master`. + ```bash + git checkout -b dev + ``` + - Commit your changes with a message reflecting the added feature. + ```bash + git add Devops/Git/version01.txt + git commit -m "Added new feature" + ``` + +2. **Push Changes to GitHub:** + - Push your local commits to the repository on GitHub. + ```bash + git push origin dev + ``` + +3. **Add More Features with Separate Commits:** + - Update `version01.txt` with the following lines, committing after each change: + - 1st line: `This is the bug fix in development branch` + ```bash + echo "This is the bug fix in development branch" >> Devops/Git/version01.txt + git commit -am "Added feature2 in development branch" + ``` + - 2nd line: `This is gadbad code` + ```bash + echo "This is gadbad code" >> Devops/Git/version01.txt + git commit -am "Added feature3 in development branch" + ``` + - 3rd line: `This feature will gadbad everything from now` + ```bash + echo "This feature will gadbad everything from now" >> Devops/Git/version01.txt + git commit -am "Added feature4 in development branch" + ``` + +4. **Restore the File to a Previous Version:** + - Revert or reset the file to where the content should be “This is the bug fix in development branch”. + ```bash + git revert HEAD~2 + ``` + +### Task 2: Working with Branches + +1. **Demonstrate Branches:** + - Create 2 or more branches and take screenshots to show the branch structure. + +2. **Merge Changes into Master:** + - Make some changes to the `dev` branch and merge it into `master`. + ```bash + git checkout master + git merge dev + ``` + +3. **Practice Rebase:** + - Try rebasing and observe the differences. + ```bash + git rebase master + ``` + +## Note: + +Following best practices for branching is important. Check out these [best practices](https://www.flagship.io/git-branching-strategies/) that the industry follows. + +Simple Reference on branching: [video](https://youtu.be/NzjK9beT_CY) + +Advanced Reference on branching: [video](https://youtu.be/7xhkEQS3dXw) + +Share your learnings from this task on LinkedIn using #90DaysOfDevOps Challenge. Happy Learning! 
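One note on Task 1, step 4: `git revert HEAD~2` on its own reverts only the single commit that sits two steps behind HEAD. If the goal is to undo the last two commits together, a range form does that in one go (a sketch):

```bash
# Revert HEAD and HEAD~1 as new "undo" commits, keeping history intact.
git revert --no-edit HEAD~2..HEAD
```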
+ +[← Previous Day](../day12/README.md) | [Next Day →](../day14/README.md) diff --git a/2024/day13/image/1 Create a Branch and Add a Feature.png b/2024/day13/image/1 Create a Branch and Add a Feature.png new file mode 100644 index 0000000000..e8eb507afa Binary files /dev/null and b/2024/day13/image/1 Create a Branch and Add a Feature.png differ diff --git a/2024/day13/image/10 Screenshot of branch structure.png b/2024/day13/image/10 Screenshot of branch structure.png new file mode 100644 index 0000000000..f5977c5c15 Binary files /dev/null and b/2024/day13/image/10 Screenshot of branch structure.png differ diff --git a/2024/day13/image/11 Merge Changes into Master_main.png b/2024/day13/image/11 Merge Changes into Master_main.png new file mode 100644 index 0000000000..7f55293ba6 Binary files /dev/null and b/2024/day13/image/11 Merge Changes into Master_main.png differ diff --git a/2024/day13/image/12 Practice Rebase.png b/2024/day13/image/12 Practice Rebase.png new file mode 100644 index 0000000000..58ba6de2e3 Binary files /dev/null and b/2024/day13/image/12 Practice Rebase.png differ diff --git a/2024/day13/image/2 Create a new branch.png b/2024/day13/image/2 Create a new branch.png new file mode 100644 index 0000000000..ea4441ec7e Binary files /dev/null and b/2024/day13/image/2 Create a new branch.png differ diff --git a/2024/day13/image/3 Commit your changes with a message reflecting.png b/2024/day13/image/3 Commit your changes with a message reflecting.png new file mode 100644 index 0000000000..9f6cdcffda Binary files /dev/null and b/2024/day13/image/3 Commit your changes with a message reflecting.png differ diff --git a/2024/day13/image/4 Push your local commits to the repository on GitHub.png b/2024/day13/image/4 Push your local commits to the repository on GitHub.png new file mode 100644 index 0000000000..510598ce73 Binary files /dev/null and b/2024/day13/image/4 Push your local commits to the repository on GitHub.png differ diff --git a/2024/day13/image/5 This is the bug fix in development branch.png b/2024/day13/image/5 This is the bug fix in development branch.png new file mode 100644 index 0000000000..642b6da7d0 Binary files /dev/null and b/2024/day13/image/5 This is the bug fix in development branch.png differ diff --git a/2024/day13/image/6 This is gadbad code.png b/2024/day13/image/6 This is gadbad code.png new file mode 100644 index 0000000000..c0727a3fa1 Binary files /dev/null and b/2024/day13/image/6 This is gadbad code.png differ diff --git a/2024/day13/image/7 This feature will gadbad everything from now.png b/2024/day13/image/7 This feature will gadbad everything from now.png new file mode 100644 index 0000000000..ee362b2ab5 Binary files /dev/null and b/2024/day13/image/7 This feature will gadbad everything from now.png differ diff --git a/2024/day13/image/8 Restore the File to a Previous Version.png b/2024/day13/image/8 Restore the File to a Previous Version.png new file mode 100644 index 0000000000..cf13f6b475 Binary files /dev/null and b/2024/day13/image/8 Restore the File to a Previous Version.png differ diff --git a/2024/day13/image/9 Create 2 or more branches.png b/2024/day13/image/9 Create 2 or more branches.png new file mode 100644 index 0000000000..b3e0ef69b3 Binary files /dev/null and b/2024/day13/image/9 Create 2 or more branches.png differ diff --git a/2024/day13/solution.md b/2024/day13/solution.md new file mode 100644 index 0000000000..ab95d67568 --- /dev/null +++ b/2024/day13/solution.md @@ -0,0 +1,140 @@ +# Day 13 Answers: Advance Git & GitHub for DevOps 
Engineers + +## Git Branching + +Branches are a core concept in Git that allow you to isolate development work without affecting other parts of your repository. Each repository has one default branch, and can have multiple other branches. You can merge a branch into another branch using a pull request. + +Branches let you develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository. + +## Git Revert and Reset + +Git reset and git revert are two commonly used commands that allow you to remove or edit changes you’ve made in the code in previous commits. Both commands can be very useful in different scenarios. + +## Git Rebase and Merge + +### What Is Git Rebase? + +Git rebase is a command that lets users integrate changes from one branch to another, and the commit history is modified once the action is complete. Git rebase helps keep a clean project history. + +### What Is Git Merge? + +Git merge is a command that allows developers to merge Git branches while keeping the logs of commits on branches intact. Even though merging and rebasing do similar things, they handle commit logs differently. + +For a better understanding of Git Rebase and Merge, check out this [article](https://www.simplilearn.com/git-rebase-vs-merge-article). + +## Tasks with Answers + +### Task 1: Feature Development with Branches + +1. **Create a Branch and Add a Feature:** + - Add a text file called `version01.txt` inside the `Devops/Git/` directory with “This is the first feature of our application” written inside. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/1%20Create%20a%20Branch%20and%20Add%20a%20Feature.png) + + - Create a new branch from `master`. + ```bash + git checkout -b dev + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/2%20Create%20a%20new%20branch.png) + + - Commit your changes with a message reflecting the added feature. + ```bash + git add Devops/Git/version01.txt + git commit -m "Added new feature" + ``` + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/3%20Commit%20your%20changes%20with%20a%20message%20reflecting.png) + +2. **Push Changes to GitHub:** + - Push your local commits to the repository on GitHub. + ```bash + git push origin dev + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/4%20Push%20your%20local%20commits%20to%20the%20repository%20on%20GitHub.png) + +3. 
**Add More Features with Separate Commits:** + - Update `version01.txt` with the following lines, committing after each change: + - 1st line: `This is the bug fix in development branch` + ```bash + echo "This is the bug fix in development branch" >> Devops/Git/version01.txt + git commit -am "Added feature2 in development branch" + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/5%20This%20is%20the%20bug%20fix%20in%20development%20branch.png) + + - 2nd line: `This is gadbad code` + ```bash + echo "This is gadbad code" >> Devops/Git/version01.txt + git commit -am "Added feature3 in development branch" + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/6%20This%20is%20gadbad%20code.png) + + - 3rd line: `This feature will gadbad everything from now` + ```bash + echo "This feature will gadbad everything from now" >> Devops/Git/version01.txt + git commit -am "Added feature4 in development branch" + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/7%20This%20feature%20will%20gadbad%20everything%20from%20now.png) + +4. **Restore the File to a Previous Version:** + - Revert or reset the file to where the content should be “This is the bug fix in development branch”. + ```bash + git revert HEAD~2 + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/8%20Restore%20the%20File%20to%20a%20Previous%20Version.png) + +This command reverts the last two commits, effectively removing the "gadbad code" and "gadbad everything" lines. + +### Task 2: Working with Branches + +1. **Demonstrate Branches:** + - Create 2 or more branches and take screenshots to show the branch structure. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/9%20Create%202%20or%20more%20branches.png) + +2. **Merge Changes into Master:** + - Make some changes to the `dev` branch and merge it into `master`. + ```bash + git checkout master + git merge dev + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/11%20Merge%20Changes%20into%20Master_main.png) + + - Screenshot of branch structure: + - To visualize the branch structure, you can use `git log` with graph options or a graphical tool like GitKraken. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/10%20Screenshot%20of%20branch%20structure.png) + +3. **Practice Rebase:** + - Try rebasing and observe the differences. + ```bash + git rebase master + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/12%20Practice%20Rebase.png) + + - During a rebase, Git re-applies commits from the current branch (in this case, dev) onto the target branch (master). This results in a linear commit history. 
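To observe that effect yourself, one possible sequence (assuming local `dev` and `master` branches exist):

```bash
git checkout dev
git rebase master                   # replay dev's commits on top of master
git log --oneline --graph --all     # the history should now be linear
```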
+ +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day14/Git_cheat_sheet_rajat.pdf b/2024/day14/Git_cheat_sheet_rajat.pdf new file mode 100644 index 0000000000..db74f1d531 Binary files /dev/null and b/2024/day14/Git_cheat_sheet_rajat.pdf differ diff --git a/2024/day14/Linux_cheat_sheet_rajat.pdf b/2024/day14/Linux_cheat_sheet_rajat.pdf new file mode 100644 index 0000000000..3a9dd4008a Binary files /dev/null and b/2024/day14/Linux_cheat_sheet_rajat.pdf differ diff --git a/2024/day14/README.md b/2024/day14/README.md new file mode 100644 index 0000000000..597a8ed666 --- /dev/null +++ b/2024/day14/README.md @@ -0,0 +1,32 @@ +# Day 14 Task: Create a Linux & Git-GitHub Cheat Sheet + +## Finally!! 🎉 + +You have completed the Linux & Git-GitHub hands-on tasks, and I hope you have learned something interesting from it. 🙌 + +Now, let's create an interesting 😉 assignment that will not only help you in the future but also benefit the DevOps community! + +## Task: Create a Cheat Sheet + +Let’s make a well-articulated and documented **cheat sheet** with all the commands you learned so far in Linux and Git-GitHub, along with a brief description of their usage. + +Show us your knowledge mixed with your creativity 😎. + +### Guidelines + +- The cheat sheet should be unique and reflect your understanding. +- Include all the important commands you have learned. +- Provide a brief description of each command's usage. +- Make it visually appealing and easy to understand. + +### Reference + +For your reference, check out this [cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf). However, ensure that your cheat sheet is unique. + +### Share Your Work + +Post your cheat sheet on LinkedIn and spread the knowledge. 😃 + +**Happy Learning! :)** + +[← Previous Day](../day13/README.md) | [Next Day →](../day15/README.md) diff --git a/2024/day14/solution.md b/2024/day14/solution.md new file mode 100644 index 0000000000..ab0ff2c138 --- /dev/null +++ b/2024/day14/solution.md @@ -0,0 +1,81 @@ +# Day 14 Answers: Create a Linux & Git-GitHub Cheat Sheet + +## Finally!! 🎉 + +You have completed the Linux & Git-GitHub hands-on tasks, and I hope you have learned something interesting from it. 🙌 + +Now, let's create an interesting 😉 assignment that will not only help you in the future but also benefit the DevOps community! + +## Tasks with Answers: Create a Cheat Sheet + +Let’s make a well-articulated and documented **cheat sheet** with all the commands you learned so far in Linux and Git-GitHub, along with a brief description of their usage. + +Show us your knowledge mixed with your creativity 😎. + +### Guidelines + +- The cheat sheet should be unique and reflect your understanding. +- Include all the important commands you have learned. +- Provide a brief description of each command's usage. +- Make it visually appealing and easy to understand. + +## Linux Commands / Git Commands + +### File and Directory Management +- `ls` - Lists files and directories. +- `cd ` - Changes the directory. +- `pwd` - Prints current directory. +- `mkdir ` - Creates a new directory. +- `rm ` - Removes a file. +- `rm -r ` - Removes a directory and its contents. +- `cp ` - Copies files or directories. +- `mv ` - Moves or renames files or directories. +- `touch ` - Creates or updates a file. + +### Viewing and Editing Files +- `cat ` - Displays file content. +- `less ` - Views file content one screen at a time. +- `nano ` - Edits files using nano editor. +- `vim ` - Edits files using vim editor. 
+ +### System Information +- `uname -a` - Displays system information. +- `top` - Shows real-time system processes. +- `df -h` - Displays disk usage. +- `free -h` - Displays memory usage. + +### Permissions +- `chmod ` - Changes file permissions. +- `chown : ` - Changes file owner and group. + +### Networking +- `ping ` - Sends ICMP echo requests. +- `ifconfig` - Displays or configures network interfaces. + +## Git Commands + +### Configuration +- `git config --global user.name "Your Name"` - Sets global user name. +- `git config --global user.email "your.email@example.com"` - Sets global user email. + +### Repository Management +- `git init` - Initializes a new repository. +- `git clone ` - Clones a repository. + +### Basic Operations +- `git status` - Shows working tree status. +- `git add ` - Stages changes. +- `git commit -m "message"` - Commits changes. +- `git push` - Pushes changes to remote repository. +- `git checkout -b dev` - Create a new branch from `master`. +- `git checkout` - switch to another branch and check it out into your working directory. +- `git log --oneline --graph --all` - visualize the branch structure. +- `git push origin dev` - Push Changes to GitHub. +- `git merge dev` - merge it into `master/main`. +- `git log` - show all commits in the current branch’s history. + +### Reference + +For your reference, check out this [cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf). However, ensure that your cheat sheet is unique. + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) \ No newline at end of file diff --git a/2024/day15/README.md b/2024/day15/README.md new file mode 100644 index 0000000000..51132d5ef0 --- /dev/null +++ b/2024/day15/README.md @@ -0,0 +1,33 @@ +# Day 15 Task: Basics of Python for DevOps Engineers + +## Hello Dosto + +Let's start with the basics of Python, as this is also important for DevOps Engineers to build logic and programs. + +### What is Python? + +- Python is an open-source, general-purpose, high-level, and object-oriented programming language. +- It was created by **Guido van Rossum**. +- Python consists of vast libraries and various frameworks like Django, TensorFlow, Flask, Pandas, Keras, etc. + +### How to Install Python? + +You can install Python on your system, whether it is Windows, macOS, Ubuntu, CentOS, etc. Below are the links for the installation: + +- [Windows Installation](https://www.python.org/downloads/) +- Ubuntu: `apt-get install python3.6` + +## Tasks + +### Task 1: + +1. Install Python on your respective OS, and check the version. +2. Read about different data types in Python. + +You can get the complete playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM) 🙌 + +Don't forget to share your journey over LinkedIn. Let the community know that you have started another chapter of your journey. + +**Happy Learning, Ruko Mat Phod do! 😃** + +[← Previous Day](../day14/README.md) | [Next Day →](../day16/README.md) diff --git a/2024/day15/image/Installation_Python.png b/2024/day15/image/Installation_Python.png new file mode 100644 index 0000000000..b6813fdb25 Binary files /dev/null and b/2024/day15/image/Installation_Python.png differ diff --git a/2024/day15/solution.md b/2024/day15/solution.md new file mode 100644 index 0000000000..b7cc4fc986 --- /dev/null +++ b/2024/day15/solution.md @@ -0,0 +1,85 @@ +# Day 15 Answers: Basics of Python for DevOps Engineers + +## What is Python? 
+ +Python is an open-source, general-purpose, high-level, and object-oriented programming language created by Guido van Rossum. It has a vast ecosystem of libraries and frameworks, such as Django, TensorFlow, Flask, Pandas, Keras, and many more. + +## How to Install Python + +### Windows Installation + +1. Go to the [Python website](https://www.python.org/downloads/). +2. Download the latest version of Python. +3. Run the installer and follow the instructions. +4. Check the installation by opening a command prompt and typing: + ```bash + python --version + +### Ubuntu Installation + - `sudo apt-get update` + - `sudo apt-get install python3.6` + +### macOS Installation + +1. Download the installer from the [Python website](https://www.python.org/downloads/macos/). +2. Follow the installation instructions. +3. Check the installation by opening a terminal and typing: + - `python3 --version` + +## Tasks with Answers + +### Task 1: + +1. Install Python on your respective OS, and check the version. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day15/image/Installation_Python.png) + +### 2. Read about different data types in Python. + - Python supports several data types, which can be categorized as follows: + - **Numeric Types:** + - **int:** Integer values + - `x = 10` + + - **float:** Floating-point values + - `y = 10.5` + + - **complex:** Complex numbers + - `z = 3 + 5j` + + - **Sequence Types:** + - **str:** String values + - `name = "bhavin"` + + - **list:** Ordered collection of items + - `fruits = ["apple", "banana", "cherry"]` + + - **tuple:** Ordered, immutable collection of items + - `coordinates = (10.0, 20.0)` + + - **Mapping Types:** + - **dict:** Key-value pairs + - `person = {"name": "bhavin", "age": 24}` + + - **Set Types:** + - **set:** Unordered collection of unique items + - `unique_numbers = {1, 2, 3, 4, 5}` + + - **frozenset:** Immutable set + - `frozen_numbers = frozenset([1, 2, 3, 4, 5])` + + - **Boolean Type:** + - **bool:** Boolean values + - `is_active = True` + + - **None Type:** + - **NoneType:** Represents the absence of a value + - `data = None` + + +You can get the complete playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM) 🙌 + +**Happy Learning, Ruko Mat Phod do! 😃** + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day16/README.md b/2024/day16/README.md new file mode 100644 index 0000000000..1c353ede6d --- /dev/null +++ b/2024/day16/README.md @@ -0,0 +1,32 @@ +# Day 16 Task: Docker for DevOps Engineers + +### Docker + +Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run. + +## Tasks + +As you have already installed Docker in previous tasks, now is the time to run Docker commands. + +- Use the `docker run` command to start a new container and interact with it through the command line. [Hint: `docker run hello-world`] + +- Use the `docker inspect` command to view detailed information about a container or image. + +- Use the `docker port` command to list the port mappings for a container. + +- Use the `docker stats` command to view resource usage statistics for one or more containers. 
+ +- Use the `docker top` command to view the processes running inside a container. + +- Use the `docker save` command to save an image to a tar archive. + +- Use the `docker load` command to load an image from a tar archive. + +These tasks involve simple operations that can be used to manage images and containers. + +For reference, you can watch this video: +https://youtu.be/Tevxhn6Odc8 + +You can post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :) + +[← Previous Day](../day15/README.md) | [Next Day →](../day17/README.md) diff --git a/2024/day16/image/1_Start_a_New_Container.png b/2024/day16/image/1_Start_a_New_Container.png new file mode 100644 index 0000000000..0e94004c2e Binary files /dev/null and b/2024/day16/image/1_Start_a_New_Container.png differ diff --git a/2024/day16/image/2_docker_inspect.png b/2024/day16/image/2_docker_inspect.png new file mode 100644 index 0000000000..727afa83c8 Binary files /dev/null and b/2024/day16/image/2_docker_inspect.png differ diff --git a/2024/day16/image/3_docker_port.png b/2024/day16/image/3_docker_port.png new file mode 100644 index 0000000000..ab925e2ba4 Binary files /dev/null and b/2024/day16/image/3_docker_port.png differ diff --git a/2024/day16/image/4_docker_stats.png b/2024/day16/image/4_docker_stats.png new file mode 100644 index 0000000000..80658299fe Binary files /dev/null and b/2024/day16/image/4_docker_stats.png differ diff --git a/2024/day16/image/5_docker_top.png b/2024/day16/image/5_docker_top.png new file mode 100644 index 0000000000..ed4a28ca2a Binary files /dev/null and b/2024/day16/image/5_docker_top.png differ diff --git a/2024/day16/image/6_docker_save.png b/2024/day16/image/6_docker_save.png new file mode 100644 index 0000000000..8d05ed92ee Binary files /dev/null and b/2024/day16/image/6_docker_save.png differ diff --git a/2024/day16/image/7_docker_load.png b/2024/day16/image/7_docker_load.png new file mode 100644 index 0000000000..6a2ab833f4 Binary files /dev/null and b/2024/day16/image/7_docker_load.png differ diff --git a/2024/day16/solution.md b/2024/day16/solution.md new file mode 100644 index 0000000000..82723386d6 --- /dev/null +++ b/2024/day16/solution.md @@ -0,0 +1,63 @@ +# Day 16 Answers: Docker for DevOps Engineers + +### Docker + +Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run. + +## Tasks with Answers + +As you have already installed Docker in previous tasks, now is the time to run Docker commands. + +### 1. Use the `docker run` command to start a new container and interact with it through the command line. [Hint: `docker run hello-world`] + +**Answer** + - This command runs the `hello-world` image, which prints a message confirming that Docker is working correctly. +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/1_Start_a_New_Container.png) + +### 2. Use the `docker inspect` command to view detailed information about a container or image. + +**Answer** + - View Detailed Information About a Container or Image: + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/2_docker_inspect.png) + +### 3. 
Use the `docker port` command to list the port mappings for a container. + +**Answer** + - This command maps port 8181 on the host to port 82 in the container and lists the port mappings. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/3_docker_port.png) + +### 4. Use the `docker stats` command to view resource usage statistics for one or more containers. + +**Answer** + - This command provides a live stream of resource usage statistics for all running containers. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/4_docker_stats.png) + +### 5. Use the `docker top` command to view the processes running inside a container. + +**Answer** + - This command lists the processes running inside the `my_container2` container. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/5_docker_top.png) + +### 6. Use the `docker save` command to save an image to a tar archive. + +**Answer** + - This command saves the `nginx` image to a tar archive named `my_image.tar`. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/6_docker_save.png) + +### 7. Use the `docker load` command to load an image from a tar archive. + +**Answer** + - This command loads the image from the `my_image.tar` archive into Docker. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/7_docker_load.png) + +These tasks involve simple operations that can be used to manage images and containers. + +For reference, you can watch this video: [Docker Tutorial on AWS EC2 as DevOps Engineer // DevOps Project Bootcamp Day 2](https://youtu.be/Tevxhn6Odc8). + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day17/README.md b/2024/day17/README.md new file mode 100644 index 0000000000..fcb70606dc --- /dev/null +++ b/2024/day17/README.md @@ -0,0 +1,28 @@ +## Day 17 Task: Docker Project for DevOps Engineers + +### You people are doing just amazing in **#90daysofdevops**. Today's challenge is so special because you are going to do a DevOps project with Docker. Are you excited? 😍 + +# Dockerfile + +Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile. + +A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts. + +For more about Dockerfile, visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes). + +## Task + +- Create a Dockerfile for a simple web application (e.g. a Node.js or Python app) +- Build the image using the Dockerfile and run the container +- Verify that the application is working as expected by accessing it in a web browser +- Push the image to a public or private repository (e.g. Docker Hub) + +For a reference project, visit [here](https://youtu.be/Tevxhn6Odc8). + +If you want to dive further, watch this [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u). + +You can share your learning with everyone over LinkedIn and tag us along. 
😃 + +Happy Learning :) + +[← Previous Day](../day16/README.md) | [Next Day →](../day18/README.md) diff --git a/2024/day17/code.txt b/2024/day17/code.txt new file mode 100644 index 0000000000..0abb085a27 --- /dev/null +++ b/2024/day17/code.txt @@ -0,0 +1,49 @@ +root@Bhavin-Savaliya:~/flask-app# history + + 1 clear + 2 ls + 3 docker ps + 4 docker + 5 docker --version + 6 systemctl status docker + 7 clear + 8 ls + 9 mkdir flask-app + 10 ls + 11 cd flask-app + 12 vim app.py + 13 ls + 14 cat app.py + 15 clear + 16 ls + 17 vim requirements.txt + 18 ls + 19 cat requirements.txt + 20 clear + 21 ls + 22 vim Dockerfile + 23 ls + 24 cat Dockerfile + 25 clear + 26 ls + 27 docker build -t flask-app . + 28 RUN pip install + 29 apt pip install + 30 pip + 31 apt install python3-pip + 32 pip + 33 python --version + 34 python3 --version + 35 docker build -t flask-app . + 36 pip install -r requirements.txt + 37 ls + 38 vim requirements.txt + 39 cat requirements.txt + 40 clear + 41 docker build -t flask-app . + 42 docker run -d -p 5000:5000 flask-app + 43 docker tag flask-app bhavin1998/flask-app + 44 docker push bhavin1998/flask-app + 45 docker login + 46 docker push bhavin1998/flask-app + 47 history diff --git a/2024/day17/image/1_Create_a_new_directory.png b/2024/day17/image/1_Create_a_new_directory.png new file mode 100644 index 0000000000..d362313e63 Binary files /dev/null and b/2024/day17/image/1_Create_a_new_directory.png differ diff --git a/2024/day17/image/2_app_py.png b/2024/day17/image/2_app_py.png new file mode 100644 index 0000000000..972f4781ea Binary files /dev/null and b/2024/day17/image/2_app_py.png differ diff --git a/2024/day17/image/3_Create_a_requirements_file.png b/2024/day17/image/3_Create_a_requirements_file.png new file mode 100644 index 0000000000..1de8f30ace Binary files /dev/null and b/2024/day17/image/3_Create_a_requirements_file.png differ diff --git a/2024/day17/image/4_Create_a_Dockerfile.png b/2024/day17/image/4_Create_a_Dockerfile.png new file mode 100644 index 0000000000..1b3e55ceb8 Binary files /dev/null and b/2024/day17/image/4_Create_a_Dockerfile.png differ diff --git a/2024/day17/image/5_build_the_docker_image.png b/2024/day17/image/5_build_the_docker_image.png new file mode 100644 index 0000000000..385925cd99 Binary files /dev/null and b/2024/day17/image/5_build_the_docker_image.png differ diff --git a/2024/day17/image/6_Run_the_Container.png b/2024/day17/image/6_Run_the_Container.png new file mode 100644 index 0000000000..ce1021f569 Binary files /dev/null and b/2024/day17/image/6_Run_the_Container.png differ diff --git a/2024/day17/image/7_Verify_the_Application.png b/2024/day17/image/7_Verify_the_Application.png new file mode 100644 index 0000000000..399638cf61 Binary files /dev/null and b/2024/day17/image/7_Verify_the_Application.png differ diff --git a/2024/day17/image/8_Tag_the_Image.png b/2024/day17/image/8_Tag_the_Image.png new file mode 100644 index 0000000000..a74fc1826f Binary files /dev/null and b/2024/day17/image/8_Tag_the_Image.png differ diff --git a/2024/day17/image/9_Push_the_Image.png b/2024/day17/image/9_Push_the_Image.png new file mode 100644 index 0000000000..afdd53c17e Binary files /dev/null and b/2024/day17/image/9_Push_the_Image.png differ diff --git a/2024/day17/solution.md b/2024/day17/solution.md new file mode 100644 index 0000000000..37d3cf61bf --- /dev/null +++ b/2024/day17/solution.md @@ -0,0 +1,87 @@ +# Day 17 Answers: Docker Project for DevOps Engineers + +### You people are doing just amazing in **#90daysofdevops**. 
Today's challenge is so special because you are going to do a DevOps project with Docker. Are you excited? 😍 + +# Dockerfile + +Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile. + +A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts. + +For more about Dockerfile, visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes). + +## Tasks with Answers + +**1. Create a Dockerfile for a simple web application (e.g. a Node.js or Python app)** + - **1. Create a Simple Flask Application** + - Create a new directory for your project and navigate into it: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/1_Create_a_new_directory.png) + + - Create a new file named `app.py` and add the following content: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/2_app_py.png) + + - Create a requirements file named `requirements.txt` and add the following content: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/3_Create_a_requirements_file.png) + + - **2. Create a Dockerfile** + - Create a file named `Dockerfile` in the same directory and add the following content: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/4_Create_a_Dockerfile.png) + +**2. Build the image using the Dockerfile and run the container** + - To build the Docker image, run the following command in the directory containing the Dockerfile: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/5_build_the_docker_image.png) + + - Run the Container + - To run the container, use the following command: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/6_Run_the_Container.png) + +**3. Verify that the application is working as expected by accessing it in a web browser** + - Open your web browser and navigate to `http://localhost:5000`. You should see the message "Hello, World!". + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/7_Verify_the_Application.png) + +**4. Push the image to a public or private repository (e.g. Docker Hub)** + - To push the image to Docker Hub, you need to tag it with your Docker Hub username and repository name, then push it. + - **1. Tag the Image** + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/8_Tag_the_Image.png) + + - **2. Push the Image** + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/9_Push_the_Image.png) + +For a reference project, visit [here](https://youtu.be/Tevxhn6Odc8). + +If you want to dive further, watch this [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u). + +You can share your learning with everyone over LinkedIn and tag us along. 
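+ +For reference, minimal versions of the three files captured in the screenshots above might look like this. This is a sketch reconstructed from the command history in `code.txt`; details such as the Python base image tag and the exact requirements are assumptions. + +```python +# app.py - a minimal Flask app that serves "Hello, World!" on port 5000 +from flask import Flask + +app = Flask(__name__) + +@app.route("/") +def hello(): +    return "Hello, World!" + +if __name__ == "__main__": +    app.run(host="0.0.0.0", port=5000) +``` + +```Dockerfile +# Dockerfile - assumes python:3.9-slim; requirements.txt holds a single line: flask +FROM python:3.9-slim +WORKDIR /app +COPY requirements.txt . +RUN pip install -r requirements.txt +COPY . . +EXPOSE 5000 +CMD ["python", "app.py"] +``` + +With these in place, `docker build -t flask-app .` and `docker run -d -p 5000:5000 flask-app` (the commands visible in the history) build the image and start the container.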
+ +Happy Learning :) + +[Code for Reference](https://raw.githubusercontent.com/Bhavin213/90DaysOfDevOps/master/2024/day17/code.txt) + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day18/README.md b/2024/day18/README.md new file mode 100644 index 0000000000..94b0a0c850 --- /dev/null +++ b/2024/day18/README.md @@ -0,0 +1,42 @@ +# Day 18 Task: Docker for DevOps Engineers + +Till now you have created a Dockerfile and pushed it to the repository. Let's move forward and dig deeper into other Docker concepts. Today, let's study Docker Compose! 😃 + +## Docker Compose + +- Docker Compose is a tool that was developed to help define and share multi-container applications. +- With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down. +- Learn more about Docker Compose [here](https://tecadmin.net/tutorial/docker/docker-compose/). + +## What is YAML? + +- YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for "Yet Another Markup Language" or "YAML Ain’t Markup Language" (a recursive acronym), which emphasizes that YAML is for data, not documents. +- YAML is popular for configuration files because it is human-readable and easy to understand (it is a data format, not a programming language). +- YAML files use a .yml or .yaml extension. +- Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml). + +## Task 1 + +Learn how to use the docker-compose.yml file to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file. + +[Sample docker-compose.yml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml) + +## Task 2 + +- Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint: Use the `usermod` command to give the user permission to Docker). Make sure you reboot the instance after giving permission to the user. +- Inspect the container's running processes and exposed ports using the `docker inspect` command. +- Use the `docker logs` command to view the container's log output. +- Use the `docker stop` and `docker start` commands to stop and start the container. +- Use the `docker rm` command to remove the container when you're done. + +## How to Run Docker Commands Without Sudo? + +- Make sure Docker is installed and the system is updated (This was already completed as part of previous tasks): +- `sudo usermod -a -G docker $USER` +- Reboot the machine. + +For reference, you can watch this [video](https://youtu.be/Tevxhn6Odc8). + +You can post on LinkedIn and let us know what you have learned from this task by using #90DaysOfDevOps Challenge. Happy Learning!
:) + +[← Previous Day](../day17/README.md) | [Next Day →](../day19/README.md) diff --git a/2024/day18/image/10_Remove_the_container.png b/2024/day18/image/10_Remove_the_container.png new file mode 100644 index 0000000000..d12da96fde Binary files /dev/null and b/2024/day18/image/10_Remove_the_container.png differ diff --git a/2024/day18/image/1_docker_compose_yml_file.png b/2024/day18/image/1_docker_compose_yml_file.png new file mode 100644 index 0000000000..360cec6f4b Binary files /dev/null and b/2024/day18/image/1_docker_compose_yml_file.png differ diff --git a/2024/day18/image/2_Pull_the_Docker_image.png b/2024/day18/image/2_Pull_the_Docker_image.png new file mode 100644 index 0000000000..9b89256509 Binary files /dev/null and b/2024/day18/image/2_Pull_the_Docker_image.png differ diff --git a/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png b/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png new file mode 100644 index 0000000000..4913b0619a Binary files /dev/null and b/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png differ diff --git a/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png b/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png new file mode 100644 index 0000000000..413557ff26 Binary files /dev/null and b/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png differ diff --git a/2024/day18/image/5_Run_the_Docker_container.png b/2024/day18/image/5_Run_the_Docker_container.png new file mode 100644 index 0000000000..d336fb981f Binary files /dev/null and b/2024/day18/image/5_Run_the_Docker_container.png differ diff --git a/2024/day18/image/6_Inspect_the_container.png b/2024/day18/image/6_Inspect_the_container.png new file mode 100644 index 0000000000..83314b3bc4 Binary files /dev/null and b/2024/day18/image/6_Inspect_the_container.png differ diff --git a/2024/day18/image/7_View_the_logs.png b/2024/day18/image/7_View_the_logs.png new file mode 100644 index 0000000000..517655a0d4 Binary files /dev/null and b/2024/day18/image/7_View_the_logs.png differ diff --git a/2024/day18/image/8_Stop_the_container.png b/2024/day18/image/8_Stop_the_container.png new file mode 100644 index 0000000000..79b905025a Binary files /dev/null and b/2024/day18/image/8_Stop_the_container.png differ diff --git a/2024/day18/image/9_Start_the_container.png b/2024/day18/image/9_Start_the_container.png new file mode 100644 index 0000000000..05c4d0cd00 Binary files /dev/null and b/2024/day18/image/9_Start_the_container.png differ diff --git a/2024/day18/image/task1.png b/2024/day18/image/task1.png new file mode 100644 index 0000000000..8b13789179 --- /dev/null +++ b/2024/day18/image/task1.png @@ -0,0 +1 @@ + diff --git a/2024/day18/solution.md b/2024/day18/solution.md new file mode 100644 index 0000000000..dbbe507b09 --- /dev/null +++ b/2024/day18/solution.md @@ -0,0 +1,108 @@ +# Day 18 Answers: Docker for DevOps Engineers + +Till now you have created a Dockerfile and pushed it to the repository. Let's move forward and dig deeper into other Docker concepts. Today, let's study Docker Compose! 😃 + +## Docker Compose + +- Docker Compose is a tool that was developed to help define and share multi-container applications. +- With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down. +- Learn more about Docker Compose [here](https://tecadmin.net/tutorial/docker/docker-compose/). + +## What is YAML? 
+ +- YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for "Yet Another Markup Language" or "YAML Ain’t Markup Language" (a recursive acronym), which emphasizes that YAML is for data, not documents. +- YAML is popular for configuration files because it is human-readable and easy to understand (it is a data format, not a programming language). +- YAML files use a .yml or .yaml extension. +- Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml). + +## Tasks with Answers + +## Task 1 + +Learn how to use the docker-compose.yml file to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file. + +[Sample docker-compose.yml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml) + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/1_docker_compose_yml_file.png) + +## Task 2 + + - **1. Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint: Use the `usermod` command to give the user permission to Docker). Make sure you reboot the instance after giving permission to the user.** + - Pull the Docker image: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/2_Pull_the_Docker_image.png) + + - Add the current user to the Docker group: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png) + + - Reboot the machine to apply the changes: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png) + + - Run the Docker container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/5_Run_the_Docker_container.png) + + - **2. Inspect the container's running processes and exposed ports using the `docker inspect` command.** + - Inspect the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/6_Inspect_the_container.png) + + - **3. Use the `docker logs` command to view the container's log output.** + - View the logs: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/7_View_the_logs.png) + + - **4. Use the `docker stop` and `docker start` commands to stop and start the container.** + - Stop the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/8_Stop_the_container.png) + + - Start the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/9_Start_the_container.png) + + - **5. Use the `docker rm` command to remove the container when you're done.** + - Remove the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/10_Remove_the_container.png) + +## How to Run Docker Commands Without Sudo? + +- Make sure Docker is installed and the system is updated (This was already completed as part of previous tasks): + - `sudo usermod -a -G docker $USER` + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png) + + - Reboot the machine.
+ + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png) + +For reference, you can watch this [video](https://youtu.be/Tevxhn6Odc8). + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day19/README.md b/2024/day19/README.md new file mode 100644 index 0000000000..2f6e8a3fad --- /dev/null +++ b/2024/day19/README.md @@ -0,0 +1,37 @@ +# Day 19 Task: Docker for DevOps Engineers + +**So far, you've learned how to create a docker-compose.yml file and push it to the repository. Let's move forward and explore more Docker Compose concepts. Today, let's study Docker Volume and Docker Network!** 😃 + +## Docker Volume + +Docker allows you to create volumes, which are like separate storage areas that can be accessed by containers. They enable you to store data, like a database, outside the container, so it doesn't get deleted when the container is removed. You can also mount the same volume to multiple containers, allowing them to share data. For more details, check out this [reference](https://docs.docker.com/storage/volumes/). + +## Docker Network + +Docker allows you to create virtual networks, where you can connect multiple containers together. This way, the containers can communicate with each other and with the host machine. Each container has its own storage space, but if we want to share storage between containers, we need to use volumes. For more details, check out this [reference](https://docs.docker.com/network/). + +## Task 1 + +Create a multi-container docker-compose file that will bring up and bring down containers in a single shot (e.g., create application and database containers; a sample sketch is included at the end of this page). + +### Hints: + +- Use the `docker-compose up` command with the `-d` flag to start a multi-container application in detached mode. +- Use the `docker-compose scale` command (or, on newer Compose versions, `docker-compose up --scale <service>=<replicas>`) to increase or decrease the number of replicas for a specific service. You can also add [`replicas`](https://stackoverflow.com/questions/63408708/how-to-scale-from-within-docker-compose-file) in the deployment file for auto-scaling. +- Use the `docker-compose ps` command to view the status of all containers, and `docker-compose logs` to view the logs of a specific service. +- Use the `docker-compose down` command to stop and remove all containers, networks, and volumes associated with the application. + +## Task 2 + +- Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers. +- Create two or more containers that read and write data to the same volume using the `docker run --mount` command. +- Verify that the data is the same in all containers by using the `docker exec` command to run commands inside each container. +- Use the `docker volume ls` command to list all volumes and the `docker volume rm` command to remove the volume when you're done. + +## Project Opportunity + +You can use this task as a project to add to your resume. + +You can post on LinkedIn and let us know what you have learned from this task by using #90DaysOfDevOps Challenge. Happy Learning! 🙂
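+ +For Task 1, a minimal two-service compose file might look like this. It is a sketch rather than the linked sample: the image names, credentials, and volume name are placeholder assumptions. + +```yaml +# docker-compose.yml - an application container plus a database container +version: "3.8" +services: +  web: +    image: nginx:latest          # placeholder application image +    ports: +      - "8080:80"                # host:container port mapping +    depends_on: +      - db                       # start the database first +  db: +    image: mysql:8.0             # placeholder database image +    environment: +      MYSQL_ROOT_PASSWORD: example   # placeholder secret; prefer env files in practice +    volumes: +      - db_data:/var/lib/mysql   # named volume so data survives container removal +volumes: +  db_data: +``` + +`docker-compose up -d` brings both containers up in one shot, and `docker-compose down` tears them down again.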
+ +[← Previous Day](../day18/README.md) | [Next Day →](../day20/README.md) diff --git a/2024/day19/images/Screenshot (113).png b/2024/day19/images/Screenshot (113).png new file mode 100644 index 0000000000..300e47838f Binary files /dev/null and b/2024/day19/images/Screenshot (113).png differ diff --git a/2024/day19/images/Screenshot (114).png b/2024/day19/images/Screenshot (114).png new file mode 100644 index 0000000000..07c61f9b00 Binary files /dev/null and b/2024/day19/images/Screenshot (114).png differ diff --git a/2024/day19/images/Screenshot (116).png b/2024/day19/images/Screenshot (116).png new file mode 100644 index 0000000000..e759cd57cb Binary files /dev/null and b/2024/day19/images/Screenshot (116).png differ diff --git a/2024/day19/images/Screenshot (117).png b/2024/day19/images/Screenshot (117).png new file mode 100644 index 0000000000..ac701b1583 Binary files /dev/null and b/2024/day19/images/Screenshot (117).png differ diff --git a/2024/day19/images/Screenshot (118).png b/2024/day19/images/Screenshot (118).png new file mode 100644 index 0000000000..de7e89538a Binary files /dev/null and b/2024/day19/images/Screenshot (118).png differ diff --git a/2024/day19/images/Screenshot (119).png b/2024/day19/images/Screenshot (119).png new file mode 100644 index 0000000000..eaf96af7f1 Binary files /dev/null and b/2024/day19/images/Screenshot (119).png differ diff --git a/2024/day19/images/Screenshot (120).png b/2024/day19/images/Screenshot (120).png new file mode 100644 index 0000000000..b9d7ee8ca2 Binary files /dev/null and b/2024/day19/images/Screenshot (120).png differ diff --git a/2024/day20/Docker_cheat_sheet.pdf b/2024/day20/Docker_cheat_sheet.pdf new file mode 100644 index 0000000000..230a961288 Binary files /dev/null and b/2024/day20/Docker_cheat_sheet.pdf differ diff --git a/2024/day20/README.md b/2024/day20/README.md new file mode 100644 index 0000000000..045045a634 --- /dev/null +++ b/2024/day20/README.md @@ -0,0 +1,17 @@ +# Day 20 Task: Docker for DevOps Engineers + +## Finally!! 🎉 + +You have completed ✅ the Docker hands-on sessions, and I hope you have learned something valuable from it. 🙌 + +Now it's time to take your Docker skills to the next level by creating a comprehensive cheat-sheet of all the commands you've learned so far. This cheat-sheet should include commands for both Docker and Docker Compose, along with brief explanations of their usage. Not only will this cheat-sheet help you in the future, but it will also serve as a valuable resource for the DevOps community. 😊🙌 + +So, put your knowledge and creativity to the test and create a cheat-sheet that truly stands out! 🚀 + +For reference, I have added a [cheatsheet](https://cdn.hashnode.com/res/hashnode/image/upload/v1670863735841/r6xdXpsap.png?auto=compress,format&format=webp). Make sure your cheat-sheet is UNIQUE. + +Post it on LinkedIn and share your knowledge with the community. 😃 + +**Happy Learning :)** + +[← Previous Day](../day19/README.md) | [Next Day →](../day21/README.md) diff --git a/2024/day21/README.md b/2024/day21/README.md new file mode 100644 index 0000000000..304185270a --- /dev/null +++ b/2024/day21/README.md @@ -0,0 +1,44 @@ +# Day 21 Task: Important Docker Interview Questions + +## Docker Interview + +Docker is a crucial topic for DevOps Engineer interviews, especially for freshers. Here are some essential questions to help you prepare and ace your Docker interviews: + +## Questions + +- What is the difference between an Image, Container, and Engine?
+ +- What is the difference between the Docker command COPY vs ADD? +- What is the difference between the Docker command CMD vs RUN? +- How will you reduce the size of a Docker image? +- Why and when should you use Docker? +- Explain the Docker components and how they interact with each other. +- Explain the terminology: Docker Compose, Dockerfile, Docker Image, Docker Container. +- In what real scenarios have you used Docker? +- Docker vs Hypervisor? +- What are the advantages and disadvantages of using Docker? +- What is a Docker namespace? +- What is a Docker registry? +- What is an entry point? +- How to implement CI/CD in Docker? +- Will data on the container be lost when the Docker container exits? +- What is a Docker swarm? +- What are the Docker commands for the following: + - Viewing running containers + - Running a container under a specific name + - Exporting a Docker image + - Importing an existing Docker image + - Deleting a container + - Removing all stopped containers, unused networks, build caches, and dangling images? +- What are the common Docker practices to reduce the size of Docker images? +- How do you troubleshoot a Docker container that is not starting? +- Can you explain the Docker networking model? +- How do you manage persistent storage in Docker? +- How do you secure a Docker container? +- What is Docker overlay networking? +- How do you handle environment variables in Docker? + +These questions will help you in your next DevOps interview. Write a blog and share it on LinkedIn to showcase your knowledge. + +**Happy Learning :)** + +[← Previous Day](../day20/README.md) | [Next Day →](../day22/README.md) diff --git a/2024/day22/README.md b/2024/day22/README.md new file mode 100644 index 0000000000..80e0bdaa43 --- /dev/null +++ b/2024/day22/README.md @@ -0,0 +1,38 @@ +# Day-22 : Getting Started with Jenkins 😃 +**Now that Linux, Git, GitHub, and Docker are done, let's learn the CI/CD tool used to deploy all of it:** + +## What is Jenkins? +- Jenkins is an open-source continuous integration and continuous delivery/deployment (CI/CD) automation tool written in Java. It is used to implement CI/CD workflows, called pipelines. + +- Jenkins is an automation server that lets developers build, test, and deploy software. Because it is written in Java, it runs wherever a Java runtime is available. Using Jenkins, we can set up continuous integration for projects (jobs) as well as end-to-end automation. + +- Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for that tool, for example Git, Maven 2 project, Amazon EC2, HTML publisher, etc. + +### Let us discuss the necessity of this tool before going ahead to the procedural part of the installation: + +- Nowadays we expect automation everywhere: even with digital screens and one-click buttons in front of us, we still want processes to run on their own. + +- Here, I’m referring to the kind of automation where we don't have to watch over a process (here called a job) until it completes before starting the next one. For that, we have Jenkins with us.
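+ +For the "Hello World" freestyle job in Task 2 below, the build step boils down to a few shell commands. A sketch (the repository URL is a placeholder): + +```sh +# Build-step commands for the Day 22 freestyle job +echo "Hello World"   # print a greeting +date                 # print the current date and time +git clone https://github.com/<your-user>/<your-repo>.git app   # clone a repo (placeholder URL) +ls -la app           # list its contents +``` + +For the periodic trigger, a cron-style schedule such as `H * * * *` (roughly once an hour) can be entered under "Build Triggers → Build periodically".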
+ +Note: By now Jenkins should be installed on your machine (it was part of the previous tasks; if not, follow the [Installation Guide](https://youtu.be/OkVtBKqMt7I)). + +## Tasks: + +### Task 1: Write a small article in your own words about +- What Jenkins is and why it is used. Avoid copying directly from the internet. +- Reflect on how Jenkins integrates into the DevOps lifecycle and its benefits. +- Discuss the role of Jenkins in automating the build, test, and deployment processes. + +### Task 2: Create a Freestyle Pipeline to Print "Hello World" + +Create a freestyle pipeline in Jenkins that: +- Prints "Hello World" +- Prints the current date and time +- Clones a GitHub repository and lists its contents +- Runs periodically (e.g., every hour). + +### Share Your Progress + +Don't forget to post your progress on LinkedIn to share your learning journey with others. Happy learning and good luck with your DevOps challenge! + +[← Previous Day](../day21/README.md) | [Next Day →](../day23/README.md) diff --git a/2024/day23/README.md b/2024/day23/README.md new file mode 100644 index 0000000000..ae0717014a --- /dev/null +++ b/2024/day23/README.md @@ -0,0 +1,39 @@ +# Day 23 Task: Jenkins Freestyle Project for DevOps Engineers + +The community is absolutely crushing it in the #90daysofdevops journey. Today's challenge is particularly exciting as it involves creating a Jenkins Freestyle Project, an excellent opportunity for DevOps engineers to showcase their skills and push their limits. Who's ready to dive in and make it happen? 😍 + +## What is CI/CD? + +- **CI (Continuous Integration)** is the practice of automating the integration of code changes from multiple developers into a single codebase. It involves developers frequently committing their work into a central code repository (such as GitHub or Stash). Automated tools then build the newly committed code and perform tasks like code review, ensuring that the code is integrated smoothly. The key goals of Continuous Integration are to find and address bugs quickly, make the integration process easier across a team of developers, improve software quality, and reduce the time it takes to release new features. + +- **CD (Continuous Delivery)** follows Continuous Integration and ensures that new changes can be released to customers quickly and without errors. This includes running integration and regression tests in a staging environment (similar to production) to ensure the final release is stable. Continuous Delivery automates the release process, ensuring a release-ready product at all times and allowing deployment at any moment. + +## What Is a Build Job? + +A Jenkins build job contains the configuration for automating specific tasks or steps in the application building process. These tasks include gathering dependencies, compiling, archiving, transforming code, testing, and deploying code in different environments. + +Jenkins supports several types of build jobs, such as freestyle projects, pipelines, multi-configuration projects, folders, multibranch pipelines, and organization folders. + +## What is a Freestyle Project? 🤔 + +A freestyle project in Jenkins is a type of project that allows you to build, test, and deploy software using various options and configurations. Here are a few tasks you could complete with a freestyle project in Jenkins: + +### Task 1 + +- Create an agent for your app (which you deployed using Docker in a previous task). +- Create a new Jenkins freestyle project for your app.
+- In the "Build" section of the project, add a build step to run the `docker build` command to build the image for the container. +- Add a second step to run the `docker run` command to start a container using the image created in the previous step. + +### Task 2 + +- Create a Jenkins project to run the `docker-compose up -d` command to start multiple containers defined in the compose file (Hint: use the application and database docker-compose file from Day 19). +- Set up a cleanup step in the Jenkins project to run the `docker-compose down` command to stop and remove the containers defined in the compose file. + +For reference on Jenkins Freestyle Projects, visit [here](https://youtu.be/wwNWgG5htxs). + +You can post on LinkedIn and let us know what you have learned from this task as part of the #90DaysOfDevOps Challenge. + +**Happy Learning :)** + +[← Previous Day](../day22/README.md) | [Next Day →](../day24/README.md) diff --git a/2024/day24/README.md b/2024/day24/README.md new file mode 100644 index 0000000000..50a518e7de --- /dev/null +++ b/2024/day24/README.md @@ -0,0 +1,29 @@ +# Day 24 Task: Complete Jenkins CI/CD Project + +Let's create a comprehensive CI/CD pipeline for your Node.js application! 😍 + +## Did you finish Day 23? + +- Day 23 focused on Jenkins CI/CD, ensuring you understood the basics. Today, you'll take it a step further by completing a full project from start to finish, which you can proudly add to your resume. +- As you've already worked with Docker and Docker Compose, you'll be integrating these tools into a live project. + +## Task 1 + +1. Fork [this repository](https://github.com/LondheShubham153/node-todo-cicd.git). +2. Set up a connection between your Jenkins job and your GitHub repository through GitHub Integration. +3. Learn about [GitHub WebHooks](https://betterprogramming.pub/how-too-add-github-webhook-to-a-jenkins-pipeline-62b0be84e006) and ensure you have the CI/CD setup configured. +4. Refer to [this video](https://youtu.be/nplH3BzKHPk) for a step-by-step guide on the entire project. + +## Task 2 + +1. In the "Execute Shell" section of your Jenkins job, run the application using Docker Compose. +2. Create a Docker Compose file for this project (a valuable open-source contribution). +3. Run the project and celebrate your accomplishment! 🎉 + +For a detailed walkthrough and hands-on experience with the project, visit [this video](https://youtu.be/nplH3BzKHPk). + +You can post on LinkedIn and share your experiences and learnings from this task using the #90DaysOfDevOps Challenge. + +**Happy Learning :)** + +[← Previous Day](../day23/README.md) | [Next Day →](../day25/README.md) diff --git a/2024/day25/README.md b/2024/day25/README.md new file mode 100644 index 0000000000..6b0d0f32ad --- /dev/null +++ b/2024/day25/README.md @@ -0,0 +1,31 @@ +# Day 25 Task: Complete Jenkins CI/CD Project - Continued with Documentation + +You've been making amazing progress, so let's take a moment to catch up and refine our work. Today's focus is on completing the Jenkins CI/CD project from Day 24 and creating thorough documentation for it. + +## Did you finish Day 24? + +- Day 24 provided an end-to-end project experience, and adding this to your resume will be a significant achievement. + +- Take your time to finish the project, create comprehensive documentation, and make sure to highlight it in your resume and share your experience. + +## Task 1 + +- Document the entire process from cloning the repository to adding webhooks, deployment, and more. 
Create a detailed README file for your project. You can refer to [this example](https://github.com/LondheShubham153/fynd-my-movie/blob/master/README.md) for inspiration. + +- A well-written README file will not only help others understand your project but also make it easier for you to revisit and use the project in the future. + +## Task 2 + +- As it's a lighter day, set a small goal for yourself. Consider something you've been meaning to accomplish and use this time to focus on it. + +- Share your goal and how you plan to achieve it using [this template](https://www.linkedin.com/posts/shubhamlondhe1996_taking-resolutions-and-having-goals-for-an-activity-7023858409762373632-s2J8?utm_source=share&utm_medium=member_desktop). + +- Having small, achievable goals and strategies for reaching them is essential. Don't forget to reward yourself for your efforts! + +For a detailed walkthrough and project guidance, visit [this video](https://youtu.be/nplH3BzKHPk). + +You can post on LinkedIn and let us know what you have learned from this task using the #90DaysOfDevOps Challenge. + +**Happy Learning :)** + +[← Previous Day](../day24/README.md) | [Next Day →](../day26/README.md) diff --git a/2024/day26/README.md b/2024/day26/README.md new file mode 100644 index 0000000000..b0d65accb6 --- /dev/null +++ b/2024/day26/README.md @@ -0,0 +1,59 @@ +# Day 26 Task: Jenkins Declarative Pipeline + +One of the most important parts of your DevOps and CI/CD journey is the Declarative Pipeline syntax of Jenkins. + +## Some terms for your Knowledge + +**What is Pipeline -** A pipeline is a collection of steps or jobs interlinked in a sequence. + +**Declarative:** Declarative is a more recent and advanced implementation of pipeline-as-code. + +**Scripted:** Scripted was the first and most traditional implementation of pipeline-as-code in Jenkins. It was designed as a general-purpose DSL (Domain Specific Language) built with Groovy. + +# Why you should have a Pipeline + +The definition of a Jenkins Pipeline is written into a text file (called a [`Jenkinsfile`](https://www.jenkins.io/doc/book/pipeline/jenkinsfile)) which in turn can be committed to a project’s source control repository. +This is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code. + +**Creating a `Jenkinsfile` and committing it to source control provides a number of immediate benefits:** + +- Automatically creates a Pipeline build process for all branches and pull requests. +- Code review/iteration on the Pipeline (along with the remaining source code). + +# Pipeline syntax + +```groovy +pipeline { +    agent any +    stages { +        stage('Build') { +            steps { +                // build steps go here +            } +        } +        stage('Test') { +            steps { +                // test steps go here +            } +        } +        stage('Deploy') { +            steps { +                // deployment steps go here +            } +        } +    } +} +``` + +# Task-01 + +- Create a New Job, this time select Pipeline instead of Freestyle Project. +- Follow the official Jenkins [Hello world example](https://www.jenkins.io/doc/pipeline/tour/hello-world/). +- Complete the example using the Declarative pipeline (a completed sketch is shown below). +- In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham). + +You can post your progress on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
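+ +For Task-01, the finished pipeline from the official Hello World tour looks roughly like this (a minimal sketch; the job and stage names are up to you): + +```groovy +pipeline { +    agent any                      // run on any available agent +    stages { +        stage('Hello') { +            steps { +                echo 'Hello World' // the only step in the official example +            } +        } +    } +} +```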
+ +Happy Learning:) + +[← Previous Day](../day25/README.md) | [Next Day →](../day27/README.md) diff --git a/2024/day27/README.md b/2024/day27/README.md new file mode 100644 index 0000000000..277a2db069 --- /dev/null +++ b/2024/day27/README.md @@ -0,0 +1,43 @@ +# Day 27 Task: Jenkins Declarative Pipeline with Docker + +Day 26 was all about the Declarative pipeline; now it's time to level things up, so let's integrate Docker with your Jenkins declarative pipeline. + +## Use your Docker Build and Run Knowledge + +**docker build -** you can use `sh 'docker build . -t <image-name>'` in your pipeline stage block to run the docker build command. (Make sure you have Docker installed with the correct permissions.) + +**docker run:** you can use `sh 'docker run -d <image-name>'` in your pipeline stage block to start the container. + +**How the stages will look:** + +```groovy +stages { +    stage('Build') { +        steps { +            sh 'docker build -t trainwithshubham/django-app:latest .' +        } +    } +} +``` + +# Task-01 + +- Create a docker-integrated Jenkins declarative pipeline. +- Use the above-given syntax using `sh` inside the stage block. +- You will face errors when running the job a second time, as the docker container will already have been created; for that, do Task-02. + +# Task-02 + +- Create a docker-integrated Jenkins declarative pipeline using the `docker` groovy syntax inside the stage block. +- This way you won't face errors; you can follow [this documentation](https://tempora-mutantur.github.io/jenkins.io/github_pages_test/doc/book/pipeline/docker/). + +- Complete your previous projects using this Declarative pipeline approach. + +- In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham). + +Are you enjoying the #90DaysOfDevOps Challenge? +Let me know how you are feeling after 4 weeks of DevOps learnings. + +Happy Learning:) + +[← Previous Day](../day26/README.md) | [Next Day →](../day28/README.md) diff --git a/2024/day28/README.md b/2024/day28/README.md new file mode 100644 index 0000000000..fc4bdc9f6c --- /dev/null +++ b/2024/day28/README.md @@ -0,0 +1,52 @@ +# Day 28 Task: Jenkins Agents + +## Jenkins Master (Server) + +The Jenkins master server is the central control unit that manages the overall orchestration of workflows defined in pipelines. It handles tasks such as scheduling jobs, monitoring job status, and managing configurations. The master serves the Jenkins UI and acts as the control node, delegating job execution to agents. + +## Jenkins Agent + +A Jenkins agent is a separate machine or container that executes the tasks defined in Jenkins jobs. When a job is triggered on the master, the actual execution occurs on the assigned agent. Each agent is identified by a unique label, allowing the master to delegate jobs to the appropriate agent. + +For small teams or projects, a single Jenkins installation may suffice. However, as the number of projects grows, it becomes necessary to scale. Jenkins supports this by allowing a master to connect with multiple agents, enabling distributed job execution. + +
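+ +For Task 02 below (running your earlier jobs on a labeled agent), the only change to a declarative pipeline is the `agent` block. A sketch, assuming you gave your node the label `dev-agent` (the label and image names are assumptions): + +```groovy +pipeline { +    agent { label 'dev-agent' }    // run this job only on nodes carrying this label +    stages { +        stage('Build') { +            steps { +                sh 'docker build -t my-app:latest .'   // placeholder image name +            } +        } +    } +} +```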

+ +## Pre-requisites + +To set up an agent, you'll need a fresh Ubuntu 22.04 Linux installation. Ensure Java (the same version as on the Jenkins master server) and Docker are installed on the agent machine. + +*Note: While creating an agent, ensure that permissions, rights, and ownership are appropriately set for Jenkins users.* + +## Task 01 + +1. **Create an Agent:** + - Set up a new node in Jenkins by creating an agent. + +2. **AWS EC2 Instance Setup:** + - Create a new AWS EC2 instance and connect it to the master (where Jenkins is installed). + +3. **Master-Agent Connection:** + - Establish a connection between the master and agent using SSH and a public-private key pair exchange. + - Verify the agent's status in the "Nodes" section. + + You can follow [this article](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7017885886461698048-os5f?utm_source=share&utm_medium=member_android) for detailed instructions. + +## Task 02 + +1. **Run Previous Jobs on the New Agent:** + - Use the agent to run the Jenkins jobs you built on Day 26 and Day 27. + +2. **Labeling:** + - Assign labels to the agent and configure your master server to trigger builds on the appropriate agent based on these labels. + +3. **Support:** + - If you encounter any issues, feel free to seek help on [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham). + +## Reflection + +Are you enjoying the #90DaysOfDevOps Challenge? Share your thoughts and experiences after four weeks of learning DevOps. + +**Happy Learning! :)** + +[← Previous Day](../day27/README.md) | [Next Day →](../day29/README.md) diff --git a/2024/day29/README.md b/2024/day29/README.md new file mode 100644 index 0000000000..87ea08aae8 --- /dev/null +++ b/2024/day29/README.md @@ -0,0 +1,43 @@ +## Day 29 Task: Jenkins Important Interview Questions + +

+ +## Jenkins Interview + +Here are some Jenkins-specific questions related to Docker and other DevOps concepts that can be useful during a DevOps Engineer interview: + +### General Questions + +1. **What’s the difference between continuous integration, continuous delivery, and continuous deployment?** +2. **Benefits of CI/CD.** +3. **What is meant by CI-CD?** +4. **What is Jenkins Pipeline?** +5. **How do you configure a job in Jenkins?** +6. **Where do you find errors in Jenkins?** +7. **In Jenkins, how can you find log files?** +8. **Jenkins workflow and write a script for this workflow?** +9. **How to create continuous deployment in Jenkins?** +10. **How to build a job in Jenkins?** +11. **Why do we use pipelines in Jenkins?** +12. **Is Jenkins alone sufficient for automation?** +13. **How will you handle secrets in Jenkins?** +14. **Explain the different stages in a CI-CD setup.** +15. **Name some of the plugins in Jenkins.** + +### Scenario-Based Questions + +1. **You have a Jenkins pipeline that deploys to a staging environment. Suddenly, the deployment failed due to a missing configuration file. How would you troubleshoot and resolve this issue?** +2. **Imagine you have a Jenkins job that is taking significantly longer to complete than expected. What steps would you take to identify and mitigate the issue?** +3. **You need to implement a secure method to manage environment-specific secrets for different stages (development, staging, production) in your Jenkins pipeline. How would you approach this?** +4. **Suppose your Jenkins master node is under heavy load and build times are increasing. What strategies can you use to distribute the load and ensure efficient build processing?** +5. **A developer commits a code change that breaks the build. How would you set up Jenkins to automatically handle such scenarios and notify the relevant team members?** +6. **You are tasked with setting up a Jenkins pipeline for a multi-branch project. How would you handle different configurations and build steps for different branches?** +7. **How would you implement a rollback strategy in a Jenkins pipeline to revert to a previous stable version if the deployment fails?** +8. **In a scenario where you have multiple teams working on different projects, how would you structure Jenkins jobs and pipelines to ensure efficient resource utilization and manage permissions?** +9. **Your Jenkins agents are running in a cloud environment, and you notice that build times fluctuate due to varying resource availability. How would you optimize the performance and cost of these agents?** + +These questions will help you prepare for your next DevOps interview. Consider writing a blog and sharing your experiences and knowledge on LinkedIn. + +**Happy Learning! :)** + +[← Previous Day](../day28/README.md) | [Next Day →](../day30/README.md) diff --git a/2024/day30/README.md b/2024/day30/README.md new file mode 100644 index 0000000000..af4d37aa2f --- /dev/null +++ b/2024/day30/README.md @@ -0,0 +1,29 @@ +## Day 30 Task: Kubernetes Architecture + +

+ +## Kubernetes Overview + +With the widespread adoption of [containers](https://cloud.google.com/containers) among organizations, Kubernetes, the container-centric management software, has become a standard to deploy and operate containerized applications and is one of the most important parts of DevOps. + +Originally developed at Google and released as open source in 2014, Kubernetes builds on 15 years of Google's experience running containerized workloads, along with valuable contributions from the open-source community. It was inspired by Google’s internal cluster management system, [Borg](https://research.google.com/pubs/pub43438.html). + +## Tasks + +1. What is Kubernetes? Write in your own words, and explain why we call it k8s. + +2. What are the benefits of using k8s? + +3. Explain the architecture of Kubernetes; refer to [this video](https://youtu.be/FqfoDUhzyDo). + +4. What is the Control Plane? + +5. Write the difference between kubectl and kubelet. + +6. Explain the role of the API server. + +Kubernetes architecture is important, so make sure you spend a day understanding it. [This video](https://youtu.be/FqfoDUhzyDo) will surely help you. + +_Happy Learning :)_ + +[← Previous Day](../day29/README.md) | [Next Day →](../day31/README.md) diff --git a/2024/day31/README.md b/2024/day31/README.md new file mode 100644 index 0000000000..5b2a6b79e5 --- /dev/null +++ b/2024/day31/README.md @@ -0,0 +1,65 @@ +## Day 31 Task: Launching your First Kubernetes Cluster with Nginx running + +### Awesome! You learned the architecture of Kubernetes, one of the most important tools, in your previous task. + +## What about doing some hands-on now? + +Let's read about minikube and run _k8s_ on our local machine. + +1. **What is minikube?** + +_Ans_:- Minikube is a tool which quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It can deploy as a VM, a container, or on bare-metal. + +Minikube is a pared-down version of Kubernetes that gives you all the benefits of Kubernetes with a lot less effort. + +This makes it an interesting option for users who are new to containers, and also for projects in the world of edge computing and the Internet of Things. + +2. **Features of minikube** + +_Ans_ :- + +(a) Supports the latest Kubernetes release (+6 previous minor versions) + +(b) Cross-platform (Linux, macOS, Windows) + +(c) Deploy as a VM, a container, or on bare-metal + +(d) Multiple container runtimes (CRI-O, containerd, docker) + +(e) Direct API endpoint for blazing fast image load and build + +(f) Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy + +(g) Addons for easily installed Kubernetes applications + +(h) Supports common CI environments + +## Task-01: + +## Install minikube on your local machine + +For installation, you can visit [this page](https://minikube.sigs.k8s.io/docs/start/). + +If you want to try an alternative way, you can check [this](https://k8s-docs.netlify.app/en/docs/tasks/tools/install-minikube/). + +## Let's understand the concept of a **pod** + +_Ans:-_ + +Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. + +A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.
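+ +Once minikube is installed and you create the nginx pod in Task-02 below, a quick sanity check might look like this (a sketch; the file name matches the sample pod.yml below): + +```sh +minikube start           # start the local single-node cluster +kubectl get nodes        # the minikube node should report STATUS Ready +kubectl apply -f pod.yml # create the nginx pod from the sample file +kubectl get pods -o wide # the pod should reach STATUS Running +```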
+ +You can read more about pods [here](https://kubernetes.io/docs/concepts/workloads/pods/). + +## Task-02: + +## Create your first pod on Kubernetes through minikube. + +We are suggesting you make an nginx pod, but you can always show your creativity and do it on your own. + +**Having an issue? Don't worry: a sample YAML file for pod creation is included below, and you can always refer to it.** + +_Happy Learning :)_ + +[← Previous Day](../day30/README.md) | [Next Day →](../day32/README.md) diff --git a/2024/day31/pod.yml b/2024/day31/pod.yml new file mode 100644 index 0000000000..cfc02a372d --- /dev/null +++ b/2024/day31/pod.yml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Pod +metadata: +  name: nginx +spec: +  containers: +  - name: nginx +    image: nginx:1.14.2 +    ports: +    - containerPort: 80 + + +# After creating this file, run the command below: +# kubectl apply -f pod.yml diff --git a/2024/day32/Deployment.yml b/2024/day32/Deployment.yml new file mode 100644 index 0000000000..8f3814196b --- /dev/null +++ b/2024/day32/Deployment.yml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: +  name: todo-app +  labels: +    app: todo +spec: +  replicas: 2 +  selector: +    matchLabels: +      app: todo +  template: +    metadata: +      labels: +        app: todo +    spec: +      containers: +      - name: todo +        image: rishikeshops/todo-app +        ports: +        - containerPort: 3000 diff --git a/2024/day32/README.md b/2024/day32/README.md new file mode 100644 index 0000000000..eb2ee9c304 --- /dev/null +++ b/2024/day32/README.md @@ -0,0 +1,27 @@ +## Day 32 Task: Launching your Kubernetes Cluster with Deployment + +### Congratulations on your K8s learning on Day-31! + +## What is a Deployment in k8s + +A Deployment provides declarative updates for Pods and ReplicaSets. + +You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new replicas for scaling, or to remove existing Deployments and adopt all their resources with new Deployments. + +## Today's task: let's keep it very simple. + +## Task-1: + +**Create one Deployment file to deploy a sample todo-app on K8s using the "Auto-healing" and "Auto-scaling" features** + +- Add a deployment.yml file (a sample is kept in the folder for your reference). +- Apply the deployment to your k8s (minikube) cluster with the command + `kubectl apply -f deployment.yml` + +Let's make your resume shine with one more project ;) + +**Having an issue? Don't worry: a sample deployment file is provided, so you can always refer to it or watch [this video](https://youtu.be/ONrbWFJXLLk)** + +Happy Learning :) + +[← Previous Day](../day31/README.md) | [Next Day →](../day33/README.md) diff --git a/2024/day33/README.md b/2024/day33/README.md new file mode 100644 index 0000000000..984842c527 --- /dev/null +++ b/2024/day33/README.md @@ -0,0 +1,34 @@ +# Day 33 Task: Working with Namespaces and Services in Kubernetes + +### Congrats🎊🎉 on updating your Deployment yesterday💥🙌 + +## What are Namespaces and Services in k8s + +In Kubernetes, Namespaces are used to create isolated environments for resources. Each Namespace is like a separate cluster within the same physical cluster. Services are used to expose your Pods and Deployments to the network.
Read more about Namespaces [here](https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/) + +# Today's task: + +## Task 1: + +- Create a Namespace for your Deployment + +- Use the command `kubectl create namespace <namespace-name>` to create a Namespace + +- Update the deployment.yml file to include the Namespace + +- Apply the updated deployment using the command: + `kubectl apply -f deployment.yml -n <namespace-name>` + +- Verify that the Namespace has been created by checking the status of the Namespaces in your cluster. + +## Task 2: + +- Read about Services, Load Balancing, and Networking in Kubernetes. Refer to the official Kubernetes documentation: [Link](https://kubernetes.io/docs/concepts/services-networking/) + +Need help with Namespaces? Check out this [video](https://youtu.be/K3jNo4z5Jx8) for assistance. + +Keep growing your Kubernetes knowledge💥🙌 + +Happy Learning! :) + +[← Previous Day](../day32/README.md) | [Next Day →](../day34/README.md) diff --git a/2024/day34/README.md b/2024/day34/README.md new file mode 100644 index 0000000000..9753f7ff1f --- /dev/null +++ b/2024/day34/README.md @@ -0,0 +1,36 @@ +# Day 34 Task: Working with Services in Kubernetes + +### Congratulations🎊 on learning about Deployments in K8s on Day-33 + +## What are Services in K8s + +In Kubernetes, Services are objects that provide stable network identities to Pods and abstract away the details of Pod IP addresses. Services allow Pods to receive traffic from other Pods, Services, and external clients. + +## Task-1: + +- Create a Service for your todo-app Deployment from Day-32 +- Create a Service definition for your todo-app Deployment in a YAML file. +- Apply the Service definition to your K8s (minikube) cluster using the `kubectl apply -f service.yml -n <namespace-name>` command. +- Verify that the Service is working by accessing the todo-app using the Service's IP and Port in your Namespace. + +## Task-2: + +- Create a ClusterIP Service for accessing the todo-app from within the cluster +- Create a ClusterIP Service definition for your todo-app Deployment in a YAML file. +- Apply the ClusterIP Service definition to your K8s (minikube) cluster using the `kubectl apply -f cluster-ip-service.yml -n <namespace-name>` command. +- Verify that the ClusterIP Service is working by accessing the todo-app from another Pod in the cluster in your Namespace. + +## Task-3: + +- Create a LoadBalancer Service for accessing the todo-app from outside the cluster +- Create a LoadBalancer Service definition for your todo-app Deployment in a YAML file. +- Apply the LoadBalancer Service definition to your K8s (minikube) cluster using the `kubectl apply -f load-balancer-service.yml -n <namespace-name>` command. +- Verify that the LoadBalancer Service is working by accessing the todo-app from outside the cluster in your Namespace. + +Struggling with Services? Take a look at this video for a step-by-step [guide](https://youtu.be/OJths_RojFA). + +Need help with Services in Kubernetes? Check out the Kubernetes [documentation](https://kubernetes.io/docs/concepts/services-networking/service/) for assistance. + +Happy Learning :) + +[← Previous Day](../day33/README.md) | [Next Day →](../day35/README.md) diff --git a/2024/day35/README.md b/2024/day35/README.md new file mode 100644 index 0000000000..160e0030b2 --- /dev/null +++ b/2024/day35/README.md @@ -0,0 +1,37 @@ +# Day 35: Mastering ConfigMaps and Secrets in Kubernetes🔒🔑🛡️ + +### 👏🎉 Yay!
Yesterday we conquered Namespaces and Services 💪💻🔗🚀 + +## What are ConfigMaps and Secrets in k8s + +In Kubernetes, ConfigMaps and Secrets are used to store configuration data and secrets, respectively. ConfigMaps store configuration data as key-value pairs, while Secrets hold sensitive data (stored base64-encoded by default, with optional encryption at rest). + +- _Example: Imagine you're in charge of a big spaceship (Kubernetes cluster) with lots of different parts (containers) that need information to function properly. +  ConfigMaps are like a file cabinet where you store all the information each part needs in simple, labeled folders (key-value pairs). +  Secrets, on the other hand, are like a safe where you keep the important, sensitive information that shouldn't be accessible to just anyone (protected data). +  So, using ConfigMaps and Secrets, you can ensure each part of your spaceship (Kubernetes cluster) has the information it needs to work properly and keep sensitive information secure! 🚀_ +- Read more about [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) & [Secret](https://kubernetes.io/docs/concepts/configuration/secret/). + +## Today's task: + +## Task 1: + +- Create a ConfigMap for your Deployment +- Create a ConfigMap for your Deployment using a file or the command line +- Update the deployment.yml file to include the ConfigMap +- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n <namespace-name>` +- Verify that the ConfigMap has been created by checking the status of the ConfigMaps in your Namespace. + +## Task 2: + +- Create a Secret for your Deployment +- Create a Secret for your Deployment using a file or the command line +- Update the deployment.yml file to include the Secret +- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n <namespace-name>` +- Verify that the Secret has been created by checking the status of the Secrets in your Namespace. + +Need help with ConfigMaps and Secrets? Check out this [video](https://youtu.be/FAnQTgr04mU) for assistance. + +Keep learning and expanding your knowledge of Kubernetes💥🙌 + +[← Previous Day](../day34/README.md) | [Next Day →](../day36/README.md) diff --git a/2024/day36/Deployment.yml b/2024/day36/Deployment.yml new file mode 100644 index 0000000000..3c9c1c7cbc --- /dev/null +++ b/2024/day36/Deployment.yml @@ -0,0 +1,26 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: +  name: todo-app-deployment +spec: +  replicas: 1 +  selector: +    matchLabels: +      app: todo-app +  template: +    metadata: +      labels: +        app: todo-app +    spec: +      containers: +      - name: todo-app +        image: rishikeshops/todo-app +        ports: +        - containerPort: 8000 +        volumeMounts: +        - name: todo-app-data +          mountPath: /app +      volumes: +      - name: todo-app-data +        persistentVolumeClaim: +          claimName: pvc-todo-app diff --git a/2024/day36/README.md b/2024/day36/README.md new file mode 100644 index 0000000000..2079e66d65 --- /dev/null +++ b/2024/day36/README.md @@ -0,0 +1,51 @@ +# Day 36 Task: Managing Persistent Volumes in Your Deployment 💥 + +🙌 Kudos to you for conquering ConfigMaps and Secrets in Kubernetes yesterday. + +🔥 You're on fire! 🔥 + +## What are Persistent Volumes in k8s + +In Kubernetes, a Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. A Persistent Volume Claim (PVC) is a request for storage by a user; once a PVC is bound to a matching PV, the claim can be mounted into your Pods. Read the official documentation on [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
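+ +Once you apply the templates from today's task, you can confirm that the claim actually bound to the volume (a quick sketch; the names match the sample pv.yml and pvc.yml below): + +```sh +kubectl apply -f pv.yml -f pvc.yml # create the volume and the claim +kubectl get pv,pvc                 # STATUS should show "Bound" for both +kubectl describe pvc pvc-todo-app  # inspect binding details and events +```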
+ +⏰ Wait, wait, wait! 📣 Attention all #90daysofDevOps Challengers. 💪 + +Before diving into today's task, don't forget to share your thoughts on the #90daysofDevOps challenge 💪 Fill out our feedback form (https://lnkd.in/gcgvrq8b) to help us improve and provide the best experience 🌟 Your participation and support is greatly appreciated 🙏 Let's continue to grow together 🌱 + +## Today's tasks: + +### Task 1: + +Add a Persistent Volume to your todo-app Deployment. + +- Create a Persistent Volume using a file on your node. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pv.yml) + +- Create a Persistent Volume Claim that references the Persistent Volume. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pvc.yml) + +- Update your deployment.yml file to include the Persistent Volume Claim. After applying pv.yml and pvc.yml, your deployment file should look like this: [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/Deployment.yml) + +- Apply the updated deployment using the command: `kubectl apply -f deployment.yml` + +- Verify that the Persistent Volume has been added to your Deployment by checking the status of the Pods and Persistent Volumes in your cluster. Use these commands: `kubectl get pods`, + +`kubectl get pv` + +⚠️ Don't forget: To apply changes or create files in your Kubernetes deployments, each file must be applied separately. ⚠️ + +### Task 2: + +Access the data in the Persistent Volume: + +- Connect to a Pod in your Deployment using the command: `kubectl exec -it <pod-name> -- /bin/bash` + +- Verify that you can access the data stored in the Persistent Volume from within the Pod + +Need help with Persistent Volumes? Check out this [video](https://youtu.be/U0_N3v7vJys) for assistance. + +Keep up the excellent work🙌💥 + +Happy Learning :) + +[← Previous Day](../day35/README.md) | [Next Day →](../day37/README.md) diff --git a/2024/day36/pv.yml b/2024/day36/pv.yml new file mode 100644 index 0000000000..9546aba56a --- /dev/null +++ b/2024/day36/pv.yml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: PersistentVolume +metadata: +  name: pv-todo-app +spec: +  capacity: +    storage: 1Gi +  accessModes: +    - ReadWriteOnce +  persistentVolumeReclaimPolicy: Retain +  hostPath: +    path: "/tmp/data" diff --git a/2024/day36/pvc.yml b/2024/day36/pvc.yml new file mode 100644 index 0000000000..3d9dce14d8 --- /dev/null +++ b/2024/day36/pvc.yml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: +  name: pvc-todo-app +spec: +  accessModes: +    - ReadWriteOnce +  resources: +    requests: +      storage: 500Mi diff --git a/2024/day37/README.md b/2024/day37/README.md new file mode 100644 index 0000000000..1300e335ae --- /dev/null +++ b/2024/day37/README.md @@ -0,0 +1,43 @@ +## Day 37 Task: Kubernetes Important Interview Questions + +## Questions + +1. What is Kubernetes, and why is it important? + +2. What is the difference between Docker Swarm and Kubernetes? + +3. How does Kubernetes handle network communication between containers? + +4. How does Kubernetes handle scaling of applications? + +5. What is a Kubernetes Deployment, and how does it differ from a ReplicaSet? + +6. Can you explain the concept of rolling updates in Kubernetes? + +7. How does Kubernetes handle network security and access control? + +8. Can you give an example of how Kubernetes can be used to deploy a highly available application? + +9. What is a namespace in Kubernetes?
+
+10. How does Ingress help in Kubernetes?
+
+11. Explain the different types of Services in Kubernetes.
+
+12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
+
+13. How does Kubernetes handle storage management for containers?
+
+14. How does the NodePort service work?
+
+15. What are multi-node and single-node clusters in Kubernetes?
+
+16. What is the difference between create and apply in Kubernetes?
+
+## These questions will help you in your next DevOps Interview.
+
+_Write a Blog and share it on LinkedIn._
+
+**_Happy Learning :)_**
+
+[← Previous Day](../day36/README.md) | [Next Day →](../day38/README.md)
diff --git a/2024/day38/README.md b/2024/day38/README.md
new file mode 100644
index 0000000000..8f51187e87
--- /dev/null
+++ b/2024/day38/README.md
@@ -0,0 +1,30 @@
+# Day 38 Getting Started with AWS Basics☁
+
+![AWS](https://user-images.githubusercontent.com/115981550/217238286-6c6bc6e7-a1ac-4d12-98f3-f95ff5bf53fc.png)
+
+Congratulations!!! You have come so far. Don't let excuses break your consistency. Let's begin our new journey with the Cloud ☁. By this time you have created multiple EC2 instances; if not, let's begin the journey:
+
+## AWS:
+
+Amazon Web Services is one of the most popular cloud providers. It also offers a free tier so that students and cloud enthusiasts can get hands-on practice while learning (create your free account today to explore it further).
+
+Read from [here](https://aws.amazon.com/what-is-aws/)
+
+## IAM:
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)
+
+Get to know IAM more deeply [Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)
+
+### Task1:
+
+Create an IAM user with a username of your choice and grant it EC2 access. Launch a Linux instance as that IAM user, and install Jenkins and Docker on the machine via a single shell script.
+
+### Task2:
+
+In this task you need to prepare a DevOps team of Avengers. Create 3 IAM users for the Avengers and assign them to a DevOps group with an IAM policy.
+
+Post your progress on LinkedIn. Till then, Happy Learning :)
+
+[← Previous Day](../day37/README.md) | [Next Day →](../day39/README.md)
diff --git a/2024/day39/README.md b/2024/day39/README.md
new file mode 100644
index 0000000000..9a7e3e934f
--- /dev/null
+++ b/2024/day39/README.md
@@ -0,0 +1,41 @@
+# Day 39 AWS and IAM Basics☁
+
+![AWS](https://miro.medium.com/max/1400/0*dIzXLQn6aBClm1TJ.png)
+
+By this time you have created multiple EC2 instances and, after launch, manually installed applications like Jenkins, Docker, etc.
+Now let's switch to a little automation. Sounds interesting? 🤯
+
+## AWS:
+
+Amazon Web Services is one of the most popular cloud providers. It also offers a free tier so that students and cloud enthusiasts can get hands-on practice while learning (create your free account today to explore it further).
+
+Read from [here](https://aws.amazon.com/what-is-aws/)
+
+## User Data in AWS:
+
+- When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
+- You can also pass this data into the launch instance wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
+- This will save time and manual effort every time you launch an instance and want to install applications on it like Apache, Docker, Jenkins, etc.
+
+Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html)
+
+## IAM:
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)
+
+Get to know IAM more deeply🏊[Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)
+
+### Task1:
+
+- Launch an EC2 instance with Jenkins installed via user data. Once the server shows up in the console, hit the IP address in a browser and your Jenkins page should be visible.
+- Take screenshots of the user data and the Jenkins page; this will verify the task completion.
+
+### Task2:
+
+- Read more on IAM Roles and explain IAM Users, Groups and Roles in your own terms.
+- Create three Roles named: DevOps-User, Test-User and Admin.
+
+Post your progress on LinkedIn. Till then, Happy Learning :)
+
+[← Previous Day](../day38/README.md) | [Next Day →](../day40/README.md)
diff --git a/2024/day40/README.md b/2024/day40/README.md
new file mode 100644
index 0000000000..ce2dbcfda3
--- /dev/null
+++ b/2024/day40/README.md
@@ -0,0 +1,49 @@
+# Day 40 AWS EC2 Automation ☁
+
+![AWS](https://www.eginnovations.com/blog/wp-content/uploads/2021/09/Amazon-AWS-Cloud-Topimage-1.jpg)
+
+I hope your journey with AWS cloud and automation is going well 😍
+
+## Automation in EC2:
+
+Amazon EC2 or Amazon Elastic Compute Cloud can give you secure, reliable, high-performance, and cost-effective computing infrastructure to meet demanding business needs.
+
+Also, if you know a few things, you can automate many things.
+
+Read from [here](https://aws.amazon.com/ec2/)
+
+## Launch template in AWS EC2:
+
+- You can make a launch template with the configuration information you need to start an instance. You can save launch parameters in launch templates so you don't have to type them in every time you start a new instance.
+- For example, a launch template can have the AMI ID, instance type, and network settings that you usually use to launch instances.
+- You can tell the Amazon EC2 console to use a certain launch template when you start an instance.
+
+Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)
+
+## Instance Types:
+
+Amazon EC2 has a large number of instance types that are optimised for different uses. The different combinations of CPU, memory, storage and networking capacity in instance types give you the freedom to choose the right mix of resources for your apps. Each instance type comes with one or more instance sizes, so you can adjust your resources to meet the needs of the workload you want to run.
+
+Read from [here](https://aws.amazon.com/ec2/instance-types/?trk=32f4fbd0-ffda-4695-a60c-8857fab7d0dd&sc_channel=ps&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types&ef_id=CjwKCAiA0JKfBhBIEiwAPhZXD_O1-3qZkRa-KScynbwjvHd3l4UHSTfKuigd5ZPukXoDXu-v3MtC7hoCafEQAvD_BwE:G:s&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types)
+
+## AMI:
+
+An Amazon Machine Image (AMI) is an image that AWS supports and keeps up to date. It contains the information needed to start an instance. When you launch an instance, you must choose an AMI. When you need multiple instances with the same configuration, you can launch them from a single AMI.
+
+### Task1:
+
+- Create a launch template with the Amazon Linux 2 AMI and the t2.micro instance type, with Jenkins and Docker set up (you can use the Day 39 user data script for installing the required tools).
+
+- Create 3 instances using the launch template; there must be an option that sets the number of instances to be launched, can you find it? :)
+
+- You can go one step ahead and create an auto-scaling group, sounds tough?
+
+Check [this](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#create-launch-template-for-auto-scaling) out
+
+Post your progress on LinkedIn.
+
+Happy Learning :)
+
+[← Previous Day](../day39/README.md) | [Next Day →](../day41/README.md)
diff --git a/2024/day41/README.md b/2024/day41/README.md
new file mode 100644
index 0000000000..0a1488f068
--- /dev/null
+++ b/2024/day41/README.md
@@ -0,0 +1,53 @@
+# Day 41: Setting up an Application Load Balancer with AWS EC2 🚀 ☁
+
+![LB2](https://user-images.githubusercontent.com/115981550/218145297-d55fe812-32b7-4242-a4f8-eb66312caa2c.png)
+
+### Hi, I hope you had a great day yesterday learning about the launch template and instances in EC2. Today, we are going to dive into one of the most important concepts in EC2: Load Balancing.
+
+## What is Load Balancing?
+
+Load balancing is the distribution of workloads across multiple servers to ensure consistent and optimal resource utilization. It is an essential aspect of any large-scale and scalable computing system, as it helps you to improve the reliability and performance of your applications.
+
+## Elastic Load Balancing:
+
+**Elastic Load Balancing (ELB)** is a service provided by Amazon Web Services (AWS) that automatically distributes incoming traffic across multiple EC2 instances. ELB provides three types of load balancers:
+
+Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)
+
+1. **Application Load Balancer (ALB)** - _operates at layer 7 of the OSI model and is ideal for applications that require advanced routing and microservices._
+
+- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
+
+2. **Network Load Balancer (NLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require high throughput and low latency._
+
+- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)
+
+3. **Classic Load Balancer (CLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require basic load balancing features._
+
+- Read more [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
+
+## 🎯 Today's Tasks:
+
+### Task 1:
+
+- Launch 2 EC2 instances with an Ubuntu AMI and use user data to install the Apache web server.
+- Modify the index.html file to include your name so that when your Apache server is hosted, it displays your name. Do the same for the 2nd instance, but have it include " TrainWithShubham Community is Super Awesome :) ".
+- Copy the public IP address of your EC2 instances.
+- Open a web browser and paste the public IP address into the address bar.
+- You should see a webpage displaying the content you configured.
+
+### Task 2:
+
+- Create an Application Load Balancer (ALB) in EC2 using the AWS Management Console.
+- Add the EC2 instances you launched in Task 1 to the ALB as a target group.
+- Verify that the ALB is working properly by checking the health status of the target instances and testing the load balancing capabilities.
+
+![LoadBalancer](https://user-images.githubusercontent.com/115981550/218143557-26ec33ce-99a7-4db6-a46f-1cf48ed77ae0.png)
+
+Need help with the task? Check out this [Blog for assistance](https://rushikesh-mashidkar.hashnode.dev/create-an-application-load-balancer-elastic-load-balancing-using-aws-ec2-instance).
+
+Don't forget to share your progress on LinkedIn and have a great day🙌💥
+
+Happy Learning! 😃
+
+[← Previous Day](../day40/README.md) | [Next Day →](../day42/README.md)
diff --git a/2024/day42/README.md b/2024/day42/README.md
new file mode 100644
index 0000000000..5f8a37ff09
--- /dev/null
+++ b/2024/day42/README.md
@@ -0,0 +1,28 @@
+# Day 42: IAM Programmatic access and AWS CLI 🚀 ☁
+
+Today is more of a reading exercise and about getting some programmatic access for your AWS account.
+
+## IAM Programmatic access
+
+In order to access your AWS account from a terminal or system, you can use AWS Access Keys and AWS Secret Access Keys.
+Watch [this video](https://youtu.be/XYKqL5GFI-I) for more details.
+
+## AWS CLI
+
+The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+
+The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS IAM Identity Center (successor to AWS SSO), and various interactive features.
+
+## Task-01
+
+- Create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the AWS Console.
+
+## Task-02
+
+- Set up and install the AWS CLI and configure your account credentials.
+
+Let me know if you have any issues while doing the task.
+
+Happy Learning :)
+
+[← Previous Day](../day41/README.md) | [Next Day →](../day43/README.md)
diff --git a/2024/day43/README.md b/2024/day43/README.md
new file mode 100644
index 0000000000..b838d01544
--- /dev/null
+++ b/2024/day43/README.md
@@ -0,0 +1,32 @@
+# Day 43: S3 Programmatic access with AWS-CLI 💻 📁
+
+Hi, I hope you had a great day yesterday. Today, as part of the #90DaysofDevOps Challenge, we will be exploring the most commonly used service in AWS, i.e. S3.
+ +![s3](https://user-images.githubusercontent.com/115981550/218308379-a2e841cf-6b77-4d02-bfbe-20d1bae09b20.png) + +# S3 + +Amazon Simple Storage Service (Amazon S3) is an object storage service that provides a secure and scalable way to store and access data on the cloud. It is designed for storing any kind of data, such as text files, images, videos, backups, and more. +Read more [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) + +## Task-01 + +- Launch an EC2 instance using the AWS Management Console and connect to it using Secure Shell (SSH). +- Create an S3 bucket and upload a file to it using the AWS Management Console. +- Access the file from the EC2 instance using the AWS Command Line Interface (AWS CLI). + +Read more about S3 using aws-cli [here](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) + +## Task-02 + +- Create a snapshot of the EC2 instance and use it to launch a new EC2 instance. +- Download a file from the S3 bucket using the AWS CLI. +- Verify that the contents of the file are the same on both EC2 instances. + +Added Some Useful commands to complete the task. [Click here for commands](https://github.com/LondheShubham153/90DaysOfDevOps/blob/833a67ac4ec17b992934cd6878875dccc4274f56/2023/day43/aws-cli.md) + +Let me know if you have any questions or face any issues while doing the tasks.🚀 + +Happy Learning :) + +[← Previous Day](../day42/README.md) | [Next Day →](../day44/README.md) diff --git a/2024/day43/aws-cli.md b/2024/day43/aws-cli.md new file mode 100644 index 0000000000..8c0f23fe2f --- /dev/null +++ b/2024/day43/aws-cli.md @@ -0,0 +1,21 @@ +Here are some commonly used AWS CLI commands for Amazon S3: + +`aws s3 ls` - This command lists all of the S3 buckets in your AWS account. + +`aws s3 mb s3://bucket-name` - This command creates a new S3 bucket with the specified name. + +`aws s3 rb s3://bucket-name` - This command deletes the specified S3 bucket. + +`aws s3 cp file.txt s3://bucket-name` - This command uploads a file to an S3 bucket. + +`aws s3 cp s3://bucket-name/file.txt .` - This command downloads a file from an S3 bucket to your local file system. + +`aws s3 sync local-folder s3://bucket-name` - This command syncs the contents of a local folder with an S3 bucket. + +`aws s3 ls s3://bucket-name` - This command lists the objects in an S3 bucket. + +`aws s3 rm s3://bucket-name/file.txt` - This command deletes an object from an S3 bucket. + +`aws s3 presign s3://bucket-name/file.txt` - This command generates a pre-signed URL for an S3 object, which can be used to grant temporary access to the object. + +`aws s3api list-buckets` - This command retrieves a list of all S3 buckets in your AWS account, using the S3 API. diff --git a/2024/day44/README.md b/2024/day44/README.md new file mode 100644 index 0000000000..c836c86b29 --- /dev/null +++ b/2024/day44/README.md @@ -0,0 +1,23 @@ +# Day 44: Relational Database Service in AWS + +Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud + +## Task-01 + +- Create a Free tier RDS instance of MySQL +- Create an EC2 instance +- Create an IAM role with RDS access +- Assign the role to EC2 so that your EC2 Instance can connect with RDS +- Once the RDS instance is up and running, get the credentials and connect your EC2 instance using a MySQL client. + +Hint: + +You should install mysql client on EC2, and connect the Host and Port of RDS with this client. 
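+
+For example, on an Ubuntu EC2 instance the connection could be sketched like this (a rough sketch; the endpoint, user, and port are placeholders for the values shown on your own RDS console):
+
+```
+# Install the MySQL client (Ubuntu)
+sudo apt update && sudo apt install -y mysql-client
+
+# Connect using the endpoint and port from the RDS console
+mysql -h <your-rds-endpoint>.rds.amazonaws.com -P 3306 -u admin -p
+```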
+
+Post the screenshots once your EC2 instance can connect to the MySQL server; that will be a small win for you.
+
+Watch [this video](https://youtu.be/MrA6Rk1Y82E) for reference.
+
+Happy Learning
+
+[← Previous Day](../day43/README.md) | [Next Day →](../day45/README.md)
diff --git a/2024/day45/README.md b/2024/day45/README.md
new file mode 100644
index 0000000000..c2c11a93b2
--- /dev/null
+++ b/2024/day45/README.md
@@ -0,0 +1,18 @@
+# Day 45: Deploy a WordPress website on AWS
+
+Over 30% of all websites on the internet use WordPress as their content management system (CMS). It is most often used to run blogs, but it can also be used to run e-commerce sites, message boards, and many other popular things. This guide will show you how to set up a WordPress blog site.
+
+## Task-01
+
+- As WordPress requires a MySQL database to store its data, create an RDS instance as you did in Day 44
+
+To configure this WordPress site, you will create the following resources in AWS:
+
+- An Amazon EC2 instance to install and host the WordPress application.
+- An Amazon RDS for MySQL database to store your WordPress data.
+- Set up the server and post your new WordPress app.
+
+Read [this](https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/) for a detailed explanation.
+Happy Learning :)
+
+[← Previous Day](../day44/README.md) | [Next Day →](../day46/README.md)
diff --git a/2024/day46/README.md b/2024/day46/README.md
new file mode 100644
index 0000000000..a44ae2f101
--- /dev/null
+++ b/2024/day46/README.md
@@ -0,0 +1,35 @@
+# Day-46: Set up CloudWatch alarms and SNS topic in AWS
+
+Hey learners, you have been using AWS services for at least the last 45 days. Have you ever wondered what happens if a service keeps charging you continuously and you don't know about it until you lose all your pocket money?
+
+Hahahaha😁, well! We, as a responsible community, always try to keep things within the free tier, but it's good to know about and set up something that will inform you whenever your bill touches a threshold.
+
+## What is Amazon CloudWatch?
+
+Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.
+
+Read more about CloudWatch from the official documentation [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)
+
+## What is Amazon SNS?
+
+Amazon Simple Notification Service is a notification service provided as part of Amazon Web Services since 2010. It provides a low-cost infrastructure for mass delivery of messages, predominantly to mobile users.
+
+Read more about it [here](https://docs.aws.amazon.com/sns/latest/dg/welcome.html)
+
+## Task :
+
+- Create a CloudWatch alarm that monitors your billing and sends you an email when it reaches $2.
+
+(You can keep it for future use)
+
+- Delete the billing alarm that you just created.
+
+(Now you also know how to delete one as well.)
+
+Need help with CloudWatch? Check out this [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) for assistance.
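+
+If you prefer the CLI over the console, the same alarm can be sketched roughly like this (assumptions: billing alerts are enabled in your account's billing preferences, billing metrics only exist in us-east-1, and the account ID and email below are placeholders):
+
+```
+# An SNS topic plus an email subscription to receive the alarm
+aws sns create-topic --name billing-alerts --region us-east-1
+aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:billing-alerts \
+  --protocol email --notification-endpoint you@example.com --region us-east-1
+
+# Alarm when estimated charges reach $2
+aws cloudwatch put-metric-alarm --alarm-name billing-2usd \
+  --namespace AWS/Billing --metric-name EstimatedCharges \
+  --dimensions Name=Currency,Value=USD \
+  --statistic Maximum --period 21600 --evaluation-periods 1 \
+  --threshold 2 --comparison-operator GreaterThanOrEqualToThreshold \
+  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts \
+  --region us-east-1
+
+# And to delete it afterwards, as the task asks
+aws cloudwatch delete-alarms --alarm-names billing-2usd --region us-east-1
+```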
+
+Keep growing your AWS knowledge💥🙌
+
+Happy Learning! :)
+
+[← Previous Day](../day46/README.md) | [Next Day →](../day48/README.md)
diff --git a/2024/day47/README.md b/2024/day47/README.md
new file mode 100644
index 0000000000..7d3dc37e37
--- /dev/null
+++ b/2024/day47/README.md
@@ -0,0 +1,64 @@
+# Day 47: AWS Elastic Beanstalk
+Today, we explore a new AWS service: Elastic Beanstalk. We'll also cover deploying a small web application (a game) on this platform.
+
+## What is AWS Elastic Beanstalk?
+![image](https://github.com/Simbaa815/90DaysOfDevOps/assets/112085387/75f69087-d769-4586-b4a7-99a87feaec92)
+
+- AWS Elastic Beanstalk is a service used to deploy and scale web applications.
+- It supports multiple programming languages and runtime environments such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.
+
+## Why do we need AWS Elastic Beanstalk?
+- Previously, developers faced challenges in sharing software modules across geographically separated teams.
+- AWS Elastic Beanstalk solves this problem by providing a service to easily share applications across different devices.
+
+## Advantages of AWS Elastic Beanstalk
+- Highly scalable
+- Fast and simple to get started
+- Quick deployment
+- Supports multi-tenant architecture
+- Simplifies operations
+- Cost efficient
+
+## Components of AWS Elastic Beanstalk
+- Application Version: Represents a specific iteration or release of an application's codebase.
+- Environment Tier: Defines the infrastructure resources allocated for an environment (e.g., web server environment, worker environment).
+- Environment: Represents a collection of AWS resources running an application version.
+- Configuration Template: Defines the settings for an environment, including instance types, scaling options, and more.
+
+## Elastic Beanstalk Environment
+There are two types of environments: web server and worker.
+
+- Web server environments are front-end facing, accessed directly by clients using a URL.
+
+- Worker environments support backend applications or micro apps.
+
+## Task-01
+Deploy the [2048-game](https://github.com/Simbaa815/2048-game) using AWS Elastic Beanstalk.
+
+If you ever find yourself facing a challenge, feel free to refer to this helpful [blog](https://devxblog.hashnode.dev/aws-elastic-beanstalk-deploying-the-2048-game) post for guidance and support.
+
+---
+
+# Additional work
+
+## Test your knowledge of AWS 💻 📈
+Today, we will test your knowledge of AWS services, as part of the 90 Days of DevOps Challenge.
+
+## Task-01
+
+- Launch an EC2 instance using the AWS Management Console and connect to it using SSH.
+- Install a web server on the EC2 instance and deploy a simple web application.
+- Monitor the EC2 instance using Amazon CloudWatch and troubleshoot any issues that arise.
+
+## Task-02
+- Create an Auto Scaling group using the AWS Management Console and configure it to launch EC2 instances in response to changes in demand.
+- Use Amazon CloudWatch to monitor the performance of the Auto Scaling group and the EC2 instances and troubleshoot any issues that arise.
+- Use the AWS CLI to view the state of the Auto Scaling group and the EC2 instances and verify that the correct number of instances are running. (See the sketch after this list.)
+
+We hope that these tasks will give you hands-on experience with AWS services and help you understand how these services work together. If you have any questions or face any issues while doing the tasks, please let us know.
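+
+For the AWS CLI part of Task-02, read-only commands like these are a reasonable starting point (a sketch; `my-asg` is a placeholder for your Auto Scaling group's name):
+
+```
+# Describe the Auto Scaling group and its recent scaling activity
+aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg
+aws autoscaling describe-scaling-activities --auto-scaling-group-name my-asg
+
+# List the state of the instances the group launched
+aws ec2 describe-instances \
+  --filters "Name=tag:aws:autoscaling:groupName,Values=my-asg" \
+  --query "Reservations[].Instances[].[InstanceId,State.Name]" --output table
+```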
+
+Happy Learning :)
+
+[← Previous Day](../day46/README.md) | [Next Day →](../day48/README.md)
diff --git a/2024/day48/README.md b/2024/day48/README.md
new file mode 100644
index 0000000000..01836eac4e
--- /dev/null
+++ b/2024/day48/README.md
@@ -0,0 +1,40 @@
+# Day-48 - ECS
+
+Today will be a great learning day for sure. I know many of you may not know about the term "ECS". As you know, the 90 Days Of DevOps Challenge is mostly about 'learning new things', so let's learn then ;)
+
+## What is ECS?
+
+- ECS (Elastic Container Service) is a fully-managed container orchestration service provided by Amazon Web Services (AWS). It allows you to run and manage Docker containers on a cluster of virtual machines (EC2 instances) without having to manage the underlying infrastructure.
+
+With ECS, you can easily deploy, manage, and scale your containerized applications using the AWS Management Console, the AWS CLI, or the API. ECS supports both "Fargate" and "EC2 launch types", which means you can run your containers on AWS-managed infrastructure or your own EC2 instances.
+
+ECS also integrates with other AWS services, such as Elastic Load Balancing, Auto Scaling, and Amazon VPC, allowing you to build scalable and highly available applications. Additionally, ECS has support for Docker Compose workflows, making it easy to adopt existing container workflows.
+
+Overall, ECS is a powerful and flexible container orchestration service that can help simplify the deployment and management of containerized applications in AWS.
+
+## Difference between EKS and ECS?
+
+- EKS (Elastic Kubernetes Service) and ECS (Elastic Container Service) are both container orchestration platforms provided by Amazon Web Services (AWS). While both platforms allow you to run containerized applications in the AWS cloud, there are some differences between the two.
+
+**Architecture**:
+ECS is based on a centralized architecture, where there is a control plane that manages the scheduling of containers on EC2 instances. On the other hand, EKS is based on a distributed architecture, where the Kubernetes control plane is distributed across multiple EC2 instances.
+
+**Kubernetes Support**:
+EKS is a fully managed Kubernetes service, meaning that it supports Kubernetes natively and allows you to run your Kubernetes workloads on AWS without having to manage the Kubernetes control plane. ECS, on the other hand, has its own orchestration engine and does not support Kubernetes natively.
+
+**Scaling**:
+EKS is designed to automatically scale your Kubernetes cluster based on demand, whereas ECS requires you to configure scaling policies for your tasks and services.
+
+**Flexibility**:
+EKS provides more flexibility than ECS in terms of container orchestration, as it allows you to customize and configure Kubernetes to meet your specific requirements. ECS is more restrictive in terms of the options available for container orchestration.
+
+**Community**:
+Kubernetes has a large and active open-source community, which means that EKS benefits from a wide range of community-driven development and support. ECS, on the other hand, has a smaller community and is largely driven by AWS itself.
+
+In summary, EKS is a good choice if you want to use Kubernetes to manage your containerized workloads on AWS, while ECS is a good choice if you want a simpler, more managed platform for running your containerized applications.
+
+# Task :
+
+Set up ECS (Elastic Container Service) by setting up Nginx on ECS.
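+
+At a high level, the task can be sketched with the AWS CLI like this (a sketch only; it assumes the EC2 launch type with container instances already registered to the cluster, and all names are placeholders):
+
+```
+# Create a cluster
+aws ecs create-cluster --cluster-name demo-cluster
+
+# Register a minimal nginx task definition
+aws ecs register-task-definition --family nginx-task \
+  --container-definitions '[{"name":"nginx","image":"nginx:latest","memory":256,"essential":true,"portMappings":[{"containerPort":80,"hostPort":80}]}]'
+
+# Run it as a long-lived service
+aws ecs create-service --cluster demo-cluster --service-name nginx-service \
+  --task-definition nginx-task --desired-count 1
+```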
+
+[← Previous Day](../day47/README.md) | [Next Day →](../day49/README.md)
diff --git a/2024/day49/README.md b/2024/day49/README.md
new file mode 100644
index 0000000000..ecc603177a
--- /dev/null
+++ b/2024/day49/README.md
@@ -0,0 +1,25 @@
+# Day 49 - INTERVIEW QUESTIONS ON AWS
+
+Hey people, we have listened to your suggestions and we are looking forward to getting more!
+As you have asked us to include more interview-based questions as part of the Daily Task, here it is :)
+
+## INTERVIEW QUESTIONS:
+
+- Name 5 AWS services you have used and what are their use cases?
+- What are the tools used to send logs to the cloud environment?
+- What are IAM Roles? How do you create/manage them?
+- How do you upgrade or downgrade a system with zero downtime?
+- What is infrastructure as code and how do you use it?
+- What is a load balancer? Give scenarios for each kind of balancer based on your experience.
+- What is CloudFormation and what is it used for?
+- What is the difference between AWS CloudFormation and AWS Elastic Beanstalk?
+- What are the kinds of security attacks that can occur on the cloud? And how can we minimize them?
+- Can we recover an EC2 instance when we have lost the key?
+- What is a gateway?
+- What is the difference between Amazon RDS, DynamoDB, and Redshift?
+- Do you prefer to host a website on S3? What's the reason if your answer is either yes or no?
+
+Share your answers on LinkedIn in the best possible way, as if you were at an interview table.
+Happy Learning !! :)
+
+[← Previous Day](../day48/README.md) | [Next Day →](../day50/README.md)
diff --git a/2024/day50/README.md b/2024/day50/README.md
new file mode 100644
index 0000000000..0340a36b09
--- /dev/null
+++ b/2024/day50/README.md
@@ -0,0 +1,30 @@
+# Day 50: Your CI/CD pipeline on AWS - Part-1 🚀 ☁
+
+What if I told you that in the next 4 days, you'll be making a CI/CD pipeline on AWS with these tools?
+
+- CodeCommit
+- CodeBuild
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeCommit?
+
+- CodeCommit is a managed source control service by AWS that allows users to store, manage, and version their source code and artifacts securely and at scale. It supports Git, integrates with other AWS services, enables collaboration through branch and merge workflows, and provides audit logs and compliance reports to meet regulatory requirements and track changes. Overall, CodeCommit provides developers with a reliable and efficient way to manage their codebase and set up a CI/CD pipeline for their software development projects.
+
+# Task-01 :
+
+- Set up a code repository on CodeCommit and clone it to your local machine.
+- You need to set up Git credentials in AWS IAM.
+- Use those credentials on your local machine and then clone the repository from CodeCommit.
+
+# Task-02 :
+
+- Add a new file locally and commit it to your local branch.
+- Push the local changes to the CodeCommit repository.
+
+For more details watch [this](https://youtu.be/p5i3cMCQ760) video.
+
+Happy Learning :)
+
+[← Previous Day](../day49/README.md) | [Next Day →](../day51/README.md)
diff --git a/2024/day51/README.md b/2024/day51/README.md
new file mode 100644
index 0000000000..01f0b70262
--- /dev/null
+++ b/2024/day51/README.md
@@ -0,0 +1,30 @@
+# Day 51: Your CI/CD pipeline on AWS - Part 2 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit.
+
+Over the next few days you'll learn these tools/services:
+
+- CodeBuild
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeBuild?
+
+- AWS CodeBuild is a fully managed build service in the cloud.
CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. + +# Task-01 : + +- Read about Buildspec file for Codebuild. +- create a simple index.html file in CodeCommit Repository +- you have to build the index.html using nginx server + +# Task-02 : + +- Add buildspec.yaml file to CodeCommit Repository and complete the build process. + +For more details watch [this](https://youtu.be/p5i3cMCQ760) video. + +Happy Learning :) + +[← Previous Day](../day50/README.md) | [Next Day →](../day52/README.md) diff --git a/2024/day52/README.md b/2024/day52/README.md new file mode 100644 index 0000000000..52dffd62ae --- /dev/null +++ b/2024/day52/README.md @@ -0,0 +1,31 @@ +# Day 52: Your CI/CD pipeline on AWS - Part 3 🚀 ☁ + +On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit & CodeBuild. + +Next few days you'll learn these tools/services: + +- CodeDeploy +- CodePipeline +- S3 + +## What is CodeDeploy ? + +- AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. + +CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy. + +# Task-01 : + +- Read about Appspec.yaml file for CodeDeploy. +- Deploy index.html file on EC2 machine using nginx +- you have to setup a CodeDeploy agent in order to deploy code on EC2 + +# Task-02 : + +- Add appspec.yaml file to CodeCommit Repository and complete the deployment process. + +For more details watch [this](https://youtu.be/IUF-pfbYGvg) video. + +Happy Learning :) + +[← Previous Day](../day51/README.md) | [Next Day →](../day53/README.md) diff --git a/2024/day53/README.md b/2024/day53/README.md new file mode 100644 index 0000000000..2139f0cb5d --- /dev/null +++ b/2024/day53/README.md @@ -0,0 +1,21 @@ +# Day 53: Your CI/CD pipeline on AWS - Part 4 🚀 ☁ + +On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit, CodeBuild & CodeDeploy. + +Finish Off in style with AWS CodePipeline + +## What is CodePipeline ? + +- CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. + Think of it as a CI/CD Pipeline service + +# Task-01 : + +- Create a Deployment group of Ec2 Instance. +- Create a CodePipeline that gets the code from CodeCommit, Builds the code using CodeBuild and deploys it to a Deployment Group. + +For more details watch [this](https://youtu.be/IUF-pfbYGvg) video. + +Happy Learning :) + +[← Previous Day](../day52/README.md) | [Next Day →](../day54/README.md) diff --git a/2024/day54/README.md b/2024/day54/README.md new file mode 100644 index 0000000000..f134a32bf1 --- /dev/null +++ b/2024/day54/README.md @@ -0,0 +1,19 @@ +# Day 54: Understanding Infrastructure as Code and Configuration Management + +## What's the difference bhaiyya? + +When it comes to the cloud, Infrastructure as Code (IaC) and Configuration Management (CM) are inseparable. With IaC, a descriptive model is used for infrastructure management. To name a few examples of infrastructure: networks, virtual computers, and load balancers. 
Using an IaC model always produces the same environment.
+
+Throughout the lifecycle of a product, Configuration Management (CM) ensures that the performance, functional and physical inputs, requirements, design, and operations of that product remain consistent.
+
+# Task-01
+
+- Read more about IaC and Configuration Management tools
+- Explain the differences between the two, with suitable examples
+- What are the most common IaC and Configuration Management tools?
+
+Write a blog on this topic in the most creative way and post it on LinkedIn :)
+
+happy learning...
+
+[← Previous Day](../day53/README.md) | [Next Day →](../day55/README.md)
diff --git a/2024/day55/README.md b/2024/day55/README.md
new file mode 100644
index 0000000000..5df87b107a
--- /dev/null
+++ b/2024/day55/README.md
@@ -0,0 +1,28 @@
+# Day 55: Understanding Configuration Management with Ansible
+
+## What's this Ansible?
+
+Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning.
+
+# Task-01
+
+- Installation of Ansible on AWS EC2 (Master Node)
+  `sudo apt-add-repository ppa:ansible/ansible`
+  `sudo apt update`
+  `sudo apt install ansible`
+
+# Task-02
+
+- Read more about the hosts (inventory) file
+  `sudo nano /etc/ansible/hosts`
+  `ansible-inventory --list -y`
+
+# Task-03
+
+- Set up 2 more EC2 instances with the same private key as the previous instance (Nodes)
+- Copy the private key to the master server where Ansible is set up
+- Try a ping command using Ansible to the Nodes.
+
+Write a blog on this topic with screenshots in the most creative way and post it on LinkedIn :)
+
+happy learning...
+
+[← Previous Day](../day54/README.md) | [Next Day →](../day56/README.md)
diff --git a/2024/day56/README.md b/2024/day56/README.md
new file mode 100644
index 0000000000..853372bae2
--- /dev/null
+++ b/2024/day56/README.md
@@ -0,0 +1,18 @@
+# Day 56: Understanding Ad-hoc commands in Ansible
+
+Ansible ad hoc commands are one-liners designed to achieve a very specific task; they are like quick snippets and your compact Swiss army knife when you want to do a quick task across multiple machines.
+
+To put it simply, Ansible ad hoc commands are one-liner Linux shell commands, while playbooks are like a shell script: a collection of many commands with logic.
+
+Ansible ad hoc commands come in handy when you want to perform a quick task.
+
+# Task-01
+
+- Write an Ansible ad hoc ping command to ping 3 servers from the inventory file
+- Write an Ansible ad hoc command to check uptime
+
+- You can refer to [this](https://www.middlewareinventory.com/blog/ansible-ad-hoc-commands/) blog to understand the different examples of ad-hoc commands; try them out, and post the screenshots in a blog with an explanation.
+
+happy Learning :)
+
+[← Previous Day](../day55/README.md) | [Next Day →](../day57/README.md)
diff --git a/2024/day57/README.md b/2024/day57/README.md
new file mode 100644
index 0000000000..4866eecf58
--- /dev/null
+++ b/2024/day57/README.md
@@ -0,0 +1,13 @@
+# Day 57: Ansible Hands-on with video
+
+Ansible is fun; you saw in the last few days how easy it is.
+
+Now let's make it even more fun by using a video explanation for Ansible.
+ +# Task-01 + +- Write a Blog explanation for the [ansible video](https://youtu.be/SGB7EdiP39E) + +happy Learning :) + +[← Previous Day](../day56/README.md) | [Next Day →](../day58/README.md) diff --git a/2024/day58/README.md b/2024/day58/README.md new file mode 100644 index 0000000000..f8facae4b7 --- /dev/null +++ b/2024/day58/README.md @@ -0,0 +1,23 @@ +# Day 58: Ansible Playbooks + +Ansible playbooks run multiple tasks, assign roles, and define configurations, deployment steps, and variables. If you’re using multiple servers, Ansible playbooks organize the steps between the assembled machines or servers and get them organized and running in the way the users need them to. Consider playbooks as the equivalent of instruction manuals. + +# Task-01 + +- Write an ansible playbook to create a file on a different server + +- Write an ansible playbook to create a new user. + +- Write an ansible playbook to install docker on a group of servers + +Watch [this](https://youtu.be/089mRKoJTzo) video to learn about ansible Playbooks + +# Task-02 + +- Write a blog about writing ansible playbooks with the best practices. + +Let me or anyone in the community know if you face any challenges + +happy Learning :) + +[← Previous Day](../day57/README.md) | [Next Day →](../day59/README.md) diff --git a/2024/day59/README.md b/2024/day59/README.md new file mode 100644 index 0000000000..f8bf4d0908 --- /dev/null +++ b/2024/day59/README.md @@ -0,0 +1,26 @@ +# Day 59: Ansible Project 🔥 + +Ansible playbooks are amazing, as you learned yesterday. +What if you deploy a simple web app using ansible, sounds like a good project, right? + +# Task-01 + +- create 3 EC2 instances . make sure all three are created with same key pair + +- Install Ansible in host server + +- copy the private key from local to Host server (Ansible_host) at (/home/ubuntu/.ssh) + +- access the inventory file using sudo vim /etc/ansible/hosts + +- Create a playbook to install Nginx + +- deploy a sample webpage using the ansible playbook + +Read [this](https://medium.com/@sandeep010498/learn-ansible-with-real-time-project-cf6a0a512d45) Blog by [Sandeep Singh](https://medium.com/@sandeep010498) to clear all your doubts + +Let me or anyone in the community know if you face any challenges + +happy Learning :) + +[← Previous Day](../day58/README.md) | [Next Day →](../day60/README.md) diff --git a/2024/day60/README.md b/2024/day60/README.md new file mode 100644 index 0000000000..ecae296195 --- /dev/null +++ b/2024/day60/README.md @@ -0,0 +1,31 @@ +# Day 60 - Terraform🔥 + +Hello Learners , you guys are doing every task by creating an ec2 instance (mostly). Today let’s automate this process . How to do it ? Well Terraform is the solution . + +## What is Terraform? + +Terraform is an infrastructure as code (IaC) tool that allows you to create, manage, and update infrastructure +resources such as virtual machines, networks, and storage in a repeatable, scalable, and automated way. + +## Task 1: + +Install Terraform on your system +Refer this [link](https://phoenixnap.com/kb/how-to-install-terraform) for installation + +## Task 2: Answer below questions + +- Why we use terraform? +- What is Infrastructure as Code (IaC)? +- What is Resource? +- What is Provider? +- Whats is State file in terraform? What’s the importance of it ? +- What is Desired and Current State? 
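+
+Once Task 1 is done, a quick sanity check that the install worked (a minimal sketch, assuming a Linux shell):
+
+```
+# Should print the installed Terraform version
+terraform -version
+
+# Lists the available subcommands
+terraform -help
+```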
+ +You can prepare for tomorrow's task from [here](https://www.youtube.com/live/965CaSveIEI?feature=share)🚀🚀 + +We Hope this tasks will help you understand how to write a basic Terraform configuration file and basic commands on Terraform. + +Don’t forget to post in on LinkedIn. +Happy Learning:) + +[← Previous Day](../day59/README.md) | [Next Day →](../day61/README.md) diff --git a/2024/day61/README.md b/2024/day61/README.md new file mode 100644 index 0000000000..9d518b70db --- /dev/null +++ b/2024/day61/README.md @@ -0,0 +1,37 @@ +# Day 61- Terraform🔥 + +Hope you've already got the gist of What Working with Terraform would be like . Lets begin +with day 2 of Terraform ! + +## Task 1: + +find purpose of basic Terraform commands which you'll use often + +1. `terraform init` + +2. `terraform init -upgrade` + +3. `terraform plan` + +4. `terraform apply` + +5. `terraform validate` + +6. `terraform fmt` + +7. `terraform destroy` + +Also along with these tasks its important to know about Terraform in general- +Who are Terraform's main competitors? +The main competitors are: + +Ansible +Packer +Cloud Foundry +Kubernetes + +Want a Free video Course for terraform? Click [here](https://bit.ly/tws-terraform) + +Don't forget to share your learnings on Linkedin ! Happy Learning :) + +[← Previous Day](../day60/README.md) | [Next Day →](../day62/README.md) diff --git a/2024/day62/README.md b/2024/day62/README.md new file mode 100644 index 0000000000..76f61b708a --- /dev/null +++ b/2024/day62/README.md @@ -0,0 +1,79 @@ +# Day 62 - Terraform and Docker 🔥 + +Terraform needs to be told which provider to be used in the automation, hence we need to give the provider name with source and version. +For Docker, we can use this block of code in your main.tf + +## Blocks and Resources in Terraform + +## Terraform block + +## Task-01 + +- Create a Terraform script with Blocks and Resources + +``` +terraform { + required_providers { + docker = { + source = "kreuzwerker/docker" + version = "~> 2.21.0" +} +} +} +``` + +### Note: kreuzwerker/docker, is shorthand for registry.terraform.io/kreuzwerker/docker. + +## Provider Block + +The provider block configures the specified provider, in this case, docker. A provider is a plugin that Terraform uses to create and manage your resources. + +``` +provider "docker" {} +``` + +## Resource + +Use resource blocks to define components of your infrastructure. A resource might be a physical or virtual component such as a Docker container, or it can be a logical resource such as a Heroku application. + +Resource blocks have two strings before the block: the resource type and the resource name. In this example, the first resource type is docker_image and the name is Nginx. 
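+
+Once the provider and resource blocks from the tasks below are saved in `main.tf`, a typical run looks like this (a minimal sketch; it assumes Docker is installed and running locally):
+
+```
+terraform init    # downloads the kreuzwerker/docker provider
+terraform plan    # preview the image and container to be created
+terraform apply   # pull the nginx image and start the container
+docker ps         # the container named "tutorial" should be running
+```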
+ +## Task-02 + +- Create a resource Block for an nginx docker image + +Hint: + +``` +resource "docker_image" "nginx" { + name = "nginx:latest" + keep_locally = false +} +``` + +- Create a resource Block for running a docker container for nginx + +``` +resource "docker_container" "nginx" { + image = docker_image.nginx.latest + name = "tutorial" + ports { + internal = 80 + external = 80 + } +} +``` + +Note: In case Docker is not installed + +`sudo apt-get install docker.io` +`sudo docker ps` +`sudo chown $USER /var/run/docker.sock` + +# Video Course + +I can imagine, Terraform can be tricky, so best to use a Free video Course for terraform [here](https://bit.ly/tws-terraform) + +Happy Learning :) + +[← Previous Day](../day61/README.md) | [Next Day →](../day63/README.md) diff --git a/2024/day63/README.md b/2024/day63/README.md new file mode 100644 index 0000000000..e4338fb906 --- /dev/null +++ b/2024/day63/README.md @@ -0,0 +1,62 @@ +# Day 63 - Terraform Variables + +variables in Terraform are quite important, as you need to hold values of names of instance, configs , etc. + +We can create a variables.tf file which will hold all the variables. + +``` +variable "filename" { +default = "/home/ubuntu/terrform-tutorials/terraform-variables/demo-var.txt" +} +``` + +``` +variable "content" { +default = "This is coming from a variable which was updated" +} +``` + +These variables can be accessed by var object in main.tf + +## Task-01 + +- Create a local file using Terraform + Hint: + +``` +resource "local_file" "devops" { +filename = var.filename +content = var.content +} +``` + +## Data Types in Terraform + +## Map + +``` +variable "file_contents" { +type = map +default = { +"statement1" = "this is cool" +"statement2" = "this is cooler" +} +} +``` + +## Task-02 + +- Use terraform to demonstrate usage of List, Set and Object datatypes +- Put proper screenshots of the outputs + +Use `terraform refresh` + +To refresh the state by your configuration file, reloads the variables + +# Video Course + +I can imagine, Terraform can be tricky, so best to use a Free video Course for terraform [here](https://bit.ly/tws-terraform) + +Happy Learning :) + +[← Previous Day](../day62/README.md) | [Next Day →](../day64/README.md) diff --git a/2024/day64/README.md b/2024/day64/README.md new file mode 100644 index 0000000000..d30e1048d9 --- /dev/null +++ b/2024/day64/README.md @@ -0,0 +1,67 @@ +# Day 64 - Terraform with AWS + +Provisioning on AWS is quite easy and straightforward with Terraform. + +## Prerequisites + +### AWS CLI installed + +The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. + +### AWS IAM user + +IAM (Identity Access Management) AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. 
+ +_In order to connect your AWS account and Terraform, you need the access keys and secret access keys exported to your machine._ + +``` +export AWS_ACCESS_KEY_ID= +export AWS_SECRET_ACCESS_KEY= +``` + +### Install required providers + +``` +terraform { + required_providers { + aws = { + source = "hashicorp/aws" + version = "~> 4.16" +} +} + required_version = ">= 1.2.0" +} +``` + +Add the region where you want your instances to be + +``` +provider "aws" { +region = "us-east-1" +} +``` + +## Task-01 + +- Provision an AWS EC2 instance using Terraform + +Hint: + +``` +resource "aws_instance" "aws_ec2_test" { + count = 4 + ami = "ami-08c40ec9ead489470" + instance_type = "t2.micro" + tags = { + Name = "TerraformTestServerInstance" + } +} +``` + +# Video Course + +I can imagine, Terraform can be tricky, so best to use a Free video Course for terraform [here](https://bit.ly/tws-terraform) + +Happy Learning :) + +[← Previous Day](../day63/README.md) | [Next Day →](../day65/README.md) diff --git a/2024/day65/README.md b/2024/day65/README.md new file mode 100644 index 0000000000..904c6c1158 --- /dev/null +++ b/2024/day65/README.md @@ -0,0 +1,67 @@ +# Day 65 - Working with Terraform Resources 🚀 + +Yesterday, we saw how to create a Terraform script with Blocks and Resources. Today, we will dive deeper into Terraform resources. + +## Understanding Terraform Resources + +A resource in Terraform represents a component of your infrastructure, such as a physical server, a virtual machine, a DNS record, or an S3 bucket. Resources have attributes that define their properties and behaviors, such as the size and location of a virtual machine or the domain name of a DNS record. + +When you define a resource in Terraform, you specify the type of resource, a unique name for the resource, and the attributes that define the resource. Terraform uses the resource block to define resources in your Terraform configuration. + +## Task 1: Create a security group + +To allow traffic to the EC2 instance, you need to create a security group. Follow these steps: + +In your main.tf file, add the following code to create a security group: + +``` +resource "aws_security_group" "web_server" { + name_prefix = "web-server-sg" + + ingress { + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } +} +``` + +- Run terraform init to initialize the Terraform project. + +- Run terraform apply to create the security group. + +## Task 2: Create an EC2 instance + +- Now you can create an EC2 instance with Terraform. Follow these steps: + +- In your main.tf file, add the following code to create an EC2 instance: + +``` +resource "aws_instance" "web_server" { + ami = "ami-0557a15b87f6559cf" + instance_type = "t2.micro" + key_name = "my-key-pair" + security_groups = [ + aws_security_group.web_server.name + ] + + user_data = <<-EOF + #!/bin/bash + echo "
<h1>Welcome to my website!</h1>
" > index.html + nohup python -m SimpleHTTPServer 80 & + EOF +} +``` + +Note: Replace the ami and key_name values with your own. You can find a list of available AMIs in the AWS documentation. + +Run terraform apply to create the EC2 instance. + +## Task 3: Access your website + +- Now that your EC2 instance is up and running, you can access the website you just hosted on it. Follow these steps: + +Happy Terraforming! + +[← Previous Day](../day64/README.md) | [Next Day →](../day66/README.md) diff --git a/2024/day66/README.md b/2024/day66/README.md new file mode 100644 index 0000000000..630837a5ff --- /dev/null +++ b/2024/day66/README.md @@ -0,0 +1,26 @@ +# Day 66 - Terraform Hands-on Project - Build Your Own AWS Infrastructure with Ease using Infrastructure as Code (IaC) Techniques(Interview Questions) ☁️ + +Welcome back to your Terraform journey. + +In the previous tasks, you have learned about the basics of Terraform, its configuration file, and creating an EC2 instance using Terraform. Today, we will explore more about Terraform and create multiple resources. + +## Task: + +- Create a VPC (Virtual Private Cloud) with CIDR block 10.0.0.0/16 +- Create a public subnet with CIDR block 10.0.1.0/24 in the above VPC. +- Create a private subnet with CIDR block 10.0.2.0/24 in the above VPC. +- Create an Internet Gateway (IGW) and attach it to the VPC. +- Create a route table for the public subnet and associate it with the public subnet. This route table should have a route to the Internet Gateway. +- Launch an EC2 instance in the public subnet with the following details: +- AMI: ami-0557a15b87f6559cf +- Instance type: t2.micro +- Security group: Allow SSH access from anywhere +- User data: Use a shell script to install Apache and host a simple website +- Create an Elastic IP and associate it with the EC2 instance. +- Open the website URL in a browser to verify that the website is hosted successfully. + +#### This Terraform hands-on task is designed to test your proficiency in using Terraform for Infrastructure as Code (IaC) on AWS. You will be tasked with creating a VPC, subnets, an internet gateway, and launching an EC2 instance with a web server running on it. This task will showcase your skills in automating infrastructure deployment using Terraform. It's a popular interview question for companies looking for candidates with hands-on experience in Terraform. That's it for today. + +Happy Terraforming:) + +[← Previous Day](../day65/README.md) | [Next Day →](../day67/README.md) diff --git a/2024/day67/README.md b/2024/day67/README.md new file mode 100644 index 0000000000..62e6f35476 --- /dev/null +++ b/2024/day67/README.md @@ -0,0 +1,22 @@ +# Day 67: AWS S3 Bucket Creation and Management + +## AWS S3 Bucket + +Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It can be used for a variety of use cases, such as storing and retrieving data, hosting static websites, and more. + +In this task, you will learn how to create and manage S3 buckets in AWS. + +## Task + +- Create an S3 bucket using Terraform. +- Configure the bucket to allow public read access. +- Create an S3 bucket policy that allows read-only access to a specific IAM user or role. +- Enable versioning on the S3 bucket. + +## Resources + +[Terraform S3 bucket resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) + +Good luck and happy learning! 
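+
+After `terraform apply`, one way to sanity-check the bucket settings (a sketch; it assumes the AWS CLI is configured and `my-example-bucket` stands in for your bucket name):
+
+```
+# Versioning should report "Enabled"
+aws s3api get-bucket-versioning --bucket my-example-bucket
+
+# Inspect the bucket policy and public access configuration
+aws s3api get-bucket-policy --bucket my-example-bucket
+aws s3api get-public-access-block --bucket my-example-bucket
+```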
+
+[← Previous Day](../day66/README.md) | [Next Day →](../day68/README.md)
diff --git a/2024/day68/README.md b/2024/day68/README.md
new file mode 100644
index 0000000000..4185d8a5dd
--- /dev/null
+++ b/2024/day68/README.md
@@ -0,0 +1,66 @@
+# Day 68 - Scaling with Terraform 🚀
+
+Yesterday, we learned how to create an AWS S3 bucket with Terraform. Today, we will see how to scale our infrastructure with Terraform.
+
+## Understanding Scaling
+
+Scaling is the process of adding or removing resources to match the changing demands of your application. As your application grows, you will need to add more resources to handle the increased load. And as the load decreases, you can remove the extra resources to save costs.
+
+Terraform makes it easy to scale your infrastructure by providing a declarative way to define your resources. You can define the number of resources you need and Terraform will automatically create or destroy the resources as needed.
+
+## Task 1: Create an Auto Scaling Group
+
+Auto Scaling Groups are used to automatically add or remove EC2 instances based on the current demand. Follow these steps to create an Auto Scaling Group:
+
+- In your main.tf file, add the following code to create an Auto Scaling Group:
+
+```
+resource "aws_launch_configuration" "web_server_lc" {
+  image_id        = "ami-005f9685cb30f234b"
+  instance_type   = "t2.micro"
+  security_groups = [aws_security_group.web_server.name]
+
+  user_data = <<-EOF
+    #!/bin/bash
+    echo "
<h1>You're doing really Great</h1>
" > index.html + nohup python -m SimpleHTTPServer 80 & + EOF +} + +resource "aws_autoscaling_group" "web_server_asg" { + name = "web-server-asg" + launch_configuration = aws_launch_configuration.web_server_lc.name + min_size = 1 + max_size = 3 + desired_capacity = 2 + health_check_type = "EC2" + load_balancers = [aws_elb.web_server_lb.name] + vpc_zone_identifier = [aws_subnet.public_subnet_1a.id, aws_subnet.public_subnet_1b.id] +} + + +``` + +- Run terraform apply to create the Auto Scaling Group. + +## Task 2: Test Scaling + +- Go to the AWS Management Console and select the Auto Scaling Groups service. + +- Select the Auto Scaling Group you just created and click on the "Edit" button. + +- Increase the "Desired Capacity" to 3 and click on the "Save" button. + +- Wait a few minutes for the new instances to be launched. + +- Go to the EC2 Instances service and verify that the new instances have been launched. + +- Decrease the "Desired Capacity" to 1 and wait a few minutes for the extra instances to be terminated. + +- Go to the EC2 Instances service and verify that the extra instances have been terminated. + +Congratulations🎊🎉 You have successfully scaled your infrastructure with Terraform. + +Happy Learning :) + +[← Previous Day](../day67/README.md) | [Next Day →](../day69/README.md) diff --git a/2024/day69/README.md b/2024/day69/README.md new file mode 100644 index 0000000000..570803dbdd --- /dev/null +++ b/2024/day69/README.md @@ -0,0 +1,182 @@ +# Day 69 - Meta-Arguments in Terraform + +When you define a resource block in Terraform, by default, this specifies one resource that will be created. To manage several of the same resources, you can use either count or for_each, which removes the need to write a separate block of code for each one. Using these options reduces overhead and makes your code neater. + +count is what is known as a ‘meta-argument’ defined by the Terraform language. Meta-arguments help achieve certain requirements within the resource block. + +## Count + +The count meta-argument accepts a whole number and creates the number of instances of the resource specified. + +When each instance is created, it has its own distinct infrastructure object associated with it, so each can be managed separately. When the configuration is applied, each object can be created, destroyed, or updated as appropriate. + +eg. + +``` + +terraform { + +required_providers { + +aws = { + +source = "hashicorp/aws" + +version = "~> 4.16" + +} + +} + +required_version = ">= 1.2.0" + +} + + + +provider "aws" { + +region = "us-east-1" + +} + + + +resource "aws_instance" "server" { + +count = 4 + + + +ami = "ami-08c40ec9ead489470" + +instance_type = "t2.micro" + + + +tags = { + +Name = "Server ${count.index}" + +} + +} + + + +``` + +## for_each + +Like the count argument, the for_each meta-argument creates multiple instances of a module or resource block. However, instead of specifying the number of resources, the for_each meta-argument accepts a map or a set of strings. This is useful when multiple resources are required that have different values. Consider our Active directory groups example, with each group requiring a different owner. 
+
+```
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 4.16"
+    }
+  }
+
+  required_version = ">= 1.2.0"
+}
+
+provider "aws" {
+  region = "us-east-1"
+}
+
+locals {
+  ami_ids = toset([
+    "ami-0b0dcb5067f052a63",
+    "ami-08c40ec9ead489470",
+  ])
+}
+
+resource "aws_instance" "server" {
+  for_each = local.ami_ids
+
+  ami           = each.key
+  instance_type = "t2.micro"
+
+  tags = {
+    Name = "Server ${each.key}"
+  }
+}
+```
+
+Multiple key-value iteration:
+
+```
+locals {
+  ami_ids = {
+    "linux"  = "ami-0b0dcb5067f052a63",
+    "ubuntu" = "ami-08c40ec9ead489470",
+  }
+}
+
+resource "aws_instance" "server" {
+  for_each = local.ami_ids
+
+  ami           = each.value
+  instance_type = "t2.micro"
+
+  tags = {
+    Name = "Server ${each.key}"
+  }
+}
+```
+
+## Task-01
+
+- Create the above infrastructure as code and demonstrate the use of count and for_each.
+- Write about meta-arguments and their use in Terraform.
+
+Happy learning :)
+
+[← Previous Day](../day68/README.md) | [Next Day →](../day70/README.md)
diff --git a/2024/day70/README.md b/2024/day70/README.md
new file mode 100644
index 0000000000..4a42230590
--- /dev/null
+++ b/2024/day70/README.md
@@ -0,0 +1,80 @@
+# Day 70 - Terraform Modules
+
+- Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
+- A module can call other modules, which lets you include the child module's resources into the configuration in a concise way.
+- Modules can also be called multiple times, either within the same configuration or in separate configurations, allowing resource configurations to be packaged and re-used.
+
+### Below is the format for how to use modules:
+
+```
+# Creating an AWS EC2 instance
+resource "aws_instance" "server-instance" {
+  # Define the number of instances (count is the meta-argument for this;
+  # aws_instance has no "instance_count" argument)
+  count = var.number_of_instances
+
+  # Instance configuration
+  ami                    = var.ami
+  instance_type          = var.instance_type
+  subnet_id              = var.subnet_id
+  vpc_security_group_ids = var.security_group
+
+  # Instance tags
+  tags = {
+    Name = "${var.instance_name}"
+  }
+}
+```
+
+```
+# Server Module Variables
+variable "number_of_instances" {
+  description = "Number of Instances to Create"
+  type        = number
+  default     = 1
+}
+
+variable "instance_name" {
+  description = "Instance Name"
+}
+
+variable "ami" {
+  description = "AMI ID"
+  default     = "ami-xxxx"
+}
+
+variable "instance_type" {
+  description = "Instance Type"
+}
+
+variable "subnet_id" {
+  description = "Subnet ID"
+}
+
+variable "security_group" {
+  description = "Security Group"
+  type        = list(any)
+}
+```
+
+```
+# Server Module Output
+output "server_id" {
+  description = "Server ID(s)"
+  # With count, the resource is a list, so use the splat expression
+  value       = aws_instance.server-instance[*].id
+}
+```
+
+## Task-01
+
+Explain the below in your own words; it shouldn't be copied from the Internet 😉
+
+- Write about the different types of modules in Terraform.
+- Difference between a root module and a child module.
+- Are modules and namespaces the same? Justify your answer for both Yes/No.
+
+You all are doing great, and you have come so far. Well Done Everyone🎉
+
+A little more hard work is needed, so keep at it until then..... Happy learning :)
+
+[← Previous Day](../day69/README.md) | [Next Day →](../day71/README.md)
diff --git a/2024/day71/README.md b/2024/day71/README.md
new file mode 100644
index 0000000000..7bcb7bb3e1
--- /dev/null
+++ b/2024/day71/README.md
@@ -0,0 +1,41 @@
+# Day 71 - Let's prepare for some Terraform interview questions 🔥
+
+### 1. What is Terraform, and how is it different from other IaC tools?
+
+### 2. How do you call a main.tf module?
+
+### 3. What exactly is Sentinel? Can you provide a few examples of where Sentinel policies can be used?
+
+### 4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?
+
+### 5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (\*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
+
+A. Set the environment variable TF_LOG=TRACE
+
+B. Set verbose logging for each provider in your Terraform configuration
+
+C. Set the environment variable TF_VAR_log=TRACE
+
+D. Set the environment variable TF_LOG_PATH
+
+### 6. The command below will destroy everything that has been created in the infrastructure. Tell us how you would preserve a particular resource while destroying the rest of the infrastructure.
+
+```
+terraform destroy
+```
+
+### 7. Which backend is used to store the .tfstate file in S3?
+
+### 8. How do you manage sensitive data in Terraform, such as API keys or passwords?
+
+### 9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
+
+### 10. Who maintains Terraform providers?
+
+### 11. How can we export data from one module to another?
+
+---
+
+Waiting for your responses 😉..... Till then, happy learning :)
+
+[← Previous Day](../day70/README.md) | [Next Day →](../day72/README.md)
diff --git a/2024/day72/README.md b/2024/day72/README.md
new file mode 100644
index 0000000000..a283b10e39
--- /dev/null
+++ b/2024/day72/README.md
@@ -0,0 +1,16 @@
+# Day 72 - Grafana 🔥
+
+Hello Learners, you are doing a really good job. You can't be there 24\*7 to monitor your resources, so today let's monitor the resources in a smart way with Grafana 🎉
+
+## Task 1:
+
+> What is Grafana? What are the features of Grafana?
+> Why Grafana?
+> What type of monitoring can be done via Grafana?
+> What databases work with Grafana?
+> What are metrics and visualizations in Grafana?
+> What is the difference between Grafana and Prometheus?
+
+---
+
+[← Previous Day](../day71/README.md) | [Next Day →](../day73/README.md)
diff --git a/2024/day73/README.md b/2024/day73/README.md
new file mode 100644
index 0000000000..a1af9d7dc9
--- /dev/null
+++ b/2024/day73/README.md
@@ -0,0 +1,16 @@
+# Day 73 - Grafana 🔥
+
+Hope you are now clear on the basics of Grafana: why we use it, where we use it, what we can do with it, and so on.
+
+Now, let's do some practical stuff.
+
+---
+
+Task:
+
+> Set up Grafana on an AWS EC2 instance in your own environment.
+
+---
+
+Ref: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7042518379030556672-ZZA-?utm_source=share&utm_medium=member_desktop
+
+[← Previous Day](../day72/README.md) | [Next Day →](../day74/README.md)
diff --git a/2024/day74/README.md b/2024/day74/README.md
new file mode 100644
index 0000000000..2877eeebd4
--- /dev/null
+++ b/2024/day74/README.md
@@ -0,0 +1,19 @@
+# Day 74 - Connecting EC2 with Grafana
+
+You did an amazing job setting up Grafana locally last time 🔥.
+
+Now, let's go one step ahead.
+
+---
+
+Task:
+
+Connect one Linux and one Windows EC2 instance to Grafana and monitor the different components of each server.
+
+---
+
+Don't forget to share this amazing work on LinkedIn and tag us.
+
+## Happy Learning :)
+
+[← Previous Day](../day73/README.md) | [Next Day →](../day75/README.md)
diff --git a/2024/day75/README.md b/2024/day75/README.md
new file mode 100644
index 0000000000..3c75d41caa
--- /dev/null
+++ b/2024/day75/README.md
@@ -0,0 +1,30 @@
+# Day 75 - Sending Docker Logs to Grafana
+
+We have monitored 😉 that you are understanding the monitoring tool and doing amazing work with it. 👌
+
+Today, let's make it a little more complex but interesting 😍 and add one more **Project** 🔥 to your resume.
+
+---
+
+## Task:
+
+- Install _Docker_ and start the docker service on a Linux EC2 instance through [USER DATA](../day39/README.md).
+- Create 2 Docker containers and run any basic application on those containers (a simple todo app will work).
+- Now integrate the Docker containers and share the real-time logs with Grafana (your instance should be connected to Grafana, and the Docker plugin should be enabled on Grafana).
+- Check the logs or Docker container names on the Grafana UI.
+
+---
+
+You can use [this video](https://youtu.be/y3SGHbixmJw) for your reference. But it's always better to find your own way of doing things. 😊
+
+## Bonus:
+
+- As you have done this amazing task, here is one bonus link.❤️
+
+You can use this [reference video](https://youtu.be/CCi957AnSfc) to integrate _Prometheus_ with _Grafana_ and monitor Docker containers. Seems interesting?
+
+Don't forget to share this amazing work on LinkedIn and tag us.
+
+## Happy Learning :)
+
+[← Previous Day](../day74/README.md) | [Next Day →](../day76/README.md)
diff --git a/2024/day76/README.md b/2024/day76/README.md
new file mode 100644
index 0000000000..7c3fbb0bd1
--- /dev/null
+++ b/2024/day76/README.md
@@ -0,0 +1,33 @@
+# Day 76 - Build a Grafana Dashboard
+
+A dashboard gives you an at-a-glance view of your data and lets you track metrics through different visualizations.
+
+Dashboards consist of panels, each representing a part of the story you want your dashboard to tell.
+
+Every panel consists of a query and a visualization. The query defines what data you want to display, whereas the visualization defines how the data is displayed.
+
+## Task 01
+
+- In the sidebar, hover your cursor over the Create (plus sign) icon and then click Dashboard.
+
+- Click Add a new panel.
+
+- In the Query editor below the graph, enter the following query and then press Shift + Enter:
+
+`sum(rate(tns_request_duration_seconds_count[5m])) by(route)`
+
+- In the Legend field, enter {{route}} to rename the time series in the legend. The graph legend updates when you click outside the field.
+
+- In the Panel editor on the right, under Settings, change the panel title to “Traffic”.
+
+- Click Apply in the top-right corner to save the panel and go back to the dashboard view.
+
+- Click the Save dashboard (disk) icon at the top of the dashboard to save your dashboard.
+
+- Enter a name in the Dashboard name field and then click Save.
+
+Read [this](https://grafana.com/tutorials/grafana-fundamentals/) if you have any questions.
+
+Do share some amazing dashboards with the community.
+
+[← Previous Day](../day75/README.md) | [Next Day →](../day77/README.md)
diff --git a/2024/day77/README.md b/2024/day77/README.md
new file mode 100644
index 0000000000..7acf545be9
--- /dev/null
+++ b/2024/day77/README.md
@@ -0,0 +1,14 @@
+# Day 77 - Alerting
+
+Grafana Alerting allows you to learn about problems in your systems moments after they occur. Create, manage, and take action on your alerts in a single, consolidated view, and improve your team’s ability to identify and resolve issues quickly.
+
+Grafana Alerting is available for Grafana OSS, Grafana Enterprise, or Grafana Cloud. With Mimir and Loki alert rules you can run alert expressions closer to your data and at massive scale, all managed by the Grafana UI you are already familiar with.
+
+## Task-01
+
+- Set up [Grafana Cloud](https://grafana.com/products/cloud/)
+- Set up sample alerting
+
+Check out [this blog](https://grafana.com/docs/grafana/latest/alerting/) for more details.
+
+[← Previous Day](../day76/README.md) | [Next Day →](../day78/README.md)
diff --git a/2024/day78/README.md b/2024/day78/README.md
new file mode 100644
index 0000000000..631894de55
--- /dev/null
+++ b/2024/day78/README.md
@@ -0,0 +1,14 @@
+# Day 78 - Grafana Cloud
+
+---
+
+Task - 01
+
+1. Set up alerts for an EC2 instance.
+2. Set up AWS billing alerts.
+
+---
+
+For Reference: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7044695663913148416-LfvD?utm_source=share&utm_medium=member_desktop
+
+[← Previous Day](../day77/README.md) | [Next Day →](../day79/README.md)
diff --git a/2024/day79/README.md b/2024/day79/README.md
new file mode 100644
index 0000000000..4eb87c4c49
--- /dev/null
+++ b/2024/day79/README.md
@@ -0,0 +1,20 @@
+# Day 79 - Prometheus 🔥
+
+Now, the next step is to learn about Prometheus.
+It's an open-source system for monitoring services and alerts based on a time series data model. Prometheus collects data and metrics from different services and stores them according to a unique identifier—the metric name—and a time stamp.
+
+Tasks:
+
+---
+
+1. What is the architecture of Prometheus monitoring?
+2. What are the features of Prometheus?
+3. What are the components of Prometheus?
+4. What database is used by Prometheus?
+5. What is the default data retention period in Prometheus?
+
+---
+
+Ref: https://www.devopsschool.com/blog/top-50-prometheus-interview-questions-and-answers/
+
+[← Previous Day](../day78/README.md) | [Next Day →](../day80/README.md)
diff --git a/2024/day80/README.md b/2024/day80/README.md
new file mode 100644
index 0000000000..edbc3ec561
--- /dev/null
+++ b/2024/day80/README.md
@@ -0,0 +1,15 @@
+# Project-1
+
+=========
+
+# Project Description
+
+The project aims to automate the building, testing, and deployment process of a web application using Jenkins and GitHub. The Jenkins pipeline will be triggered automatically by GitHub webhook integration when changes are made to the code repository. The pipeline will include stages such as building, testing, and deploying the application, with notifications and alerts for failed builds or deployments.
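+
+To make the pipeline concrete, here is a minimal declarative Jenkinsfile sketch of the kind this project calls for. It is only an illustration, not the official solution: the shell commands (`npm install`, `npm test`, and the `docker` steps) assume a generic Node.js web app, the image and container names are made up, and the `githubPush()` trigger assumes the GitHub plugin is installed.
+
+```groovy
+pipeline {
+    agent any
+
+    // Fire on pushes delivered by the GitHub webhook (GitHub plugin assumed)
+    triggers {
+        githubPush()
+    }
+
+    stages {
+        stage('Build') {
+            steps {
+                // Placeholder build step for a generic Node.js app
+                sh 'npm install'
+            }
+        }
+        stage('Test') {
+            steps {
+                sh 'npm test'
+            }
+        }
+        stage('Deploy') {
+            steps {
+                // Rebuild and restart the app container (names are illustrative)
+                sh 'docker build -t sample-web-app:latest .'
+                sh 'docker rm -f sample-web-app || true'
+                sh 'docker run -d --name sample-web-app -p 80:3000 sample-web-app:latest'
+            }
+        }
+    }
+
+    post {
+        // Hook for the "notifications and alerts" requirement, e.g. emailext or Slack
+        failure {
+            echo "Build #${env.BUILD_NUMBER} failed - send a notification here."
+        }
+    }
+}
+```
+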
+## Task-01
+
+Do the hands-on project and read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7011367641952993281-DHn5?utm_source=share&utm_medium=member_desktop)
+
+Happy Learning :)
+
+[← Previous Day](../day79/README.md) | [Next Day →](../day81/README.md)
diff --git a/2024/day81/README.md b/2024/day81/README.md
new file mode 100644
index 0000000000..a10675fa1c
--- /dev/null
+++ b/2024/day81/README.md
@@ -0,0 +1,15 @@
+# Project-2
+
+=========
+
+# Project Description
+
+The project is about automating the deployment process of a web application using Jenkins and its declarative syntax. The pipeline includes stages like building, testing, and deploying to a staging environment. It also includes running acceptance tests and deploying to production if all tests pass.
+
+## Task-01
+
+Do the hands-on project and read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7014971330496212992-6Q2m?utm_source=share&utm_medium=member_desktop)
+
+Happy Learning :)
+
+[← Previous Day](../day80/README.md) | [Next Day →](../day82/README.md)
diff --git a/2024/day82/README.md b/2024/day82/README.md
new file mode 100644
index 0000000000..a17acccd92
--- /dev/null
+++ b/2024/day82/README.md
@@ -0,0 +1,15 @@
+# Project-3
+
+=========
+
+# Project Description
+
+The project involves hosting a static website using an AWS S3 bucket. Amazon S3 is an object storage service that provides a simple web services interface to store and retrieve any amount of data. The website files will be uploaded to an S3 bucket and configured to function as a static website. The bucket will be configured with the appropriate permissions and a unique domain name, making the website publicly accessible. Overall, the project aims to leverage the benefits of AWS S3 to host and scale a static website in a cost-effective manner.
+
+## Task-01
+
+Do the hands-on project and read [this](https://www.linkedin.com/posts/chetanrakhra_aws-project-devopsjobs-activity-7016427742300663808-JAQd?utm_source=share&utm_medium=member_desktop)
+
+Happy Learning :)
+
+[← Previous Day](../day81/README.md) | [Next Day →](../day83/README.md)
diff --git a/2024/day83/README.md b/2024/day83/README.md
new file mode 100644
index 0000000000..dc80aefc33
--- /dev/null
+++ b/2024/day83/README.md
@@ -0,0 +1,15 @@
+# Project-4
+
+=========
+
+# Project Description
+
+The project aims to deploy a web application using Docker Swarm, a container orchestration tool that allows for easy management and scaling of containerized applications. The project will utilize Docker Swarm's production-ready features such as load balancing, rolling updates, and service discovery to ensure high availability and reliability of the web application. The project will involve creating a Dockerfile to package the application into a container and then deploying it onto a Swarm cluster. The Swarm cluster will be configured to provide automated failover, load balancing, and horizontal scaling to the application. The goal of the project is to demonstrate the benefits of Docker Swarm for deploying and managing containerized applications in production environments.
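+
+As a rough sketch of the core workflow (not the project's prescribed commands), the snippet below initializes a single-node Swarm and deploys the containerized app as a replicated service with rolling-update settings; the image and service names (`<your-username>/sample-web-app`, `web`) are illustrative assumptions.
+
+```bash
+# Initialize a Swarm on the manager node (workers join with the printed token)
+docker swarm init
+
+# Build and push the application image (name is illustrative)
+docker build -t <your-username>/sample-web-app:latest .
+docker push <your-username>/sample-web-app:latest
+
+# Deploy the app as a replicated service behind Swarm's ingress load balancer
+docker service create \
+  --name web \
+  --replicas 3 \
+  --publish published=80,target=80 \
+  --update-parallelism 1 \
+  --update-delay 10s \
+  <your-username>/sample-web-app:latest
+
+# Scale horizontally, then roll out a new version with a rolling update
+docker service scale web=5
+docker service update --image <your-username>/sample-web-app:v2 web
+```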
+
+## Task-01
+
+Do the hands-on project and read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop)
+
+Happy Learning :)
+
+[← Previous Day](../day82/README.md) | [Next Day →](../day84/README.md)
diff --git a/2024/day84/README.md b/2024/day84/README.md
new file mode 100644
index 0000000000..be78b29c8b
--- /dev/null
+++ b/2024/day84/README.md
@@ -0,0 +1,15 @@
+# Project-5
+
+=========
+
+# Project Description
+
+The project involves deploying a Netflix clone web application on a Kubernetes cluster, a popular container orchestration platform that simplifies the deployment and management of containerized applications. The project will require creating Docker images of the web application and its dependencies and deploying them onto the Kubernetes cluster using Kubernetes manifests. The Kubernetes cluster will provide benefits such as high availability, scalability, and automatic failover of the application. Additionally, the project will utilize Kubernetes tools such as Kubernetes Dashboard and kubectl to monitor and manage the deployed application. Overall, the project aims to demonstrate the power and benefits of Kubernetes for deploying and managing containerized applications at scale.
+
+## Task-01
+
+Get a Netflix clone from [GitHub](https://github.com/devandres-tech/Netflix-Clone), read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop), and follow the Reddit clone steps to deploy the Netflix clone in the same way.
+
+Happy Learning :)
+
+[← Previous Day](../day83/README.md) | [Next Day →](../day85/README.md)
diff --git a/2024/day85/README.md b/2024/day85/README.md
new file mode 100644
index 0000000000..0cd64c996b
--- /dev/null
+++ b/2024/day85/README.md
@@ -0,0 +1,26 @@
+# Project-6
+
+=========
+
+# Project Description
+
+The project involves deploying a Node.js app on AWS ECS Fargate and AWS ECR.
+Read more about the tech stack [here](https://faun.pub/what-is-amazon-ecs-and-ecr-how-does-they-work-with-an-example-4acbf9be8415)
+
+## Task-01
+
+- Get a Node.js application from [GitHub](https://github.com/LondheShubham153/node-todo-cicd).
+
+- Build the Docker image from the Dockerfile present in the repo.
+
+- Set up the AWS CLI and log in to AWS in order to tag and push the image to ECR.
+
+- Set up an ECS cluster.
+
+- Create a task definition for the Node.js project using the ECR image.
+
+- Run the project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day84/README.md) | [Next Day →](../day86/README.md)
diff --git a/2024/day86/README.md b/2024/day86/README.md
new file mode 100644
index 0000000000..c8f809df7d
--- /dev/null
+++ b/2024/day86/README.md
@@ -0,0 +1,24 @@
+# Project-7
+
+=========
+
+# Project Description
+
+The project involves deploying a Portfolio app on AWS S3 using GitHub Actions.
+GitHub Actions allows you to perform CI/CD integrated with your GitHub repository.
+
+## Task-01
+
+- Get a Portfolio application from [GitHub](https://github.com/LondheShubham153/tws-portfolio).
+
+- Build the GitHub Actions workflow.
+
+- Set up the AWS CLI and AWS login in order to sync the website to S3 (to be done as part of the workflow YAML).
+
+- Follow this [video]() to understand it better.
+
+- Run the project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day85/README.md) | [Next Day →](../day87/README.md)
diff --git a/2024/day87/README.md b/2024/day87/README.md
new file mode 100644
index 0000000000..fa123ea638
--- /dev/null
+++ b/2024/day87/README.md
@@ -0,0 +1,24 @@
+# Project-8
+
+=========
+
+# Project Description
+
+The project involves deploying a React application on AWS Elastic Beanstalk using GitHub Actions.
+GitHub Actions allows you to perform CI/CD integrated with your GitHub repository.
+
+## Task-01
+
+- Get the source code from [GitHub](https://github.com/sitchatt/AWS_Elastic_BeanStalk_On_EC2.git).
+
+- Set up AWS Elastic Beanstalk.
+
+- Build the GitHub Actions workflow.
+
+- Follow this [blog](https://www.linkedin.com/posts/sitabja-chatterjee_effortless-deployment-of-react-app-to-aws-activity-7053579065487687680-wZI8?utm_source=share&utm_medium=member_desktop) to understand it better.
+
+- Run the project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day86/README.md) | [Next Day →](../day88/README.md)
diff --git a/2024/day88/README.md b/2024/day88/README.md
new file mode 100644
index 0000000000..3668934da1
--- /dev/null
+++ b/2024/day88/README.md
@@ -0,0 +1,23 @@
+# Project-9
+
+=========
+
+# Project Description
+
+The project involves deploying a Django todo app on AWS EC2 using a kubeadm Kubernetes cluster.
+
+A Kubernetes cluster helps with auto-scaling and auto-healing of your application.
+
+## Task-01
+
+- Get a Django full-stack application from [GitHub](https://github.com/LondheShubham153/django-todo-cicd).
+
+- Set up the Kubernetes cluster using [this script](https://github.com/RishikeshOps/Scripts/blob/main/k8sss.sh).
+
+- Set up a Deployment and a Service in Kubernetes.
+
+- Run the project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day87/README.md) | [Next Day →](../day89/README.md)
diff --git a/2024/day89/README.md b/2024/day89/README.md
new file mode 100644
index 0000000000..45ee46628d
--- /dev/null
+++ b/2024/day89/README.md
@@ -0,0 +1,19 @@
+# Project-10
+
+=========
+
+# Project Description
+
+The project involves mounting an AWS S3 bucket on Amazon EC2 Linux using S3FS.
+
+This is an AWS mini project that will teach you AWS, S3, EC2, and S3FS.
+
+## Task-01
+
+- Create an IAM user and set policies for the project resources using this [blog](https://medium.com/@chetxn/project-8-devops-implementation-8300b9ed1f2).
+- Make the best use of the AWS CLI.
+- Run the project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day88/README.md) | [Next Day →](../day90/README.md)
diff --git a/2024/day90/README.md b/2024/day90/README.md
new file mode 100644
index 0000000000..d28985c060
--- /dev/null
+++ b/2024/day90/README.md
@@ -0,0 +1,29 @@
+# Day 90: The Awesome Finale! 🎉 🎉
+
+🚀 Can you believe it? You've hit the jackpot – Day 90, the grand finale of our DevOps bonanza. Time to give yourself a virtual high-five!
+
+### What's Next?
+
+While this marks the end of the official 90-day journey, remember that your learning journey in DevOps is far from over. There's always something new to explore, tools to master, and techniques to refine. We're continuing to curate more content, challenges, and resources to help you advance your DevOps expertise.
+ +### Share Your Achievement + +Share your journey with the world! Post about your accomplishments on social media using the hashtag #90DaysOfDevOps. Inspire others to join the DevOps movement and take charge of their learning path. + +### Keep the Momentum Going! + +The knowledge and skills you've gained during these 90 days are just the beginning. Keep practicing, experimenting, and collaborating. DevOps is a continuous journey of improvement and innovation. + +### Star the Repository + +If you've found value in this repository and the DevOps content we've curated, consider showing your appreciation by starring this repository. Your support motivates us to keep creating high-quality content and resources for the community. + +**[🌟 Star this repository](https://github.com/LondheShubham153/90DaysOfDevOps)** + +Thank you for being part of the "90 Days of DevOps" adventure. +Keep coding, automating, deploying, and innovating! 🎈 + +With gratitude, +@TrainWithShubham + +[← Previous Day](../day89/README.md) diff --git a/2025/ansible/README.md b/2025/ansible/README.md new file mode 100644 index 0000000000..5721fffd01 --- /dev/null +++ b/2025/ansible/README.md @@ -0,0 +1,193 @@ +# Week 9: Ansible Automation Challenge + +This set of tasks is part of the 90DaysOfDevOps challenge and focuses on solving real-world automation problems using Ansible. By completing these tasks on your designated Ansible project repository, you'll work on scenarios that mirror production environments and industry practices. The tasks cover installation, dynamic inventory management, robust playbook development, role organization, secure secret management, and orchestration of multi-tier applications. Your work will help you build practical skills and prepare for technical interviews. + +**Important:** +1. Fork or create your designated Ansible project repository (or use your own) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Install Ansible and Configure a Dynamic Inventory + +**Real-World Scenario:** +In production, inventories change frequently. Set up Ansible with a dynamic inventory (using a script or AWS EC2 plugin) to automatically fetch and update target hosts. + +**Steps:** +1. **Install Ansible:** + - Follow the official installation guide to install Ansible on your local machine. +2. **Configure a Dynamic Inventory:** + - Set up a dynamic inventory using an inventory script or the AWS EC2 dynamic inventory plugin. +3. **Test Connectivity:** + - Run: + ```bash + ansible all -m ping -i dynamic_inventory.py + ``` + to ensure all servers are reachable. +4. **Document in `solution.md`:** + - Include your dynamic inventory configuration and test outputs. + - Explain how dynamic inventories adapt to a production environment. + +**Interview Questions:** +- How do dynamic inventories improve the management of production hosts? +- What challenges do dynamic inventory sources present and how can you mitigate them? + +--- + +## Task 2: Develop a Robust Playbook to Install and Configure Nginx + +**Real-World Scenario:** +Web servers like Nginx must be reliably deployed and configured in production. Create a playbook that installs Nginx, configures it using advanced Jinja2 templating (with loops, conditionals, and filters), and verifies that Nginx is running correctly. 
Incorporate asynchronous task execution with error handling for long-running operations. + +**Steps:** +1. **Create a Comprehensive Playbook:** + - Write a playbook (e.g., `nginx_setup.yml`) that: + - Installs Nginx. + - Deploys a templated Nginx configuration using a Jinja2 template (`nginx.conf.j2`) that includes loops and conditionals. + - Implements asynchronous execution (`async` and `poll`) with error handling. +2. **Test the Playbook:** + - Run the playbook against your dynamic inventory. +3. **Document in `solution.md`:** + - Include your playbook and Jinja2 template. + - Describe your strategies for asynchronous execution and error handling. + +**Interview Questions:** +- How do Jinja2 templates with loops and conditionals improve production configuration management? +- What are the challenges of managing long-running tasks with async in Ansible, and how do you handle errors? + +--- + +## Task 3: Organize Complex Playbooks Using Roles and Advanced Variables + +**Real-World Scenario:** +For large-scale production environments, organizing your playbooks into roles enhances maintainability and collaboration. Refactor your playbooks into roles (e.g., `nginx`, `app`, `db`) and use advanced variable files (with hierarchies and conditionals) to manage different configurations. + +**Steps:** +1. **Create Roles:** + - Develop roles for different components (e.g., `nginx`, `app`, `db`) with the standard directory structure (`tasks/`, `handlers/`, `templates/`, `vars/`). +2. **Utilize Advanced Variables:** + - Create hierarchical variable files with default values and override files for various scenarios. +3. **Refactor and Execute:** + - Update your composite playbook to include the roles. +4. **Document in `solution.md`:** + - Provide the role directory structure and sample variable files. + - Explain how this organization improves maintainability and flexibility. + +**Interview Questions:** +- How do roles improve scalability and collaboration in large-scale Ansible projects? +- What strategies do you use for variable precedence and hierarchy in complex environments? + +--- + +## Task 4: Secure Production Data with Advanced Ansible Vault Techniques + +**Real-World Scenario:** +In production, managing secrets securely is critical. Use Ansible Vault to encrypt sensitive data and explore advanced techniques like splitting secrets into multiple files and decrypting them at runtime. + +**Steps:** +1. **Create Encrypted Files:** + - Use `ansible-vault create` to encrypt multiple secret files. +2. **Integrate Vault in Your Playbooks:** + - Modify your playbooks to load encrypted variables from multiple files. +3. **Test Decryption:** + - Run your playbooks with the vault password to ensure proper decryption. +4. **Document in `solution.md`:** + - Outline your vault strategy and best practices (without exposing secrets). + - Explain the importance of secure secret management. + +**Interview Questions:** +- How does Ansible Vault secure sensitive data in production? +- What advanced techniques can you use for managing secrets at scale? + +--- + +## Task 5: Advanced Orchestration for Multi-Tier Deployments + +**Real-World Scenario:** +Deploy a multi-tier application (e.g., frontend, backend, and database) using Ansible roles to manage each tier. Use orchestration features (such as `serial`, `order`, and async execution) to ensure a smooth deployment process. + +**Steps:** +1. 
**Develop a Composite Playbook:** + - Write a playbook that calls multiple roles (e.g., `nginx` for frontend, `app` for backend, `db` for the database). +2. **Manage Execution Order and Async Tasks:** + - Use features like `serial` or `order` and implement asynchronous tasks with error handling where necessary. +3. **Document in `solution.md`:** + - Include your composite playbook and explain your orchestration strategy. + - Describe any asynchronous task handling and error management. + +**Interview Questions:** +- How do you orchestrate multi-tier deployments with Ansible? +- What are the challenges and solutions for asynchronous task execution in a multi-tier environment? + +--- + +## Bonus Task: Multi-Environment Setup with Terraform & Ansible + +**Real-World Scenario:** +Integrate Terraform and Ansible to provision and configure AWS infrastructure across multiple environments (dev, staging, prod). Use Terraform to provision resources using environment-specific variable files and use Ansible to configure them (e.g., install and configure Nginx). + +**Steps:** +1. **Provision with Terraform:** + - Create environment-specific variable files (e.g., `dev.tfvars`, `staging.tfvars`, `prod.tfvars`). + - Apply your Terraform configuration for each environment: + ```bash + terraform apply -var-file="dev.tfvars" + ``` +2. **Configure with Ansible:** + - Create separate inventory files or use a dynamic inventory based on Terraform outputs. + - Write a playbook (e.g., `nginx_setup.yml`) to install and configure Nginx. + - Execute the playbook for each environment. +3. **Document in `solution.md`:** + - Provide your environment-specific variable files, inventory files, and playbook. + - Summarize how Terraform outputs integrate with Ansible to manage multi-environment deployments. + +**Interview Questions:** +- How do you integrate Terraform outputs into Ansible inventories in a production workflow? +- What challenges might you face when managing multi-environment configurations, and how do you overcome them? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Fork or use your designated Ansible project repository and ensure all files (playbooks, roles, inventory files, `solution.md`, etc.) are committed and pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `ansible-challenge`) to the main repository. + - **Title:** + ``` + Week 9 Challenge - Ansible Automation Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Ansible challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., dynamic inventory, multi-tier orchestration, advanced Vault usage, and Terraform-Ansible integration). + - Use the hashtags: **#90DaysOfDevOps #Ansible #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. 
+
+---
+
+## TrainWithShubham Resources for Ansible
+
+- **[Ansible Short Notes](https://www.trainwithshubham.com/products/Ansible-Short-Notes-64ad5f72b308530823e2c036)**
+- **[Ansible One-Shot Video](https://youtu.be/4GwafiGsTUM?si=gqlIsNrfAv495WGj)**
+- **[Multi-env setup blog](https://trainwithshubham.blog/devops-project-multi-environment-infrastructure-with-terraform-and-ansible/)**
+
+---
+
+## Additional Resources
+
+- **[Ansible Official Documentation](https://docs.ansible.com/)**
+- **[Ansible Modules Documentation](https://docs.ansible.com/ansible/latest/modules/modules_by_category.html)**
+- **[Ansible Galaxy](https://galaxy.ansible.com/)**
+- **[Ansible Best Practices](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
diff --git a/2025/aws/README.md b/2025/aws/README.md
new file mode 100644
index 0000000000..8b13789179
--- /dev/null
+++ b/2025/aws/README.md
@@ -0,0 +1 @@
+
diff --git a/2025/cicd/README.md b/2025/cicd/README.md
new file mode 100644
index 0000000000..2d68a9b891
--- /dev/null
+++ b/2025/cicd/README.md
@@ -0,0 +1,288 @@
+# Week 6: Jenkins (CI/CD) Basics and Advanced Real-World Challenge
+
+This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks, you'll gain practical experience with advanced Jenkins topics, including pipelines, distributed agents, RBAC, shared libraries, vulnerability scanning, and automated notifications.
+
+Complete each task and document all steps, commands, screenshots, and observations in a file named `solution.md`. This documentation will serve as both your preparation guide and a portfolio piece for interviews.
+
+---
+
+## Task 1: Create a Jenkins Pipeline Job for CI/CD
+
+**Scenario:**
+Create an end-to-end CI/CD pipeline for a sample application.
+
+**Steps:**
+1. **Set Up a Pipeline Job:**
+   - Create a new Pipeline job in Jenkins.
+   - Write a basic Jenkinsfile that automates the build, test, and deployment of a sample application (e.g., a simple web app).
+   - Suggested stages: **Build**, **Test**, **Deploy**.
+2. **Run and Verify the Pipeline:**
+   - Trigger the pipeline and ensure each stage runs successfully.
+   - Verify the execution by checking console logs and, if applicable, using `docker ps` to confirm container status.
+3. **Document in `solution.md`:**
+   - Include your Jenkinsfile code and explain the purpose of each stage.
+   - Note any issues you encountered and how you resolved them.
+
+**Interview Questions:**
+- How do declarative pipelines streamline the CI/CD process compared to scripted pipelines?
+- What are the benefits of breaking the pipeline into distinct stages?
+
+---
+
+## Task 2: Build a Multi-Branch Pipeline for a Microservices Application
+
+**Scenario:**
+You have a microservices-based application with multiple components stored in separate Git repositories. Your goal is to create a multi-branch pipeline that builds, tests, and deploys each service concurrently.
+
+**Steps:**
+1. **Set Up a Multi-Branch Pipeline Job:**
+   - Create a new multi-branch pipeline in Jenkins.
+   - Configure it to scan your Git repository (or repositories) for branches.
+2. 
**Develop a Jenkinsfile for Each Service:** + - Write a Jenkinsfile that includes stages for **Checkout**, **Build**, **Test**, and **Deploy**. + - Include parallel stages if applicable (e.g., running tests for different services concurrently). +3. **Simulate a Merge Scenario:** + - Create a feature branch and simulate a pull request workflow (using the Jenkins “Pipeline Multibranch” plugin with PR support if available). +4. **Document in `solution.md`:** + - List the Jenkinsfile(s) used, explain your pipeline design, and describe how multi-branch pipelines help manage microservices deployments in production. + +**Interview Questions:** +- How does a multi-branch pipeline improve continuous integration for microservices? +- What challenges might you face when merging feature branches in a multi-branch pipeline? + +--- + +## Task 3: Configure and Scale Jenkins Agents/Nodes + +**Scenario:** +Your build workload has increased, and you need to configure multiple agents (across different OS types) to distribute the load. + +**Steps:** +1. **Set Up Multiple Agents:** + - Configure at least two agents (e.g., one Linux-based and one Windows-based) in Jenkins. + - Use Docker containers or VMs to simulate different environments. +2. **Label Agents:** + - Assign labels (e.g., `linux`, `windows`) and modify your Jenkinsfile to run appropriate stages on the correct agent. +3. **Run Parallel Jobs:** + - Create jobs that run in parallel across these agents. +4. **Document in `solution.md`:** + - Explain how you configured and verified each agent. + - Describe the benefits of distributed builds in terms of speed and reliability. + +**Interview Questions:** +- What are the benefits and challenges of using distributed agents in Jenkins? +- How can you ensure that jobs are assigned to the correct agent in a multi-platform environment? + +--- + +## Task 4: Implement and Test RBAC in a Multi-Team Environment + +**Scenario:** +In a large organization, different teams (developers, testers, and operations) require different levels of access to Jenkins. You need to configure RBAC to secure your CI/CD pipeline. + +**Steps:** +1. **Configure RBAC:** + - Use Matrix-based security or the Role Strategy Plugin to create roles (e.g., Admin, Developer, Tester). + - Define permissions for each role. +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. + - Explain the importance of access control and provide a potential risk scenario that RBAC helps mitigate. + +**Interview Questions:** +- Why is RBAC essential in a CI/CD environment, and what are the consequences of weak access control? +- Can you describe a scenario where inadequate RBAC could lead to security issues? + +--- + +## Task 5: Develop and Integrate a Jenkins Shared Library + +**Scenario:** +You are working on multiple pipelines that share common tasks (like code quality checks or deployment steps). To avoid duplication and ensure consistency, you need to develop a Shared Library. + +**Steps:** +1. **Create a Shared Library Repository:** + - Set up a separate Git repository that hosts your shared library code. + - Develop reusable functions (e.g., a function for sending notifications or a common test stage). +2. **Integrate the Library:** + - Update your Jenkinsfile(s) from previous tasks to load and use the shared library. 
+   - Use syntax similar to:
+     ```groovy
+     @Library('my-shared-library') _
+     pipeline {
+         // pipeline code using shared functions
+     }
+     ```
+3. **Document in `solution.md`:**
+   - Provide code examples from your shared library.
+   - Explain how this approach improves maintainability and reduces errors.
+
+**Interview Questions:**
+- How do shared libraries contribute to code reuse and maintainability in large organizations?
+- Provide an example of a function that would be ideal for a shared library and explain its benefits.
+
+---
+
+## Task 6: Integrate Vulnerability Scanning with Trivy
+
+**Scenario:**
+Security is critical in CI/CD. You must ensure that the Docker images built in your pipeline are free from known vulnerabilities.
+
+**Steps:**
+1. **Add a Vulnerability Scan Stage:**
+   - Update your Jenkins pipeline to include a stage that runs Trivy on your Docker image:
+     ```groovy
+     stage('Vulnerability Scan') {
+         steps {
+             sh 'trivy image <your-username>/sample-app:v1.0'
+         }
+     }
+     ```
+2. **Configure Fail Criteria:**
+   - Optionally, set the stage to fail the build if critical vulnerabilities are detected.
+3. **Document in `solution.md`:**
+   - Summarize the scan output, note the vulnerabilities and severity, and describe any remediation steps.
+   - Reflect on the importance of automated security scanning in CI/CD pipelines.
+
+**Interview Questions:**
+- Why is integrating vulnerability scanning into a CI/CD pipeline important?
+- How does Trivy help improve the security of your Docker images?
+
+---
+
+## Task 7: Dynamic Pipeline Parameterization
+
+**Scenario:**
+In production environments, pipelines need to be flexible and configurable. Implement dynamic parameterization to allow the pipeline to accept runtime parameters (such as target environment, version numbers, or deployment options).
+
+**Steps:**
+1. **Modify Your Jenkinsfile:**
+   - Update your Jenkinsfile to accept parameters. For example:
+     ```groovy
+     pipeline {
+         agent any
+         parameters {
+             string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Deployment target environment')
+             string(name: 'APP_VERSION', defaultValue: '1.0.0', description: 'Application version to deploy')
+         }
+         stages {
+             stage('Build') {
+                 steps {
+                     echo "Building version ${params.APP_VERSION} for ${params.TARGET_ENV} environment..."
+                     // Build commands here
+                 }
+             }
+             // Add other stages as needed
+         }
+     }
+     ```
+2. **Run the Parameterized Pipeline:**
+   - Trigger the pipeline and provide different parameter values to observe how the pipeline behavior changes.
+3. **Document in `solution.md`:**
+   - Explain how parameterization makes the pipeline dynamic.
+   - Include sample outputs and discuss how this flexibility is useful in a production CI/CD environment.
+
+**Interview Questions:**
+- How does pipeline parameterization improve the flexibility of CI/CD workflows?
+- Provide an example of a scenario where dynamic parameters would be critical in a deployment pipeline.
+
+---
+
+## Task 8: Integrate Email Notifications for Build Events
+
+**Scenario:**
+Automated notifications keep teams informed about build statuses. Configure Jenkins to send email alerts upon build completion or failure.
+
+**Steps:**
+1. **Configure SMTP Settings:**
+   - Set up SMTP details in Jenkins under "Manage Jenkins" → "Configure System".
+2. 
**Update Your Jenkinsfile:** + - Add a stage that uses the `emailext` plugin to send notifications: + ```groovy + stage('Notify') { + steps { + emailext ( + subject: "Build Notification: ${env.JOB_NAME} - Build #${env.BUILD_NUMBER}", + body: "The build has completed successfully. Check details at: ${env.BUILD_URL}", + recipientProviders: [[$class: 'DevelopersRecipientProvider']] + ) + } + } + ``` +3. **Test the Notification:** + - Trigger the pipeline and verify that an email is sent. +4. **Document in `solution.md`:** + - Explain your configuration steps, note any challenges, and describe how you resolved them. + +**Interview Questions:** +- What are the advantages of automating email notifications in CI/CD? +- How would you troubleshoot issues if email notifications fail to send? + +--- + +## Task 9: Troubleshooting, Monitoring & Advanced Debugging + +**Scenario:** +Real-world CI/CD pipelines sometimes fail. Demonstrate how you would troubleshoot and monitor your Jenkins environment. + +**Steps:** +1. **Troubleshooting:** + - Simulate a pipeline failure (e.g., by introducing an error in the Jenkinsfile) and document your troubleshooting process. + - Use commands like `docker logs` and review Jenkins console output. +2. **Monitoring:** + - Describe methods for monitoring Jenkins, such as using system logs or monitoring plugins. +3. **Advanced Debugging:** + - Add debugging statements (e.g., `echo` commands) in your Jenkinsfile to output environment variables or intermediate results. + - Use Jenkins' "Replay" feature to test modifications without committing changes. +4. **Document in `solution.md`:** + - Provide a detailed account of your troubleshooting, monitoring, and debugging strategies. + - Reflect on how these practices help maintain a stable CI/CD environment. + +**Interview Questions:** +- How would you approach troubleshooting a failing Jenkins pipeline? +- What are some effective strategies for monitoring Jenkins in a production environment? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Ensure all files (e.g., Jenkinsfile, configuration scripts, `solution.md`, etc.) are committed and pushed to your repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `jenkins-challenge`) to the main repository. + - **Title:** + ``` + Week 6 Challenge - DevOps Batch 9: Jenkins CI/CD Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Jenkins challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., agent configuration, RBAC, shared libraries, vulnerability scanning, and troubleshooting). + - Use the hashtags: **#90DaysOfDevOps #Jenkins #CI/CD #DevOps #InterviewPrep** + - Optionally, provide links to your repository or blog posts detailing your journey. 
+
+---
+
+## TrainWithShubham Resources for Jenkins CI/CD
+
+- **[Jenkins Short notes](https://www.trainwithshubham.com/products/64aac20780964e534608664d?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=p&dgps_uid=66c972da3795a9659545d71a)**
+- **[Jenkins One-Shot Video](https://youtu.be/XaSdKR2fOU4?si=eDmLQMSSh_eMPT_p)**
+- **[TWS blog on Jenkins CI/CD](https://trainwithshubham.blog/automate-cicd-spring-boot-banking-app-jenkins-docker-github/)**
+
+## Additional Resources
+
+- **[Jenkins Official Documentation](https://www.jenkins.io/doc/)**
+- **[Jenkins Pipeline Documentation](https://www.jenkins.io/doc/book/pipeline/)**
+- **[Jenkins Agents and Nodes](https://www.jenkins.io/doc/book/managing/nodes/)**
+- **[Jenkins RBAC & Role Strategy Plugin](https://plugins.jenkins.io/role-strategy/)**
+- **[Jenkins Shared Libraries](https://www.jenkins.io/doc/book/pipeline/shared-libraries/)**
+- **[Trivy Vulnerability Scanner](https://trivy.dev/latest/docs/scanner/vulnerability/)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
\ No newline at end of file
diff --git a/2025/docker/README.md b/2025/docker/README.md
new file mode 100644
index 0000000000..194a3ac090
--- /dev/null
+++ b/2025/docker/README.md
@@ -0,0 +1,235 @@
+# Week 5: Docker Basics & Advanced Challenge
+
+Welcome to the Week 5 Docker Challenge! In this task, you will work with Docker concepts and tools taught by Shubham Bhaiya. This challenge covers the following topics:
+
+- **Introduction and Purpose:** Understand Docker’s role in modern development.
+- **Virtualization vs. Containerization:** Learn the differences and benefits.
+- **Build Kya Hota Hai (What Is a Build?):** Understand the Docker build process.
+- **Docker Terminologies:** Get familiar with key Docker terms.
+- **Docker Components:** Explore Docker Engine, images, containers, and more.
+- **Project Building Using Docker:** Containerize a sample project.
+- **Multi-stage Docker Builds / Distroless Images:** Optimize your images.
+- **Docker Hub (Push/Tag/Pull):** Manage and distribute your Docker images.
+- **Docker Volumes:** Persist data across container runs.
+- **Docker Networking:** Connect containers using networks.
+- **Docker Compose:** Orchestrate multi-container applications.
+- **Docker Scout:** Analyze your images for vulnerabilities and insights.
+
+Complete all the tasks below and document your steps, commands, and observations in a file named `solution.md`. Finally, share your experience on LinkedIn using the provided guidelines.
+
+---
+
+## Challenge Tasks
+
+### Task 1: Introduction and Conceptual Understanding
+1. **Write an Introduction:**
+   - In your `solution.md`, provide a brief explanation of Docker’s purpose in modern DevOps.
+   - Compare **Virtualization vs. Containerization** and explain why containerization is the preferred approach for microservices and CI/CD pipelines.
+
+---
+
+### Task 2: Create a Dockerfile for a Sample Project
+1. **Select or Create a Sample Application:**
+   - Choose a simple application (for example, a basic Node.js, Python, or Java app that prints “Hello, Docker!” or serves a simple web page).
+
+2. **Write a Dockerfile:**
+   - Create a `Dockerfile` that defines how to build an image for your application.
+   - Include comments in your Dockerfile explaining each instruction.
+   - Build your image using:
+     ```bash
+     docker build -t <your-username>/sample-app:latest .
+     ```
+
+3. **Verify Your Build:**
+   - Run your container locally to ensure it works as expected:
+     ```bash
+     docker run -d -p 8080:80 <your-username>/sample-app:latest
+     ```
+   - Verify the container is running with:
+     ```bash
+     docker ps
+     ```
+   - Check logs using:
+     ```bash
+     docker logs <container_id>
+     ```
+
+---
+
+### Task 3: Explore Docker Terminologies and Components
+1. **Document Key Terminologies:**
+   - In your `solution.md`, list and briefly describe key Docker terms such as image, container, Dockerfile, volume, and network.
+   - Explain the main Docker components (Docker Engine, Docker Hub, etc.) and how they interact.
+
+---
+
+### Task 4: Optimize Your Docker Image with Multi-Stage Builds
+1. **Implement a Multi-Stage Docker Build:**
+   - Modify your existing `Dockerfile` to include multi-stage builds.
+   - Aim to produce a lightweight, **distroless** (or minimal) final image.
+2. **Compare Image Sizes:**
+   - Build your image before and after the multi-stage build modification and compare their sizes using:
+     ```bash
+     docker images
+     ```
+3. **Document the Differences:**
+   - Explain in `solution.md` the benefits of multi-stage builds and the impact on image size.
+
+---
+
+### Task 5: Manage Your Image with Docker Hub
+1. **Tag Your Image:**
+   - Tag your image appropriately:
+     ```bash
+     docker tag <your-username>/sample-app:latest <your-username>/sample-app:v1.0
+     ```
+2. **Push Your Image to Docker Hub:**
+   - Log in to Docker Hub if necessary:
+     ```bash
+     docker login
+     ```
+   - Push the image:
+     ```bash
+     docker push <your-username>/sample-app:v1.0
+     ```
+3. **(Optional) Pull the Image:**
+   - Verify by pulling your image:
+     ```bash
+     docker pull <your-username>/sample-app:v1.0
+     ```
+
+---
+
+### Task 6: Persist Data with Docker Volumes
+1. **Create a Docker Volume:**
+   - Create a Docker volume:
+     ```bash
+     docker volume create my_volume
+     ```
+2. **Run a Container with the Volume:**
+   - Run a container using the volume to persist data:
+     ```bash
+     docker run -d -v my_volume:/app/data <your-username>/sample-app:v1.0
+     ```
+3. **Document the Process:**
+   - In `solution.md`, explain how Docker volumes help with data persistence and why they are useful.
+
+---
+
+### Task 7: Configure Docker Networking
+1. **Create a Custom Docker Network:**
+   - Create a custom Docker network:
+     ```bash
+     docker network create my_network
+     ```
+2. **Run Containers on the Same Network:**
+   - Run two containers (e.g., your sample app and a simple database like MySQL) on the same network to demonstrate inter-container communication:
+     ```bash
+     docker run -d --name sample-app --network my_network <your-username>/sample-app:v1.0
+     docker run -d --name my-db --network my_network -e MYSQL_ROOT_PASSWORD=root mysql:latest
+     ```
+3. **Document the Process:**
+   - In `solution.md`, describe how Docker networking enables container communication and its significance in multi-container applications.
+
+---
+
+### Task 8: Orchestrate with Docker Compose
+1. **Create a docker-compose.yml File:**
+   - Write a `docker-compose.yml` file that defines at least two services (e.g., your sample app and a database).
+   - Include definitions for services, networks, and volumes.
+2. **Deploy Your Application:**
+   - Bring up your application using:
+     ```bash
+     docker-compose up -d
+     ```
+   - Test the setup, then shut it down using:
+     ```bash
+     docker-compose down
+     ```
+3. **Document the Process:**
+   - Explain each service and configuration in your `solution.md`.
+
+---
+
+### Task 9: Analyze Your Image with Docker Scout
+1. **Run Docker Scout Analysis:**
+   - Execute Docker Scout on your image to generate a detailed report of vulnerabilities and insights:
+     ```bash
+     docker scout cves <your-username>/sample-app:v1.0
+     ```
+   - Alternatively, if available, run:
+     ```bash
+     docker scout quickview <your-username>/sample-app:v1.0
+     ```
+     to get a summarized view of the image’s security posture.
+   - **Optional:** Save the output to a file for further analysis:
+     ```bash
+     docker scout cves <your-username>/sample-app:v1.0 > scout_report.txt
+     ```
+
+2. **Review and Interpret the Report:**
+   - Carefully review the output and focus on:
+     - **List of CVEs:** Identify vulnerabilities along with their severity ratings (e.g., Critical, High, Medium, Low).
+     - **Affected Layers/Dependencies:** Determine which image layers or dependencies are responsible for the vulnerabilities.
+     - **Suggested Remediations:** Note any recommended fixes or mitigation strategies provided by Docker Scout.
+   - **Comparison Step:** If possible, compare this report with previous builds to assess improvements or regressions in your image's security posture.
+   - If Docker Scout is not available in your environment, document that fact and consider using an alternative vulnerability scanner (e.g., Trivy, Clair) for a comparative analysis.
+
+3. **Document Your Findings:**
+   - In your `solution.md`, provide a detailed summary of your analysis:
+     - List the identified vulnerabilities along with their severity levels.
+     - Specify which layers or dependencies contributed to these vulnerabilities.
+     - Outline any actionable recommendations or remediation steps.
+     - Reflect on how these insights might influence your image optimization or overall security strategy.
+   - **Optional:** Include screenshots or attach the saved report file (`scout_report.txt`) as evidence of your analysis.
+
+---
+
+### Task 10: Documentation and Critical Reflection
+1. **Update `solution.md`:**
+   - List all the commands and steps you executed.
+   - Provide explanations for each task and detail any improvements made (e.g., image optimization with multi-stage builds).
+2. **Reflect on Docker’s Impact:**
+   - Write a brief reflection on the importance of Docker in modern software development, discussing its benefits and potential challenges.
+
+---
+
+## 📢 How to Submit
+
+1. **Push Your Final Work:**
+   - Ensure that your complete project—including your `Dockerfile`, `docker-compose.yml`, `solution.md`, and any additional files (e.g., the Docker Scout report if saved)—is committed and pushed to your repository.
+   - Verify that all your changes are visible in your repository.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your working branch (e.g., `docker-challenge`) to the main repository.
+   - Use a clear and descriptive title, for example:
+     ```
+     Week 5 Challenge - DevOps Batch 9: Docker Basics & Advanced Challenge
+     ```
+   - In the PR description, include the following details:
+     - A brief summary of your approach and the tasks you completed.
+     - A list of the key Docker commands used during the challenge.
+     - Any insights or challenges you encountered (e.g., lessons learned from multi-stage builds or Docker Scout analysis).
+
+3. **Share Your Experience on LinkedIn:**
+   - Write a LinkedIn post summarizing your Week 5 Docker challenge experience.
+   - In your post, include:
+     - A brief description of the challenge and what you learned.
+     - Screenshots, logs, or excerpts from your `solution.md` that highlight key steps or interesting findings (e.g., Docker Scout reports).
     - The hashtags: **#90DaysOfDevOps #Docker #DevOps**
     - Optionally, links to any blog posts or related GitHub repositories that further explain your journey.

---

## Additional Resources

- **[Docker Documentation](https://docs.docker.com/)**
- **[Docker Hub](https://docs.docker.com/docker-hub/)**
- **[Multi-stage Builds](https://docs.docker.com/develop/develop-images/multistage-build/)**
- **[Docker Compose](https://docs.docker.com/compose/)**
- **[Docker Scan (Vulnerability Scanning)](https://docs.docker.com/engine/scan/)**
- **[Containerization vs. Virtualization](https://www.docker.com/resources/what-container)**

---

Happy coding and best of luck with this Docker challenge! Document your journey thoroughly in `solution.md` and refer to these resources for additional guidance.

diff --git a/2025/git/01_Git_and_Github_Basics/README.md b/2025/git/01_Git_and_Github_Basics/README.md
new file mode 100644
index 0000000000..589e08c57c
--- /dev/null
+++ b/2025/git/01_Git_and_Github_Basics/README.md
@@ -0,0 +1,212 @@
# Week 4: Git and GitHub Challenge

Welcome to the Week 4 Challenge! In this task you will practice the essential Git and GitHub commands and concepts taught by Shubham Bhaiya. This includes:

- **Git Basics:** `git init`, `git add`, `git commit`
- **Repository Management:** `git clone`, forking a repository, and understanding how a GitHub repo is made
- **Branching:** Creating branches (`git branch`), switching between branches (`git switch` / `git checkout`), and viewing commit history (`git log`)
- **Authentication:** Pushing and pulling using a Personal Access Token (PAT)
- **Critical Thinking:** Explaining why branching strategies are important in collaborative development

To make this challenge more difficult, additional steps have been added. You will also be required to explore SSH authentication as a bonus task. Complete all the tasks and document every step in `solution.md`. Finally, share your experience on LinkedIn (details provided at the end).

---

## Challenge Tasks

### Task 1: Fork and Clone the Repository
1. **Fork the Repository:**
   - Visit [this repository](https://github.com/LondheShubham153/90DaysOfDevOps) and fork it to your own GitHub account, if you haven't already.

2. **Clone Your Fork Locally:**
   - Clone the forked repository using HTTPS:
     ```bash
     git clone <your-fork-url>
     ```
   - Change directory into the cloned repository:
     ```bash
     cd 2025/git/01_Git_and_Github_Basics
     ```

---

### Task 2: Initialize a Local Repository and Create a File
1. **Set Up Your Challenge Directory:**
   - Inside the cloned repository, create a new directory for this challenge:
     ```bash
     mkdir week-4-challenge
     cd week-4-challenge
     ```

2. **Initialize a Git Repository:**
   - Initialize the directory as a new Git repository:
     ```bash
     git init
     ```

3. **Create a File:**
   - Create a file named `info.txt` and add some initial content (for example, your name and a brief introduction).

4. **Stage and Commit Your File:**
   - Stage the file:
     ```bash
     git add info.txt
     ```
   - Commit the file with a descriptive message:
     ```bash
     git commit -m "Initial commit: Add info.txt with introductory content"
     ```

---

### Task 3: Configure Remote URL with PAT and Push/Pull

1. **Configure Remote URL with Your PAT:**
   To avoid entering your Personal Access Token (PAT) every time you push or pull, update your remote URL to include your credentials.

   **⚠️ Note:** Embedding your PAT in the URL is only for this exercise.
It is not recommended for production use.

   Replace `<your-username>` and `<your-PAT>` with your actual GitHub username and your PAT:

   ```bash
   git remote add origin https://<your-username>:<your-PAT>@github.com/<your-username>/90DaysOfDevOps.git
   ```
   If a remote named `origin` already exists, update it with:
   ```bash
   git remote set-url origin https://<your-username>:<your-PAT>@github.com/<your-username>/90DaysOfDevOps.git
   ```
2. **Push Your Commit to Remote:**
   - Push your current branch (typically `main`) and set the upstream:
     ```bash
     git push -u origin main
     ```
3. **(Optional) Pull Remote Changes:**
   - Verify your configuration by pulling changes:
     ```bash
     git pull origin main
     ```

---

### Task 4: Explore Your Commit History
1. **View the Git Log:**
   - Check your commit history using:
     ```bash
     git log
     ```
   - Take note of the commit hash and details as you will reference these in your documentation.

---

### Task 5: Advanced Branching and Switching
1. **Create a New Branch:**
   - Create a branch called `feature-update`:
     ```bash
     git branch feature-update
     ```

2. **Switch to the New Branch:**
   - Switch using `git switch`:
     ```bash
     git switch feature-update
     ```
   - Alternatively, you can use:
     ```bash
     git checkout feature-update
     ```

3. **Modify the File and Commit Changes:**
   - Edit `info.txt` (for example, add more details or improvements).
   - Stage and commit your changes:
     ```bash
     git add info.txt
     git commit -m "Feature update: Enhance info.txt with additional details"
     git push origin feature-update
     ```
   - Merge this branch to `main` via a Pull Request on GitHub.

4. **(Advanced) Optional Extra Challenge:**
   - If you feel confident, create another branch (e.g., `experimental`) from your main branch, make a conflicting change to `info.txt`, then switch back to `feature-update` and merge `experimental` to simulate a merge conflict. Resolve the conflict manually, then commit the resolution.
   > *Note: This extra step is optional and intended for those looking for an additional challenge.*

---

### Task 6: Explain Branching Strategies
1. **Document Your Process:**
   - Create (or update) a file named `solution.md` in your repository.
   - List all the Git commands you used in Tasks 1–5.
   - **Explain:** Write a brief explanation on **why branching strategies are important** in collaborative development. Consider addressing:
     - Isolating features and bug fixes
     - Facilitating parallel development
     - Reducing merge conflicts
     - Enabling effective code reviews

---

### Bonus Task: Explore SSH Authentication
1. **Generate an SSH Key (if not already set up):**
   - Create an SSH key pair:
     ```bash
     ssh-keygen -t ed25519
     ```
   - Follow the prompts and then locate your public key (typically found at `~/.ssh/id_ed25519.pub`).

2. **Add Your SSH Public Key to GitHub:**
   - Copy the contents of your public key and add it to your GitHub account under **SSH and GPG keys**.
     (See [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for help.)

3. **Switch Your Remote URL to SSH:**
   - Change the remote URL from HTTPS to SSH (you can verify the connection first — see the sketch below):
     ```bash
     git remote set-url origin git@github.com:<your-username>/90DaysOfDevOps.git
     ```

4. **Push Your Branch Using SSH:**
   - Test the SSH connection by pushing your branch:
     ```bash
     git push origin feature-update
     ```
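Before pushing, it can be worth checking the key setup on its own: `ssh -T git@github.com` attempts an authenticated connection and greets you by username on success — a quick optional check, not part of the original task:

```bash
# Expected reply: "Hi <your-username>! You've successfully authenticated,
# but GitHub does not provide shell access."
ssh -T git@github.com
```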
---

## 📢 How to Submit

1. **Push Your Final Work:**
   - Ensure your branch (e.g., `feature-update`) with the updated `solution.md` file is pushed to your fork.

2. **Create a Pull Request (PR):**
   - Open a PR from your branch to the main repository.
   - Use a clear title such as:
     ```
     Week 4 Challenge - DevOps Batch 9: Git & GitHub Advanced Challenge
     ```
   - In the PR description, summarize your process and list the Git commands you used.

3. **Share Your Experience on LinkedIn:**
   - Write a LinkedIn post summarizing your Week 4 experience.
   - Include screenshots or logs of your tasks.
   - Use hashtags: **#90DaysOfDevOps #GitGithub #DevOps**
   - Optionally, share any blog posts, GitHub repos, or articles you create about this challenge.

---

## Additional Resources

- **Git Documentation:**
  [https://git-scm.com/docs](https://git-scm.com/docs)

- **Creating a Personal Access Token:**
  [GitHub PAT Setup](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)

- **Forking and Cloning Repositories:**
  [Fork a Repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) | [Cloning a Repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository)

- **SSH Authentication with GitHub:**
  [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)

- **Understanding Branching Strategies:**
  [Git Branching Strategies](https://www.atlassian.com/git/tutorials/comparing-workflows)

---

Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck.

diff --git a/2025/git/02_Git_and_Github_Advanced/README.md b/2025/git/02_Git_and_Github_Advanced/README.md
new file mode 100644
index 0000000000..5b9e775252
--- /dev/null
+++ b/2025/git/02_Git_and_Github_Advanced/README.md
@@ -0,0 +1,208 @@
# Week 4: Git & GitHub Advanced Challenge

This challenge covers advanced Git concepts essential for real-world DevOps workflows. By the end of this challenge, you will:

- Understand how to work with Pull Requests effectively.
- Learn to undo changes using Reset & Revert.
- Use Stashing to manage uncommitted work.
- Apply Cherry-picking for selective commits.
- Keep a clean commit history using Rebasing.
- Learn industry-standard Branching Strategies.

## **Topics Covered**
1. Pull Requests – Collaborating in teams.
2. Reset & Revert – Undo changes safely.
3. Stashing – Saving work temporarily.
4. Cherry-picking – Selecting specific commits.
5. Rebasing – Maintaining a clean history.
6. Branching Strategies – Industry best practices.

## **Challenge Tasks**

### **Task 1: Working with Pull Requests (PRs)**
**Scenario:** You are working on a new feature and need to merge your changes into the main branch using a Pull Request.

1. Fork a repository and clone it locally.
   ```bash
   git clone <your-fork-url>
   cd <repo-name>
   ```
2. Create a feature branch and make changes.
   ```bash
   git checkout -b feature-branch
   echo "New Feature" >> feature.txt
   git add .
   git commit -m "Added a new feature"
   ```
3. Push the changes and create a Pull Request.
   ```bash
   git push origin feature-branch
   ```
4. Open a PR on GitHub, request a review, and merge it once approved (or try the optional CLI sketch below).

**Document in `solution.md`**
- Steps to create a PR.
- Best practices for writing PR descriptions.
- Handling review comments.
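If you prefer the terminal, GitHub's `gh` CLI can open the same PR — an optional alternative to the web UI, assuming `gh` is installed and authenticated (not required by the task):

```bash
# Open a PR from feature-branch into the upstream default branch
gh pr create --base main --head feature-branch \
  --title "Added a new feature" \
  --body "Adds feature.txt; see solution.md for the full walkthrough."
```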
---

### **Task 2: Undoing Changes – Reset & Revert**
**Scenario:** You accidentally committed incorrect changes and need to undo them.

1. Create and modify a file.
   ```bash
   echo "Wrong code" >> wrong.txt
   git add .
   git commit -m "Committed by mistake"
   ```
2. Soft Reset (keeps changes staged).
   ```bash
   git reset --soft HEAD~1
   ```
3. Mixed Reset (unstages changes but keeps files).
   ```bash
   git reset --mixed HEAD~1
   ```
4. Hard Reset (removes all changes).
   ```bash
   git reset --hard HEAD~1
   ```
5. Revert a commit safely.
   ```bash
   git revert HEAD
   ```

**Document in `solution.md`**
- Differences between `reset` and `revert`.
- When to use each method.

---

### **Task 3: Stashing - Save Work Without Committing**
**Scenario:** You need to switch branches but don’t want to commit incomplete work.

1. Modify a file without committing.
   ```bash
   echo "Temporary Change" >> temp.txt
   git add temp.txt
   ```
2. Stash the changes.
   ```bash
   git stash
   ```
3. Switch to another branch and apply the stash.
   ```bash
   git checkout main
   git stash pop
   ```

**Document in `solution.md`**
- When to use `git stash`.
- Difference between `git stash pop` and `git stash apply`.

---

### **Task 4: Cherry-Picking - Selectively Apply Commits**
**Scenario:** A bug fix exists in another branch, and you only want to apply that specific commit.

1. Find the commit to cherry-pick.
   ```bash
   git log --oneline
   ```
2. Apply a specific commit to the current branch.
   ```bash
   git cherry-pick <commit-hash>
   ```
3. Resolve conflicts if any.
   ```bash
   git cherry-pick --continue
   ```

**Document in `solution.md`**
- How cherry-picking is used in bug fixes.
- Risks of cherry-picking.

---

### **Task 5: Rebasing - Keeping a Clean Commit History**
**Scenario:** Your branch is behind the main branch and needs to be updated without extra merge commits.

1. Fetch the latest changes.
   ```bash
   git fetch origin main
   ```
2. Rebase the feature branch onto main.
   ```bash
   git rebase origin/main
   ```
3. Resolve conflicts and continue.
   ```bash
   git rebase --continue
   ```

**Document in `solution.md`**
- Difference between `merge` and `rebase`.
- Best practices for rebasing.

---

### **Task 6: Branching Strategies Used in Companies**
**Scenario:** Understand real-world branching strategies used in DevOps workflows.

1. Research and explain Git workflows:
   - Git Flow (Feature, Release, Hotfix branches).
   - GitHub Flow (Main + Feature branches).
   - Trunk-Based Development (Continuous Integration).

2. Simulate a Git workflow using branches.
   ```bash
   git branch feature-1
   git branch hotfix-1
   git checkout feature-1
   ```

**Document in `solution.md`**
- Which strategy is best for DevOps and CI/CD.
- Pros and cons of different workflows.

---

## **How to Submit**

1. **Push your work to GitHub.**
   ```bash
   git add .
   git commit -m "Completed Git & GitHub Advanced Challenge"
   git push origin main
   ```

2. **Create a Pull Request.**
   - Title:
     ```
     Git & GitHub Advanced Challenge - Completed
     ```
   - PR Description:
     - Steps followed for each task.
     - Screenshots or logs (if applicable).

3. **Share Your Experience on LinkedIn:**
   - Write a LinkedIn post summarizing your Week 4 Git & GitHub challenge experience.
   - In your post, include:
     - A brief description of the challenge and what you learned.
     - Screenshots or excerpts from your `solution.md` that highlight key steps or interesting findings.
     - The hashtags: **#90DaysOfDevOps #Git #GitHub #VersionControl #DevOps**
     - Optionally, links to any blog posts or related GitHub repositories that further explain your journey.

---

## **Additional Resources**
- [Git Official Documentation](https://git-scm.com/doc)
- [Git Reset & Revert Guide](https://www.atlassian.com/git/tutorials/resetting-checking-out-and-reverting)
- [Git Stash Explained](https://git-scm.com/book/en/v2/Git-Tools-Stashing-and-Cleaning)
- [Cherry-Picking Best Practices](https://www.atlassian.com/git/tutorials/cherry-pick)
- [Branching Strategies for DevOps](https://www.atlassian.com/git/tutorials/comparing-workflows)

---

Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck.

diff --git a/2025/kubernetes/README.md b/2025/kubernetes/README.md
new file mode 100644
index 0000000000..030d3fd81b
--- /dev/null
+++ b/2025/kubernetes/README.md
@@ -0,0 +1,299 @@
# Week 7: Kubernetes Basics & Advanced Challenges

This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp), you'll gain practical experience with advanced Kubernetes topics, including architecture, core objects, networking, storage management, configuration, autoscaling, security & access control, job scheduling, and bonus topics like Helm, Service Mesh, or AWS EKS.

> [!IMPORTANT]
>
> 1. Fork the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp) and implement all tasks on your fork.
> 2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork.
> 3. Submit your `solution.md` file in the Week 7 (Kubernetes) task folder of the 90DaysOfDevOps repository.

---

## Task 1: Understand Kubernetes Architecture & Deploy a Sample Pod

**Scenario:**
Familiarize yourself with Kubernetes’ control plane and worker node components, then deploy a simple Pod manually.

**Steps:**
1. **Study Kubernetes Architecture:**
   - Review the roles of control plane components (API Server, Scheduler, Controller Manager, etcd, Cloud Controller) and worker node components (Kubelet, Container Runtime, Kube Proxy).
2. **Deploy a Sample Pod:**
   - Create a YAML file (e.g., `pod.yaml`) to deploy a simple Pod (such as an NGINX container) — a minimal example follows below.
   - Apply the YAML using:
     ```bash
     kubectl apply -f pod.yaml
     ```
3. **Document in `solution.md`:**
   - Describe the Kubernetes architecture components.
   - Include your Pod YAML and explain each section.
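A minimal `pod.yaml` along these lines would satisfy the task (the name and labels are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # illustrative name
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest  # any simple container image works here
      ports:
        - containerPort: 80
```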
> [!NOTE]
>
> **Interview Questions:**
> - Can you explain how the Kubernetes control plane components work together and the role of etcd in this architecture?
> - If a Pod fails to start, what steps would you take to diagnose the issue?

---

## Task 2: Deploy and Manage Core Kubernetes Objects

**Scenario:**
Deploy core Kubernetes objects for the SpringBoot BankApp application, including Deployments, ReplicaSets, StatefulSets, DaemonSets, and use Namespaces to isolate resources.

**Steps:**
1. **Create a Namespace:**
   - Write a YAML file to create a Namespace for the SpringBoot BankApp application.
   - Apply the YAML:
     ```bash
     kubectl apply -f namespace.yaml
     ```
2. **Deploy a Deployment:**
   - Create a YAML file for a Deployment (within your Namespace) that manages a set of Pods running a component of SpringBoot BankApp.
   - Verify that a ReplicaSet is created automatically.
3. **Deploy a StatefulSet:**
   - Write a YAML file for a StatefulSet (for example, for a database component) and apply it.
4. **Deploy a DaemonSet:**
   - Create a YAML file for a DaemonSet to run a Pod on every node.
5. **Document in `solution.md`:**
   - Include the YAML files for the Namespace, Deployment, StatefulSet, and DaemonSet.
   - Explain the differences between these objects and when to use each.

> [!NOTE]
>
> **Interview Questions:**
> - How does a Deployment ensure that the desired state of Pods is maintained in a cluster?
> - Can you explain the differences between a Deployment, StatefulSet, and DaemonSet, and provide an example scenario for each?

---

## Task 3: Networking & Exposure – Create Services, Ingress, and Network Policies

**Scenario:**
Expose your SpringBoot BankApp application to internal and external traffic by creating Services and configuring an Ingress, while using Network Policies to secure communication.

**Steps:**
1. **Create a Service:**
   - Write a YAML file for a Service of type ClusterIP.
   - Modify the Service type to NodePort or LoadBalancer and apply the YAML.
2. **Configure an Ingress:**
   - Create an Ingress resource to route external traffic to your application.
3. **Implement a Network Policy:**
   - Write a YAML file for a Network Policy that restricts traffic to your application Pods.
4. **Document in `solution.md`:**
   - Include the YAML files for your Service, Ingress, and Network Policy.
   - Explain the differences between Service types and the roles of Ingress and Network Policies.

> [!NOTE]
>
> **Interview Questions:**
> - How do NodePort and LoadBalancer Services differ in terms of exposure and use cases?
> - What is the role of a Network Policy in Kubernetes, and can you describe a scenario where it is essential?

---

## Task 4: Storage Management – Use Persistent Volumes and Claims

**Scenario:**
Deploy a component of the SpringBoot BankApp application that requires persistent storage by creating Persistent Volumes (PV), Persistent Volume Claims (PVC), and a StorageClass for dynamic provisioning.

**Steps:**
1. **Create a Persistent Volume and Claim:**
   - Write YAML files for a static PV and a corresponding PVC (see the sketch after this task).
2. **Deploy an Application Using the PVC:**
   - Modify a Pod or Deployment YAML to mount the PVC.
3. **Document in `solution.md`:**
   - Include your PV, PVC, and application YAML.
   - Explain how StorageClasses facilitate dynamic storage provisioning.
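One possible static PV/PVC pair — names, size, and the `bankapp` namespace are illustrative, and `hostPath` is only suitable for local or single-node test clusters. The empty `storageClassName` pins both objects to static binding so a default StorageClass doesn't trigger dynamic provisioning instead:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bankapp-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # "" = static binding, no dynamic provisioning
  hostPath:
    path: /data/bankapp       # node-local path for local practice only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bankapp-pvc
  namespace: bankapp          # assumed namespace from Task 2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # must match the PV for static binding
  resources:
    requests:
      storage: 1Gi
```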
> [!NOTE]
>
> **Interview Questions:**
> - What are the main differences between a Persistent Volume and a Persistent Volume Claim?
> - How does a StorageClass simplify storage management in Kubernetes?

---

## Task 5: Configuration & Secrets Management with ConfigMaps and Secrets

**Scenario:**
Deploy a component of the SpringBoot BankApp application that consumes external configuration and sensitive data using ConfigMaps and Secrets.

**Steps:**
1. **Create a ConfigMap:**
   - Write a YAML file for a ConfigMap containing configuration data.
2. **Create a Secret:**
   - Write a YAML file for a Secret containing sensitive information.
3. **Deploy an Application:**
   - Update your application YAML to mount the ConfigMap and Secret.
4. **Document in `solution.md`:**
   - Include the YAML files and explain how the application uses these resources.

> [!NOTE]
>
> **Interview Questions:**
> - How would you update a running application if a ConfigMap or Secret is modified?
> - What measures do you take to secure Secrets in Kubernetes?

---

## Task 6: Autoscaling & Resource Management

**Scenario:**
Implement autoscaling for a component of the SpringBoot BankApp application using the Horizontal Pod Autoscaler (HPA). Optionally, explore Vertical Pod Autoscaling (VPA) and ensure the Metrics Server is running.

**Steps:**
1. **Deploy an Application with Resource Requests:**
   - Deploy an application with defined resource requests and limits.
2. **Create an HPA Resource:**
   - Write a YAML file for an HPA that scales the number of replicas based on CPU or memory usage.
3. **(Optional) Implement VPA & Metrics Server:**
   - Optionally, deploy a VPA and verify that the Metrics Server is running.
4. **Document in `solution.md`:**
   - Include the YAML files and explain how HPA (and optionally VPA) work.
   - Discuss the benefits of autoscaling in production.

> [!NOTE]
>
> **Interview Questions:**
> - What is the process by which the Horizontal Pod Autoscaler scales an application?
> - In what scenarios would vertical scaling (VPA) be more beneficial than horizontal scaling (HPA)?

---

## Task 7: Security & Access Control

**Scenario:**
Secure your Kubernetes cluster by implementing Role-Based Access Control (RBAC) and additional security measures.

### Part A: RBAC Implementation
**Steps:**
1. **Configure RBAC:**
   - Create roles and role bindings using YAML files for specific user groups (e.g., Admin, Developer, Tester).
2. **Create Test Accounts:**
   - Simulate real-world usage by creating user accounts for each role and verifying access.
3. **Optional Enhancement:**
   - Simulate an unauthorized action (e.g., a Developer attempting to delete a critical resource) and document how RBAC prevents it.
   - Analyze RBAC logs (if available) to verify that unauthorized access attempts are recorded.
4. **Document in `solution.md`:**
   - Include screenshots or logs of your RBAC configuration.
   - Describe the roles, permissions, and potential risks mitigated by proper RBAC implementation.

> [!NOTE]
>
> **Interview Questions:**
> - How do RBAC policies help secure a multi-team Kubernetes environment?
> - Can you provide an example of how improper RBAC could compromise a cluster?

### Part B: Additional Security Controls
**Steps:**
1. **Set Up Taints & Tolerations:**
   - Apply taints to nodes and specify tolerations in your Pod specifications.
2. **Define a Pod Disruption Budget (PDB):**
   - Write a YAML file for a PDB to ensure a minimum number of Pods remain available during maintenance (a small example follows below).
3. **Document in `solution.md`:**
   - Include the YAML files and explain how taints, tolerations, and PDBs contribute to cluster stability and security.
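A PDB for the BankApp Deployment can be as small as this (the label selector and threshold are illustrative and must match your Deployment's Pod labels):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: bankapp-pdb
  namespace: bankapp        # assumed namespace from Task 2
spec:
  minAvailable: 1           # at least one Pod must survive voluntary disruptions
  selector:
    matchLabels:
      app: bankapp          # must match the Pod labels of your Deployment
```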
> [!NOTE]
>
> **Interview Questions:**
> - How do taints and tolerations ensure that critical workloads are isolated from interference?
> - Why are Pod Disruption Budgets important for maintaining application availability?

---

## Task 8: Job Scheduling & Custom Resources

**Scenario:**
Manage scheduled tasks and extend Kubernetes functionality by creating Jobs, CronJobs, and a Custom Resource Definition (CRD).

**Steps:**
1. **Create a Job and CronJob:**
   - Write YAML files for a Job (a one-time task) and a CronJob (a scheduled task).
2. **Create a Custom Resource Definition (CRD):**
   - Write a YAML file for a CRD and use `kubectl` to create a custom resource.
3. **Document in `solution.md`:**
   - Include the YAML files and explain the use cases for Jobs, CronJobs, and CRDs.
   - Reflect on how CRDs extend Kubernetes capabilities.

> [!NOTE]
>
> **Interview Questions:**
> - What factors would influence your decision to use a CronJob versus a Job?
> - How do CRDs enable custom extensions in Kubernetes?

---

## Task 9: Bonus Task: Advanced Deployment with Helm, Service Mesh, or EKS

**Scenario:**
For an added challenge, deploy a component of the SpringBoot BankApp application using Helm, implement a basic Service Mesh (e.g., Istio), or deploy your cluster on AWS EKS.

**Steps:**
1. **Helm Deployment:**
   - Create a Helm chart for your application.
   - Deploy the application using Helm and perform an update.
   - *OR*
2. **Service Mesh Implementation:**
   - Deploy a basic Service Mesh (using Istio, Linkerd, or Consul) and demonstrate traffic management between services.
   - *OR*
3. **Deploy on AWS EKS:**
   - Set up an EKS cluster and deploy your application there.
4. **Document in `solution.md`:**
   - Include your Helm chart files, Service Mesh configuration, or EKS deployment details.
   - Explain the advantages of using Helm, a Service Mesh, or EKS in a production environment.

> [!NOTE]
>
> **Interview Questions:**
> - How does Helm simplify application deployments in Kubernetes?
> - What are the benefits of using a Service Mesh in a microservices architecture?
> - How does deploying on AWS EKS compare with managing your own Kubernetes cluster?

---

## How to Submit

1. **Push Your Final Work to GitHub:**
   - Ensure all files (e.g., Manifest files, scripts, solution.md, etc.) are committed and pushed to your 90DaysOfDevOps repository.

2. **Create a Pull Request (PR):**
   - Open a PR from your branch (e.g., `kubernetes-challenge`) to the main repository.
   - **Title:**
     ```
     Week 7 Challenge - DevOps Batch 9: Kubernetes Basics & Advanced Challenge
     ```
   - **PR Description:**
     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.

3. **Share Your Experience on LinkedIn:**
   - Write a post summarizing your Kubernetes challenge experience.
   - Include key takeaways, challenges faced, and insights (e.g., architecture, autoscaling, security, job scheduling, and advanced deployments).
   - Use the hashtags: **#90DaysOfDevOps #Kubernetes #DevOps #InterviewPrep**
   - Optionally, provide links to your fork or blog posts detailing your journey.
---

## TrainWithShubham Resources for Kubernetes

- **[Kubernetes Short Notes](https://www.trainwithshubham.com/products/6515573bf42fc83942cd112e?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)**
- **[Kubernetes One-Shot Video](https://youtu.be/W04brGNgxN4?si=oPscVYz0VFzZig8Q)**
- **[TWS blog on Kubernetes](https://trainwithshubham.blog/)**

---

## Additional Resources

- **[Kubernetes Official Documentation](https://kubernetes.io/docs/)**
- **[Kubernetes Concepts](https://kubernetes.io/docs/concepts/)**
- **[Helm Documentation](https://helm.sh/docs/)**
- **[Istio Documentation](https://istio.io/latest/docs/)**
- **[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)**
- **[Kubernetes Networking](https://kubernetes.io/docs/concepts/services-networking/)**
- **[Kubernetes Storage](https://kubernetes.io/docs/concepts/storage/)**
- **[Kubernetes Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)**
- **[Kubernetes Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)**

---

Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.

diff --git a/2025/linux/README.md b/2025/linux/README.md
new file mode 100644
index 0000000000..3add4b5e6a
--- /dev/null
+++ b/2025/linux/README.md
@@ -0,0 +1,107 @@
# Week 2: Linux System Administration & Automation

Welcome to **Week 2** of the **90 Days of DevOps - 2025 Edition**! This week, we dive into **Linux system administration and automation**, covering essential topics such as **user management, file permissions, log analysis, process control, volume mounts, and shell scripting**.

---

## 🚀 Project: DevOps Linux Server Monitoring & Automation
Imagine you're managing a **Linux-based production server** and need to ensure that **users, logs, and processes** are well-managed. You will perform real-world tasks such as **log analysis, volume management, and automation** to enhance your DevOps skills.

---

## 📌 Tasks

### **1️⃣ User & Group Management**
- Learn about Linux **users, groups, and permissions** (`/etc/passwd`, `/etc/group`).
- **Task:**
  - Create a user `devops_user` and add them to a group `devops_team`.
  - Set a password and grant **sudo** access.
  - Restrict SSH login for certain users in `/etc/ssh/sshd_config`.

---

### **2️⃣ File & Directory Permissions**
- **Task:**
  - Create `/devops_workspace` and a file `project_notes.txt`.
  - Set permissions:
    - **Owner can edit**, **group can read**, **others have no access**.
  - Use `ls -l` to verify permissions.

---

### **3️⃣ Log File Analysis with AWK, Grep & Sed**
Logs are crucial in DevOps! You’ll analyze logs using the **Linux_2k.log** file from **LogHub** ([GitHub Repo](https://github.com/logpai/loghub/blob/master/Linux/Linux_2k.log)).

- **Task:**
  - **Download the log file** from the repository.
  - **Extract insights using commands** (example one-liners follow below):
    - Use `grep` to find all occurrences of the word **"error"**.
    - Use `awk` to extract **timestamps and log levels**.
    - Use `sed` to replace all IP addresses with **[REDACTED]** for security.
  - **Bonus:** Find the most frequent log entry using `awk` or `sort | uniq -c | sort -nr | head -10`.
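One way these analysis commands might look — the `awk` field numbers assume the file's syslog-style layout (month, day, time, host, component) and are assumptions to adjust against the actual file:

```bash
# All lines containing "error", case-insensitive
grep -i "error" Linux_2k.log

# Timestamp fields plus the logging component
awk '{print $1, $2, $3, $5}' Linux_2k.log

# Mask IPv4 addresses before sharing logs (prints to stdout)
sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/[REDACTED]/g' Linux_2k.log

# Ten most frequent log lines (also covers the Bonus)
sort Linux_2k.log | uniq -c | sort -nr | head -10
```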
---

### **4️⃣ Volume Management & Disk Usage**
- **Task:**
  - Create a directory `/mnt/devops_data`.
  - Mount a new volume (or loop device for local practice).
  - Verify using `df -h` and `mount | grep devops_data`.

---

### **5️⃣ Process Management & Monitoring**
- **Task:**
  - Start a background process (`ping google.com > ping_test.log &`).
  - Use `ps`, `top`, and `htop` to monitor it.
  - Kill the process and verify it's gone.

---

### **6️⃣ Automate Backups with Shell Scripting**
- **Task:**
  - Write a shell script to back up `/devops_workspace` as `backup_$(date +%F).tar.gz`.
  - Save it in `/backups` and schedule it using `cron`.
  - Make the script display a success message in **green text** using `echo -e`.

---

## 🎯 Bonus Tasks (Optional 🚀)
1. Find the **top 5 most common log messages** in `Linux_2k.log` using `awk` and `sort`.
2. Use `find` to list **all files modified in the last 7 days**.
3. Write a script that extracts and displays only **ERROR and WARNING logs** from `Linux_2k.log`.

---

## 📢 How to Submit
- **Write a LinkedIn post** summarizing your Week 2 experience.
- Include screenshots or logs of your tasks.
- **Use hashtags**: `#90DaysOfDevOps` `#LinuxAdmin` `#DevOps`
- Share any blog posts, GitHub repos, or articles you create.

---

## 📚 Resources to Get Started
- [Linux In One Shot](https://youtu.be/e01GGTKmtpc?si=FSVNFRwdNC0NZeba)
- [Linux_2k.log (LogHub)](https://github.com/logpai/loghub/blob/master/Linux/Linux_2k.log)

---

## 📝 Example Submission Post
```markdown
Week 2 of #90DaysOfDevOps2025 done! 🏆

✅ Managed users & SSH access
✅ Set up permissions & volumes
✅ Analyzed logs using AWK & grep
✅ Automated backups with a shell script

Check out my blog here: [Your Blog/GitHub Link]

#Linux #SysAdmin #DevOps
```

---

Happy learning, and see you in **Week 3**! 🚀

diff --git a/2025/networking/README.md b/2025/networking/README.md
new file mode 100644
index 0000000000..2abf0e5cf0
--- /dev/null
+++ b/2025/networking/README.md
@@ -0,0 +1,64 @@
# Week 1: Networking Challenge

Welcome to Week 1 of the **90 Days of DevOps - 2025 Edition**! This week's focus is on **Networking**, a foundational skill for every DevOps professional. Let's dive into understanding key networking concepts, tools, and tasks essential for building a strong DevOps career.

## Tasks

### 1. **Understand OSI & TCP/IP Models**
- Learn about the OSI and TCP/IP models, including their layers and purposes.
- **Task:** Write examples of how each layer applies to real-world scenarios (e.g., HTTP at the Application Layer, TCP at the Transport Layer).

### 2. **Protocols and Ports for DevOps**
- Study the most commonly used protocols (e.g., HTTP, HTTPS, FTP, SSH, DNS) and their port numbers.
- **Task:** Create a blog, article, GitHub page, or README listing these protocols and explaining their relevance to DevOps workflows.

### 3. **AWS EC2 and Security Groups**
- Launch an AWS EC2 instance (free tier is fine).
- Learn about Security Groups, their rules, and their significance in securing cloud instances.
- **Task:** Write a step-by-step guide or blog on how to create and configure Security Groups.

### 4. **Hands-On with Networking Commands**
- Practice essential networking commands like:
  - `ping` (check connectivity)
  - `traceroute` / `tracert` (trace packet routes)
  - `netstat` (network statistics)
  - `curl` (make HTTP requests)
  - `dig` / `nslookup` (DNS lookup)
- **Task:** Create a cheat sheet or short guide explaining the purpose and usage of each command (a starting point is sketched below).
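A starting point for the cheat sheet — the flags shown are common choices, not the only options:

```bash
ping -c 4 google.com            # reachability and round-trip latency (4 probes)
traceroute google.com           # hop-by-hop path (use `tracert` on Windows)
netstat -tulnp                  # listening TCP/UDP sockets and owning processes
curl -I https://example.com     # HTTP HEAD request — status line and headers only
dig example.com +short          # DNS A-record lookup (`nslookup` is the older tool)
```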
---

## How to Submit
- Create a LinkedIn post summarizing your Week 1 Networking Challenge experience.
- Include the link to your blog, GitHub page, or README in the comments of your post.
- **Tip:** Use an eye-catching image or flow diagram relevant to networking concepts for better reach and engagement.

---

## Resources to Get Started
- [OSI Model Explained (GeeksforGeeks)](https://www.geeksforgeeks.org/layers-of-osi-model/)
- [Common Networking Protocols](https://en.wikipedia.org/wiki/List_of_network_protocols)
- [AWS Free Tier](https://aws.amazon.com/free/)
- [DNS Basics by Cloudflare](https://www.cloudflare.com/learning/dns/what-is-dns/)
- [Docker Networking](https://docs.docker.com/network/)

Feel free to explore these resources and expand your learning!

---

### Example Submission Post:
"Week 1 of #90DaysOfDevOps2025 completed! 🚀

✅ Learned OSI & TCP/IP models
✅ Explored AWS Security Groups
✅ Practiced networking commands
✅ Set up my first web server

Check out my blog here: [Your Blog/GitHub Link]

#Networking #DevOps #90DaysOfDevOps"

---

Good luck, and happy networking! 🌐

diff --git a/2025/observability/README.md b/2025/observability/README.md
new file mode 100644
index 0000000000..3363f243d3
--- /dev/null
+++ b/2025/observability/README.md
@@ -0,0 +1,185 @@
# Week 10: Observability Challenge with Prometheus and Grafana on KIND/EKS

This challenge is part of the 90DaysOfDevOps program and focuses on solving advanced, production-grade observability scenarios using Prometheus and Grafana. You will deploy, configure, and fine-tune monitoring and alerting systems on a KIND cluster, and as a bonus, monitor and log an AWS EKS cluster. This exercise is designed to push your skills with advanced configurations, custom queries, dynamic dashboards, and robust alerting mechanisms, while preparing you for technical interviews.

**Important:**
1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork.
2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork.
3. Submit your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository.

---

## Task 1: Setup a KIND Cluster for Observability

**Real-World Scenario:**
Simulate a production-like Kubernetes environment locally by creating a KIND cluster to serve as the foundation for your monitoring setup.

**Steps:**
1. **Install KIND:**
   - Follow the official KIND installation guide.
2. **Create a KIND Cluster:**
   - Run:
     ```bash
     kind create cluster --name observability-cluster
     ```
3. **Verify the Cluster:**
   - Run `kubectl get nodes` and capture the output.
4. **Document in `solution.md`:**
   - Include installation steps, the commands used, and output from `kubectl get nodes`.

**Interview Questions:**
- What are the benefits and limitations of using KIND for production-like testing?
- How can you simulate production scenarios using a local KIND cluster?

---

## Task 2: Deploy Prometheus on KIND with Advanced Configurations

**Real-World Scenario:**
Deploy Prometheus on your KIND cluster with a custom configuration that includes advanced scrape settings and relabeling rules to ensure high-quality metric collection.

**Steps:**
1. 
**Create a Custom Prometheus Configuration:**
   - Write a `prometheus.yml` with custom scrape configurations targeting cluster components (e.g., kube-state-metrics, Node Exporter) and advanced relabeling rules to clean up metric labels.
2. **Deploy Prometheus:**
   - Deploy Prometheus using a Kubernetes Deployment or via a Helm chart.
3. **Verify and Tune:**
   - Access the Prometheus UI to verify that metrics are being scraped as expected.
   - Adjust relabeling rules and scrape intervals to optimize performance.
4. **Document in `solution.md`:**
   - Include your `prometheus.yml` and screenshots of the Prometheus UI showing active targets and effective relabeling.

**Interview Questions:**
- How do advanced relabeling rules refine metric collection in Prometheus?
- What performance issues might you encounter when scraping targets on a KIND cluster, and how would you address them?

---

## Task 3: Deploy Grafana and Build Production-Grade Dashboards

**Real-World Scenario:**
Deploy Grafana on your KIND cluster and configure it to use Prometheus as a data source. Then, create dashboards that reflect real production metrics, including custom queries and complex visualizations.

**Steps:**
1. **Deploy Grafana:**
   - Create a Kubernetes Deployment and Service for Grafana.
2. **Configure the Data Source:**
   - In the Grafana UI, add Prometheus as a data source.
3. **Design Production Dashboards:**
   - Create dashboards with panels that display key metrics (e.g., CPU, memory, disk I/O, network latency) using advanced PromQL queries.
   - Customize panel visualizations (e.g., graphs, tables, heatmaps) to present data effectively.
4. **Document in `solution.md`:**
   - Include configuration details, screenshots of dashboards, and an explanation of the queries and visualization choices.

**Interview Questions:**
- What factors are critical when designing dashboards for production monitoring?
- How do you optimize PromQL queries for performance and clarity in Grafana?

---

## Task 4: Configure Alerting and Notification Rules

**Real-World Scenario:**
Establish robust alerting to detect critical issues (e.g., resource exhaustion, node failures) and notify the operations team immediately.

**Steps:**
1. **Define Alerting Rules:**
   - Add alerting rules in `prometheus.yml` or configure Prometheus Alertmanager for specific conditions (one sample rule is sketched after this task).
2. **Configure Notification Channels:**
   - Set up Grafana (or Alertmanager) to send notifications via email, Slack, or another channel.
3. **Test Alerts:**
   - Simulate alert conditions (e.g., by temporarily reducing resources) to verify that notifications are sent.
4. **Document in `solution.md`:**
   - Include your alerting configuration, screenshots of triggered alerts, and a brief rationale for chosen thresholds.

**Interview Questions:**
- How do you design effective alerting rules to minimize false positives in production?
- What challenges do you face in configuring notifications for a dynamic environment?
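For instance, a CPU alert could be declared like this in a rules file loaded by Prometheus — the threshold and durations are illustrative, and the expression assumes the Node Exporter metrics from Task 5 are available:

```yaml
groups:
  - name: node-alerts
    rules:
      - alert: HighCPUUsage
        # Average CPU busy-time per instance over the last 5 minutes, above 80%
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU on {{ $labels.instance }}"
```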
---

## Task 5: Deploy Node Exporter for Enhanced System Metrics

**Real-World Scenario:**
Enhance system monitoring by deploying Node Exporter on your KIND cluster to collect detailed metrics such as CPU, memory, disk, and network usage, which are critical for troubleshooting production issues.

**Steps:**
1. **Deploy Node Exporter:**
   - Create a Deployment or DaemonSet to deploy Node Exporter across all nodes in your KIND cluster.
2. **Verify Metrics Collection:**
   - Ensure Node Exporter endpoints are correctly scraped by Prometheus.
3. **Document in `solution.md`:**
   - Include your Node Exporter YAML configuration and screenshots showing metrics collected in Prometheus.
   - Explain the importance of system-level metrics in production monitoring.

**Interview Questions:**
- What additional system metrics does Node Exporter provide that are crucial for production?
- How would you integrate Node Exporter metrics into your existing Prometheus setup?

---

## Bonus Task: Monitor and Log an AWS EKS Cluster

**Real-World Scenario:**
For an added challenge, provision or use an existing AWS EKS cluster and set up Prometheus and Grafana to monitor and log its performance. This task simulates the observability of a production cloud environment.

**Steps:**
1. **Provision an EKS Cluster:**
   - Use Terraform to deploy an EKS cluster (or leverage an existing one) and document key configuration settings.
2. **Deploy Prometheus and Grafana on EKS:**
   - Configure Prometheus with appropriate scrape targets for the EKS cluster.
   - Deploy Grafana and integrate it with Prometheus.
3. **Integrate Logging (Optional):**
   - Optionally, configure a logging solution (e.g., Fluentd or CloudWatch) to capture EKS logs.
4. **Document in `solution.md`:**
   - Summarize your EKS provisioning steps, Prometheus and Grafana configurations, and any logging integration.
   - Explain how monitoring and logging improve observability in a cloud environment.

**Interview Questions:**
- What are the key challenges of monitoring an EKS cluster versus a local KIND cluster?
- How would you integrate logging with monitoring tools to ensure comprehensive observability?

---

## How to Submit

1. **Push Your Final Work to GitHub:**
   - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all files (Prometheus and Grafana configurations, Node Exporter YAML, Terraform files for the bonus task, `solution.md`, etc.) are committed and pushed to your fork.

2. **Create a Pull Request (PR):**
   - Open a PR from your branch (e.g., `observability-challenge`) to the main repository.
   - **Title:**
     ```
     Week 10 Challenge - Observability Challenge (Prometheus & Grafana on KIND/EKS)
     ```
   - **PR Description:**
     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.

3. **Submit Your Documentation:**
   - **Important:** Place your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository.

4. **Share Your Experience on LinkedIn:**
   - Write a post summarizing your Observability challenge experience.
   - Include key takeaways, challenges faced, and insights (e.g., KIND/EKS setup, advanced configurations, dashboard creation, alerting strategies, and Node Exporter integration).
   - Use the hashtags: **#90DaysOfDevOps #Prometheus #Grafana #KIND #EKS #Observability #DevOps #InterviewPrep**
   - Optionally, provide links to your repository or blog posts detailing your journey.
---

## TrainWithShubham Resources for Observability

- **[Prometheus & Grafana One-Shot Video](https://youtu.be/DXZUunEeHqM?si=go1m-THyng7Ipyu6)**

---

## Additional Resources

- **[Prometheus Official Documentation](https://prometheus.io/docs/)**
- **[Grafana Official Documentation](https://grafana.com/docs/)**
- **[Alertmanager Documentation](https://prometheus.io/docs/alerting/latest/alertmanager/)**
- **[Kubernetes Monitoring with Prometheus](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)**
- **[Grafana Dashboards](https://grafana.com/grafana/dashboards/)**

---

Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.

diff --git a/2025/projects/README.md b/2025/projects/README.md
new file mode 100644
index 0000000000..8b13789179
--- /dev/null
+++ b/2025/projects/README.md
@@ -0,0 +1 @@

diff --git a/2025/shell_scripting/README.md b/2025/shell_scripting/README.md
new file mode 100644
index 0000000000..e8792c3280
--- /dev/null
+++ b/2025/shell_scripting/README.md
@@ -0,0 +1,130 @@
## Week 3 Challenge 1: User Account Management

In this challenge, you will create a bash script that provides options for managing user accounts on the system. The script should allow users to perform various user account-related tasks based on command-line arguments.

### Part 1: Account Creation

1. Implement an option `-c` or `--create` that allows the script to create a new user account. The script should prompt the user to enter the new username and password.

2. Ensure that the script checks whether the username is available before creating the account. If the username already exists, display an appropriate message and exit gracefully.

3. After creating the account, display a success message with the newly created username.

### Part 2: Account Deletion

1. Implement an option `-d` or `--delete` that allows the script to delete an existing user account. The script should prompt the user to enter the username of the account to be deleted.

2. Ensure that the script checks whether the username exists before attempting to delete the account. If the username does not exist, display an appropriate message and exit gracefully.

3. After successfully deleting the account, display a confirmation message with the deleted username.

### Part 3: Password Reset

1. Implement an option `-r` or `--reset` that allows the script to reset the password of an existing user account. The script should prompt the user to enter the username and the new password.

2. Ensure that the script checks whether the username exists before attempting to reset the password. If the username does not exist, display an appropriate message and exit gracefully.

3. After resetting the password, display a success message with the username and the updated password.

### Part 4: List User Accounts

1. Implement an option `-l` or `--list` that allows the script to list all user accounts on the system. The script should display the usernames and their corresponding user IDs (UID).

### Part 5: Help and Usage Information

1. Implement an option `-h` or `--help` that displays usage information and the available command-line options for the script (a skeleton showing one way to wire up these options follows below).
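A dispatch skeleton for these options — one possible approach, with only create, list, and help filled in; delete and reset follow the same pattern:

```bash
#!/bin/bash
# user_management.sh — option-dispatch skeleton (a sketch, not a full solution)

usage() {
  echo "Usage: $0 [-c|--create] [-d|--delete] [-r|--reset] [-l|--list] [-h|--help]"
}

case "$1" in
  -c|--create)
    read -rp "Enter new username: " username
    if id "$username" &>/dev/null; then
      echo "Error: user '$username' already exists."; exit 1
    fi
    read -rsp "Enter password: " password; echo
    sudo useradd -m "$username"
    echo "$username:$password" | sudo chpasswd
    echo "Success: account '$username' created."
    ;;
  -l|--list)
    # Username and UID for every account on the system
    awk -F: '{print $1, "UID:", $3}' /etc/passwd
    ;;
  -h|--help)
    usage
    ;;
  *)
    usage; exit 1
    ;;
esac
```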
### Bonus Points (Optional)

If you want to challenge yourself further, you can add additional features to the script, such as:

- Displaying more detailed information about user accounts (e.g., home directory, shell, etc.).
- Allowing the modification of user account properties (e.g., username, user ID, etc.).

Remember to handle errors gracefully, provide appropriate user prompts, and add comments to explain the logic and purpose of each part of the script.

## [Example Interaction: User Account Management Script](./example_interaction_with_usr_acc_mgmt.md)

## Submission Instructions

Create a bash script named `user_management.sh` that implements the User Account Management as described in the challenge.

Add comments in the script to explain the purpose and logic of each part.

## Week 3 Challenge 2: Automated Backup & Recovery using Cron

This is another challenge in the Bash Scripting Challenge! In this challenge, you will create a bash script that performs a backup of a specified directory and implements a rotation mechanism to manage backups.

## Challenge Description

Your task is to create a bash script that takes a directory path as a command-line argument and performs a backup of the directory. The script should create timestamped backup folders and copy all the files from the specified directory into the backup folder.

Additionally, the script should implement a rotation mechanism to keep only the last 3 backups. This means that if there are more than 3 backup folders, the oldest backup folders should be removed to ensure only the most recent backups are retained.

> The script will create a timestamped backup folder inside the specified directory and copy all the files into it. It will also check for existing backup folders and remove the oldest backups to keep only the last 3 backups.

## Example Usage

Assume the script is named `backup_with_rotation.sh`. Here's how it might look, assuming the script is executed on different dates:

1. First execution day (2023-07-30), with the script run three times during the day:

```
$ ./backup_with_rotation.sh /home/user/documents
```

Output across the three runs:

```
Backup created: /home/user/documents/backup_2023-07-30_12-30-45
Backup created: /home/user/documents/backup_2023-07-30_15-20-10
Backup created: /home/user/documents/backup_2023-07-30_18-40-55
```

After this execution, the /home/user/documents directory will contain the following items:

```
backup_2023-07-30_12-30-45
backup_2023-07-30_15-20-10
backup_2023-07-30_18-40-55
file1.txt
file2.txt
...
```

2. Second Execution (2023-08-01):

```
$ ./backup_with_rotation.sh /home/user/documents
```

Output:

```
Backup created: /home/user/documents/backup_2023-08-01_09-15-30
```

After this execution, the /home/user/documents directory will contain the following items:

```
backup_2023-07-30_15-20-10
backup_2023-07-30_18-40-55
backup_2023-08-01_09-15-30
file1.txt
file2.txt
...
```

In this example, the script creates backup folders with timestamped names and retains only the last 3 backups while removing the older backups.

## Submission Instructions

Create a bash script named `backup_with_rotation.sh` that implements the Directory Backup with Rotation as described in the challenge (a starting sketch follows below).

Add comments in the script to explain the purpose and logic of each part.
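One way the core of the script might be structured — the timestamp format matches the example output above; treat this as a starting point, not the required solution:

```bash
#!/bin/bash
# backup_with_rotation.sh — timestamped backups plus keep-last-3 rotation (sketch)

target_dir="$1"
[ -d "$target_dir" ] || { echo "Usage: $0 /path/to/directory"; exit 1; }

timestamp=$(date +%F_%H-%M-%S)               # e.g., 2023-07-30_12-30-45
backup_dir="$target_dir/backup_$timestamp"
mkdir -p "$backup_dir"

# Copy only regular files at the top level, skipping earlier backup folders
find "$target_dir" -maxdepth 1 -type f -exec cp {} "$backup_dir/" \;
echo "Backup created: $backup_dir"

# Rotation: list backups newest-first and delete everything beyond the third
ls -dt "$target_dir"/backup_* | tail -n +4 | xargs -r rm -rf
```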
Good luck with the User Account Management and Backup & Recovery challenges! They will test your ability to interact with user input, manage user accounts, and perform administrative tasks using bash scripting. Happy scripting and managing user accounts!

diff --git a/2025/terraform/README.md b/2025/terraform/README.md
new file mode 100644
index 0000000000..26a696d37c
--- /dev/null
+++ b/2025/terraform/README.md
@@ -0,0 +1,228 @@
# Week 8: Terraform (Infrastructure as Code) Challenge

This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate complex, real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop), you'll gain practical experience with advanced Terraform topics, including provisioning, state management, variables, modules, workspaces, resource lifecycle management, drift detection, and environment management.

**Important:**
1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork.
2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork.
3. Submit your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository.

---

## Task 1: Install Terraform, Initialize, and Provision a Basic Resource

**Scenario:**
Begin by installing Terraform, initializing a project, and provisioning a basic resource (e.g., an AWS EC2 instance) to validate your setup.

**Steps:**
1. **Install Terraform:**
   - Download and install Terraform on your local machine.
2. **Initialize a Terraform Project:**
   - Create a new directory for your Terraform project.
   - Run `terraform init` to initialize the project.
3. **Provision a Basic Resource:**
   - Create a configuration file (e.g., `main.tf`) to provision an AWS EC2 instance (or a similar resource for your cloud provider).
   - Run `terraform apply` and confirm the changes.
4. **Document in `solution.md`:**
   - Include the installation steps, your `main.tf` file, and the output of your `terraform apply` command.

**Interview Questions:**
- How does Terraform manage resource creation and state?
- What is the significance of the `terraform init` command in a new project?

---

## Task 2: Manage Terraform State with a Remote Backend

**Scenario:**
Ensuring state consistency is critical when multiple team members work on infrastructure. Configure a remote backend (e.g., AWS S3 with DynamoDB for locking) to store your Terraform state file.

**Steps:**
1. **Configure a Remote Backend:**
   - Add a backend configuration to your `main.tf` (or a separate backend file) pointing at your remote state store — a sample S3/DynamoDB block follows below.
2. **Reinitialize Terraform:**
   - Run `terraform init` to reinitialize your project with the new backend.
3. **Document in `solution.md`:**
   - Include the backend configuration details.
   - Explain the benefits of using a remote backend and state locking in collaborative environments.
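A typical S3 backend with DynamoDB locking looks roughly like this — the bucket, key, and table names are placeholders, and both the bucket and the table (with a `LockID` hash key) must exist before `terraform init`:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # pre-created S3 bucket
    key            = "week8/terraform.tfstate"   # path of the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # pre-created lock table
    encrypt        = true
  }
}
```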
**Interview Questions:**
- Why is remote state management important in Terraform?
- How does state locking prevent conflicts during collaborative updates?

---

## Task 3: Use Variables, Outputs, and Workspaces

**Scenario:**
Improve the flexibility and reusability of your Terraform configuration by using variables, outputs, and workspaces to manage multiple environments.

**Steps:**
1. **Define Variables and Outputs:**
   - Create a `variables.tf` file to define configurable parameters (e.g., region, instance type).
   - Create an `outputs.tf` file to output key information (e.g., public IP address of the EC2 instance).
2. **Implement Workspaces:**
   - Use `terraform workspace new` to create separate workspaces for different environments (e.g., dev, staging, prod).
3. **Document in `solution.md`:**
   - Include your `variables.tf`, `outputs.tf`, and a summary of your workspace setup.
   - Explain how these features enable dynamic and multi-environment deployments.

**Interview Questions:**
- How do variables and outputs enhance the reusability of Terraform configurations?
- What is the purpose of workspaces in Terraform, and how would you use them in a production scenario?

---

## Task 4: Create and Use Terraform Modules

**Scenario:**
Enhance reusability by creating a Terraform module for commonly used resources, and integrate it into your main configuration.

**Steps:**
1. **Create a Module:**
   - In a separate directory (e.g., `modules/ec2_instance`), create a module with `main.tf`, `variables.tf`, and `outputs.tf` for provisioning an EC2 instance.
2. **Reference the Module:**
   - Update your main configuration to call the module using a `module` block.
3. **Document in `solution.md`:**
   - Provide the module code and the main configuration.
   - Explain how modules promote consistency and reduce code duplication.

**Interview Questions:**
- What are the advantages of using modules in Terraform?
- How would you structure a module for reusable infrastructure components?

---

## Task 5: Resource Dependencies and Lifecycle Management

**Scenario:**
Ensure correct resource creation order and safe updates by managing dependencies and customizing resource lifecycles.

**Steps:**
1. **Define Resource Dependencies:**
   - Use the `depends_on` meta-argument in your configuration to specify dependencies explicitly.
2. **Configure Resource Lifecycles:**
   - Add lifecycle blocks (e.g., `create_before_destroy`) in your resource definitions to manage updates safely.
3. **Document in `solution.md`:**
   - Include examples of resource dependencies and lifecycle configurations in your code.
   - Explain how these settings prevent downtime during updates.

**Interview Questions:**
- How does Terraform handle resource dependencies?
- Can you explain the purpose of the `create_before_destroy` lifecycle argument?

---

## Task 6: Infrastructure Drift Detection and Change Management

**Scenario:**
In production, changes might occur outside of Terraform. Use Terraform commands to detect infrastructure drift and manage changes.

**Steps:**
1. **Detect Drift:**
   - Run `terraform plan` to identify differences between your configuration and the actual infrastructure (see the exit-code sketch below).
2. **Reconcile Changes:**
   - Describe your approach to updating the state or reapplying configurations when drift is detected.
3. **Document in `solution.md`:**
   - Include examples of drift detection and your strategy for reconciling differences.
   - Reflect on the importance of change management in infrastructure as code.

**Interview Questions:**
- What is infrastructure drift, and why is it a concern in production environments?
- How would you resolve discrepancies between your Terraform configuration and actual infrastructure?
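In scripts and CI jobs, the documented `-detailed-exitcode` flag makes drift machine-readable: `terraform plan` then exits 0 when nothing changed, 1 on error, and 2 when changes (drift) are pending. A minimal check:

```bash
terraform plan -detailed-exitcode >/dev/null
case $? in
  0) echo "No drift: infrastructure matches the configuration." ;;
  2) echo "Drift or pending changes detected — review the plan." ;;
  *) echo "terraform plan failed." ;;
esac
```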
+
+---
+
+## Task 7: (Optional) Dynamic Pipeline Parameterization for Terraform
+
+**Scenario:**
+Enhance your Terraform configurations by using dynamic input parameters and conditional logic to deploy resources differently based on environment-specific values.
+
+**Steps:**
+1. **Enhance Variables with Conditionals:**
+   - Update your `variables.tf` to include default values and conditional expressions for environment-specific configurations.
+2. **Apply Conditional Logic:**
+   - Use conditional expressions in your resource definitions to adjust attributes based on variable values.
+3. **Document in `solution.md`:**
+   - Explain how dynamic parameterization improves flexibility.
+   - Include sample outputs demonstrating different configurations.
+
+**Interview Questions:**
+- How do conditional expressions in Terraform improve configuration flexibility?
+- Provide an example scenario where dynamic parameters are critical in a deployment pipeline.
+
+---
+
+
+### **Bonus Task: Multi-Environment Setup with Terraform & Ansible**
+
+**Scenario:**
+Set up **AWS infrastructure** for multiple environments (dev, staging, prod) using **Terraform** for provisioning and **Ansible** for configuration. This includes installing both tools, creating dynamic inventories, and automating Nginx configuration across environments.
+
+1. **Install Tools:**
+   - Install **Terraform** and **Ansible** on your local machine.
+
+2. **Provision AWS Infrastructure with Terraform:**
+   - Create Terraform files to spin up EC2 instances (or similar resources) in dev, staging, and prod.
+   - Apply configurations (e.g., `terraform apply -var-file="dev.tfvars"`) for each environment.
+
+3. **Configure Hosts with Ansible:**
+   - Generate **dynamic inventories** (or separate inventory files) based on Terraform outputs.
+   - Write a playbook to install and configure **Nginx** across all environments.
+   - Run `ansible-playbook -i <your-inventory> nginx_setup.yml` to automate the setup.
+
+4. **Automate & Document:**
+   - Ensure infrastructure changes are version-controlled.
+   - Place all steps, commands, and observations in `solution.md`.
+
+**Interview Questions:**
+- **Terraform & Ansible Integration:** How do you share Terraform outputs (host details) with Ansible inventories?
+- **Multi-Environment Management:** What strategies ensure consistency while keeping dev, staging, and prod isolated?
+- **Nginx Configuration:** How do you handle environment-specific differences for Nginx setups?
+
+---
+
+## How to Submit
+
+1. **Push Your Final Work to GitHub:**
+   - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all Terraform files (configuration files, modules, variable files, `solution.md`, etc.) are committed and pushed to your fork.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your branch (e.g., `terraform-challenge`) to the main repository.
+   - **Title:**
+     ```
+     Week 8 Challenge - Terraform Infrastructure as Code Challenge
+     ```
+   - **PR Description:**
+     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.
+
+3. **Submit Your Documentation:**
+   - **Important:** Place your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository.
+
+4. **Share Your Experience on LinkedIn:**
+   - Write a post summarizing your Terraform challenge experience.
+   - Include key takeaways, challenges faced, and insights (e.g., state management, module usage, drift detection, multi-environment setups).
+   - Use the hashtags: **#90DaysOfDevOps #Terraform #DevOps #InterviewPrep**
+   - Optionally, provide links to your fork or blog posts detailing your journey.
+
+---
+
+## TrainWithShubham Resources for Terraform
+
+- **[Terraform Short Notes](https://www.trainwithshubham.com/products/66d5c45f7345de4e9c1d8b05?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)**
+- **[Terraform One-Shot Video](https://youtu.be/S9mohJI_R34?si=QdRm-JrdKs8ZswXZ)**
+- **[Multi-Environment Setup Blog](https://amitabhdevops.hashnode.dev/devops-project-multi-environment-infrastructure-with-terraform-and-ansible)**
+
+---
+
+## Additional Resources
+
+- **[Terraform Official Documentation](https://www.terraform.io/docs/)**
+- **[Terraform Providers](https://www.terraform.io/docs/providers/index.html)**
+- **[Terraform Modules](https://www.terraform.io/docs/modules/index.html)**
+- **[Terraform State Management](https://www.terraform.io/docs/state/index.html)**
+- **[Terraform Workspaces](https://www.terraform.io/docs/language/state/workspaces.html)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
diff --git a/2026/day-01/README.md b/2026/day-01/README.md
new file mode 100644
index 0000000000..118d4e7f4e
--- /dev/null
+++ b/2026/day-01/README.md
@@ -0,0 +1,99 @@
+# Day 01 – Introduction to DevOps and Cloud
+
+## Task
+Today’s goal is to **set the foundation for your DevOps journey**.
+
+You will create a **90-day personal DevOps learning plan** that clearly defines:
+- What is your understanding of DevOps and Cloud Engineering?
+- Why are you starting to learn DevOps & Cloud?
+- Where do you want to reach?
+- How will you stay consistent every single day?
+
+This is not a generic plan.
+This is your **career execution blueprint** for the next 90 days.
+
+---
+
+## Expected Output
+By the end of today, you should have:
+
+- A markdown file named:
+  `learning-plan.md`
+
+or
+
+- A handwritten plan for the next 90 days (Recommended)
+
+
+The file/note should clearly reflect your intent, discipline, and seriousness toward becoming a DevOps engineer.
+
+---
+
+## Guidelines
+Follow these rules while creating your plan:
+
+- Mention your **current level**
+  (student / fresher / working professional / non-IT background, etc.)
+- Define **3 clear goals** for the next 90 days
+  (example: deploy a production-grade application on Kubernetes)
+- Define **3 core DevOps skills** you want to build
+  (example: Linux troubleshooting, CI/CD pipelines, Kubernetes debugging)
+- Allocate a **weekly time budget**
+  (example: 2–2.5 hours per day on weekdays, 4–6 hours on weekends)
+- Keep the document **under 1 page**
+- Be honest and realistic; consistency matters more than perfection
+
+---
+
+## Resources
+You may refer to:
+
+- TrainWithShubham [course curriculum](https://english.trainwithshubham.com/JOSH_BATCH_10_Syllabus_v1.pdf)
+- TrainWithShubham DevOps [roadmap](https://docs.google.com/spreadsheets/d/1eE-NhZQFr545LkP4QNhTgXcZTtkMFeEPNyVXAflXia0/edit?gid=2073716385#gid=2073716385)
+- Your own past experience and career aspirations
+
+Avoid over-researching today. The focus is **clarity**, not depth.
+
+---
+
+## Why This Matters for DevOps
+DevOps engineers succeed not just because of tools, but because of:
+
+- Discipline
+- Ownership
+- Long-term thinking
+- Ability to execute consistently
+
+In real jobs, no one tells you exactly what to do every day.
+This task trains you to **take ownership of your own growth**, just like a real DevOps engineer.
+
+A clear plan:
+- Reduces confusion
+- Prevents burnout
+- Keeps you focused during tough days
+
+---
+
+## Submission
+1. Fork this `90DaysOfDevOps` repository
+2. Navigate to the `2026/day-01/` folder
+3. Add your `learning-plan.md` file
+4. Commit and push your changes to your fork
+
+---
+
+## Learn in Public
+Share your Day 01 progress on LinkedIn:
+
+- Post 2–3 lines on why you’re starting **#90DaysOfDevOps**
+- Share one goal from your learning plan
+- Optional: screenshot of your markdown file or a professional picture
+
+Use hashtags:
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+
+
+Happy Learning
+**TrainWithShubham**
\ No newline at end of file
diff --git a/2026/day-01/learning-plan.md b/2026/day-01/learning-plan.md
new file mode 100644
index 0000000000..006a9d8521
--- /dev/null
+++ b/2026/day-01/learning-plan.md
@@ -0,0 +1,12 @@
+# My current level
+I am not a complete fresher but also not intermediate. I have basic knowledge of Linux commands, Docker, and EC2 instances.
+
+# Goals for the next 90 days
+Learn Python along with this DevOps course, focus on building projects, and show up every day no matter how I feel.
+
+# Core DevOps skills I want to build
+Docker containerisation, Linux along with networking, Kubernetes.
+
+# Weekly time budget
+4–5 hours/day on weekdays, 6–7 hours on weekends
+
diff --git a/2026/day-02/README.md b/2026/day-02/README.md
new file mode 100644
index 0000000000..d8910d0b82
--- /dev/null
+++ b/2026/day-02/README.md
@@ -0,0 +1,85 @@
+# Day 02 – Linux Architecture, Processes, and systemd
+
+## Task
+Today’s goal is to **understand how Linux works under the hood**.
+
+You will create a short note that explains:
+- The core components of Linux (kernel, user space, init/systemd)
+- How processes are created and managed
+- What systemd does and why it matters
+
+This is the foundation for all troubleshooting you will do as a DevOps engineer.
+
+---
+
+## Expected Output
+By the end of today, you should have:
+
+- A markdown file named:
+  `linux-architecture-notes.md`
+
+or
+
+- A handwritten set of notes (Recommended)
+
+Your notes should be clear enough that someone new to Linux can follow them.
+
+---
+
+## Guidelines
+Follow these rules while creating your notes:
+
+- Explain **process states** (running, sleeping, zombie, etc.)
+- List **5 commands** you would use daily
+- Keep it **short and practical** (under 1 page)
+- Use bullet points and short headings
+
+---
+
+## Resources
+You may refer to:
+
+- Linux `man` pages (`ps`, `top`, `systemctl`)
+- Official systemd docs
+- Your class notes
+
+Avoid copying/pasting AI-generated content.
+Focus on understanding.
+
+---
+
+## Why This Matters for DevOps
+Linux is the base OS for almost every production system.
+
+If you know how processes and systemd work, you can:
+- Debug crashed services faster
+- Fix CPU/memory issues
+- Understand logs and service restarts confidently
+
+This knowledge saves hours during incidents.
+
+---
+
+## Submission
+1. Fork this `90DaysOfDevOps` repository
+2. Navigate to the `2026/day-02/` folder
+3. Add your `linux-architecture-notes.md` file
+4. Commit and push your changes to your fork
+
+---
+
+## Learn in Public
+Share your Day 02 progress on LinkedIn:
+
+- Post 2–3 lines on what you learned about Linux internals
+- Share one systemd command you found useful
+- Optional: screenshot of your notes
+
+Use hashtags:
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+
+
+Happy Learning
+**TrainWithShubham**
\ No newline at end of file
diff --git a/2026/day-02/linux-architecture.md b/2026/day-02/linux-architecture.md
new file mode 100644
index 0000000000..b700963bba
--- /dev/null
+++ b/2026/day-02/linux-architecture.md
@@ -0,0 +1,48 @@
+# Core components of Linux
+
+i) Hardware layer
+ii) Shell
+iii) Kernel
+iv) System libraries / user applications
+v) System utilities (like GNU)
+
+# How processes are created in Linux
+
+* fork() System Call: A running "parent" process initiates the fork() system call to create a new, nearly identical "child" process.
+  The child process receives a copy of the parent's memory space, open file descriptors, and other resources.
+* exec() System Call: After the fork(), the child process typically uses an exec() system call (e.g., execve()) to replace its entire memory space with a new program's code and data.
+* wait() System Call: The parent process often uses the wait() system call to pause its own execution until its child process finishes and exits,
+  allowing the parent to collect the child's exit status and prevent it from becoming a zombie process.
+
+# Process states
+
+A process transitions through several states during its lifecycle:
+
+1. Running (R): The process is either currently executing on the CPU or waiting in the run queue to be executed.
+2. Sleeping/Waiting (S or D): The process is waiting for some event to occur (e.g., I/O completion, a signal).
+3. Stopped (T): The process has been suspended by a job control signal (like Ctrl+Z).
+4. Zombie (Z): The process has terminated, but its parent process has not yet collected its exit status, so its entry still exists in the process table.
+
+# What systemd does
+
+1. Initializes the System: It is the first user-space process to run during boot (PID 1).
+2. Manages Services: It starts, stops, and restarts background services (daemons) efficiently using "unit files" which define how services should behave [2].
+3. Provides System Logging: It includes journald, a centralized logging management system [1].
+4. Manages Devices and Mount Points: It uses udev (as part of the suite) to manage device events and automatically handle device hot-plugging [1].
+5. Enables Parallelism: It uses socket and D-Bus activation to start services in parallel, significantly speeding up boot times [2].
+
+# Why does it matter
+
+1. Standardization: It provides a consistent, standardized framework across many different Linux distributions, making system administration and development more uniform [2].
+2. Faster Boot Times: Its design allows for aggressive parallelization during startup, which dramatically decreases the time it takes for a system to become usable [2].
+3. Modern Features: It offers robust features essential for modern computing, such as cgroup management for resource control, on-demand service activation, and better security isolation for services [1, 2].
+
+# List of 5 commands that I will be using daily
+
+1. cd
+2. ls
+3. pwd
+4. touch
+5. man
+
+
diff --git a/2026/day-03/README.md b/2026/day-03/README.md
new file mode 100644
index 0000000000..5ccf2ccc98
--- /dev/null
+++ b/2026/day-03/README.md
@@ -0,0 +1,82 @@
+# Day 03 – Linux Commands Practice
+
+## Task
+Today’s goal is to **build your Linux command confidence**.
+
+You will create a cheat sheet of commands focused on:
+- Process management
+- File system
+- Networking troubleshooting
+
+This is the command toolkit you will reuse for years.
+
+---
+
+## Expected Output
+By the end of today, you should have:
+
+- A markdown file named:
+  `linux-commands-cheatsheet.md`
+
+or
+
+- A handwritten cheat sheet (Recommended)
+
+Your cheat sheet should be easy to scan during real troubleshooting.
+
+---
+
+## Guidelines
+Follow these rules while creating your cheat sheet:
+
+- Include **at least 20 commands** with one‑line usage notes
+- Add **3 networking commands** (`ping`, `ip addr`, `dig`, `curl`, etc.)
+- Group commands by category
+- Keep it concise and readable
+
+---
+
+## Resources
+You may refer to:
+
+- Linux `man` pages
+- Your class notes
+- Reliable Linux command references
+
+Don’t copy long lists. Focus on commands you understand.
+
+---
+
+## Why This Matters for DevOps
+Real production issues are solved at the command line.
+
+The faster you can inspect logs and network issues, the faster you can:
+- Restore service
+- Reduce downtime
+- Gain trust as an operator
+
+---
+
+## Submission
+1. Fork this `90DaysOfDevOps` repository
+2. Navigate to the `2026/day-03/` folder
+3. Add your `linux-commands-cheatsheet.md` file
+4. Commit and push your changes to your fork
+
+---
+
+## Learn in Public
+Share your Day 03 progress on LinkedIn:
+
+- Post 2–3 lines on your favorite Linux commands
+- Share one log command and one networking command
+- Optional: screenshot of your cheat sheet
+
+Use hashtags:
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+
+
+Happy Learning
+**TrainWithShubham**
\ No newline at end of file
diff --git a/2026/day-03/linux-commands-cheatsheet.md b/2026/day-03/linux-commands-cheatsheet.md
new file mode 100644
index 0000000000..5dbeac887d
--- /dev/null
+++ b/2026/day-03/linux-commands-cheatsheet.md
@@ -0,0 +1,35 @@
+# Commands focused on process management
+
+1. ps aux (lists running processes with detailed info)
+2. top (provides a list of running processes)
+3. htop (advanced version of top where the user can scroll horizontally and vertically)
+4. kill (sends a signal to terminate a process by its process ID)
+5. pkill (terminates a process by its name)
+
+# Use this command to see the Linux distribution and version
+* cat /etc/os-release
+
+# Very important command to know the usage of a command
+* man [type the command you want to get details of and it will give each and every detail about it]
+
+# Commands focused on file system
+
+1. ls (list directory contents)
+2. cd (change directory)
+3. pwd (print working directory)
+4. cp (copy files or directories)
+5. rm (remove file or directory)
+6. head (display first few lines of a file)
+7. tail (display last few lines of a file)
+8. chmod (change file permissions, i.e., rwx)
+9. chown (change file ownership)
+10. find (search for files in a directory hierarchy)
+11. tar (archive files)
+12. zip/unzip (compress and extract files)
+
+# Commands focused on networking and troubleshooting
+
+1. curl (transfer data from or to a server)
+2. wget (download files from the internet)
+3. ssh (secure shell to a remote server)
+4. 
ping (check connectivity to a host) diff --git a/2026/day-04/README.md b/2026/day-04/README.md new file mode 100644 index 0000000000..12fd91c7cf --- /dev/null +++ b/2026/day-04/README.md @@ -0,0 +1,85 @@ +# Day 04 – Linux Practice: Processes and Services + +## Task +Today’s goal is to **practice Linux fundamentals with real commands**. + +You will create a short practice note by actually running basic commands and capturing what you see: +- Check running processes +- Inspect one systemd service +- Capture a small troubleshooting flow + +This is hands-on. Keep it simple and focused on fundamentals. + +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `linux-practice.md` + +or + +- A hand written practice log (Recommended) + +Your note should show what you actually ran on your system. + +--- + +## Guidelines +Follow these rules while creating your practice note: + +- Run and record output for **at least 6 commands** +- Include **2 process commands** (`ps`, `top`, `pgrep`, etc.) +- Include **2 service commands** (`systemctl status`, `systemctl list-units`, etc.) +- Include **2 log commands** (`journalctl -u `, `tail -n 50`, etc.) +- Pick **one service on your system** (example: `ssh`, `cron`, `docker`) and inspect it +- Keep it **simple and actionable** + +Suggested structure for `linux-practice.md`: +- Process checks +- Service checks +- Log checks +- Mini troubleshooting steps + +--- + +## Resources +You may refer to: + +- Your notes from Day 02 and Day 03 +- Linux `man` pages +- Your class notes + +--- + +## Why This Matters for DevOps +Hands‑on practice builds speed and confidence. + +When issues happen in production, you won’t have time to search for basic commands. +This day helps you build muscle memory with Linux fundamentals. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-04/` folder +3. Add your `linux-practice.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 04 progress on LinkedIn: + +- Post 2–3 lines on the Linux commands you practiced +- Share one service you inspected and what you learned +- Optional: screenshot of your practice note + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + +Happy Learning +**TrainWithShubham** diff --git a/2026/day-04/linux-practise.md b/2026/day-04/linux-practise.md new file mode 100644 index 0000000000..79c31be6f1 --- /dev/null +++ b/2026/day-04/linux-practise.md @@ -0,0 +1,79 @@ +# Outcome of ps +ps + PID TTY TIME CMD + 1181 pts/1 00:00:00 sudo + 1182 pts/1 00:00:00 su + 1183 pts/1 00:00:00 bash + 1688 pts/1 00:00:00 ps + +# Output of top +1 root 20 0 22496 13704 9480 S 0.0 1.4 0:01.55 + 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 + 3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 + 4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 + 5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 + 6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 + +# Outcome of systemctl status +systemctl status 1180 +● session-2.scope - Session 2 of User ubuntu + Loaded: loaded (/run/systemd/transient/session-2.scope; transient) + Transient: yes + Active: active (running) since Thu 2026-01-29 06:37:11 UTC; 36min ago + Tasks: 9 + Memory: 93.7M (peak: 136.4M) + CPU: 5.584s + CGroup: /user.slice/user-1000.slice/session-2.scope + ├─ 875 "sshd: ubuntu [priv]" + ├─ 990 "sshd: ubuntu@pts/0" + ├─1027 -bash + ├─1180 sudo su + ├─1181 sudo su + ├─1182 su + ├─1183 bash + ├─1798 systemctl status 1180 + └─1799 less + +# Outcome of tail +tail -5 file +ssh .. 
+touch +vi +vim +nano + +# Outcome of crontab -l +crontab -l +# Edit this file to introduce tasks to be run by cron. +# +# Each task to run has to be defined through a single line +# indicating with different fields when the task will be run +# and what command to run for the task +# +# To define the time you can provide concrete values for +# minute (m), hour (h), day of month (dom), month (mon), +# and day of week (dow) or use '*' in these fields (for 'any'). +# +# Notice that tasks will be started based on the cron's system +# daemon's notion of time and timezones. +# +# Output of the crontab jobs (including errors) is sent through +# email to the user the crontab file belongs to (unless redirected). +# +# For example, you can run a backup of all your user accounts +# at 5 a.m every week with: +# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/ +# +# For more information see the manual pages of crontab(5) and cron(8) +# +# m h dom mon dow command + +0 3 * * * + + + +17 13 * * 4 echo "Weekend soon!" | mail -s "Reminder" gzeus5476@gmail.com + +# Outcome of journalctl +journalctl -u google.com +-- No entries -- diff --git a/2026/day-05/README.md b/2026/day-05/README.md new file mode 100644 index 0000000000..03b0b49f02 --- /dev/null +++ b/2026/day-05/README.md @@ -0,0 +1,102 @@ +# Day 05 – Linux Troubleshooting Drill: CPU, Memory, and Logs + +## Task +Today’s goal is to **run a focused troubleshooting drill**. + +You will pick a running process/service on your system and: +- Capture a quick health snapshot (CPU, memory, disk, network) +- Trace logs for that service +- Write a **mini runbook** describing what you did and what you’d do next if things were worse + +This turns yesterday’s practice into a repeatable troubleshooting routine. + +### What’s a runbook? +A **runbook** is a short, repeatable checklist you follow during an incident: the exact commands you run, what you observed, and the next actions if the issue persists. Keep it concise so you can reuse it under pressure. + +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `linux-troubleshooting-runbook.md` + +or + +- A hand written runbook (Recommended) + +Your runbook should include both the commands you ran and brief interpretations. + +--- + +## Guidelines +Follow these rules while creating your runbook: + +- Run and record output for **at least 8 commands** (save snippets in your runbook) + - **Environment basics (2):** `uname -a`, `lsb_release -a` (or `cat /etc/os-release`) + - **Filesystem sanity (2):** create a throwaway folder and file, e.g., `mkdir /tmp/runbook-demo`, `cp /etc/hosts /tmp/runbook-demo/hosts-copy && ls -l /tmp/runbook-demo` + - **CPU / Memory (2):** `top`/`htop`/`ps -o pid,pcpu,pmem,comm -p `, `free -h`, `vm_stat` (mac) + - **Disk / IO (2):** `df -h`, `du -sh /var/log`, `iostat`/`vmstat`/`dstat` + - **Network (2):** `ss -tulpn`/`netstat -tulpn`, `curl -I `/`ping` + - **Logs (2):** `journalctl -u -n 50`, `tail -n 50 /var/log/.log` +- Choose **one target service/process** (e.g., `ssh`, `cron`, `docker`, your web app) and stick to it for the drill. +- For each command, add a 1–2 line note on what you observed (e.g., “CPU spikes to 80% when restarting”, “No recent errors in last 50 lines”). +- End with a **“If this worsens”** section listing 3 next steps you would take (ex: restart strategy, increase log verbosity, collect `strace`). +- Keep it concise and actionable (aim for ~1 page). 
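+
+One way to bundle the snapshot commands above into a single capture you can paste into your runbook (a minimal sketch; `ssh` is a placeholder for whatever target service you chose):
+
+```bash
+SERVICE=ssh                                  # placeholder: your target service
+OUT="snapshot-$(date +%Y%m%d-%H%M%S).txt"
+{
+  uptime                                     # load averages
+  free -h                                    # memory
+  df -h                                      # disk
+  ss -tulpn                                  # listening sockets
+  systemctl status "$SERVICE" --no-pager     # service health
+  journalctl -u "$SERVICE" -n 50 --no-pager  # recent logs
+} > "$OUT" 2>&1
+echo "Snapshot saved to $OUT"
+```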
+ +Suggested structure for `linux-troubleshooting-runbook.md`: +- Target service / process +- Snapshot: CPU & Memory +- Snapshot: Disk & IO +- Snapshot: Network +- Logs reviewed +- Quick findings +- If this worsens (next steps) + +--- + +## Resources +You may refer to: + +- Notes from Day 02–04 +- Linux `man` pages (`top`, `ps`, `df`, `journalctl`, `ss/netstat`) +- Your class notes + +Avoid generic copy/paste. Use outputs from **your** machine. + +--- + +## Why This Matters for DevOps +Incidents rarely come with perfect clues. A fast, repeatable checklist saves minutes when services misbehave. + +This drill builds: +- Habit of capturing evidence before acting +- Confidence reading resource signals (CPU, memory, disk, network) +- Log-first mindset before restarts or escalations + +These habits reduce downtime and prevent guesswork in production. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-05/` folder +3. Add your `linux-troubleshooting-runbook.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 05 progress on LinkedIn: + +- Post 2–3 lines on the checks you ran and one insight +- Share the service you inspected and one “next step” from your runbook +- Optional: screenshot of your runbook + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + +Happy Learning +**TrainWithShubham** diff --git a/2026/day-05/linux-troubleshooting-runbook.md b/2026/day-05/linux-troubleshooting-runbook.md new file mode 100644 index 0000000000..8b0d70fe4a --- /dev/null +++ b/2026/day-05/linux-troubleshooting-runbook.md @@ -0,0 +1,70 @@ +# uname -a +Linux ip-172-31-21-199 6.14.0-1018-aws #18~24.04.1-Ubuntu SMP Mon Nov 24 19:46:27 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux + +# cat /etc/os-release +PRETTY_NAME="Ubuntu 24.04.3 LTS" +NAME="Ubuntu" +VERSION_ID="24.04" +VERSION="24.04.3 LTS (Noble Numbat)" +VERSION_CODENAME=noble +ID=ubuntu +ID_LIKE=debian +HOME_URL="https://www.ubuntu.com/" +SUPPORT_URL="https://help.ubuntu.com/" +BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" +PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" +UBUNTU_CODENAME=noble +LOGO=ubuntu-logo + +# lsb_release -a +No LSB modules are available. +Distributor ID: Ubuntu +Description: Ubuntu 24.04.3 LTS +Release: 24.04 +Codename: noble + +# ps -o pid + PID + 1322 + 1323 + 1324 + 1485 + +# free -h + total used free shared buff/cache a vailable +Mem: 957Mi 333Mi 397Mi 888Ki 383Mi 623Mi +Swap: 0B 0B 0B + +# df -h +Filesystem Size Used Avail Use% Mounted on +/dev/root 27G 2.2G 24G 9% / +tmpfs 479M 0 479M 0% /dev/shm +tmpfs 192M 872K 191M 1% /run +tmpfs 5.0M 0 5.0M 0% /run/lock +/dev/xvda16 881M 89M 730M 11% /boot +/dev/xvda15 105M 6.2M 99M 6% /boot/efi +tmpfs 96M 12K 96M 1% /run/user/1000 + +# du -sh +8.0K . + +# +ps aux +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +root 1 0.0 1.3 22060 13348 ? Ss 10:23 0:01 /sbin +root 2 0.0 0.0 0 0 ? S 10:23 0:00 [kthr + + + + + + + + + + + + + + + diff --git a/2026/day-06/README.md b/2026/day-06/README.md new file mode 100644 index 0000000000..12d4ff03fb --- /dev/null +++ b/2026/day-06/README.md @@ -0,0 +1,93 @@ +# Day 06 – Linux Fundamentals: Read and Write Text Files + +## Task +This is a **continuation of Day 05**, but much simpler. + +Today’s goal is to **practice basic file read/write** using only fundamental commands. 
+
+You will create a small text file and practice:
+- Creating a file
+- Writing text to a file
+- Appending new lines
+- Reading the file back
+
+Keep it basic and repeatable.
+
+---
+
+## Expected Output
+By the end of today, you should have:
+
+- the newly created files
+- A markdown file named:
+  `file-io-practice.md`
+
+or
+
+- A handwritten practice note (Recommended)
+
+Your note should include the commands you ran and what they did.
+
+---
+
+## Guidelines
+Follow these rules while creating your practice note:
+
+- Create a file named `notes.txt`
+- Write 3 lines into the file using **redirection** (`>` and `>>`)
+- Use **`cat`** to read the full file
+- Use **`head`** and **`tail`** to read parts of the file
+- Use **`tee`** once to write and display at the same time
+- Keep it short (8–12 lines total in the file)
+
+Suggested command flow:
+1. `touch notes.txt`
+2. `echo "Line 1" > notes.txt`
+3. `echo "Line 2" >> notes.txt`
+4. `echo "Line 3" | tee -a notes.txt`
+5. `cat notes.txt`
+6. `head -n 2 notes.txt`
+7. `tail -n 2 notes.txt`
+
+---
+
+## Resources
+Use these docs to understand the commands:
+
+- `touch` (create an empty file)
+- `cat` (read full file)
+- `head` and `tail` (read parts of a file)
+- `tee` (write and display at the same time)
+
+---
+
+## Why This Matters for DevOps
+Reading and writing files is a daily task in DevOps.
+
+Logs, configs, and scripts are all text files.
+If you can handle files quickly, you can debug and automate faster.
+
+---
+
+## Submission
+1. Fork this `90DaysOfDevOps` repository
+2. Navigate to the `2026/day-06/` folder
+3. Add your `file-io-practice.md` file
+4. Commit and push your changes to your fork
+
+---
+
+## Learn in Public
+Share your Day 06 progress on LinkedIn:
+
+- Post 2–3 lines on what you learned about file read/write
+- Share one command you will use often
+- Optional: screenshot of your notes
+
+Use hashtags:
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+
+Happy Learning
+**TrainWithShubham**
diff --git a/2026/day-06/file-io-practise.md b/2026/day-06/file-io-practise.md
new file mode 100644
index 0000000000..f3c5eca5dd
--- /dev/null
+++ b/2026/day-06/file-io-practise.md
@@ -0,0 +1,12 @@
+# touch notes.txt
+# echo "Hello everyone" > notes.txt
+# echo "Hope you all are doing good" >> notes.txt
+# echo "Have a nice day" | tee -a notes.txt
+Have a nice day
+# head -n 2 notes.txt
+Hello everyone
+Hope you all are doing good
+# tail -n 2 notes.txt
+Hope you all are doing good
+Have a nice day
+
diff --git a/2026/day-07/README.md b/2026/day-07/README.md
new file mode 100644
index 0000000000..613b241332
--- /dev/null
+++ b/2026/day-07/README.md
@@ -0,0 +1,256 @@
+# Day 07 – Linux File System Hierarchy & Scenario-Based Practice
+
+## Task
+Today's goal is to **understand where things live in Linux** and **practice troubleshooting like a DevOps engineer**.
+
+You will create notes covering:
+- Linux File System Hierarchy (the most important directories)
+- Practice solving real-world scenarios step by step
+
+This consolidates your Linux fundamentals and prepares you for real-world troubleshooting.
+
+---
+
+## Expected Output
+By the end of today, you should have:
+
+- A markdown file named:
+  `day-07-linux-fs-and-scenarios.md`
+
+or
+
+- A handwritten set of notes (Recommended)
+
+Your notes should have two sections: File System Hierarchy and Scenario Practice.
+ +--- + +## Guidelines + +### Part 1: Linux File System Hierarchy (30 minutes) + +Document the purpose of these **essential** directories: + +**Core Directories (Must Know):** +- `/` (root) - The starting point of everything +- `/home` - User home directories +- `/root` - Root user's home directory +- `/etc` - Configuration files +- `/var/log` - Log files (very important for DevOps!) +- `/tmp` - Temporary files + +**Additional Directories (Good to Know):** +- `/bin` - Essential command binaries +- `/usr/bin` - User command binaries +- `/opt` - Optional/third-party applications + +For each directory: +- Write 1-2 lines explaining what it contains +- Run `ls -l ` and note 1-2 files/folders you see +- Write one sentence: "I would use this when..." + +**Hands-on task:** +```bash +# Find the largest log file in /var/log +du -sh /var/log/* 2>/dev/null | sort -h | tail -5 + +# Look at a config file in /etc +cat /etc/hostname + +# Check your home directory +ls -la ~ +``` + +--- + +### Part 2: Scenario-Based Practice (40 minutes) + +**Important:** Focus on understanding the **troubleshooting flow**, not memorizing commands. Use the hints! + +--- + +#### SOLVED EXAMPLE: Understanding How to Approach Scenarios + +**Example Scenario: Check if a service is running** +``` +Question: How do you check if the 'nginx' service is running? +``` + +**My Solution (Step by step):** + +**Step 1:** Check service status +```bash +systemctl status nginx +``` +**Why this command?** It shows if the service is active, failed, or stopped + +**Step 2:** If service is not found, list all services +```bash +systemctl list-units --type=service +``` +**Why this command?** To see what services exist on the system + +**Step 3:** Check if service is enabled on boot +```bash +systemctl is-enabled nginx +``` +**Why this command?** To know if it will start automatically after reboot + +**What I learned:** Always check status first, then investigate based on what you see. + +--- + +Now try these scenarios yourself: + +--- + +**Scenario 1: Service Not Starting** +``` +A web application service called 'myapp' failed to start after a server reboot. +What commands would you run to diagnose the issue? +Write at least 4 commands in order. +``` + +**Hint:** +- First check: Is the service running or failed? +- Then check: What do the logs say? +- Finally check: Is it enabled to start on boot? + +**Commands to explore:** `systemctl status myapp`, `systemctl is-enabled myapp`, `journalctl -u myapp -n 50` + +**Resource:** Review Day 04 (Process and Services practice) + +**Template for your answer:** +``` +Step 1: [command] +Why: [one line explanation] + +Step 2: [command] +Why: [one line explanation] + +... +``` + +--- + +**Scenario 2: High CPU Usage** +``` +Your manager reports that the application server is slow. +You SSH into the server. What commands would you run to identify +which process is using high CPU? +``` + +**Hint:** +- Use a command that shows **live** CPU usage +- Look for processes sorted by CPU percentage +- Note the PID (Process ID) of the top process + +**Commands to explore:** `top` (press 'q' to quit), `htop`, `ps aux --sort=-%cpu | head -10` + +**Resource:** Review Day 05 (Troubleshooting Drill - CPU & Memory section) + +--- + +**Scenario 3: Finding Service Logs** +``` +A developer asks: "Where are the logs for the 'docker' service?" +The service is managed by systemd. +What commands would you use? 
+``` + +**Hint:** +- systemd services → logs are in journald +- Command pattern: `journalctl -u ` +- Use -n flag to limit number of lines +- Use -f flag to follow logs in real-time (like tail -f) + +**Commands to explore:** +```bash +# Check service status first +systemctl status ssh + +# View last 50 lines of logs +journalctl -u ssh -n 50 + +# Follow logs in real-time +journalctl -u ssh -f +``` + +**Resource:** Review Day 04 (Process and Services - Log checks section) + +--- + +**Scenario 4: File Permissions Issue** +``` +A script at /home/user/backup.sh is not executing. +When you run it: ./backup.sh +You get: "Permission denied" + +What commands would you use to fix this? +``` + +**Hint:** +- First: Check what permissions the file has +- Understand: Files need 'x' (execute) permission to run +- Fix: Add execute permission with chmod + +**Step-by-step solution structure:** +``` +Step 1: Check current permissions +Command: ls -l /home/user/backup.sh +Look for: -rw-r--r-- (notice no 'x' = not executable) + +Step 2: Add execute permission +Command: chmod +x /home/user/backup.sh + +Step 3: Verify it worked +Command: ls -l /home/user/backup.sh +Look for: -rwxr-xr-x (notice 'x' = executable) + +Step 4: Try running it +Command: ./backup.sh +``` + +**Resource:** Review Day 02 (File Permissions and Users Management) + +--- + +## Why This Matters for DevOps +Understanding the file system is critical for: +- Knowing where to find logs, configs, and binaries +- Troubleshooting deployment issues +- Writing automation scripts that work across systems + +Scenario-based practice prepares you for: +- Real production incidents +- DevOps interviews +- On-call troubleshooting under pressure + +These are questions you **will** face in interviews and during real incidents. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-07/` folder +3. Add your `day-07-linux-fs-and-scenarios.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 07 progress on LinkedIn: + +- Post 2–3 lines on what you learned about Linux file system +- Share one scenario you found challenging and how you solved it +- Optional: screenshot of your notes + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** diff --git a/2026/day-07/day-07-linux-fs-and-scenarios.md b/2026/day-07/day-07-linux-fs-and-scenarios.md new file mode 100644 index 0000000000..3b53d333a0 --- /dev/null +++ b/2026/day-07/day-07-linux-fs-and-scenarios.md @@ -0,0 +1,22 @@ +# du -sh /var/log 2>/dev/null | sort -n | tail -n 5 +132M /var/log + +# journalctl -u nginx | tail -n 1 +Feb 03 09:28:05 ip-172-31-21-199 systemd[1]: Started nginx.service - A high performance web server and a reverse proxy server. + +# cat /etc/hostname +ip-172-31-21-199 + +# systemctl is-enabled nginx +enabled + +# systemctl list-unit-files | tail -3 +xfs_scrub_all.timer disabled enabled + +410 unit files listed. + +# ps aux --sort=-%cpu | head -3 +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +root 1 0.3 1.3 22092 13420 ? Ss :27 0:04 /sbin/init +ubuntu 1335 0.2 0.7 14996 7140 ? 
S 09:31 0:01 sshd: ubuntu@pts/0 + diff --git a/2026/day-08/README.md b/2026/day-08/README.md new file mode 100644 index 0000000000..443decbe16 --- /dev/null +++ b/2026/day-08/README.md @@ -0,0 +1,146 @@ +# Day 08 – Cloud Server Setup: Docker, Nginx & Web Deployment + +## Task +Today's goal is to **deploy a real web server on the cloud** and learn practical server management. + +You will: +- Launch a cloud instance (AWS EC2 or Utho) +- Connect via SSH +- Install Nginx +- Configure security groups for web access (port 80 by default for nginx) +- Extract and save logs to a file +- Verify your webpage is accessible from the internet + +This is real DevOps work - exactly what you'll do in production. + +--- + +## Expected Output +By the end of today, you should have: + +1. A markdown file named: `day-08-cloud-deployment.md` +2. Screenshots showing: + - SSH connection to your server + - Nginx welcome page accessible from browser + - Log file contents +3. The log file: `nginx-logs.txt` + +--- + +## Prerequisites +- AWS account (Free Tier) OR Utho account +- Basic understanding of Linux commands (Days 1-7) +- SSH client (Terminal on Mac/Linux, PuTTY on Windows) + +--- + +## Guidelines + +### Part 1: Launch Cloud Instance & SSH Access (15 minutes) + +**Step 1: Create a Cloud Instance** + + +**Step 2: Connect via SSH** + + +--- + +### Part 2: Install Docker & Nginx (20 minutes) + +**Step 1: Update System** + + +**Step 3: Install Nginx** + +**Verify Nginx is running:** + +--- + +### Part 3: Security Group Configuration (10 minutes) + +**Test Web Access:** +Open browser and visit: `http://` + +You should see the **Nginx welcome page**! + +📸 **Screenshot this page** - you'll need it for submission + +--- + +### Part 4: Extract Nginx Logs (15 minutes) + +**Step 1: View Nginx Logs** + +**Step 2: Save Logs to File** + +**Step 3: Download Log File to Your Local Machine** +```bash +# On your local machine (new terminal window) +# For AWS: +scp -i your-key.pem ubuntu@:~/nginx-logs.txt . + +# For Utho: +scp root@:~/nginx-logs.txt . +``` + +--- + + +## Documentation Template + +Create your `day-08-cloud-deployment.md` with this structure: + +## Commands Used +[List the key commands you used] + +## Challenges Faced +[Describe any issues and how you solved them] + +## What I Learned +[3-5 bullet points of key learnings] + +--- + + +## Why This Matters for DevOps + +This exercise teaches you: +- **Cloud infrastructure provisioning** - launching and configuring servers +- **Remote server management** - SSH, security, access control +- **Service deployment** - installing and running applications +- **Log management** - accessing and analyzing logs +- **Security** - configuring firewalls and security groups + +These are core skills for any DevOps engineer working in production. + +--- + + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-08/` folder +3. Add your `day-08-cloud-deployment.md` file +4. Add your `nginx-logs.txt` file +5. Add screenshots (name them: `ssh-connection.png`, `nginx-webpage.png`, `docker-nginx.png`) +6. 
Commit and push your changes to your fork
+
+---
+
+## Learn in Public
+Share your Day 08 progress on LinkedIn:
+
+- Post 2-3 lines on deploying your first cloud server
+- Share a screenshot of your Nginx webpage
+- Mention one challenge you faced and solved
+- Optional: Share your instance IP (if comfortable)
+
+Use hashtags:
+```
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+```
+
+Happy Learning
+**TrainWithShubham**
diff --git a/2026/day-08/day-08-cloud-deployment.md b/2026/day-08/day-08-cloud-deployment.md
new file mode 100644
index 0000000000..969b3a7985
--- /dev/null
+++ b/2026/day-08/day-08-cloud-deployment.md
@@ -0,0 +1,61 @@
+# LIST OF COMMANDS THAT I USED
+
+    1  ls
+    2  systemctl is-enabled nginx
+    3  sudo apt-get install nginx
+    4  systemctl is-enabled nginx
+    5  systemctl status nginx
+    6  cat /etc/hostname
+    7  scp -i downloads/nginx.pem ubuntu@54.87.49.42:~/nginx-logs.txt .
+    8  sudo scp -i downloads/nginx.pem ubuntu@54.87.49.42:~/nginx-logs.txt .
+    9  scp -i nginx.pem ubuntu@54.87.49.42:~/nginx-logs.txt .
+   11  systemctl is-enabled nginx
+   15  history
+  427  scp -i downloads/nginx.pem ubuntu@52.91.197.11:~/nginx-logs.txt .
+  428  chmod 600 downloads/nginx.pem
+  429  scp -i downloads/nginx.pem ubuntu@52.91.197.11:~/nginx-logs.txt .
+  430  ls -l downloads/nginx.pem
+  431  sudo chmod 600 downloads/nginx.pem
+  432  ls -l downloads/nginx.pem
+  433  sudo chmod 600 downloads/nginx.pem
+  434  ls -l downloads/nginx.pem
+  435  mv downloads/nginx.pem nginxx.pem
+  436  ls
+  437  ls nginx.pem
+  438  cat nginxx.pem
+  439  ls -l
+  440  sudo chmod 600 nginxx.pem
+  441  ls -l
+  442  cd ~
+  443  ls
+  444  mkdir -p ~/.ssh
+  445  ls
+  446  ls -l
+  447  ls -a
+  448  cp /mnt/c/Users/dell/downloads/nginx.pem ~/.ssh/nginx.pem
+  449  cp /mnt/c/Users/dell/Downloads/nginx.pem ~/.ssh/nginx.pem
+  450  cd /mnt/c/Users/dell
+  451  ls
+  452  cp /mnt/c/Users/dell/nginxx.pem ~/.ssh/nginx.pem
+  453  cd ~
+  454  ls
+  455  ls -l ~/.ssh/nginxx.pem
+  456  sudo ls -l ~/.ssh/nginxx.pem
+  457  cd .ssh
+  458  ls
+  459  cd ..
+  460  ls
+  461  ls -l ~/.ssh/nginx.pem
+  462  chmod 600 ~/.ssh/nginx.pem
+  463  ls -l ~/.ssh/nginx.pem
+  464  scp -i ~/.ssh/nginx.pem ubuntu@52.91.197.11:/var/log/nginx/access.log .
+  465  ls
+  466  cat access.log
+
+
+
+# PROBLEM THAT I FACED
+I was running the `scp` command from my SSH instance rather than my local machine, so it took me a lot of time to figure out, but now it's clear.
+
+# WHAT I LEARNED
+I learned how to copy log files from another server.
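+
+For reference, the working pattern (run from the local machine; the key path and IP are the ones from my history above):
+
+```bash
+# On the local machine, not on the EC2 instance
+chmod 600 ~/.ssh/nginx.pem
+scp -i ~/.ssh/nginx.pem ubuntu@52.91.197.11:/var/log/nginx/access.log .
+```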
+ diff --git a/2026/day-08/nginx-logs.txt b/2026/day-08/nginx-logs.txt new file mode 100644 index 0000000000..eca4c8e482 --- /dev/null +++ b/2026/day-08/nginx-logs.txt @@ -0,0 +1,15 @@ +152.58.157.30 - - [03/Feb/2026:10:06:02 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +152.58.157.30 - - [03/Feb/2026:10:06:02 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://54.87.49.42/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +27.147.191.231 - - [03/Feb/2026:10:10:01 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" +195.178.110.39 - - [03/Feb/2026:10:22:25 +0000] "\x16\x03\x01\x00\xEE\x01\x00\x00\xEA\x03\x03\x9B\xB64\xBC\xED\x1EA\x17\x94D.PChV_\x0B\xF1\x83\xEFR\xBA\xAB\x09Q{\xB4\xD0\xDA\xB3`S " 400 166 "-" "-" +13.89.125.26 - - [05/Feb/2026:05:32:31 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 zgrab/0.x" +185.16.39.146 - - [05/Feb/2026:05:32:55 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +185.16.39.146 - - [05/Feb/2026:05:39:40 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +152.58.156.224 - - [05/Feb/2026:05:40:01 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +152.58.156.224 - - [05/Feb/2026:05:40:01 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://52.91.197.11/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +185.16.39.146 - - [05/Feb/2026:05:46:38 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +202.40.178.238 - - [05/Feb/2026:05:50:11 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" +185.16.39.146 - - [05/Feb/2026:05:52:53 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +185.16.39.146 - - [05/Feb/2026:06:02:26 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" +204.76.203.219 - - [05/Feb/2026:06:06:53 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36 Edg/90.0.818.46" +185.16.39.146 - - [05/Feb/2026:06:11:34 +0000] "GET / HTTP/1.1" 200 615 "-" "Wget" diff --git a/2026/day-09/README.md b/2026/day-09/README.md new file mode 100644 index 0000000000..67aea03d24 --- /dev/null +++ b/2026/day-09/README.md @@ -0,0 +1,150 @@ +# Day 09 – Linux User & Group Management Challenge + +## Task +Today's goal is to **practice user and group management** by completing hands-on challenges. + +Figure out how to: +- Create users and set passwords +- Create groups and assign users +- Set up shared directories with group permissions + +Use what you learned from Days 1-7 to find the right commands! 
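+
+Whichever commands you land on, verify each step as you go. A minimal verification sketch (user, group, and directory names match the tasks below):
+
+```bash
+id tokyo                  # does the user exist, and which groups is it in?
+getent group developers   # does the group exist, and who are its members?
+ls -ld /opt/dev-project   # group owner and permissions of the shared directory
+```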
+ +--- + +## Expected Output +- A markdown file: `day-09-user-management.md` +- Screenshots of command outputs +- List of commands used + +--- + +## Challenge Tasks + +### Task 1: Create Users (20 minutes) + +Create three users with home directories and passwords: +- `tokyo` +- `berlin` +- `professor` + +**Verify:** Check `/etc/passwd` and `/home/` directory + +--- + +### Task 2: Create Groups (10 minutes) + +Create two groups: +- `developers` +- `admins` + +**Verify:** Check `/etc/group` + +--- + +### Task 3: Assign to Groups (15 minutes) + +Assign users: +- `tokyo` → `developers` +- `berlin` → `developers` + `admins` (both groups) +- `professor` → `admins` + +**Verify:** Use appropriate command to check group membership + +--- + +### Task 4: Shared Directory (20 minutes) + +1. Create directory: `/opt/dev-project` +2. Set group owner to `developers` +3. Set permissions to `775` (rwxrwxr-x) +4. Test by creating files as `tokyo` and `berlin` + +**Verify:** Check permissions and test file creation + +--- + +### Task 5: Team Workspace (20 minutes) + +1. Create user `nairobi` with home directory +2. Create group `project-team` +3. Add `nairobi` and `tokyo` to `project-team` +4. Create `/opt/team-workspace` directory +5. Set group to `project-team`, permissions to `775` +6. Test by creating file as `nairobi` + +--- + +## Hints + +**Stuck? Try these commands:** +- User: `useradd`, `passwd`, `usermod` +- Group: `groupadd`, `groups` +- Permissions: `chgrp`, `chmod` +- Test: `sudo -u username command` + +**Tip:** Use `-m` flag with useradd for home directory, `-aG` for adding to groups + +--- + +## Documentation + +Create `day-09-user-management.md`: + +```markdown +# Day 09 Challenge + +## Users & Groups Created +- Users: tokyo, berlin, professor, nairobi +- Groups: developers, admins, project-team + +## Group Assignments +[List who is in which groups] + +## Directories Created +[List directories with permissions] + +## Commands Used +[Your commands here] + +## What I Learned +[3 key points] +``` + +--- + + +## Troubleshooting + +**Permission denied?** Use `sudo` + +**User can't access directory?** +- Check group: `groups username` +- Check permissions: `ls -ld /path` + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to `2026/day-09/` folder +3. Add your `day-09-user-management.md` with screenshots +4. 
Commit and push
+
+---
+
+## Learn in Public
+Share your Day 09 progress on LinkedIn:
+
+- Post about completing the user management challenge
+- Share one thing you figured out
+- Mention real-world DevOps use
+
+Use hashtags:
+```
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+```
+
+Happy Learning
+**TrainWithShubham**
diff --git a/2026/day-09/day-09-user-management.md b/2026/day-09/day-09-user-management.md
new file mode 100644
index 0000000000..c21d72f271
--- /dev/null
+++ b/2026/day-09/day-09-user-management.md
@@ -0,0 +1,28 @@
+# Users created
+walt, jessi, hank, nairobi, tokyo
+
+# Groups created
+developers, admins, project-team
+
+# Group Assignments
+walt: walt developers
+jessi: jessi developers admins
+hank: hank admins
+nairobi: nairobi admins project-team
+tokyo: tokyo
+
+# Directories created
+/opt/dev-project
+jessi
+hank
+nairobi
+tokyo
+
+# Commands used
+useradd, mkdir, chgrp, chmod, groupadd, man, groups
+
+# WHAT I LEARNED
+I learned how to add groups and users while making their own directories
+Also how to assign users to different groups
+And to get a list of users in a group
diff --git a/2026/day-10/README.md b/2026/day-10/README.md
new file mode 100644
index 0000000000..f1c66a14c3
--- /dev/null
+++ b/2026/day-10/README.md
@@ -0,0 +1,117 @@
+# Day 10 – File Permissions & File Operations Challenge
+
+## Task
+Master file permissions and basic file operations in Linux.
+
+- Create and read files using `touch`, `cat`, `vim`
+- Understand and modify permissions using `chmod`
+
+---
+
+## Expected Output
+- A markdown file: `day-10-file-permissions.md`
+- Screenshots showing permission changes
+
+---
+
+## Challenge Tasks
+
+### Task 1: Create Files (10 minutes)
+
+1. Create empty file `devops.txt` using `touch`
+2. Create `notes.txt` with some content using `cat` or `echo`
+3. Create `script.sh` using `vim` with content: `echo "Hello DevOps"`
+
+**Verify:** `ls -l` to see permissions
+
+---
+
+### Task 2: Read Files (10 minutes)
+
+1. Read `notes.txt` using `cat`
+2. View `script.sh` in vim read-only mode
+3. Display first 5 lines of `/etc/passwd` using `head`
+4. Display last 5 lines of `/etc/passwd` using `tail`
+
+---
+
+### Task 3: Understand Permissions (10 minutes)
+
+Format: `rwxrwxrwx` (owner-group-others)
+- `r` = read (4), `w` = write (2), `x` = execute (1)
+
+Check your files: `ls -l devops.txt notes.txt script.sh`
+
+Answer: What are current permissions? Who can read/write/execute?
+
+---
+
+### Task 4: Modify Permissions (20 minutes)
+
+1. Make `script.sh` executable → run it with `./script.sh`
+2. Set `devops.txt` to read-only (remove write for all)
+3. Set `notes.txt` to `640` (owner: rw, group: r, others: none)
+4. Create directory `project/` with permissions `755`
+
+**Verify:** `ls -l` after each change
+
+---
+
+### Task 5: Test Permissions (10 minutes)
+
+1. Try writing to a read-only file - what happens?
+2. Try executing a file without execute permission
+3. Document the error messages
+
+---
+
+## Hints
+
+- Create: `touch`, `cat > file`, `vim file`
+- Read: `cat`, `head -n`, `tail -n`
+- Permissions: `chmod +x`, `chmod -w`, `chmod 755`
+
+---
+
+## Documentation
+
+Create `day-10-file-permissions.md`:
+
+```markdown
+# Day 10 Challenge
+
+## Files Created
+[list files]
+
+## Permission Changes
+[before/after for each file]
+
+## Commands Used
+[your commands]
+
+## What I Learned
+[3 key points]
+```
+
+---
+
+## Submission
+1. Navigate to `2026/day-10/` folder
+2. Add `day-10-file-permissions.md` with screenshots
+3. 
Commit and push + +--- + +## Learn in Public + +Share on LinkedIn about mastering file permissions. + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** diff --git a/2026/day-10/day-10-file-permissions.md b/2026/day-10/day-10-file-permissions.md new file mode 100644 index 0000000000..b803a0606a --- /dev/null +++ b/2026/day-10/day-10-file-permissions.md @@ -0,0 +1,15 @@ +# Files Created +devops.txt, notes.txt, script.sh + +# Permission changes +filename before after +devops.txt 664 444 +notes.txt 664 640 +script.sh 664 775 + +# Commands used +cat, vim, ls, touch, chmod, head, tail, + +# What I learned +1> I learned how to execute a file and how to make it executable +2> Learned how to make a file read only, diff --git a/2026/day-11/README.md b/2026/day-11/README.md new file mode 100644 index 0000000000..128b04f0f1 --- /dev/null +++ b/2026/day-11/README.md @@ -0,0 +1,216 @@ +# Day 11 – File Ownership Challenge (chown & chgrp) + +## Task +Master file and directory ownership in Linux. + +- Understand file ownership (user and group) +- Change file owner using `chown` +- Change file group using `chgrp` +- Apply ownership changes recursively + +--- + +## Expected Output +- A markdown file: `day-11-file-ownership.md` +- Screenshots showing ownership changes + +--- + +## Challenge Tasks + +### Task 1: Understanding Ownership (10 minutes) + +1. Run `ls -l` in your home directory +2. Identify the **owner** and **group** columns +3. Check who owns your files + +**Format:** `-rw-r--r-- 1 owner group size date filename` + +Document: What's the difference between owner and group? + +--- + +### Task 2: Basic chown Operations (20 minutes) + +1. Create file `devops-file.txt` +2. Check current owner: `ls -l devops-file.txt` +3. Change owner to `tokyo` (create user if needed) +4. Change owner to `berlin` +5. Verify the changes + +**Try:** +```bash +sudo chown tokyo devops-file.txt +``` + +--- + +### Task 3: Basic chgrp Operations (15 minutes) + +1. Create file `team-notes.txt` +2. Check current group: `ls -l team-notes.txt` +3. Create group: `sudo groupadd heist-team` +4. Change file group to `heist-team` +5. Verify the change + +--- + +### Task 4: Combined Owner & Group Change (15 minutes) + +Using `chown` you can change both owner and group together: + +1. Create file `project-config.yaml` +2. Change owner to `professor` AND group to `heist-team` (one command) +3. Create directory `app-logs/` +4. Change its owner to `berlin` and group to `heist-team` + +**Syntax:** `sudo chown owner:group filename` + +--- + +### Task 5: Recursive Ownership (20 minutes) + +1. Create directory structure: + ``` + mkdir -p heist-project/vault + mkdir -p heist-project/plans + touch heist-project/vault/gold.txt + touch heist-project/plans/strategy.conf + ``` + +2. Create group `planners`: `sudo groupadd planners` + +3. Change ownership of entire `heist-project/` directory: + - Owner: `professor` + - Group: `planners` + - Use recursive flag (`-R`) + +4. Verify all files and subdirectories changed: `ls -lR heist-project/` + +--- + +### Task 6: Practice Challenge (20 minutes) + +1. Create users: `tokyo`, `berlin`, `nairobi` (if not already created) +2. Create groups: `vault-team`, `tech-team` +3. Create directory: `bank-heist/` +4. Create 3 files inside: + ``` + touch bank-heist/access-codes.txt + touch bank-heist/blueprints.pdf + touch bank-heist/escape-plan.txt + ``` + +5. 
Set different ownership: + - `access-codes.txt` → owner: `tokyo`, group: `vault-team` + - `blueprints.pdf` → owner: `berlin`, group: `tech-team` + - `escape-plan.txt` → owner: `nairobi`, group: `vault-team` + +**Verify:** `ls -l bank-heist/` + +--- + +## Key Commands Reference + +```bash +# View ownership +ls -l filename + +# Change owner only +sudo chown newowner filename + +# Change group only +sudo chgrp newgroup filename + +# Change both owner and group +sudo chown owner:group filename + +# Recursive change (directories) +sudo chown -R owner:group directory/ + +# Change only group with chown +sudo chown :groupname filename +``` + +--- + +## Hints + +- Most `chown`/`chgrp` operations need `sudo` +- Use `-R` flag for recursive directory changes +- Always verify with `ls -l` after changes +- User must exist before using in `chown` +- Group must exist before using in `chgrp`/`chown` + +--- + +## Documentation + +Create `day-11-file-ownership.md`: + +```markdown +# Day 11 Challenge + +## Files & Directories Created +[list all files/directories] + +## Ownership Changes +[before/after for each file] + +Example: +- devops-file.txt: user:user → tokyo:heist-team + +## Commands Used +[your commands here] + +## What I Learned +[3 key points about file ownership] +``` + +--- + +## Troubleshooting + +**Permission denied?** +- Use `sudo` for chown/chgrp operations + +**Group doesn't exist?** +- Create it first: `sudo groupadd groupname` + +**User doesn't exist?** +- Create it first: `sudo useradd username` + +--- + +## Why This Matters for DevOps + +In real DevOps scenarios, you need proper file ownership for: + +- Application deployments +- Shared team directories +- Container file permissions +- CI/CD pipeline artifacts +- Log file management + +--- + +## Submission +1. Navigate to `2026/day-11/` folder +2. Add `day-11-file-ownership.md` with screenshots +3. Commit and push to your fork + +--- + +## Learn in Public + +Share on LinkedIn about mastering file ownership. + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** diff --git a/2026/day-11/day-11-file-ownership.md b/2026/day-11/day-11-file-ownership.md new file mode 100644 index 0000000000..297eb47dae --- /dev/null +++ b/2026/day-11/day-11-file-ownership.md @@ -0,0 +1,18 @@ +# Files and Directories created +files - project-config.yml, devops-file.txt, gold.txt, strategy.conf, access-codes, blueprints.pdf, escape-plan.txt, +directories - heist-project, vault, plans, bank-heist + +# Ownership changes +file before(owner,group) after(owner,group) +project-config.yml ubuntu:ubuntu walt:heist-team +access-codes.txt ubuntu:ubuntu walt:vault-team +blueprints.pdf ubuntu:ubuntu jessi:tech-team +escape-plan ubuntu:ubuntu nairobi:vault-team + +# commands used +mkdir, chown, chgrp, touch, ls, cd, history, pwd + +# What i learned +I learned how to change file ownerships(like, user and group) + + diff --git a/2026/day-12/README.md b/2026/day-12/README.md new file mode 100644 index 0000000000..1e7d433d1f --- /dev/null +++ b/2026/day-12/README.md @@ -0,0 +1,48 @@ +# Day 12 – Breather & Revision (Days 01–11) + +## Goal +Take a **one-day pause** to consolidate everything from Days 01–11 so you don’t forget the fundamentals you just built. 
+
+## Expected Output
+- A markdown file: `day-12-revision.md`
+  (bullet notes + checkpoints)
+- Optional: screenshots of any re-runs you do
+
+## What to Review (pick at least one per section)
+- **Mindset & plan:** revisit your Day 01 learning plan—are your goals still right? Any tweaks?
+- **Processes & services:** rerun 2 commands from Day 04/05 (e.g., `ps`, `systemctl status`, `journalctl -u <service>`); jot what you observed today.
+- **File skills:** practice 3 quick ops from Days 06–11 (e.g., `echo >>`, `chmod`, `chown`, `ls -l`, `cp`, `mkdir`).
+- **Cheat sheet refresh:** skim your Day 03 commands—highlight 5 you’d reach for first in an incident.
+- **User/group sanity:** recreate one small scenario from Day 09 or Day 11 (create a user or change ownership) and verify with `id`/`ls -l`.
+
+## Mini Self-Check (write short answers in `day-12-revision.md`)
+1) Which 3 commands save you the most time right now, and why?
+2) How do you check if a service is healthy? List the exact 2–3 commands you’d run first.
+3) How do you safely change ownership and permissions without breaking access? Give one example command.
+4) What will you focus on improving in the next 3 days?
+
+## Suggested Flow (30–45 minutes)
+- 10 min: skim notes from each day, update Day 01 plan if needed.
+- 15–20 min: rerun a tiny hands-on set (process check, service check, file permission change).
+- 5–10 min: write the self-check answers and key takeaways.
+
+## Tips
+- Keep it light—this is about retention, not new concepts.
+- If something felt shaky this week (e.g., `chmod` numbers, `journalctl` flags), practice that specifically.
+- Small wins: one screenshot of a command rerun + 5 bullet notes is enough.
+
+## Submission
+1. Navigate to `2026/day-12/`
+2. Add `day-12-revision.md` with your bullets and answers
+3. Commit and push to your fork
+
+## Learn in Public
+Post 2–3 lines on what you reinforced today and one command you now remember confidently.
+
+Use hashtags:
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+
+Happy Learning
+**TrainWithShubham**
diff --git a/2026/day-12/day-12-revision.md b/2026/day-12/day-12-revision.md
new file mode 100644
index 0000000000..9bfce5c3f3
--- /dev/null
+++ b/2026/day-12/day-12-revision.md
@@ -0,0 +1,17 @@
+# Commands that save me the most
+1. systemctl (to check if a service is running or not)
+2. ls (to see what files and directories I have created)
+3. pwd (to see which directory I'm working in, as I usually forget)
+
+# To check if a service is healthy
+1. systemctl is-active <service> (to see if a service is running; is-enabled only shows whether it starts at boot)
+2. journalctl -u <service> (to see the logs of a service)
+3. systemctl status <service>
+
+# To change ownership and permission of a file named file.txt
+1. sudo chown <newowner> file.txt
+2. sudo chmod 764 file.txt
+
+# What I will focus on in the next 3 days
+In the next 3 days, I will focus on giving more time to developing my skills.
diff --git a/2026/day-13/README.md b/2026/day-13/README.md
new file mode 100644
index 0000000000..dafdf15e29
--- /dev/null
+++ b/2026/day-13/README.md
@@ -0,0 +1,99 @@
+# Day 13 – Linux Volume Management (LVM)
+
+## Task
+Learn LVM to manage storage flexibly – create, extend, and mount volumes.
+
+**Watch First:** [Linux LVM Tutorial](https://youtu.be/Evnf2AAt7FQ?si=ncnfQYySYtK_2K3c)
+
+---
+
+## Expected Output
+- A markdown file: `day-13-lvm.md`
+- Screenshots of command outputs
+
+---
+
+## Before You Start
+
+Switch to root user:
+```bash
+sudo -i
+```
+or
+```bash
+sudo su
+```
+No spare disk?
Create a virtual one (watch the tutorial):
+```bash
+dd if=/dev/zero of=/tmp/disk1.img bs=1M count=1024
+losetup -fP /tmp/disk1.img
+losetup -a   # Note the device name (e.g., /dev/loop0)
+```
+
+---
+
+## Challenge Tasks
+
+### Task 1: Check Current Storage
+Run: `lsblk`, `pvs`, `vgs`, `lvs`, `df -h`
+
+### Task 2: Create Physical Volume
+```bash
+pvcreate /dev/sdb   # or your loop device
+pvs
+```
+
+### Task 3: Create Volume Group
+```bash
+vgcreate devops-vg /dev/sdb
+vgs
+```
+
+### Task 4: Create Logical Volume
+```bash
+lvcreate -L 500M -n app-data devops-vg
+lvs
+```
+
+### Task 5: Format and Mount
+```bash
+mkfs.ext4 /dev/devops-vg/app-data
+mkdir -p /mnt/app-data
+mount /dev/devops-vg/app-data /mnt/app-data
+df -h /mnt/app-data
+```
+
+### Task 6: Extend the Volume
+```bash
+lvextend -L +200M /dev/devops-vg/app-data
+resize2fs /dev/devops-vg/app-data
+df -h /mnt/app-data
+```
+
+---
+
+## Documentation
+
+Create `day-13-lvm.md` with:
+- Commands used
+- Screenshots of outputs
+- What you learned (3 points)
+
+---
+
+## Submission
+1. Add your `day-13-lvm.md` to `2026/day-13/`
+2. Commit and push
+
+---
+
+## Learn in Public
+
+Share your LVM progress on LinkedIn.
+
+```
+#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham
+```
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-13/day13-lvm.md b/2026/day-13/day13-lvm.md
new file mode 100644
index 0000000000..0177771594
--- /dev/null
+++ b/2026/day-13/day13-lvm.md
@@ -0,0 +1,13 @@
+# List of commands used
+- lsblk
+- pvcreate
+- pvs (to see the created physical volume)
+- vgcreate
+- vgs (to see the list of volume groups)
+- lvcreate
+- mkdir
+- mkfs
+- mount
+- lvextend
+- resize2fs
+- df -h
diff --git a/2026/day-13/volume-management.jpeg b/2026/day-13/volume-management.jpeg
new file mode 100644
index 0000000000..0c2892d1d6
Binary files /dev/null and b/2026/day-13/volume-management.jpeg differ
diff --git a/2026/day-14/README.md b/2026/day-14/README.md
new file mode 100644
index 0000000000..8b2331666b
--- /dev/null
+++ b/2026/day-14/README.md
@@ -0,0 +1,70 @@
+# Day 14 – Networking Fundamentals & Hands-on Checks
+
+## Task
+Get comfortable with core networking concepts and the commands you’ll actually run during troubleshooting.
+
+You will:
+- Map the **OSI vs TCP/IP models** in your own words
+- Run essential connectivity commands
+- Capture a mini network check for a target host/service
+
+Keep it short, real, and repeatable.
+
+---
+
+## Expected Output
+- A markdown file: `day-14-networking.md`
+- Screenshots (optional) of key command outputs
+
+---
+
+## Quick Concepts (write 1–2 bullets each)
+- OSI layers (L1–L7) vs TCP/IP stack (Link, Internet, Transport, Application)
+- Where **IP**, **TCP/UDP**, **HTTP/HTTPS**, **DNS** sit in the stack
+- One real example: “`curl https://example.com` = App layer over TCP over IP”
+
+---
+
+## Hands-on Checklist (run these; add 1–2 line observations — a combined sketch follows this list)
+- **Identity:** `hostname -I` (or `ip addr show`) — note your IP.
+- **Reachability:** `ping <host>` — mention latency and packet loss.
+- **Path:** `traceroute <host>` (or `tracepath`) — note any long hops/timeouts.
+- **Ports:** `ss -tulpn` (or `netstat -tulpn`) — list one listening service and its port.
+- **Name resolution:** `dig <domain>` or `nslookup <domain>` — record the resolved IP.
+- **HTTP check:** `curl -I <url>` — note the HTTP status code.
+- **Connections snapshot:** `netstat -an | head` — count ESTABLISHED vs LISTEN (rough).
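+
+To tie the checklist together, here is a minimal, illustrative sketch that runs the same checks in one pass. The default target, the HTTPS scheme, and the output trimming are assumptions to adapt, not part of the task:
+
+```bash
+#!/bin/bash
+# Hypothetical day-14 helper: run the core checks against one target.
+set -u
+
+TARGET="${1:-google.com}"   # placeholder default -- pass your own host
+
+echo "== Identity ==";        hostname -I
+echo "== Reachability ==";    ping -c 3 "$TARGET" | tail -n 2    # loss + latency summary
+echo "== Name resolution =="; dig +short "$TARGET"
+echo "== HTTP check ==";      curl -sI "https://$TARGET" | head -n 1   # status line only
+echo "== Listening ports =="; ss -tulpn 2>/dev/null | head -n 10
+```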
+
+Pick one target service/host (e.g., `google.com`, your lab server, or a local service) and stick to it for ping/traceroute/curl where possible.
+
+---
+
+## Mini Task: Port Probe & Interpret
+1) Identify one listening port from `ss -tulpn` (e.g., SSH on 22 or a local web app).
+2) From the same machine, test it: `nc -zv localhost <port>` (or `curl -I http://localhost:<port>`).
+3) Write one line: is it reachable? If not, what’s the next check? (e.g., service status, firewall).
+
+---
+
+## Reflection (add to your markdown)
+- Which command gives you the fastest signal when something is broken?
+- What layer (OSI/TCP-IP) would you inspect next if DNS fails? If HTTP 500 shows up?
+- Two follow-up checks you’d run in a real incident.
+
+---
+
+## Submission
+1. Add `day-14-networking.md` to `2026/day-14/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Post 2–3 lines on the commands you practiced and one interesting traceroute/curl finding.
+
+Use hashtags:
+#90DaysOfDevOps
+#DevOpsKaJosh
+#TrainWithShubham
+
+Happy Learning
+**TrainWithShubham**
diff --git a/2026/day-15/README.md b/2026/day-15/README.md
new file mode 100644
index 0000000000..fddae03a49
--- /dev/null
+++ b/2026/day-15/README.md
@@ -0,0 +1,104 @@
+# Day 15 – Networking Concepts: DNS, IP, Subnets & Ports
+
+## Task
+Build on Day 14 by understanding the building blocks of networking every DevOps engineer must know.
+
+You will:
+- Understand how **DNS** resolves names to IPs
+- Learn **IP addressing** (IPv4, public vs private)
+- Break down **CIDR notation** and **subnetting** basics
+- Know common **ports** and why they matter
+
+This is concept-focused — research, understand, and document in your own words.
+
+---
+
+## Expected Output
+- A markdown file: `day-15-networking-concepts.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: DNS – How Names Become IPs
+1. Explain in 3–4 lines: what happens when you type `google.com` in a browser?
+2. What are these record types? Write one line each:
+   - `A`, `AAAA`, `CNAME`, `MX`, `NS`
+3. Run: `dig google.com` — identify the A record and TTL from the output
+
+---
+
+### Task 2: IP Addressing
+1. What is an IPv4 address? How is it structured? (e.g., `192.168.1.10`)
+2. Difference between **public** and **private** IPs — give one example of each
+3. What are the private IP ranges?
+   - `10.x.x.x`, `172.16.x.x – 172.31.x.x`, `192.168.x.x`
+4. Run: `ip addr show` — identify which of your IPs are private
+
+---
+
+### Task 3: CIDR & Subnetting
+1. What does `/24` mean in `192.168.1.0/24`?
+2. How many usable hosts in a `/24`? A `/16`? A `/28`?
+3. Explain in your own words: why do we subnet?
+4. Quick exercise — fill in:
+
+| CIDR | Subnet Mask | Total IPs | Usable Hosts |
+|------|-------------|-----------|--------------|
+| /24  | ?           | ?         | ?            |
+| /16  | ?           | ?         | ?            |
+| /28  | ?           | ?         | ?            |
+
+---
+
+### Task 4: Ports – The Doors to Services
+1. What is a port? Why do we need them?
+2. Document these common ports:
+
+| Port | Service |
+|------|---------|
+| 22   | ?       |
+| 80   | ?       |
+| 443  | ?       |
+| 53   | ?       |
+| 3306 | ?       |
+| 6379 | ?       |
+| 27017| ?       |
+
+3. Run `ss -tulpn` — match at least 2 listening ports to their services
+
+---
+
+### Task 5: Putting It Together
+Answer in 2–3 lines each:
+- You run `curl http://myapp.com:8080` — what networking concepts from today are involved?
+- Your app can't reach a database at `10.0.1.50:3306` — what would you check first?
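+
+If the CIDR arithmetic in Task 3 feels abstract, this small bash helper (an illustrative sketch, not a required deliverable) shows where the table's numbers come from: total IPs are 2^(32 - prefix), and two addresses (network and broadcast) are reserved:
+
+```bash
+#!/bin/bash
+# Hypothetical helper: host counts for a given CIDR prefix (valid up to /30).
+cidr_hosts() {
+    local prefix=$1
+    local total=$(( 2 ** (32 - prefix) ))
+    echo "/$prefix -> total IPs: $total, usable hosts: $(( total - 2 ))"
+}
+
+cidr_hosts 24   # /24 -> total IPs: 256,   usable hosts: 254
+cidr_hosts 16   # /16 -> total IPs: 65536, usable hosts: 65534
+cidr_hosts 28   # /28 -> total IPs: 16,    usable hosts: 14
+```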
+
+---
+
+## Documentation
+
+Create `day-15-networking-concepts.md` with:
+- Your answers to each task
+- Command outputs from `dig` and `ss`
+- The filled CIDR table
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add `day-15-networking-concepts.md` to `2026/day-15/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+
+Share what you learned about DNS, subnets, or ports on LinkedIn.
+
+```
+#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham
+```
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-16/README.md b/2026/day-16/README.md
new file mode 100644
index 0000000000..9b1f50d0ee
--- /dev/null
+++ b/2026/day-16/README.md
@@ -0,0 +1,104 @@
+# Day 16 – Shell Scripting Basics
+
+## Task
+Start your shell scripting journey — learn the fundamentals every script needs.
+
+You will:
+- Understand **shebang** (`#!/bin/bash`) and why it matters
+- Work with **variables**, **echo**, and **read**
+- Write basic **if-else** conditions
+
+---
+
+## Expected Output
+- A markdown file: `day-16-shell-scripting.md`
+- All scripts you write during the tasks
+
+---
+
+## Challenge Tasks
+
+### Task 1: Your First Script
+1. Create a file `hello.sh`
+2. Add the shebang line `#!/bin/bash` at the top
+3. Print `Hello, DevOps!` using `echo`
+4. Make it executable and run it
+
+```bash
+chmod +x hello.sh
+./hello.sh
+```
+
+**Document:** What happens if you remove the shebang line?
+
+---
+
+### Task 2: Variables
+1. Create `variables.sh` with:
+   - A variable for your `NAME`
+   - A variable for your `ROLE` (e.g., "DevOps Engineer")
+   - Print: `Hello, I am <NAME> and I am a <ROLE>`
+2. Try using single quotes vs double quotes — what's the difference?
+
+---
+
+### Task 3: User Input with read
+1. Create `greet.sh` that:
+   - Asks the user for their name using `read`
+   - Asks for their favourite tool
+   - Prints: `Hello <name>, your favourite tool is <tool>`
+
+---
+
+### Task 4: If-Else Conditions
+1. Create `check_number.sh` that:
+   - Takes a number using `read`
+   - Prints whether it is **positive**, **negative**, or **zero**
+
+2. Create `file_check.sh` that:
+   - Asks for a filename
+   - Checks if the file **exists** using `-f`
+   - Prints appropriate message
+
+---
+
+### Task 5: Combine It All
+Create `server_check.sh` that:
+1. Stores a service name in a variable (e.g., `nginx`, `sshd`)
+2. Asks the user: "Do you want to check the status? (y/n)"
+3. If `y` — runs `systemctl status <service>` and prints whether it's **active** or **not**
+4. If `n` — prints "Skipped."
+
+---
+
+## Hints
+- Shebang: `#!/bin/bash` tells the system which interpreter to use
+- Variables: `NAME="Shubham"` (no spaces around `=`)
+- Read: `read -p "Enter name: " NAME`
+- If syntax: `if [ condition ]; then ... elif ... else ... fi`
+- File check: `if [ -f filename ]; then`
+
+---
+
+## Documentation
+
+Create `day-16-shell-scripting.md` with:
+- Each script's code and output
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add your scripts and `day-16-shell-scripting.md` to `2026/day-16/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+
+Share your first shell scripts on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-16/day-16-shell-scripting.md b/2026/day-16/day-16-shell-scripting.md
new file mode 100644
index 0000000000..ef1e27293f
--- /dev/null
+++ b/2026/day-16/day-16-shell-scripting.md
@@ -0,0 +1,78 @@
+# TASK-1 (Script)
+#!/bin/bash
+echo "Hello, DevOps!"
+
+# OUTPUT
+Hello, DevOps!
+
+# TASK-2 (Script)
+#!/bin/bash
+read -p "Type your name: " name
+read -p "Type your role: " role
+echo "Hello, my name is $name and my role is $role"
+
+# OUTPUT
+Type your name: uttam
+Type your role: teacher
+Hello, my name is uttam and my role is teacher
+
+# TASK-3 (Script)
+#!/bin/bash
+read -p "Type your name: " name
+read -p "Type your fav. tool: " tool
+echo "Hello, my name is $name and my favourite tool is $tool"
+
+# OUTPUT
+Type your name: uttam
+Type your fav. tool: docker
+Hello, my name is uttam and my favourite tool is docker
+
+# TASK-4 (Script)
+#!/bin/bash
+read -p "Enter your number: " a
+if [ "$a" -gt 0 ]; then
+    echo "Given number is positive"
+elif [ "$a" -eq 0 ]; then
+    echo "Given number is exactly zero"
+else
+    echo "Given number is negative"
+fi
+
+# OUTPUT
+Enter your number: 0
+Given number is exactly zero
+
+# TASK-5 (Script)
+#!/bin/bash
+
+read -p "Enter service name: " service_name
+read -p "Do you want to check service status (y/n): " answer
+if [ "$answer" = "y" ]; then
+    if systemctl is-active --quiet "$service_name"; then
+        echo "service is active"
+    else
+        echo "service is not active"
+    fi
+    systemctl status "$service_name"
+else
+    echo "Skipped"
+fi
+
+# OUTPUT
+Enter service name: nginx
+Do you want to check service status (y/n): y
+service is active
+● nginx.service - A high performance web server and a reverse proxy server
+     Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: enable>
+     Active: active (running) since Fri 2026-02-13 07:09:53 UTC; 4h 38min ago
+       Docs: man:nginx(8)
+   Main PID: 1708 (nginx)
+      Tasks: 5 (limit: 2131)
+     Memory: 3.7M (peak: 8.3M)
+        CPU: 96ms
+     CGroup: /system.slice/nginx.service
+             ├─1708 "nginx: master process /usr/sbin/nginx -g daemon on; master_pro>
+             ├─1711 "nginx: worker process"
+             ├─1712 "nginx: worker process"
+             ├─1713 "nginx: worker process"
+             └─1714 "nginx: worker process"
diff --git a/2026/day-17/README.md b/2026/day-17/README.md
new file mode 100644
index 0000000000..b5117067b8
--- /dev/null
+++ b/2026/day-17/README.md
@@ -0,0 +1,110 @@
+# Day 17 – Shell Scripting: Loops, Arguments & Error Handling
+
+## Task
+Level up your scripting — use loops, handle arguments, and deal with errors.
+
+You will:
+- Write **for** and **while** loops
+- Use **command-line arguments** (`$1`, `$2`, `$#`, `$@`)
+- Install packages via script
+- Add basic **error handling**
+
+---
+
+## Expected Output
+- A markdown file: `day-17-scripting.md`
+- All scripts you write during the tasks
+
+---
+
+## Challenge Tasks
+
+### Task 1: For Loop
+1. Create `for_loop.sh` that:
+   - Loops through a list of 5 fruits and prints each one
+2. Create `count.sh` that:
+   - Prints numbers 1 to 10 using a for loop
+
+---
+
+### Task 2: While Loop
+1. Create `countdown.sh` that:
+   - Takes a number from the user
+   - Counts down to 0 using a while loop
+   - Prints "Done!" at the end
+
+---
+
+### Task 3: Command-Line Arguments
+1. Create `greet.sh` that:
+   - Accepts a name as `$1`
+   - Prints `Hello, <name>!`
+   - If no argument is passed, prints "Usage: ./greet.sh <name>"
+
+2. Create `args_demo.sh` that:
+   - Prints total number of arguments (`$#`)
+   - Prints all arguments (`$@`)
+   - Prints the script name (`$0`)
+
+---
+
+### Task 4: Install Packages via Script
+1. Create `install_packages.sh` that:
+   - Defines a list of packages: `nginx`, `curl`, `wget`
+   - Loops through the list
+   - Checks if each package is installed (use `dpkg -s` or `rpm -q`)
+   - Installs it if missing, skips if already present
+   - Prints status for each package
+
+> Run as root: `sudo -i` or `sudo su`
+
+---
+
+### Task 5: Error Handling
+1.
Create `safe_script.sh` that:
+   - Uses `set -e` at the top (exit on error)
+   - Tries to create a directory `/tmp/devops-test`
+   - Tries to navigate into it
+   - Creates a file inside
+   - Uses `||` operator to print an error if any step fails
+
+Example:
+```bash
+mkdir /tmp/devops-test || echo "Directory already exists"
+```
+
+2. Modify your `install_packages.sh` to check if the script is being run as root — exit with a message if not.
+
+---
+
+## Hints
+- For loop: `for item in list; do ... done`
+- While loop: `while [ condition ]; do ... done`
+- Arguments: `$1` first arg, `$#` count, `$@` all args
+- Check root: `if [ "$EUID" -ne 0 ]; then echo "Run as root"; exit 1; fi`
+- Check package: `dpkg -s <package> &> /dev/null && echo "installed"`
+
+---
+
+## Documentation
+
+Create `day-17-scripting.md` with:
+- Each script's code and output
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add your scripts and `day-17-scripting.md` to `2026/day-17/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+
+Share your scripting progress on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-17/day-17-scripting.md b/2026/day-17/day-17-scripting.md
new file mode 100644
index 0000000000..3811984f32
--- /dev/null
+++ b/2026/day-17/day-17-scripting.md
@@ -0,0 +1,82 @@
+# for_loop.sh
+
+#!/bin/bash
+
+# Define an array of 5 fruits
+fruits=("Apple" "Banana" "Orange" "Grape" "Mango")
+
+# Loop through each fruit in the array
+for fruit in "${fruits[@]}"; do
+    echo "Fruit: $fruit"
+done
+
+# count.sh
+
+#!/bin/bash
+
+for i in {1..10}; do
+    echo "$i"
+done
+
+# countdown.sh
+
+#!/bin/bash
+
+read -p "Enter a number: " number
+
+while [ "$number" -ge 0 ]; do
+    echo "Number is $number"
+    ((number--))
+done
+echo "Done!"
+
+# greet.sh
+
+#!/bin/bash
+
+if [ $# -eq 0 ]; then
+    echo "Usage: ./greet.sh <name>"
+    exit 1
+fi
+echo "Hello, $1!"
+
+# args_demo.sh
+
+#!/bin/bash
+
+echo "$#"
+echo "$@"
+echo "$0"
+
+# install_packages.sh
+
+#!/bin/bash
+
+PACKAGES=("nginx" "curl" "wget")
+echo "Updating package lists..."
+sudo apt-get update -qq
+
+for PKG in "${PACKAGES[@]}"; do
+
+    if dpkg -s "$PKG" >/dev/null 2>&1; then
+        echo "[SKIP] $PKG is already installed"
+    else
+        echo "[MISSING] $PKG is not installed. Installing now..."
+
+        if sudo apt-get install -y "$PKG" >/dev/null 2>&1; then
+            echo "[SUCCESS] $PKG has been installed"
+        else
+            echo "[ERROR] Failed to install $PKG."
+        fi
+    fi
+done
+
+# safe_script.sh
+
+#!/bin/bash
+
+set -e
+
+mkdir -p /tmp/devops-test || echo "Directory already exists"
+cd /tmp/devops-test
+echo "I'm in $(pwd)"
+touch empty.text
+exit
diff --git a/2026/day-18/README.md b/2026/day-18/README.md
new file mode 100644
index 0000000000..f26af0d2c1
--- /dev/null
+++ b/2026/day-18/README.md
@@ -0,0 +1,104 @@
+# Day 18 – Shell Scripting: Functions & Intermediate Concepts
+
+## Task
+Write cleaner, reusable scripts — learn functions, strict mode, and real-world patterns.
+
+You will:
+- Write and call **functions**
+- Use **`set -euo pipefail`** for safer scripts
+- Work with **return values** and **local variables**
+- Build an intermediate script
+
+---
+
+## Expected Output
+- A markdown file: `day-18-scripting.md`
+- All scripts you write during the tasks
+
+---
+
+## Challenge Tasks
+
+### Task 1: Basic Functions
+1. Create `functions.sh` with:
+   - A function `greet` that takes a name as argument and prints `Hello, <name>!`
+   - A function `add` that takes two numbers and prints their sum
+   - Call both functions from the script
+
+---
+
+### Task 2: Functions with Return Values
+1.
Create `disk_check.sh` with:
+   - A function `check_disk` that checks disk usage of `/` using `df -h`
+   - A function `check_memory` that checks free memory using `free -h`
+   - A main section that calls both and prints the results
+
+---
+
+### Task 3: Strict Mode — `set -euo pipefail`
+1. Create `strict_demo.sh` with `set -euo pipefail` at the top
+2. Try using an **undefined variable** — what happens with `set -u`?
+3. Try a command that **fails** — what happens with `set -e`?
+4. Try a **piped command** where one part fails — what happens with `set -o pipefail`?
+
+**Document:** What does each flag do?
+- `set -e` →
+- `set -u` →
+- `set -o pipefail` →
+
+---
+
+### Task 4: Local Variables
+1. Create `local_demo.sh` with:
+   - A function that uses `local` keyword for variables
+   - Show that `local` variables don't leak outside the function
+   - Compare with a function that uses regular variables
+
+---
+
+### Task 5: Build a Script — System Info Reporter
+Create `system_info.sh` that uses functions for everything:
+1. A function to print **hostname and OS info**
+2. A function to print **uptime**
+3. A function to print **disk usage** (top 5 by size)
+4. A function to print **memory usage**
+5. A function to print **top 5 CPU-consuming processes**
+6. A `main` function that calls all of the above with section headers
+7. Use `set -euo pipefail` at the top
+
+Output should look clean and readable.
+
+---
+
+## Hints
+- Function syntax: `function_name() { ... }`
+- Local vars: `local MY_VAR="value"`
+- Strict mode: `set -euo pipefail` as first line after shebang
+- Pass args to functions: `greet "Shubham"` → access as `$1` inside
+- `$?` gives the exit code of last command
+
+---
+
+## Documentation
+
+Create `day-18-scripting.md` with:
+- Each script's code and output
+- Explanation of `set -euo pipefail`
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add your scripts and `day-18-scripting.md` to `2026/day-18/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+
+Share what you learned about shell functions and strict mode on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-18/day-18-scripting.md b/2026/day-18/day-18-scripting.md
new file mode 100644
index 0000000000..24398a2f44
--- /dev/null
+++ b/2026/day-18/day-18-scripting.md
@@ -0,0 +1,130 @@
+## functions.sh — greet the user and print the sum of two numbers
+#!/bin/bash
+
+greet () {
+    echo "Hello, $1!"
+}
+
+add () {
+    echo "Sum of the numbers is: $(( $1 + $2 ))"
+}
+
+read -p "Enter a number: " a
+read -p "Enter another number: " b
+greet "$1"
+add "$a" "$b"
+
+## disk_check.sh
+
+#!/bin/bash
+
+check_disk() {
+    echo "====== Root disk usage ====="
+    df -h /
+    echo
+}
+
+check_memory() {
+    echo "===== Memory usage ====="
+    free -h
+    echo
+}
+
+check_disk
+check_memory
+
+## strict_demo.sh
+
+#!/bin/bash
+
+set -euo pipefail
+read -p "Input which function to call (1/2/3): " input
+
+undefined_variable ()
+{
+    echo "hello learners"
+    # $well is never defined, so set -u aborts the script on the next line
+    echo "hope you are doing $well"
+}
+command_failure ()
+{
+    echo "The given command is"
+    # /etrin does not exist, so ls fails and set -e stops the script
+    ls /etrin
+}
+pipe_failure ()
+{
+    # cat fails (missing file) even though sort succeeds; with set -o pipefail
+    # the whole pipeline returns non-zero, so set -e stops the script here
+    cat missing_file.txt | sort
+    echo "This line is never reached"
+}
+
+if [ "$input" == "1" ]
+then
+    undefined_variable
+    echo "Done"
+elif [ "$input" == "2" ]
+then
+    command_failure
+    echo "Done"
+elif [ "$input" == "3" ]
+then
+    pipe_failure
+    echo "Done"
+fi
+
+# local_demo.sh
+
+#!/bin/bash
+
+local_variable_store () {
+    local x=10
+    echo "Local variable value inside function is : $x"
+}
+global_variable_store () {
+    y=20
+    echo "Value of global variable inside function is : $y"
+}
+local_variable_store
+echo "Value of local variable outside function is : $x"
+echo "=========== LOCAL VARIABLE CAN'T BE ACCESSED OUTSIDE FUNCTION ==========="
+global_variable_store
+echo "Value of global variable outside function is : $y"
+
+# system_info.sh
+
+#!/bin/bash
+
+set -euo pipefail
+
+hostname_info () {
+    cat /etc/os-release
+    hostname
+}
+uptime_info () {
+    /usr/bin/uptime -p
+}
+disk_usage () {
+    # sort by the Use% column so the fullest filesystems come first
+    df -h | sort -k5 -hr | head -n 6
+}
+memory_usage () {
+    free -h
+}
+cpu_cons_proc () {
+    ps aux --sort=-%cpu | head -n 6
+}
+main_function () {
+    echo "========== HOSTNAME AND OS INFO =========="
+    hostname_info
+    echo "========== UPTIME OF THE SYSTEM =========="
+    uptime_info
+    echo "========== TOP 5 DISK USAGES =========="
+    disk_usage
+    echo "========== MEMORY USAGE =========="
+    memory_usage
+    echo "========== TOP 5 CPU CONSUMING PROCESSES =========="
+    cpu_cons_proc
+}
+main_function
diff --git a/2026/day-19/README.md b/2026/day-19/README.md
new file mode 100644
index 0000000000..a8b21f33b3
--- /dev/null
+++ b/2026/day-19/README.md
@@ -0,0 +1,108 @@
+# Day 19 – Shell Scripting Project: Log Rotation, Backup & Crontab
+
+## Task
+Apply everything from Days 16–18 in real-world mini projects.
+
+You will:
+- Write a **log rotation** script
+- Write a **server backup** script
+- Schedule them with **crontab**
+
+---
+
+## Expected Output
+- A markdown file: `day-19-project.md`
+- All scripts you write during the tasks
+
+---
+
+## Challenge Tasks
+
+### Task 1: Log Rotation Script
+Create `log_rotate.sh` that:
+1. Takes a log directory as an argument (e.g., `/var/log/myapp`)
+2. Compresses `.log` files older than 7 days using `gzip`
+3. Deletes `.gz` files older than 30 days
+4. Prints how many files were compressed and deleted
+5. Exits with an error if the directory doesn't exist
+
+---
+
+### Task 2: Server Backup Script
+Create `backup.sh` that:
+1. Takes a source directory and backup destination as arguments
+2. Creates a timestamped `.tar.gz` archive (e.g., `backup-2026-02-08.tar.gz`)
+3. Verifies the archive was created successfully
+4. Prints archive name and size
+5. Deletes backups older than 14 days from the destination
+6. Handles errors — exit if source doesn't exist
+
+---
+
+### Task 3: Crontab
+1. Read: `crontab -l` — what's currently scheduled?
+2.
Understand cron syntax:
+   ```
+   * * * * * command
+   │ │ │ │ │
+   │ │ │ │ └── Day of week (0-7)
+   │ │ │ └──── Month (1-12)
+   │ │ └────── Day of month (1-31)
+   │ └──────── Hour (0-23)
+   └────────── Minute (0-59)
+   ```
+3. Write cron entries (in your markdown, don't apply if unsure) for:
+   - Run `log_rotate.sh` every day at 2 AM
+   - Run `backup.sh` every Sunday at 3 AM
+   - Run a health check script every 5 minutes
+
+---
+
+### Task 4: Combine — Scheduled Maintenance Script
+Create `maintenance.sh` that:
+1. Calls your log rotation function
+2. Calls your backup function
+3. Logs all output to `/var/log/maintenance.log` with timestamps
+4. Write the cron entry to run it daily at 1 AM
+
+---
+
+## Hints
+- Compress old files: `find /path -name "*.log" -mtime +7 -exec gzip {} \;`
+- Timestamp: `date +%Y-%m-%d`
+- Tar: `tar -czf backup.tar.gz /source/dir`
+- Cron edit: `crontab -e`
+- Log with timestamp: `echo "$(date): message" >> logfile`
+
+---
+
+## Documentation
+
+Create `day-19-project.md` with:
+- Each script's code
+- Sample outputs
+- Cron entries you wrote
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add your scripts and `day-19-project.md` to `2026/day-19/`
+2. Commit and push to your fork
+
+---
+
+## Reference Video
+
+[![Watch the video](https://img.youtube.com/vi/PZYJ33bMXAw/0.jpg)](https://youtu.be/PZYJ33bMXAw?si=RzEzOSom7-FqnopA)
+
+---
+
+## Learn in Public
+
+Share your shell scripting projects on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-19/day-19-project.md b/2026/day-19/day-19-project.md
new file mode 100644
index 0000000000..e609bae849
--- /dev/null
+++ b/2026/day-19/day-19-project.md
@@ -0,0 +1,112 @@
+# TO CREATE LOG ROTATION
+
+#!/bin/bash
+
+if [ $# -ne 1 ]; then
+    echo "Usage: $0 <log_directory>"
+    exit 1
+fi
+
+LOG_DIR="$1"
+
+[ -d "$LOG_DIR" ] || { echo "Error: Directory does not exist."; exit 1; }
+
+# Count & compress .log files older than 7 days
+compressed=$(find "$LOG_DIR" -type f -name "*.log" -mtime +7 -exec gzip {} \; -printf '.' | wc -c)
+
+# Count & delete .gz files older than 30 days
+deleted=$(find "$LOG_DIR" -type f -name "*.gz" -mtime +30 -delete -printf '.' | wc -c)
+
+echo "Compressed $compressed file(s)."
+echo "Deleted $deleted old compressed file(s)."
+
+# TO CREATE SERVER BACKUP SCRIPT
+
+#!/bin/bash
+set -euo pipefail
+
+<< readme
+This is a script for backup
+Usage:
+./backup.sh <source_dir> <backup_dir>
+readme
+
+display_usage() {
+    echo "Usage:
+./backup.sh <source_dir> <backup_dir>
+"
+}
+
+if [ $# -eq 0 ]; then
+    display_usage
+    exit 1
+fi
+
+source_dir=$1
+timestamp=$(date '+%Y-%m-%d-%H-%M-%S')
+backup_dir=$2
+
+create_backup() {
+    zip -r "${backup_dir}/backup_${timestamp}.zip" "${source_dir}" >/dev/null
+    if [ $? -eq 0 ]; then
+        echo "Backup generated successfully for ${timestamp}"
+        echo "BACKUP_FILE_NAME======${backup_dir}/backup_${timestamp}.zip"
+    fi
+}
+create_backup

+create_delete() {
+    # Count & delete .zip backups older than 14 days
+    deleted=$(find "$backup_dir" -type f -name "*.zip" -mtime +14 -delete -printf '.' | wc -c)
+}
+create_delete
+
+# CRON_JOB
+
+0 2 * * * /home/dell-2004/bash_scripts/log_rotate2.sh >> /home/dell-2004/cron.log 2>&1
+0 3 * * 7 /home/dell-2004/bash_scripts/backup.sh /home/dell-2004/bash_scripts /home/dell-2004/backups >> /home/dell-2004/cron.log 2>&1
+
+# MAINTENANCE.SH
+
+#!/bin/bash
+
+maintenance() {
+    dt=$(date '+%Y-%m-%d-%H-%M-%S')
+    echo "$dt"
+
+    source ./backup.sh /home/dell-2004/bash_scripts /home/dell-2004/backups
+
+    if [ $? -eq 0 ]; then
+        echo "backup taken"
+    else
+        echo "backup failed"
+    fi
+
+    source ./log_rotate2.sh /home/dell-2004/log_practise
+
+    if [ $? -eq 0 ]; then
+        echo "logs rotated successfully"
+    else
+        echo "logfiles didn't move"
+    fi
+} >> /var/log/maintenance.log
+
+maintenance
+
+## cat /var/log/maintenance.log
+
+Backup generated successfully for 2026-02-23-17-01-42
+BACKUP_FILE_NAME======/home/dell-2004/backups/backup_2026-02-23-17-01-42.zip
+backup taken
+Compressed 0 file(s).
+Deleted 0 old compressed file(s).
+logs rotated successfully
diff --git a/2026/day-20/README.md b/2026/day-20/README.md
new file mode 100644
index 0000000000..e8806923b3
--- /dev/null
+++ b/2026/day-20/README.md
@@ -0,0 +1,124 @@
+# Day 20 – Bash Scripting Challenge: Log Analyzer and Report Generator
+
+## Task
+
+You are a system administrator responsible for managing a network of servers. Every day, a log file is generated on each server containing important system events and error messages. Your job is to analyze these log files, identify specific events, and generate a summary report.
+
+Write a Bash script (`log_analyzer.sh`) that automates the process of analyzing log files and generating a daily summary report.
+
+---
+
+## Expected Output
+- A Bash script: `log_analyzer.sh`
+- A generated summary report: `log_report_<date>.txt`
+- A markdown file: `day-20-solution.md` documenting your approach
+
+---
+
+## Challenge Tasks
+
+### Task 1: Input and Validation
+Your script should:
+1. Accept the path to a log file as a command-line argument
+2. Exit with a clear error message if no argument is provided
+3. Exit with a clear error message if the file doesn't exist
+
+---
+
+### Task 2: Error Count
+1. Count the total number of lines containing the keyword `ERROR` or `Failed`
+2. Print the total error count to the console
+
+---
+
+### Task 3: Critical Events
+1. Search for lines containing the keyword `CRITICAL`
+2. Print those lines along with their line number
+
+Example output:
+```
+--- Critical Events ---
+Line 84: 2025-07-29 10:15:23 CRITICAL Disk space below threshold
+Line 217: 2025-07-29 14:32:01 CRITICAL Database connection lost
+```
+
+---
+
+### Task 4: Top Error Messages
+1. Extract all lines containing `ERROR`
+2. Identify the **top 5 most common** error messages
+3. Display them with their occurrence count, sorted in descending order
+
+Example output:
+```
+--- Top 5 Error Messages ---
+45 Connection timed out
+32 File not found
+28 Permission denied
+15 Disk I/O error
+9 Out of memory
+```
+
+---
+
+### Task 5: Summary Report
+Generate a summary report to a text file named `log_report_<date>.txt` (e.g., `log_report_2026-02-11.txt`). The report should include:
+1. Date of analysis
+2. Log file name
+3. Total lines processed
+4. Total error count
+5. Top 5 error messages with their occurrence count
+6. List of critical events with line numbers
+
+---
+
+### Task 6 (Optional): Archive Processed Logs
+Add a feature to:
+1. Create an `archive/` directory if it doesn't exist
+2. Move the processed log file into `archive/` after analysis
+3. Print a confirmation message
+
+---
+
+## Sample Log File
+
+A sample log file is available in this directory: `sample_log.log`
+
+You can also pick real-world log datasets from the [LogHub repository](https://github.com/logpai/loghub) to test your script against production-like logs (e.g., ZooKeeper, HDFS, Apache, Linux syslogs).
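+
+The hints below mention associative arrays; here is a minimal sketch of using one to tally error messages. It assumes the sample log's `date time [LEVEL] message` layout (so the message starts at field 4) — adjust the field positions for other formats:
+
+```bash
+#!/bin/bash
+# Illustrative only: count ERROR messages with a bash associative array.
+declare -A error_map
+
+while IFS= read -r msg; do
+    error_map["$msg"]=$(( ${error_map["$msg"]:-0} + 1 ))
+done < <(grep "ERROR" sample_log.log | cut -d' ' -f4-)
+
+# Print the counts, most frequent first
+for msg in "${!error_map[@]}"; do
+    printf '%d %s\n' "${error_map[$msg]}" "$msg"
+done | sort -rn | head -5
+```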
+
+---
+
+## Hints
+- Count errors: `grep -c "ERROR" logfile.log`
+- Print with line numbers: `grep -n "CRITICAL" logfile.log`
+- Top occurrences: `grep "ERROR" logfile.log | awk '{$1=$2=$3=""; print}' | sort | uniq -c | sort -rn | head -5`
+- Associative arrays: `declare -A error_map`
+- Date for filename: `date +%Y-%m-%d`
+- Move files: `mv logfile.log archive/`
+
+---
+
+## Documentation
+
+Create `day-20-solution.md` with:
+- Your script's code
+- Sample output from running against the sample log
+- What commands/tools you used (`grep`, `awk`, `sort`, `uniq`, etc.)
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add your scripts and `day-20-solution.md` to `2026/day-20/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+
+Share your log analyzer project on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-20/day-20-solution.md b/2026/day-20/day-20-solution.md
new file mode 100644
index 0000000000..838ea3afd0
--- /dev/null
+++ b/2026/day-20/day-20-solution.md
@@ -0,0 +1,61 @@
+# MY BASH SCRIPT FOR LOG ANALYSER AND SAMPLE REPORT
+
+#!/bin/bash
+set -euo pipefail
+TIMESTAMP=$(date '+%Y-%m-%d-%H-%M-%S')
+
+error_check() {
+    if [ $# -eq 0 ]; then
+        echo "NO ARGUMENTS PROVIDED" >&2
+        echo "USAGE: $0 <log_file>" >&2
+        exit 1
+    fi
+
+    LOG_FILE="$1"
+
+    if [ ! -f "$LOG_FILE" ]; then
+        echo "Error: FILE does not exist: $LOG_FILE" >&2
+        exit 1
+    fi
+
+    TOTAL_LINES=$(wc -l < "$LOG_FILE")
+    echo "Logs found"
+}
+error_check "$@"
+
+error_count() {
+    echo "Total error count: $(grep -cE 'ERROR|Failed' "$LOG_FILE")"
+}
+
+critical_events() {
+    CRITICAL=$(awk '/CRITICAL/ { print NR,$0 }' "$LOG_FILE")
+    echo "--------------CRITICAL EVENTS------------------"
+    echo "$CRITICAL"
+}
+
+top_error() {
+    GREP=$(grep "ERROR" "$LOG_FILE" | awk '{$1=$2=$3=""; print}' | sort | uniq -c | sort -rn | head -2 || true)
+    echo "---------------TOP 2 ERROR MESSAGES-------------------"
+    echo "$GREP"
+}
+
+Summary_report() {
+    echo "TIMESTAMP: $TIMESTAMP"
+    echo "$LOG_FILE"
+    echo "Total lines processed: $TOTAL_LINES"
+    error_count
+    top_error
+    echo "----------TOP 2 ERROR MESSAGES COUNT------------"
+    top_error | wc -l
+    critical_events
+} >> "log_report_$(basename "$LOG_FILE")_${TIMESTAMP}.txt"
+Summary_report
diff --git a/2026/day-20/day20.png b/2026/day-20/day20.png
new file mode 100644
index 0000000000..5f984a8b85
Binary files /dev/null and b/2026/day-20/day20.png differ
diff --git a/2026/day-20/sample_logs_generator.sh b/2026/day-20/sample_logs_generator.sh
new file mode 100755
index 0000000000..1058333381
--- /dev/null
+++ b/2026/day-20/sample_logs_generator.sh
@@ -0,0 +1,40 @@
+#!/bin/bash
+
+# Usage: ./log_generator.sh <log_file_path> <num_lines>
+
+if [ "$#" -ne 2 ]; then
+    echo "Usage: $0 <log_file_path> <num_lines>"
+    exit 1
+fi
+
+log_file_path="$1"
+num_lines="$2"
+
+if [ -e "$log_file_path" ]; then
+    echo "Error: File already exists at $log_file_path."
+    exit 1
+fi
+
+# List of possible log message levels
+log_levels=("INFO" "DEBUG" "ERROR" "WARNING" "CRITICAL")
+
+# List of possible error messages
+error_messages=("Failed to connect" "Disk full" "Segmentation fault" "Invalid input" "Out of memory")
+
+# Function to generate a random log line
+generate_log_line() {
+    local log_level="${log_levels[$((RANDOM % ${#log_levels[@]}))]}"
+    local error_msg=""
+    if [ "$log_level" == "ERROR" ]; then
+        error_msg="${error_messages[$((RANDOM % ${#error_messages[@]}))]}"
+    fi
+    echo "$(date '+%Y-%m-%d %H:%M:%S') [$log_level] $error_msg - $RANDOM"
+}
+
+# Create the log file with random log lines
+touch "$log_file_path"
+for ((i=0; i<num_lines; i++)); do
+    generate_log_line >> "$log_file_path"
+done
+
+echo "Log file created at: $log_file_path with $num_lines lines."
diff --git a/2026/day-21/README.md b/2026/day-21/README.md
new file mode 100644
index 0000000000..fee60cc22c
--- /dev/null
+++ b/2026/day-21/README.md
@@ -0,0 +1,139 @@
+# Day 21 – Shell Scripting Cheat Sheet: Build Your Own Reference Guide
+
+## Task
+
+You've spent the last several days learning Shell scripting — from basics to real-world projects. Now it's time to consolidate everything into a **personal cheat sheet** that you can use as a quick-reference guide for the rest of your DevOps journey.
+
+The best way to revise is to **teach it back**. Writing a cheat sheet forces you to organize your understanding and identify gaps.
+
+---
+
+## Expected Output
+- A markdown file: `shell_scripting_cheatsheet.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: Basics
+Document the following with short descriptions and examples:
+1. Shebang (`#!/bin/bash`) — what it does and why it matters
+2. Running a script — `chmod +x`, `./script.sh`, `bash script.sh`
+3. Comments — single line (`#`) and inline
+4. Variables — declaring, using, and quoting (`$VAR`, `"$VAR"`, `'$VAR'`)
+5. Reading user input — `read`
+6. Command-line arguments — `$0`, `$1`, `$#`, `$@`, `$?`
+
+---
+
+### Task 2: Operators and Conditionals
+Document with examples:
+1. String comparisons — `=`, `!=`, `-z`, `-n`
+2. Integer comparisons — `-eq`, `-ne`, `-lt`, `-gt`, `-le`, `-ge`
+3. File test operators — `-f`, `-d`, `-e`, `-r`, `-w`, `-x`, `-s`
+4. `if`, `elif`, `else` syntax
+5. Logical operators — `&&`, `||`, `!`
+6. Case statements — `case ... esac`
+
+---
+
+### Task 3: Loops
+Document with examples:
+1. `for` loop — list-based and C-style
+2. `while` loop
+3. `until` loop
+4. Loop control — `break`, `continue`
+5. Looping over files — `for file in *.log`
+6. Looping over command output — `while read line`
+
+---
+
+### Task 4: Functions
+Document with examples:
+1. Defining a function — `function_name() { ... }`
+2. Calling a function
+3. Passing arguments to functions — `$1`, `$2` inside functions
+4. Return values — `return` vs `echo`
+5. Local variables — `local`
+
+---
+
+### Task 5: Text Processing Commands
+Document the most useful flags/patterns for each:
+1. `grep` — search patterns, `-i`, `-r`, `-c`, `-n`, `-v`, `-E`
+2. `awk` — print columns, field separator, patterns, `BEGIN/END`
+3. `sed` — substitution, delete lines, in-place edit
+4. `cut` — extract columns by delimiter
+5. `sort` — alphabetical, numerical, reverse, unique
+6. `uniq` — deduplicate, count
+7. `tr` — translate/delete characters
+8. `wc` — line/word/char count
+9. `head` / `tail` — first/last N lines, follow mode
+
+---
+
+### Task 6: Useful Patterns and One-Liners
+Include at least 5 real-world one-liners you find useful.
Examples: +- Find and delete files older than N days +- Count lines in all `.log` files +- Replace a string across multiple files +- Check if a service is running +- Monitor disk usage with alerts +- Parse CSV or JSON from command line +- Tail a log and filter for errors in real time + +--- + +### Task 7: Error Handling and Debugging +Document with examples: +1. Exit codes — `$?`, `exit 0`, `exit 1` +2. `set -e` — exit on error +3. `set -u` — treat unset variables as error +4. `set -o pipefail` — catch errors in pipes +5. `set -x` — debug mode (trace execution) +6. Trap — `trap 'cleanup' EXIT` + +--- + +### Task 8: Bonus — Quick Reference Table +Create a summary table like this at the top of your cheat sheet: + +| Topic | Key Syntax | Example | +|-------|-----------|---------| +| Variable | `VAR="value"` | `NAME="DevOps"` | +| Argument | `$1`, `$2` | `./script.sh arg1` | +| If | `if [ condition ]; then` | `if [ -f file ]; then` | +| For loop | `for i in list; do` | `for i in 1 2 3; do` | +| Function | `name() { ... }` | `greet() { echo "Hi"; }` | +| Grep | `grep pattern file` | `grep -i "error" log.txt` | +| Awk | `awk '{print $1}' file` | `awk -F: '{print $1}' /etc/passwd` | +| Sed | `sed 's/old/new/g' file` | `sed -i 's/foo/bar/g' config.txt` | + +--- + +## Format Guidelines + +Your cheat sheet should be: +- Written in **Markdown** (`.md`) +- Organized with **clear headings** for each section +- Include **code blocks** with syntax highlighting (` ```bash `) +- Keep explanations **short** — 1-2 lines max per item +- Focus on **practical examples** over theory +- Something **you would actually refer back to** on the job + +--- + +## Submission +1. Add your `shell_scripting_cheatsheet.md` to `2026/day-21/` +2. Commit and push to your fork + +--- + +## Learn in Public + +Share your cheat sheet on LinkedIn — help others revise too! + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-22/README.md b/2026/day-22/README.md new file mode 100644 index 0000000000..88bb7bf280 --- /dev/null +++ b/2026/day-22/README.md @@ -0,0 +1,105 @@ +# Day 22 – Introduction to Git: Your First Repository + +## Task + +Today marks the beginning of your Git journey. Git is the backbone of modern DevOps — every tool, pipeline, and workflow revolves around version control. Before diving into advanced concepts, you need to get comfortable with the basics by doing. + +You will: +- Understand what Git is and why it matters +- Set up your first Git repository from scratch +- Start building a living document of Git commands + +--- + +## Expected Output +- A local Git repository with a clean commit history +- A file called `git-commands.md` that you will keep updating in future days +- A file called `day-22-notes.md` with your answers + +--- + +## Challenge Tasks + +### Task 1: Install and Configure Git +1. Verify Git is installed on your machine +2. Set up your Git identity — name and email +3. Verify your configuration + +--- + +### Task 2: Create Your Git Project +1. Create a new folder called `devops-git-practice` +2. Initialize it as a Git repository +3. Check the status — read and understand what Git is telling you +4. Explore the hidden `.git/` directory — look at what's inside + +--- + +### Task 3: Create Your Git Commands Reference +1. Create a file called `git-commands.md` inside the repo +2. Add the Git commands you've used so far, organized by category: + - **Setup & Config** + - **Basic Workflow** + - **Viewing Changes** +3. 
For each command, write:
+   - What it does (1 line)
+   - An example of how to use it
+
+---
+
+### Task 4: Stage and Commit
+1. Stage your file
+2. Check what's staged
+3. Commit with a meaningful message
+4. View your commit history
+
+---
+
+### Task 5: Make More Changes and Build History
+1. Edit `git-commands.md` — add more commands as you discover them
+2. Check what changed since your last commit
+3. Stage and commit again with a different, descriptive message
+4. Repeat this process at least **3 times** so you have multiple commits in your history
+5. View the full history in a compact format
+
+---
+
+### Task 6: Understand the Git Workflow
+Answer these questions in your own words (add them to a `day-22-notes.md` file):
+1. What is the difference between `git add` and `git commit`?
+2. What does the **staging area** do? Why doesn't Git just commit directly?
+3. What information does `git log` show you?
+4. What is the `.git/` folder and what happens if you delete it?
+5. What is the difference between a **working directory**, **staging area**, and **repository**?
+
+---
+
+## Ongoing Task
+
+**Keep updating `git-commands.md` every day** as you learn new Git commands in the upcoming days. This will become your personal Git reference. Maintain a clean commit history — one commit per update with a clear message.
+
+---
+
+## Hints
+- All you need today are about 8-10 Git commands — Google them, try them, break things
+- Read what `git status` tells you — it's your best friend
+- Use `man git-<command>` or `git <command> --help` to explore
+
+---
+
+## Submission
+1. Share a screenshot of your `git log --oneline` output showing multiple commits
+2. Add your `day-22-notes.md` to `2026/day-22/`
+3. Commit and push to your fork
+4. Add your submission for Community Builder of the week on Discord
+
+---
+
+## Learn in Public
+
+Share your first Git repo and commit history on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-22/git-commands.md b/2026/day-22/git-commands.md
new file mode 100644
index 0000000000..4fbe083a04
--- /dev/null
+++ b/2026/day-22/git-commands.md
@@ -0,0 +1,13 @@
+#### git-commands.md
+
+# LIST OF GIT COMMANDS I USED
+
+1> git init {IT INITIALIZES THE LOCAL REPO AS A GIT REPO}
+# IF YOU WANT TO TURN A DIRECTORY INTO A GIT DIRECTORY WHERE YOU CAN ADD OR COMMIT FILES
+2> git config --global user.name "<NAME>" {USED TO SET THE USER NAME}
+3> git config --global user.email "<EMAIL>" {USED TO SET THE USER EMAIL-ID}
+4> git add . {IT MOVES YOUR FILES TO THE STAGING AREA}
+5> git commit -m "<MESSAGE>" {COMMITS YOUR FILE(S)}
+6> git log {shows commit history}
diff --git a/2026/day-23/README.md b/2026/day-23/README.md
new file mode 100644
index 0000000000..d5d4722461
--- /dev/null
+++ b/2026/day-23/README.md
@@ -0,0 +1,90 @@
+# Day 23 – Git Branching & Working with GitHub
+
+## Task
+
+Now that you know how to create repos, stage, and commit — it's time to learn the most powerful concept in Git: **branching**. Branches let you work on features, fixes, and experiments in isolation without breaking your main code. You'll also push your work to GitHub for the first time.
+
+---
+
+## Expected Output
+- A markdown file: `day-23-notes.md` with your answers
+- Continue updating `git-commands.md` in your `devops-git-practice` repo
+- Your practice repo pushed to GitHub
+
+---
+
+## Challenge Tasks
+
+### Task 1: Understanding Branches
+Answer these in your `day-23-notes.md`:
+1. What is a branch in Git?
+2.
Why do we use branches instead of committing everything to `main`?
+3. What is `HEAD` in Git?
+4. What happens to your files when you switch branches?
+
+---
+
+### Task 2: Branching Commands — Hands-On
+In your `devops-git-practice` repo, perform the following:
+1. List all branches in your repo
+2. Create a new branch called `feature-1`
+3. Switch to `feature-1`
+4. Create a new branch and switch to it in a single command — call it `feature-2`
+5. Try using `git switch` to move between branches — how is it different from `git checkout`?
+6. Make a commit on `feature-1` that does **not** exist on `main`
+7. Switch back to `main` — verify that the commit from `feature-1` is not there
+8. Delete a branch you no longer need
+9. Add all branching commands to your `git-commands.md`
+
+---
+
+### Task 3: Push to GitHub
+1. Create a **new repository** on GitHub (do NOT initialize it with a README)
+2. Connect your local `devops-git-practice` repo to the GitHub remote
+3. Push your `main` branch to GitHub
+4. Push `feature-1` branch to GitHub
+5. Verify both branches are visible on GitHub
+6. Answer in your notes: What is the difference between `origin` and `upstream`?
+
+---
+
+### Task 4: Pull from GitHub
+1. Make a change to a file **directly on GitHub** (use the GitHub editor)
+2. Pull that change to your local repo
+3. Answer in your notes: What is the difference between `git fetch` and `git pull`?
+
+---
+
+### Task 5: Clone vs Fork
+1. **Clone** any public repository from GitHub to your local machine
+2. **Fork** the same repository on GitHub, then clone your fork
+3. Answer in your notes:
+   - What is the difference between clone and fork?
+   - When would you clone vs fork?
+   - After forking, how do you keep your fork in sync with the original repo?
+
+---
+
+## Hints
+- When you create a branch, it starts from the commit you're currently on
+- `git switch` is the modern alternative to `git checkout` for switching branches
+- To push a new branch: `git push -u origin <branch>`
+- A fork is a GitHub concept, not a Git concept
+
+---
+
+## Submission
+1. Add your `day-23-notes.md` to `2026/day-23/`
+2. Update `git-commands.md` with all new commands and commit
+3. Push to your fork
+
+---
+
+## Learn in Public
+
+Share your branching workflow and first GitHub push on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-23/git-commands.md b/2026/day-23/git-commands.md
new file mode 100644
index 0000000000..362e7ea104
--- /dev/null
+++ b/2026/day-23/git-commands.md
@@ -0,0 +1,26 @@
+# LIST OF GIT COMMANDS I USED
+
+1> git init {IT INITIALIZES THE LOCAL REPO AS A GIT REPO}
+# IF YOU WANT TO TURN A DIRECTORY INTO A GIT DIRECTORY WHERE YOU CAN ADD OR COMMIT FILES
+
+2> git config --global user.name "<NAME>" {USED TO SET THE USER NAME}
+
+3> git config --global user.email "<EMAIL>" {USED TO SET THE USER EMAIL-ID}
+
+4> git add . {IT MOVES YOUR FILES TO THE STAGING AREA}
+
+5> git commit -m "<MESSAGE>" {COMMITS YOUR FILE(S)}
+
+6> git log {shows commit history}
+
+7> git branch (lists all branches; the * marker shows which branch you are currently on)
+
+8> git checkout -b <branch> (makes a new branch and switches to it at the same time)
+
+9> git switch <branch> (takes you to an existing branch)
+
+10> git branch -d feature-2 (deletes a local branch)
+
diff --git a/2026/day-24/README.md b/2026/day-24/README.md
new file mode 100644
index 0000000000..de51b4f827
--- /dev/null
+++ b/2026/day-24/README.md
@@ -0,0 +1,104 @@
+# Day 24 – Advanced Git: Merge, Rebase, Stash & Cherry Pick
+
+## Task
+
+You know how to branch and push to GitHub. Now it's time to learn how branches come back together — and what to do when you're in the middle of something and need to context-switch. These are the Git skills that separate beginners from confident practitioners.
+
+---
+
+## Expected Output
+- A markdown file: `day-24-notes.md` with your observations and answers
+- Continue updating `git-commands.md` in your `devops-git-practice` repo
+
+---
+
+## Challenge Tasks
+
+### Task 1: Git Merge — Hands-On
+1. Create a new branch `feature-login` from `main`, add a couple of commits to it
+2. Switch back to `main` and merge `feature-login` into `main`
+3. Observe the merge — did Git do a **fast-forward** merge or a **merge commit**?
+4. Now create another branch `feature-signup`, add commits to it — but also add a commit to `main` before merging
+5. Merge `feature-signup` into `main` — what happens this time?
+6. Answer in your notes:
+   - What is a fast-forward merge?
+   - When does Git create a merge commit instead?
+   - What is a merge conflict? (try creating one intentionally by editing the same line in both branches)
+
+---
+
+### Task 2: Git Rebase — Hands-On
+1. Create a branch `feature-dashboard` from `main`, add 2-3 commits
+2. While on `main`, add a new commit (so `main` moves ahead)
+3. Switch to `feature-dashboard` and rebase it onto `main`
+4. Observe your `git log --oneline --graph --all` — how does the history look compared to a merge?
+5. Answer in your notes:
+   - What does rebase actually do to your commits?
+   - How is the history different from a merge?
+   - Why should you **never rebase commits that have been pushed and shared** with others?
+   - When would you use rebase vs merge?
+
+---
+
+### Task 3: Squash Commit vs Merge Commit
+1. Create a branch `feature-profile`, add 4-5 small commits (typo fix, formatting, etc.)
+2. Merge it into `main` using `--squash` — what happens?
+3. Check `git log` — how many commits were added to `main`?
+4. Now create another branch `feature-settings`, add a few commits
+5. Merge it into `main` **without** `--squash` (regular merge) — compare the history
+6. Answer in your notes:
+   - What does squash merging do?
+   - When would you use squash merge vs regular merge?
+   - What is the trade-off of squashing?
+
+---
+
+### Task 4: Git Stash — Hands-On
+1. Start making changes to a file but **do not commit**
+2. Now imagine you need to urgently switch to another branch — try switching. What happens?
+3. Use `git stash` to save your work-in-progress
+4. Switch to another branch, do some work, switch back
+5. Apply your stashed changes using `git stash pop`
+6. Try stashing multiple times and list all stashes
+7. Try applying a specific stash from the list
+8. Answer in your notes:
+   - What is the difference between `git stash pop` and `git stash apply`?
- When would you use stash in a real-world workflow?
+
+---
+
+### Task 5: Cherry Picking
+1. Create a branch `feature-hotfix`, make 3 commits with different changes
+2. Switch to `main`
+3. Cherry-pick **only the second commit** from `feature-hotfix` onto `main`
+4. Verify with `git log` that only that one commit was applied
+5. Answer in your notes:
+   - What does cherry-pick do?
+   - When would you use cherry-pick in a real project?
+   - What can go wrong with cherry-picking?
+
+---
+
+## Hints
+- Visualize history: `git log --oneline --graph --all`
+- To intentionally create a merge conflict: edit the **same line** of the **same file** on two branches
+- Stash with a message: `git stash push -m "description"`
+- Cherry-pick needs a commit hash — find it with `git log --oneline`
+
+---
+
+## Submission
+1. Add your `day-24-notes.md` to `2026/day-24/`
+2. Update `git-commands.md` with all new commands and commit
+3. Push to your fork
+
+---
+
+## Learn in Public
+
+Share your merge vs rebase comparison on LinkedIn — a diagram or screenshot of `git log --graph` goes a long way!
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-24/day-24-notes.md b/2026/day-24/day-24-notes.md
new file mode 100644
index 0000000000..a0d7d0fbde
--- /dev/null
+++ b/2026/day-24/day-24-notes.md
@@ -0,0 +1,54 @@
+## git merge
+# What is a fast forward merge?
+A merge where the branch pointer simply moves forward along a direct, linear path; it leaves no merge commit and happens automatically when a linear path is available.
+
+# When does Git create a merge commit?
+When Git needs to combine two divergent commit histories.
+
+# What is a merge conflict?
+When changes on different branches modify the same part of the same file, a conflict occurs.
+
+## git rebase
+# What does git rebase actually do to your commits?
+It replays them on top of the target branch, creating a linear commit history.
+
+# How is the history different from a merge?
+It doesn't show where a new branch was merged.
+
+# Why should you never rebase commits that have been pushed and shared with others?
+Because it rewrites commit history: others can no longer locate the branch that was changed, and their local version of history doesn't match the server's version.
+
+# When would you rebase vs merge?
+Use rebase to sync your private branch with main and clean up pending commits.
+Use merge to sync a shared branch with main, and after finishing a feature to move it to main.
+
+## git squash
+# What does squash merging do?
+It combines all the commits into one and puts it in the staging area to be committed.
+
+# When would you use squash merge vs regular merge?
+When you have many small fix-up or incremental commits, use squash merge.
+Use regular merge when your commits contain important and distinct architectural steps.
+
+# What is the trade-off of squashing?
+Clutter: very low, as there is only one commit per feature.
+Debugging: harder, as all the changes are bundled into a single commit.
+
+## git stash
+# What is the difference between "git stash pop" and "git stash apply"?
+git stash pop applies your stashed work and deletes the stash entry immediately.
+git stash apply applies a copy, keeping the original entry in the stash list.
+
+# When would you stash in a real-world workflow?
+Only when my work isn't ready to be committed but I need to switch context.
+
+## cherry picking
+# What does cherry-pick do?
+It can pick a single commit to merge instead of merging the whole commit history.
+
+# When would you cherry-pick in a real project?
+If only some commits are good enough to be merged into main.
+
+# What can go wrong with cherry-picking?
+If you later merge the two branches, the same change can appear twice in the history.
+
diff --git a/2026/day-25/README.md b/2026/day-25/README.md
new file mode 100644
index 0000000000..ec5888ab16
--- /dev/null
+++ b/2026/day-25/README.md
@@ -0,0 +1,102 @@
+# Day 25 – Git Reset vs Revert & Branching Strategies
+
+## Task
+
+You'll learn how to **undo mistakes** safely — one of the most important skills in Git. You'll also explore **branching strategies** used by real engineering teams to manage code at scale.
+
+---
+
+## Expected Output
+- A markdown file: `day-25-notes.md` with your observations and answers
+- Continue updating `git-commands.md` in your `devops-git-practice` repo
+
+---
+
+## Challenge Tasks
+
+### Task 1: Git Reset — Hands-On
+1. Make 3 commits in your practice repo (commit A, B, C)
+2. Use `git reset --soft` to go back one commit — what happens to the changes?
+3. Re-commit, then use `git reset --mixed` to go back one commit — what happens now?
+4. Re-commit, then use `git reset --hard` to go back one commit — what happens this time?
+5. Answer in your notes:
+   - What is the difference between `--soft`, `--mixed`, and `--hard`?
+   - Which one is destructive and why?
+   - When would you use each one?
+   - Should you ever use `git reset` on commits that are already pushed?
+
+---
+
+### Task 2: Git Revert — Hands-On
+1. Make 3 commits (commit X, Y, Z)
+2. Revert commit Y (the middle one) — what happens?
+3. Check `git log` — is commit Y still in the history?
+4. Answer in your notes:
+   - How is `git revert` different from `git reset`?
+   - Why is revert considered **safer** than reset for shared branches?
+   - When would you use revert vs reset?
+
+---
+
+### Task 3: Reset vs Revert — Summary
+Create a comparison in your notes:
+
+| | `git reset` | `git revert` |
+|---|---|---|
+| What it does | ? | ? |
+| Removes commit from history? | ? | ? |
+| Safe for shared/pushed branches? | ? | ? |
+| When to use | ? | ? |
+
+---
+
+### Task 4: Branching Strategies
+Research the following branching strategies and document each in your notes with:
+- How it works (short description)
+- A simple diagram or flow (text-based is fine)
+- When/where it's used
+- Pros and cons
+
+1. **GitFlow** — develop, feature, release, hotfix branches
+2. **GitHub Flow** — simple, single main branch + feature branches
+3. **Trunk-Based Development** — everyone commits to main, short-lived branches
+4. Answer:
+   - Which strategy would you use for a startup shipping fast?
+   - Which strategy would you use for a large team with scheduled releases?
+   - Which one does your favorite open-source project use? (check any repo on GitHub)
+
+---
+
+### Task 5: Git Commands Reference Update
+Update your `git-commands.md` to cover everything from Days 22–25:
+- Setup & Config
+- Basic Workflow (add, commit, status, log, diff)
+- Branching (branch, checkout, switch)
+- Remote (push, pull, fetch, clone, fork)
+- Merging & Rebasing
+- Stash & Cherry Pick
+- Reset & Revert
+
+---
+
+## Hints
+- `git reflog` is your safety net — it shows everything Git has done, even after a hard reset
+- For branching strategies, look at how projects like Kubernetes, React, or Linux kernel manage branches
+
+---
+
+## Submission
+1. Add your `day-25-notes.md` to `2026/day-25/`
+2. Update `git-commands.md` — commit and push
+3.
Push to your fork
+
+---
+
+## Learn in Public
+
+Share your Reset vs Revert comparison or your branching strategy notes on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-25/day-25-notes.md b/2026/day-25/day-25-notes.md
new file mode 100644
index 0000000000..f58b1c8faf
--- /dev/null
+++ b/2026/day-25/day-25-notes.md
@@ -0,0 +1,58 @@
+# Task 1: (Git Reset — Hands-On)
+
+## What is the difference between --soft, --mixed, and --hard?
+--soft → moves HEAD/branch, keeps staging area and working directory unchanged
+--mixed (default) → moves HEAD/branch, resets staging area, keeps working directory unchanged
+--hard → moves HEAD/branch, resets staging area and overwrites working directory to match the target commit
+
+## Which one is destructive and why?
+--hard is destructive because it permanently discards uncommitted changes in both the staging area and working directory (overwrites files on disk).
+
+## When would you use each one?
+--soft → when you want to uncommit but keep changes staged (e.g. edit last commit, split commits)
+--mixed → when you want to uncommit and unstage changes but keep the files modified in your working directory
+--hard → when you want to completely throw away all uncommitted changes and go back to a clean state at a specific commit
+
+## Should you ever use git reset on commits that are already pushed?
+Almost never on shared branches — use git revert instead to avoid rewriting public history and breaking teammates' work.
+
+# Task 2: (Git Revert — Hands-On)
+
+## How is git revert different from git reset?
+git revert creates a new commit that undoes changes while keeping history intact; git reset moves the branch pointer and can discard commits from history.
+
+## Why is revert considered safer than reset for shared branches?
+Revert preserves public history so collaborators can pull safely without conflicts or lost work; reset rewrites history and requires a force push that breaks others' branches.
+
+## When would you use revert vs reset?
+Use revert for already-pushed/shared commits to avoid breaking history; use reset for local-only commits you haven't pushed yet or on personal branches.
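+
+A minimal sketch of both, side by side (the hash is a placeholder):
+
+```bash
+git reset --soft HEAD~1    # uncommit; changes stay staged
+git reset --mixed HEAD~1   # uncommit and unstage; files stay modified
+git reset --hard HEAD~1    # discard the commit and all uncommitted changes
+
+git revert a1b2c3d         # new commit that undoes a1b2c3d; history intact
+```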
+# Task 4: (Branching Strategies)
+
+## 1) GitFlow
+
+How it works: Long-lived main (prod) + develop; feature → develop, release → main, hotfix → main & develop.
+Flow: main ← release ← develop ← feature ; hotfix → main + develop
+Used: Enterprises with planned versioned releases.
+Pros/Cons: Clear structure & stable releases; but heavy, slow, merge-conflict prone.
+
+## 2) GitHub Flow
+
+How it works: Single main; short feature branches → PR → merge to main → deploy.
+Flow: main ← feature (PR)
+Used: SaaS, CI/CD environments deploying continuously.
+Pros/Cons: Simple & fast; but weak for complex release/version control.
+
+## 3) Trunk-Based Development
+
+How it works: Developers commit directly to main (trunk) or very short-lived branches; heavy CI + feature flags.
+Flow: devs → main (daily merges)
+Used: High-velocity teams (e.g., big tech CI-driven orgs).
+Pros/Cons: Minimal merge pain & fast integration; requires strong discipline and automation.
+
+## Which strategy would you use for a startup shipping fast?
+
+Startup shipping fast: GitHub Flow or Trunk-Based (speed > structure).
+
+## Which strategy would you use for a large team with scheduled releases?
+
+Large team with scheduled releases: GitFlow (controlled release cycles).
+
+## Which one does your favorite open-source project use? (check any repo on GitHub)
+
+Open-source example: Kubernetes uses a trunk-based style with main + release branches.
diff --git a/2026/day-26/README.md b/2026/day-26/README.md
new file mode 100644
index 0000000000..91782d54b3
--- /dev/null
+++ b/2026/day-26/README.md
@@ -0,0 +1,96 @@
+# Day 26 – GitHub CLI: Manage GitHub from Your Terminal
+
+## Task
+
+Every time you switch to the browser to create a PR, check an issue, or manage a repo — you lose context. The **GitHub CLI (`gh`)** lets you do all of that without leaving your terminal. For DevOps engineers, this is essential — especially when you start automating workflows, scripting PR reviews, and managing repos at scale.
+
+---
+
+## Expected Output
+- A markdown file: `day-26-notes.md` with your observations and answers
+- Add `gh` commands to your `git-commands.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: Install and Authenticate
+1. Install the GitHub CLI on your machine
+2. Authenticate with your GitHub account
+3. Verify you're logged in and check which account is active
+4. Answer in your notes: What authentication methods does `gh` support?
+
+---
+
+### Task 2: Working with Repositories
+1. Create a **new GitHub repo** directly from the terminal — make it public with a README
+2. Clone a repo using `gh` instead of `git clone`
+3. View details of one of your repos from the terminal
+4. List all your repositories
+5. Open a repo in your browser directly from the terminal
+6. Delete the test repo you created (be careful!)
+
+---
+
+### Task 3: Issues
+1. Create an issue on one of your repos from the terminal — give it a title, body, and a label
+2. List all open issues on that repo
+3. View a specific issue by its number
+4. Close an issue from the terminal
+5. Answer in your notes: How could you use `gh issue` in a script or automation?
+
+---
+
+### Task 4: Pull Requests
+1. Create a branch, make a change, push it, and create a **pull request** entirely from the terminal
+2. List all open PRs on a repo
+3. View the details of your PR — check its status, reviewers, and checks
+4. Merge your PR from the terminal
+5. Answer in your notes:
+   - What merge methods does `gh pr merge` support?
+   - How would you review someone else's PR using `gh`?
+
+---
+
+### Task 5: GitHub Actions & Workflows (Preview)
+1. List the workflow runs on any public repo that uses GitHub Actions
+2. View the status of a specific workflow run
+3. Answer in your notes: How could `gh run` and `gh workflow` be useful in a CI/CD pipeline?
+
+(Don't worry if you haven't learned GitHub Actions yet — this is a preview for upcoming days)
+
+---
+
+### Task 6: Useful `gh` Tricks
+Explore and try these — add the ones you find useful to your `git-commands.md`:
+1. `gh api` — make raw GitHub API calls from the terminal
+2. `gh gist` — create and manage GitHub Gists
+3. `gh release` — create and manage releases
+4. `gh alias` — create shortcuts for commands you use often
+5. `gh search repos` — search GitHub repos from the terminal
+
+---
+
+## Hints
+- `gh help` and `gh --help` are your best friends
+- Most `gh` commands work with `--repo owner/repo` to target a specific repo
+- Use `--json` flag with most commands to get machine-readable output (useful for scripting)
+- `gh pr create --fill` auto-fills the PR title and body from your commits
+
+---
+
+## Submission
+1. Add your `day-26-notes.md` to `2026/day-26/`
+2. Update `git-commands.md` with `gh` commands — this completes your Git & GitHub reference from Days 22–26
+3.
Push to your fork
+
+---
+
+## Learn in Public
+
+Share your favorite `gh` commands or a screenshot of creating a PR from the terminal on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-26/day-26-notes.md b/2026/day-26/day-26-notes.md
new file mode 100644
index 0000000000..20cd58e334
--- /dev/null
+++ b/2026/day-26/day-26-notes.md
@@ -0,0 +1,27 @@
+# Task 1: Install and Authenticate
+## What authentication methods does gh support?
+gh can log you in through a web browser (OAuth) or with a personal access token; for git operations it supports both HTTPS and SSH.
+
+# Task 3: Issues
+## How could you use gh issue in a script or automation?
+You can use gh issue in scripts to automatically create issues when errors or failures occur in a project.
+It can also be used to list or close issues automatically during CI/CD workflows on GitHub.
+This helps teams track problems without manually managing issues.
+
+# Task 4: Pull Requests
+
+## What merge methods does gh pr merge support?
+gh pr merge supports the merge, squash, and rebase merge methods on GitHub.
+
+## How would you review someone else's PR using gh?
+You can review a PR by viewing it with gh pr view and checking the changes with gh pr diff; you can then approve it or request changes with gh pr review.
+
+# Task 5: GitHub Actions & Workflows (Preview)
+
+## How could gh run and gh workflow be useful in a CI/CD pipeline?
+gh run and gh workflow in GitHub CLI help manage GitHub Actions from the terminal.
+
+They can be used in CI/CD pipelines to trigger workflows, monitor workflow runs, and check logs without opening the GitHub website. This helps automate builds, deployments, and debugging directly from scripts or automation tools.
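+
+A few of these stitched together as a minimal sketch (the repo name is a placeholder):
+
+```bash
+gh auth login                                # browser- or token-based login
+gh repo create demo-repo --public --clone    # create on GitHub and clone locally
+cd demo-repo
+gh issue create --title "Bug" --body "Details"
+gh pr create --fill                          # PR from the current branch
+gh pr merge --squash
+```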
diff --git a/2026/day-27/README.md b/2026/day-27/README.md
new file mode 100644
index 0000000000..57813e276c
--- /dev/null
+++ b/2026/day-27/README.md
@@ -0,0 +1,118 @@
+# Day 27 – GitHub Profile Makeover: Build Your Developer Identity
+
+## Task
+
+Your GitHub profile is your **developer resume**. Recruiters, hiring managers, and open-source maintainers will look at your GitHub before your LinkedIn. Today, you'll clean up your profile, organize your repositories, and create a profile README that tells your story.
+
+This is not a coding day — it's a **branding day**. Treat it seriously.
+
+---
+
+## Expected Output
+- A polished GitHub profile with a profile README
+- Well-organized repositories with proper names, descriptions, and READMEs
+- A markdown file: `day-27-notes.md` documenting what you changed and why
+
+---
+
+## Challenge Tasks
+
+### Task 1: Audit Your Current GitHub Profile
+Before making changes, assess where you stand:
+1. Visit your own GitHub profile as if you were a stranger — what impression does it give?
+2. Answer in your notes:
+   - Is your profile picture professional?
+   - Is your bio filled in? Does it say what you do?
+   - Are your pinned repos relevant, or are they random forks?
+   - Do your repos have descriptions, or are they blank?
+   - Would a recruiter understand what you've been working on?
+
+---
+
+### Task 2: Create Your Profile README
+1. Create a **special repository** with the same name as your GitHub username (e.g., `github.com/yourname/yourname`)
+2. Add a `README.md` — this will appear on your profile page
+3. Include the following in your profile README:
+   - A short introduction — who you are, what you're learning
+   - What you're currently working on (e.g., 90 Days of DevOps)
+   - Skills/tools you know or are learning (Linux, Git, Python, Shell, etc.)
+   - Links to your important repos
+   - How to reach you (LinkedIn, Twitter, email — whatever you're comfortable sharing)
+4. Keep it clean and simple — don't overload it with badges and widgets
+
+---
+
+### Task 3: Organize Your Repositories
+Create and organize the following repos (if you don't have them already):
+
+1. **90 Days of DevOps** — your fork or personal repo with all daily submissions
+   - Clear README explaining what the challenge is
+   - Organized folder structure by day
+
+2. **Shell Scripts** — a dedicated repo for all your shell scripting work
+   - Move/copy your scripts from Days 16–21 here
+   - Add a README listing what each script does
+
+3. **Python Scripts** — a dedicated repo for your Python projects
+   - Move/copy your scripts from Days 7–15 here
+   - Add a README listing what each script does
+
+4. **DevOps Notes** — a repo for your learning notes, cheat sheets, and references
+   - Add your shell scripting cheat sheet (Day 21)
+   - Add your git-commands.md
+   - Organize by topic (Linux, Git, Python, etc.)
+
+For **every repo**, make sure you have:
+- A clear, descriptive **repo name** (use hyphens, not spaces — e.g., `shell-scripts` not `Shell Scripts`)
+- A one-line **description** on GitHub
+- A proper **README.md** explaining what's inside
+- A relevant `.gitignore`
+
+---
+
+### Task 4: Pin Your Best Repos
+1. Go to your GitHub profile and select **6 pinned repositories**
+2. Choose repos that best represent your work and learning
+3. Make sure each pinned repo has a description and README
+
+---
+
+### Task 5: Clean Up
+1. Delete or archive repos that are empty, abandoned, or irrelevant
+2. Rename any repos with unclear names
+3. Make sure you're not exposing any secrets (`.env` files, API keys, passwords) in any repo — check your commit history too
+
+---
+
+### Task 6: Before & After
+1. Take a screenshot of your GitHub profile **before** you started today
+2. Take a screenshot **after** all your changes
+3. Add both to your `day-27-notes.md`
+4. Write 3 things you improved and why
+
+---
+
+## Tips for a Good Profile README
+- Keep it **short** — 15-20 lines max
+- Use headers and bullet points — don't write paragraphs
+- Show what you're **doing**, not just what you **know**
+- A few well-placed badges are fine, but don't turn it into a Christmas tree
+- Look at profiles you admire for inspiration — but make yours authentic
+
+---
+
+## Submission
+1. Add your `day-27-notes.md` (with before/after screenshots) to `2026/day-27/`
+2. Share the link to your updated GitHub profile
+3. Push to your fork
+
+---
+
+## Learn in Public
+
+Share your before & after GitHub profile screenshots on LinkedIn. Tag people who inspired your profile.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-27/day-27-notes.md b/2026/day-27/day-27-notes.md
new file mode 100644
index 0000000000..19612c4487
--- /dev/null
+++ b/2026/day-27/day-27-notes.md
@@ -0,0 +1,2 @@
+### I forgot to take a screenshot before editing my GitHub profile, as I was editing at the same time as I was reading the .md file.
+### So I will be uploading this file and a screenshot of my current profile, which is significantly different from how it was.
diff --git a/2026/day-27/github-profile.png b/2026/day-27/github-profile.png new file mode 100644 index 0000000000..d9467bec59 Binary files /dev/null and b/2026/day-27/github-profile.png differ diff --git a/2026/day-28/README.md b/2026/day-28/README.md new file mode 100644 index 0000000000..638b3fc2f9 --- /dev/null +++ b/2026/day-28/README.md @@ -0,0 +1,135 @@ +# Day 28 – Revision Day: Everything from Day 1 to Day 27 + +## Task + +You've covered a lot of ground in 27 days — DevOps fundamentals, Linux deep dives, Shell scripting, Python basics, Git & GitHub, and even your developer branding. Today, **stop and revise**. No new concepts. Just solidify what you've learned. + +The goal is to identify gaps, revisit topics you struggled with, and make sure you can confidently explain and use everything covered so far. + +--- + +## What You've Covered So Far + +| Days | Topic | Key Concepts | +|------|-------|-------------| +| 1 | DevOps & Cloud Intro | What is DevOps, SDLC, Cloud basics | +| 2–7 | Linux Fundamentals | Architecture, commands, processes, systemd, file system hierarchy, troubleshooting, text files | +| 8 | Cloud Server Setup | Docker, Nginx, web deployment | +| 9–11 | Users, Permissions & Ownership | User/group management, file permissions, chown/chgrp | +| 12 | Revision Day 1 | Days 1–11 recap | +| 13 | Volume Management | LVM — physical volumes, volume groups, logical volumes | +| 14–15 | Networking | Fundamentals, DNS, IP, subnets, ports, hands-on checks | +| 16–18 | Shell Scripting | Basics, loops, arguments, error handling, functions | +| 19–20 | Shell Scripting Projects | Log rotation, backup, crontab, log analyzer | +| 21 | Shell Scripting Cheat Sheet | Personal reference guide | +| 22–25 | Git & GitHub | Init, branching, merge, rebase, stash, cherry pick, reset, revert, branching strategies | +| 26 | GitHub CLI | Managing GitHub from the terminal | +| 27 | GitHub Profile | Profile README, repo organization, developer branding | + +--- + +## Challenge Tasks + +### Task 1: Self-Assessment Checklist +Go through the checklist below. For each item, mark yourself honestly: +- **Can do confidently** +- **Need to revisit** +- **Haven't done yet** + +#### Linux +- [ ] Navigate the file system, create/move/delete files and directories +- [ ] Manage processes — list, kill, background/foreground +- [ ] Work with systemd — start, stop, enable, check status of services +- [ ] Read and edit text files using vi/vim or nano +- [ ] Troubleshoot CPU, memory, and disk issues using top, free, df, du +- [ ] Explain the Linux file system hierarchy (/, /etc, /var, /home, /tmp, etc.) 
+- [ ] Create users and groups, manage passwords +- [ ] Set file permissions using chmod (numeric and symbolic) +- [ ] Change file ownership with chown and chgrp +- [ ] Create and manage LVM volumes +- [ ] Check network connectivity — ping, curl, netstat, ss, dig, nslookup +- [ ] Explain DNS resolution, IP addressing, subnets, and common ports + +#### Shell Scripting +- [ ] Write a script with variables, arguments, and user input +- [ ] Use if/elif/else and case statements +- [ ] Write for, while, and until loops +- [ ] Define and call functions with arguments and return values +- [ ] Use grep, awk, sed, sort, uniq for text processing +- [ ] Handle errors with set -e, set -u, set -o pipefail, trap +- [ ] Schedule scripts with crontab + +#### Git & GitHub +- [ ] Initialize a repo, stage, commit, and view history +- [ ] Create and switch branches +- [ ] Push to and pull from GitHub +- [ ] Explain clone vs fork +- [ ] Merge branches — understand fast-forward vs merge commit +- [ ] Rebase a branch and explain when to use it vs merge +- [ ] Use git stash and git stash pop +- [ ] Cherry-pick a commit from another branch +- [ ] Explain squash merge vs regular merge +- [ ] Use git reset (soft, mixed, hard) and git revert +- [ ] Explain GitFlow, GitHub Flow, and Trunk-Based Development +- [ ] Use GitHub CLI to create repos, PRs, and issues + +--- + +### Task 2: Revisit Your Weak Spots +1. Pick **3 topics** from the checklist where you marked "Need to revisit" +2. Go back to that day's challenge and redo the hands-on tasks +3. Document what you re-learned in `day-28-notes.md` + +--- + +### Task 3: Quick-Fire Questions +Answer these from memory (no Googling). Then verify your answers: + +1. What does `chmod 755 script.sh` do? +2. What is the difference between a process and a service? +3. How do you find which process is using port 8080? +4. What does `set -euo pipefail` do in a shell script? +5. What is the difference between `git reset --hard` and `git revert`? +6. What branching strategy would you recommend for a team of 5 developers shipping weekly? +7. What does `git stash` do and when would you use it? +8. How do you schedule a script to run every day at 3 AM? +9. What is the difference between `git fetch` and `git pull`? +10. What is LVM and why would you use it instead of regular partitions? + +--- + +### Task 4: Organize Your Work +1. Make sure all your daily submissions (day-1 through day-27) are committed and pushed +2. Check that your `git-commands.md` is up to date +3. Check that your shell scripting cheat sheet is complete +4. Verify your GitHub profile and repos are clean (from Day 27) + +--- + +### Task 5: Teach It Back +Pick **one topic** you've learned and write a short explanation (5-10 lines) as if you're teaching it to someone who has never heard of it. Add it to your `day-28-notes.md`. + +Examples: +- Explain Git branching to a non-developer +- Explain file permissions to a new Linux user +- Explain what a crontab is and why sysadmins use it + +Teaching is the best test of understanding. + +--- + +## Submission +1. Add your `day-28-notes.md` to `2026/day-28/` +2. Push to your fork +3. Make sure all previous days are pushed and up to date + +--- + +## Learn in Public + +Share your self-assessment results or your "teach it back" explanation on LinkedIn. Be honest about what you found easy and what you need to work on. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham**
diff --git a/2026/day-28/day-28-notes.md b/2026/day-28/day-28-notes.md
new file mode 100644
index 0000000000..0a4d759e6c
--- /dev/null
+++ b/2026/day-28/day-28-notes.md
@@ -0,0 +1,4 @@
+### Git Branching (Explained for a Non-Developer)
+
+Git branching is a way to work on different versions of the same project without breaking the main one. Imagine you're writing a book: the **main branch** is the official version everyone reads. If you want to experiment with a new chapter or change the ending, you make a **branch**, which is like a separate copy where you can try things without breaking the official version.
+
diff --git a/2026/day-29/Dockerfile b/2026/day-29/Dockerfile
new file mode 100644
index 0000000000..2f12aa7656
--- /dev/null
+++ b/2026/day-29/Dockerfile
@@ -0,0 +1,8 @@
+
+FROM ubuntu
+
+WORKDIR /app
+
+RUN echo "HELLO DOSTO"
+
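+# A possible way to exercise this file (the tag is just an example):
+#   docker build -t day29-demo .
+#   docker run --rm day29-demo
+# Note: the RUN echo prints at build time; with no CMD here, a container
+# falls back to the base image's default command (bash for ubuntu).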
diff --git a/2026/day-29/README.md b/2026/day-29/README.md
new file mode 100644
index 0000000000..42fd97b6cf
--- /dev/null
+++ b/2026/day-29/README.md
@@ -0,0 +1,85 @@
+# Day 29 – Introduction to Docker
+
+## Task
+Today's goal is to **understand what Docker is and run your first container**.
+
+You will:
+- Learn why containers exist and how they differ from VMs
+- Install Docker on your machine
+- Run and explore containers from Docker Hub
+
+---
+
+## Expected Output
+- A markdown file: `day-29-docker-basics.md`
+- Screenshots of your running containers
+
+---
+
+## Challenge Tasks
+
+### Task 1: What is Docker?
+Research and write short notes on:
+- What is a container and why do we need them?
+- Containers vs Virtual Machines — what's the real difference?
+- What is the Docker architecture? (daemon, client, images, containers, registry)
+
+Draw or describe the Docker architecture in your own words.
+
+---
+
+### Task 2: Install Docker
+1. Install Docker on your machine (or use a cloud instance)
+2. Verify the installation
+3. Run the `hello-world` container
+4. Read the output carefully — it explains what just happened
+
+---
+
+### Task 3: Run Real Containers
+1. Run an **Nginx** container and access it in your browser
+2. Run an **Ubuntu** container in interactive mode — explore it like a mini Linux machine
+3. List all running containers
+4. List all containers (including stopped ones)
+5. Stop and remove a container
+
+---
+
+### Task 4: Explore
+1. Run a container in **detached mode** — what's different?
+2. Give a container a custom **name**
+3. Map a **port** from the container to your host
+4. Check **logs** of a running container
+5. Run a command **inside** a running container
+
+---
+
+## Hints
+- `docker run`, `docker ps`, `docker stop`, `docker rm`
+- Interactive mode: `-it` flag
+- Detached mode: `-d` flag
+- Port mapping: `-p host:container`
+- Naming: `--name`
+- Logs: `docker logs`
+- Exec into container: `docker exec`
+
+---
+
+## Why This Matters for DevOps
+Docker is the foundation of modern deployment. Every CI/CD pipeline, Kubernetes cluster, and microservice architecture starts with containers. Today you took the first step.
+
+---
+
+## Submission
+1. Add your `day-29-docker-basics.md` to `2026/day-29/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your first Docker container screenshot on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-29/day-29-notes.md b/2026/day-29/day-29-notes.md
new file mode 100644
index 0000000000..ea4a9edbb3
--- /dev/null
+++ b/2026/day-29/day-29-notes.md
@@ -0,0 +1,16 @@
+# Task 1: What is Docker?
+
+## What is a Container?
+A container packages your app + its dependencies into one isolated unit. It runs the same everywhere — no more "works on my machine" issues.
+
+## Containers vs Virtual Machines
+
+| | Container | VM |
+|---|---|---|
+| OS | Shares host kernel | Own full OS |
+| Size | MBs | GBs |
+| Startup | Milliseconds | Minutes |
+| Isolation | Process-level | Hardware-level |
+
+Key point: VMs virtualize hardware. Containers virtualize the OS. Containers are faster and lighter.
+
+## Docker Architecture
+
+Client — Your CLI (docker run, docker build), sends commands to the daemon
+Daemon (dockerd) — Background engine that builds and runs containers
+Image — Read-only blueprint, built from a Dockerfile
+Container — A running instance of an image
+Registry — Image storage hub (e.g. Docker Hub)
diff --git a/2026/day-30/README.md b/2026/day-30/README.md
new file mode 100644
index 0000000000..3fcd8c7b2d
--- /dev/null
+++ b/2026/day-30/README.md
@@ -0,0 +1,91 @@
+# Day 30 – Docker Images & Container Lifecycle
+
+## Task
+Today's goal is to **understand how images and containers actually work**.
+
+You will:
+- Learn the relationship between images and containers
+- Understand image layers and caching
+- Master the full container lifecycle
+
+---
+
+## Expected Output
+- A markdown file: `day-30-images.md`
+- Screenshots of key commands
+
+---
+
+## Challenge Tasks
+
+### Task 1: Docker Images
+1. Pull the `nginx`, `ubuntu`, and `alpine` images from Docker Hub
+2. List all images on your machine — note the sizes
+3. Compare `ubuntu` vs `alpine` — why is one much smaller?
+4. Inspect an image — what information can you see?
+5. Remove an image you no longer need
+
+---
+
+### Task 2: Image Layers
+1. Run `docker image history nginx` — what do you see?
+2. Each line is a **layer**. Note how some layers show sizes and some show 0B
+3. Write in your notes: What are layers and why does Docker use them?
+
+---
+
+### Task 3: Container Lifecycle
+Practice the full lifecycle on one container:
+1. **Create** a container (without starting it)
+2. **Start** the container
+3. **Pause** it and check status
+4. **Unpause** it
+5. **Stop** it
+6. **Restart** it
+7. **Kill** it
+8. **Remove** it
+
+Check `docker ps -a` after each step — observe the state changes.
+
+---
+
+### Task 4: Working with Running Containers
+1. Run an Nginx container in detached mode
+2. View its **logs**
+3. View **real-time logs** (follow mode)
+4. **Exec** into the container and look around the filesystem
+5. Run a single command inside the container without entering it
+6. **Inspect** the container — find its IP address, port mappings, and mounts
+
+---
+
+### Task 5: Cleanup
+1. Stop all running containers in one command
+2. Remove all stopped containers in one command
+3. Remove unused images
+4. Check how much disk space Docker is using
+
+---
+
+## Hints
+- Image history: `docker image history`
+- Create without starting: `docker create`
+- Follow logs: `docker logs -f`
+- Inspect: `docker inspect`
+- Cleanup: `docker system df`, `docker system prune`
+
+---
+
+## Submission
+1. Add your `day-30-images.md` to `2026/day-30/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share what surprised you about image layers or container states on LinkedIn.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-30/day-30-notes.md b/2026/day-30/day-30-notes.md
new file mode 100644
index 0000000000..d2b165ef52
--- /dev/null
+++ b/2026/day-30/day-30-notes.md
@@ -0,0 +1,9 @@
+# What are layers?
+ +Layers are snapshots of filesystem changes, stacked on top of each other to form a complete image. Each Dockerfile instruction that touches files creates a new layer. + +# Why does Docker use them? + +Speed — rebuild only what changed, cache the rest +Efficiency — shared layers aren't duplicated on disk +Transparency — docker image history shows exactly what built the image and how much each step costs in size diff --git a/2026/day-31/README.md b/2026/day-31/README.md new file mode 100644 index 0000000000..a357b35703 --- /dev/null +++ b/2026/day-31/README.md @@ -0,0 +1,95 @@ +# Day 31 – Dockerfile: Build Your Own Images + +## Task +Today's goal is to **write Dockerfiles and build custom images**. + +This is the skill that separates someone who uses Docker from someone who actually ships with Docker. + +--- + +## Expected Output +- A markdown file: `day-31-dockerfile.md` +- All Dockerfiles you create + +--- + +## Challenge Tasks + +### Task 1: Your First Dockerfile +1. Create a folder called `my-first-image` +2. Inside it, create a `Dockerfile` that: + - Uses `ubuntu` as the base image + - Installs `curl` + - Sets a default command to print `"Hello from my custom image!"` +3. Build the image and tag it `my-ubuntu:v1` +4. Run a container from your image + +**Verify:** The message prints on `docker run` + +--- + +### Task 2: Dockerfile Instructions +Create a new Dockerfile that uses **all** of these instructions: +- `FROM` — base image +- `RUN` — execute commands during build +- `COPY` — copy files from host to image +- `WORKDIR` — set working directory +- `EXPOSE` — document the port +- `CMD` — default command + +Build and run it. Understand what each line does. + +--- + +### Task 3: CMD vs ENTRYPOINT +1. Create an image with `CMD ["echo", "hello"]` — run it, then run it with a custom command. What happens? +2. Create an image with `ENTRYPOINT ["echo"]` — run it, then run it with additional arguments. What happens? +3. Write in your notes: When would you use CMD vs ENTRYPOINT? + +--- + +### Task 4: Build a Simple Web App Image +1. Create a small static HTML file (`index.html`) with any content +2. Write a Dockerfile that: + - Uses `nginx:alpine` as base + - Copies your `index.html` to the Nginx web directory +3. Build and tag it `my-website:v1` +4. Run it with port mapping and access it in your browser + +--- + +### Task 5: .dockerignore +1. Create a `.dockerignore` file in one of your project folders +2. Add entries for: `node_modules`, `.git`, `*.md`, `.env` +3. Build the image — verify that ignored files are not included + +--- + +### Task 6: Build Optimization +1. Build an image, then change one line and rebuild — notice how Docker uses **cache** +2. Reorder your Dockerfile so that frequently changing lines come **last** +3. Write in your notes: Why does layer order matter for build speed? + +--- + +## Hints +- Build: `docker build -t name:tag .` +- The `.` at the end is the build context +- `COPY . .` copies everything from host to container +- Nginx serves files from `/usr/share/nginx/html/` + +--- + +## Submission +1. Add your Dockerfiles and `day-31-dockerfile.md` to `2026/day-31/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your custom Docker image or Nginx screenshot on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham**
diff --git a/2026/day-31/day-31-notes.md b/2026/day-31/day-31-notes.md
new file mode 100644
index 0000000000..f12c90deee
--- /dev/null
+++ b/2026/day-31/day-31-notes.md
@@ -0,0 +1,20 @@
+# CMD vs ENTRYPOINT
+
+## Use CMD when:
+
+The container can reasonably run different commands depending on context
+You want a helpful default but full flexibility
+Example: a base Ubuntu/Python image where users might run bash, python, or anything else
+
+## Use ENTRYPOINT when:
+
+Your container has one clear, dedicated purpose
+You're shipping a tool and the container is that tool
+Example: a container that wraps ffmpeg, curl, or your own app — users only pass flags/args, not a whole new command
+
+## Use both together when:
+
+You have a fixed executable but want sensible default arguments that are easy to swap
+Example: ENTRYPOINT ["python", "app.py"] + CMD ["--port", "8080"] — the app always runs, but the port is overridable
+
diff --git a/2026/day-31/my-first-image/.dockerignore b/2026/day-31/my-first-image/.dockerignore
new file mode 100644
index 0000000000..3c65d57533
--- /dev/null
+++ b/2026/day-31/my-first-image/.dockerignore
@@ -0,0 +1,2 @@
+
+.git
diff --git a/2026/day-31/my-first-image/Dockerfile b/2026/day-31/my-first-image/Dockerfile
new file mode 100644
index 0000000000..21e2933c78
--- /dev/null
+++ b/2026/day-31/my-first-image/Dockerfile
@@ -0,0 +1,58 @@
+
+# Task 1: base image with curl (kept commented for reference)
+#FROM ubuntu:latest AS builder
+
+# Install curl during the build
+#RUN apt-get update -y && apt-get install curl -y
+
+# Default command
+#CMD echo "Hello from my custom image!"
+
+# Task 2: all core instructions (kept commented for reference)
+# First we tell Docker which base image we want to use
+#FROM ubuntu:latest AS builder
+
+# Execute a command during the build
+#RUN apt-get update -y
+
+# Set the working directory
+#WORKDIR /app
+
+# Copy files from the host into the image
+#COPY . .
+
+# Document the port users should expose
+#EXPOSE 80
+
+# Default command that runs when the container starts
+#ENTRYPOINT echo "You're getting better"
+
+# Task 4: Dockerfile to build, run with a port mapping, and access through the browser
+
+# Base image
+FROM nginx:alpine AS builder
+
+# Set the working directory
+WORKDIR /home
+
+# Copy the static page into the Nginx web root
+COPY . /usr/share/nginx/html
+
+# The port to expose
+EXPOSE 80
+
+# Run Nginx in the foreground
+CMD ["nginx", "-g", "daemon off;"]
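+
+# To build and serve the page (tag from Task 4; the host port is an example):
+#   docker build -t my-website:v1 .
+#   docker run -d -p 8080:80 my-website:v1
+#   curl http://localhost:8080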
diff --git a/2026/day-31/my-first-image/index.html b/2026/day-31/my-first-image/index.html
new file mode 100644
index 0000000000..7911ed0e29
--- /dev/null
+++ b/2026/day-31/my-first-image/index.html
@@ -0,0 +1,35 @@
+<!-- Simple static page (35 lines; full markup not recoverable from this diff). Recoverable content: title "Hello World", a "Hello, World! 👋" heading, and the line "This is a simple static HTML page." -->
diff --git a/2026/day-32/README.md b/2026/day-32/README.md
new file mode 100644
index 0000000000..14da78840e
--- /dev/null
+++ b/2026/day-32/README.md
@@ -0,0 +1,93 @@
+# Day 32 – Docker Volumes & Networking
+
+## Task
+Today's goal is to **solve two real problems: data persistence and container communication**.
+
+Containers are ephemeral — they lose data when removed. And by default, containers can't easily talk to each other. Today you fix both.
+
+---
+
+## Expected Output
+- A markdown file: `day-32-volumes-networking.md`
+- Screenshots of your experiments
+
+---
+
+## Challenge Tasks
+
+### Task 1: The Problem
+1. Run a Postgres or MySQL container
+2. Create some data inside it (a table, a few rows — anything)
+3. Stop and remove the container
+4. Run a new one — is your data still there?
+
+Write what happened and why.
+
+---
+
+### Task 2: Named Volumes
+1. Create a named volume
+2. Run the same database container, but this time **attach the volume** to it
+3. Add some data, stop and remove the container
+4. Run a brand new container with the **same volume**
+5. Is the data still there?
+
+**Verify:** `docker volume ls`, `docker volume inspect`
+
+---
+
+### Task 3: Bind Mounts
+1. Create a folder on your host machine with an `index.html` file
+2. Run an Nginx container and **bind mount** your folder to the Nginx web directory
+3. Access the page in your browser
+4. Edit the `index.html` on your host — refresh the browser
+
+Write in your notes: What is the difference between a named volume and a bind mount?
+
+---
+
+### Task 4: Docker Networking Basics
+1. List all Docker networks on your machine
+2. Inspect the default `bridge` network
+3. Run two containers on the default bridge — can they ping each other by **name**?
+4. Run two containers on the default bridge — can they ping each other by **IP**?
+
+---
+
+### Task 5: Custom Networks
+1. Create a custom bridge network called `my-app-net`
+2. Run two containers on `my-app-net`
+3. Can they ping each other by **name** now?
+4. Write in your notes: Why does custom networking allow name-based communication but the default bridge doesn't?
+
+---
+
+### Task 6: Put It Together
+1. Create a custom network
+2. Run a **database container** (MySQL/Postgres) on that network with a volume for data
+3. Run an **app container** (use any image) on the same network
+4. Verify the app container can reach the database by container name
+
+---
+
+## Hints
+- Volumes: `docker volume create`, `-v volume_name:/path`
+- Bind mount: `-v /host/path:/container/path`
+- Networking: `docker network create`, `--network`
+- Ping: `docker exec container1 ping container2`
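+
+A minimal end-to-end sketch of Tasks 2 and 5; the names and the password are examples only:
+
+```bash
+# Named volume: data survives the container
+docker volume create demo_mysql
+docker run -d --name db1 -e MYSQL_ROOT_PASSWORD=secret -v demo_mysql:/var/lib/mysql mysql:8.0
+# ...create a table, then throw the container away...
+docker rm -f db1
+docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret -v demo_mysql:/var/lib/mysql mysql:8.0
+# the data created in db1 is still visible in db2
+
+# Custom network: containers resolve each other by name
+docker network create my-app-net
+docker run -d --name web1 --network my-app-net nginx:alpine
+docker run -d --name web2 --network my-app-net nginx:alpine
+docker exec web1 ping -c 2 web2    # resolved by Docker's embedded DNS
+```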
+
+---
+
+## Submission
+1. Add your `day-32-volumes-networking.md` to `2026/day-32/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share what happened when you deleted a container without a volume on LinkedIn. The "aha moment" is real.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-32/data/index.html b/2026/day-32/data/index.html
new file mode 100644
index 0000000000..0f3eb7f540
--- /dev/null
+++ b/2026/day-32/data/index.html
@@ -0,0 +1,447 @@
+<!-- Styled NGINX landing page, title "NGINX — Server Online" (447 lines; full markup not recoverable from this diff). Recoverable content: an "operational" status badge; an "IT WORKS." hero with the note "Your NGINX web server is live and serving requests. Replace this file with your own index.html located at the server root."; status cards (Status: 200 HTTP OK; Server: NGINX latest/stable; Protocol: HTTP, port 80/443); a mock terminal ("bash — nginx container") running `docker run -d -p 80:80 -v $(pwd):/usr/share/nginx/html nginx`, `curl http://localhost`, and `nginx -t`; and tip cards for the server root (/usr/share/nginx/html/), config location (/etc/nginx/nginx.conf, /etc/nginx/conf.d/), logs (/var/log/nginx/access.log, /var/log/nginx/error.log), and reloading with `nginx -s reload`. -->
diff --git a/2026/day-32/day-32-volumes-networking.md b/2026/day-32/day-32-volumes-networking.md
new file mode 100644
index 0000000000..b82bd11ce6
--- /dev/null
+++ b/2026/day-32/day-32-volumes-networking.md
@@ -0,0 +1,12 @@
+# What is the difference between a named volume and a bind mount?
+
+### A Named Volume is fully managed by Docker — you just give it a name, Docker decides where to store it on the host, and it handles all the internals. You don't need to know or care about the actual folder path. It's safe, portable, and ideal for persistent data like databases.
+
+### A Bind Mount, on the other hand, maps a specific folder from your host machine directly into the container. You're in full control of the path, and any changes on either side reflect instantly. It's perfect for development when you want to edit code on your host and see changes live inside the container.
+
+# Why does custom networking allow name-based communication but the default bridge doesn't?
+
+### When we use the default bridge, Docker just connects containers to a network and doesn't provide a DNS service, so a container can reach another container only by IP, not by name.
+
+### But when we create a custom bridge, Docker automatically spins up an embedded DNS server for that network, so containers resolve each other by name as well as by IP.
diff --git a/2026/day-33/README.md b/2026/day-33/README.md
new file mode 100644
index 0000000000..e9effa7f06
--- /dev/null
+++ b/2026/day-33/README.md
@@ -0,0 +1,89 @@
+# Day 33 – Docker Compose: Multi-Container Basics
+
+## Task
+Today's goal is to **run multi-container applications with a single command**.
+
+Yesterday you manually created networks and volumes and ran containers one by one. Docker Compose does all of that in one YAML file.
+
+---
+
+## Expected Output
+- A markdown file: `day-33-compose.md`
+- All `docker-compose.yml` files you create
+
+---
+
+## Challenge Tasks
+
+### Task 1: Install & Verify
+1. Check if Docker Compose is available on your machine
+2. Verify the version
+
+---
+
+### Task 2: Your First Compose File
+1. Create a folder `compose-basics`
+2. Write a `docker-compose.yml` that runs a single **Nginx** container with port mapping
+3. Start it with `docker compose up`
+4. Access it in your browser
+5. Stop it with `docker compose down`
+
+---
+
+### Task 3: Two-Container Setup
+Write a `docker-compose.yml` that runs:
+- A **WordPress** container
+- A **MySQL** container
+
+They should:
+- Be on the same network (Compose does this automatically)
+- MySQL should have a named volume for data persistence
+- WordPress should connect to MySQL using the service name
+
+Start it, access WordPress in your browser, and set it up.
+
+**Verify:** Stop and restart with `docker compose down` and `docker compose up` — is your WordPress data still there?
+
+---
+
+### Task 4: Compose Commands
+Practice and document these:
+1. Start services in **detached mode**
+2. View running services
+3. View **logs** of all services
+4. View logs of a **specific** service
+5. **Stop** services without removing
+6. **Remove** everything (containers, networks)
+7. **Rebuild** images if you make a change
+
+---
+
+### Task 5: Environment Variables
+1. Add environment variables directly in your `docker-compose.yml`
+2. Create a `.env` file and reference variables from it in your compose file
+3.
Verify the variables are being picked up + +--- + +## Hints +- Start: `docker compose up -d` +- Stop: `docker compose down` +- Logs: `docker compose logs -f` +- Compose creates a default network for all services automatically +- Service names in compose are the DNS names containers use to talk to each other + +--- + +## Submission +1. Add your compose files and `day-33-compose.md` to `2026/day-33/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your WordPress + MySQL running via Compose on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-33/compose-basics/docker-compose.yml b/2026/day-33/compose-basics/docker-compose.yml new file mode 100644 index 0000000000..926d243e0b --- /dev/null +++ b/2026/day-33/compose-basics/docker-compose.yml @@ -0,0 +1,27 @@ +services: + mysql: + image: mysql:8.0 + container_name: mysql + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_DATABASE: ${MYSQL_DATABASE} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + volumes: + - mysql_data:/var/lib/mysql + wordpress: + image: wordpress:latest + ports: + - "8080:80" + environment: + WORDPRESS_DB_HOST: ${WORDPRESS_DB_HOST} + WORDPRESS_DB_NAME: ${WORDPRESS_DB_NAME} + WORDPRESS_DB_USER: ${WORDPRESS_DB_USER} + WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD} + depends_on: + - mysql + +volumes: + mysql_data: + + diff --git a/2026/day-34/README.md b/2026/day-34/README.md new file mode 100644 index 0000000000..3abe982dc5 --- /dev/null +++ b/2026/day-34/README.md @@ -0,0 +1,86 @@ +# Day 34 – Docker Compose: Real-World Multi-Container Apps + +## Task +Today's goal is to **build more complex, production-like setups with Docker Compose**. + +Yesterday was basics. Today you handle real scenarios — app + database + cache, healthchecks, restart policies, and service dependencies. + +--- + +## Expected Output +- A markdown file: `day-34-compose-advanced.md` +- All compose files and Dockerfiles you create + +--- + +## Challenge Tasks + +### Task 1: Build Your Own App Stack +Create a `docker-compose.yml` for a 3-service stack: +- A **web app** (use Python Flask, Node.js, or any language you know) +- A **database** (Postgres or MySQL) +- A **cache** (Redis) + +Write a simple Dockerfile for the web app. The app doesn't need to be complex — even a "Hello World" that connects to the database is enough. + +--- + +### Task 2: depends_on & Healthchecks +1. Add `depends_on` to your compose file so the app starts **after** the database +2. Add a **healthcheck** on the database service +3. Use `depends_on` with `condition: service_healthy` so the app waits for the database to be truly ready, not just started + +**Test:** Bring everything down and up — does the app wait for the DB? + +--- + +### Task 3: Restart Policies +1. Add `restart: always` to your database service +2. Manually kill the database container — does it come back? +3. Try `restart: on-failure` — how is it different? +4. Write in your notes: When would you use each restart policy? + +--- + +### Task 4: Custom Dockerfiles in Compose +1. Instead of using a pre-built image for your app, use `build:` in your compose file to build from a Dockerfile +2. Make a code change in your app +3. Rebuild and restart with one command + +--- + +### Task 5: Named Networks & Volumes +1. Define **explicit networks** in your compose file instead of relying on the default +2. Define **named volumes** for database data +3. 
Add **labels** to your services for better organization + +--- + +### Task 6: Scaling (Bonus) +1. Try scaling your web app to 3 replicas using `docker compose up --scale` +2. What happens? What breaks? +3. Write in your notes: Why doesn't simple scaling work with port mapping? + +--- + +## Hints +- Build from Dockerfile: `build: ./app` +- Healthcheck: `healthcheck:` with `test`, `interval`, `timeout` +- Rebuild: `docker compose up --build` +- Scale: `docker compose up --scale web=3` + +--- + +## Submission +1. Add your compose files, Dockerfiles, and `day-34-compose-advanced.md` to `2026/day-34/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your 3-service app stack running via Compose on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-34/app-stack/Dockerfile b/2026/day-34/app-stack/Dockerfile new file mode 100644 index 0000000000..863b5a71db --- /dev/null +++ b/2026/day-34/app-stack/Dockerfile @@ -0,0 +1,16 @@ +# Base image +FROM python:3.9 + +# Working directory +WORKDIR /app + +# Copy all files +COPY . . + +# To install requirements + +RUN pip install -r requirements.txt + +# Command to execute + +CMD ["python", "app.py"] diff --git a/2026/day-34/app-stack/README.md b/2026/day-34/app-stack/README.md new file mode 100644 index 0000000000..379e7da0a0 --- /dev/null +++ b/2026/day-34/app-stack/README.md @@ -0,0 +1,255 @@ +# 🐳 Docker Compose App Stack + +A 3-service application stack built with Docker Compose as part of a DevOps learning journey. + +## Stack + +| Service | Technology | Purpose | +|--------|------------|---------| +| `web` | Python Flask | Web application | +| `db` | MySQL 8.0 | Database | +| `cache` | Redis (Alpine) | Caching layer | + +--- + +## Project Structure + +``` +. +├── app.py # Flask application +├── requirements.txt # Python dependencies +├── Dockerfile # Docker image for Flask app +├── docker-compose.yml # Multi-container setup +└── .env # Environment variables (not committed) +``` + +--- + +## Files + +### app.py +Simple Flask web app that runs on port 5000. + +### Dockerfile +```dockerfile +# Base image +FROM python:3.9 + +# Working directory +WORKDIR /app + +# Copy all files +COPY . . + +# Install requirements +RUN pip install -r requirements.txt + +# Run the app +CMD ["python", "app.py"] +``` + +### .env +Create a `.env` file in the root directory with these variables: +``` +MYSQL_ROOT_PASSWORD=your_root_password +MYSQL_USER=your_user +MYSQL_PASSWORD=your_password +``` + +--- + +## docker-compose.yml + +```yaml +services: + web: + build: . 
+ ports: + - "8080:5000" + networks: + - backend + depends_on: + db: + condition: service_healthy + cache: + condition: service_started + labels: + app: "myapp" + environment: "development" + + db: + image: mysql:8.0 + restart: on-failure + container_name: mysql + networks: + - backend + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + MYSQL_DATABASE: mysqldb + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "--password=password"] + interval: 10s + timeout: 5s + retries: 5 + start_period: 10s + volumes: + - mysql_data:/var/lib/mysql + labels: + app: "myapp" + environment: "development" + + cache: + image: redis:alpine + networks: + - backend + labels: + app: "myapp" + environment: "development" + +volumes: + mysql_data: + +networks: + backend: +``` + +--- + +## Task 2: Healthchecks & depends_on + +The `db` service has a healthcheck using `mysqladmin ping`. The `web` service uses `condition: service_healthy` so it only starts after MySQL is confirmed healthy — not just started. + +| Condition | Meaning | +|-----------|---------| +| `service_healthy` | Wait for healthcheck to pass | +| `service_started` | Wait for container to just start | + +To verify healthcheck status: +```bash +docker-compose ps # Shows (healthy) next to db +docker inspect mysql | grep -A 10 Health +``` + +--- + +## Task 3: Restart Policies + +| Policy | When to use | +|--------|-------------| +| `no` | Development — don't auto restart while debugging | +| `always` | Critical production services that must run 24/7 | +| `on-failure` | Apps that should restart on error but not on manual stop | +| `unless-stopped` | Like always — but respects manual stops | + +--- + +## Task 5: Named Networks, Volumes & Labels + +### Networks +Explicit networks give you control over which services can talk to each other. Defined at the bottom of compose file and attached to each service: +```yaml +networks: + - backend +``` + +### Named Volumes +Persist data even after containers are removed: +```yaml +volumes: + - mysql_data:/var/lib/mysql +``` + +### Labels +Metadata tags for better organization — don't affect container behavior: +```yaml +labels: + app: "myapp" + environment: "development" +``` + +--- + +## Task 6: Scaling + +Scale web app to 3 replicas: +```bash +docker-compose up --scale web=3 -d +``` + +### What breaks with port mapping? +If 3 containers all try to bind to port `8080` on your machine — only one can use it. It causes a conflict. 
+ +To scale properly you need to: +- Remove `container_name` from the service +- Remove `ports` from the service +- Add a **Load Balancer** (like Nginx) in front to distribute traffic + +--- + +## Usage + +### Start the stack +```bash +docker-compose up -d +``` + +### View running services +```bash +docker-compose ps +``` + +### View logs +```bash +docker-compose logs # All services +docker-compose logs web # Specific service +``` + +### Stop without removing +```bash +docker-compose stop +``` + +### Remove everything +```bash +docker-compose down +``` + +### Rebuild after code changes +```bash +docker-compose up --build +``` + +### Scale web service +```bash +docker-compose up --scale web=3 -d +``` + +--- + +## Access + +Once running, open your browser and visit: +``` +http://localhost:8080 +``` + +--- + +## Key Concepts Learned + +- **Multi-container setup** with Docker Compose +- **Custom Dockerfile** for a Python Flask app +- **Named volumes** for data persistence +- **Environment variables** via `.env` file +- **depends_on** with healthcheck conditions +- **Restart policies** for container recovery +- **Explicit networks** for service isolation +- **Labels** for better organization +- **Scaling** and why port mapping breaks it +- **Redis** as a caching layer + +--- + +*Built as part of a DevOps learning journey* 🚀 diff --git a/2026/day-34/app-stack/app.py b/2026/day-34/app-stack/app.py new file mode 100644 index 0000000000..067e49540e --- /dev/null +++ b/2026/day-34/app-stack/app.py @@ -0,0 +1,14 @@ +from flask import Flask + + +app = Flask(__name__) + + +@app.route('/') +def home(): + return("Hello from flask") + + +if __name__ == '__main__': + app.run(host='0.0.0.0', port=5000) + diff --git a/2026/day-34/app-stack/docker-compose.yml b/2026/day-34/app-stack/docker-compose.yml new file mode 100644 index 0000000000..a1f1998586 --- /dev/null +++ b/2026/day-34/app-stack/docker-compose.yml @@ -0,0 +1,48 @@ +services: + web: + build: . + networks: + - backend + depends_on: + db: + condition: service_healthy + cache: + condition: service_started + labels: + app: "myapp" + environment: "development" + + db: + image: mysql:8.0 + restart: on-failure + container_name: mysql + networks: + - backend + environment: + MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} + MYSQL_USER: ${MYSQL_USER} + MYSQL_PASSWORD: ${MYSQL_PASSWORD} + MYSQL_DATABASE: mysqldb + healthcheck: + test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "--password=password"] + interval: 10s + timeout: 5s + retries: 5 + start_period: 10s + volumes: + - mysql_data:/var/lib/mysql + labels: + app: "myapp" + environment: "development" + cache: + image: redis:alpine + networks: + - backend + labels: + app: "myapp" + environment: "development" + +volumes: + mysql_data: +networks: + backend: diff --git a/2026/day-34/app-stack/requirements.txt b/2026/day-34/app-stack/requirements.txt new file mode 100644 index 0000000000..7e1060246f --- /dev/null +++ b/2026/day-34/app-stack/requirements.txt @@ -0,0 +1 @@ +flask diff --git a/2026/day-34/day-34-notes.md b/2026/day-34/day-34-notes.md new file mode 100644 index 0000000000..05d6fac175 --- /dev/null +++ b/2026/day-34/day-34-notes.md @@ -0,0 +1,12 @@ +# When would you use each restart policy? + +### restart: no +Use during development when you want to debug why a container crashed — you don't want it auto restarting before you can see the error. 
+### restart: always +Use for critical production services like databases, web servers — anything that must keep running 24/7 even after a system reboot. + +### restart: on-failure +Use for background jobs or scripts that might fail due to an error but shouldn't restart if you manually stop them. + +### unless-stopped +Use when you want always behavior but with one exception — if YOU manually stopped it, don't restart it. Good for services you sometimes need to temporarily turn off. diff --git a/2026/day-35/README.md b/2026/day-35/README.md new file mode 100644 index 0000000000..4a88214448 --- /dev/null +++ b/2026/day-35/README.md @@ -0,0 +1,89 @@ +# Day 35 – Multi-Stage Builds & Docker Hub + +## Task +Today's goal is to **build optimized images and share them with the world**. + +Multi-stage builds are how real teams ship small, secure images. Docker Hub is how you distribute them. Both are interview favourites. + +--- + +## Expected Output +- A markdown file: `day-35-multistage-hub.md` +- Dockerfiles demonstrating multi-stage builds +- An image pushed to your Docker Hub account + +--- + +## Challenge Tasks + +### Task 1: The Problem with Large Images +1. Write a simple Go, Java, or Node.js app (even a "Hello World" is fine) +2. Create a Dockerfile that builds and runs it in a **single stage** +3. Build the image and check its **size** + +Note down the size — you'll compare it later. + +--- + +### Task 2: Multi-Stage Build +1. Rewrite the Dockerfile using **multi-stage build**: + - Stage 1: Build the app (install dependencies, compile) + - Stage 2: Copy only the built artifact into a minimal base image (`alpine`, `distroless`, or `scratch`) +2. Build the image and check its size again +3. Compare the two sizes + +Write in your notes: Why is the multi-stage image so much smaller? + +--- + +### Task 3: Push to Docker Hub +1. Create a free account on [Docker Hub](https://hub.docker.com) (if you don't have one) +2. Log in from your terminal +3. Tag your image properly: `yourusername/image-name:tag` +4. Push it to Docker Hub +5. Pull it on a different machine (or after removing locally) to verify + +--- + +### Task 4: Docker Hub Repository +1. Go to Docker Hub and check your pushed image +2. Add a **description** to the repository +3. Explore the **tags** tab — understand how versioning works +4. Pull a specific tag vs `latest` — what happens? + +--- + +### Task 5: Image Best Practices +Apply these to one of your images and rebuild: +1. Use a **minimal base image** (alpine vs ubuntu — compare sizes) +2. **Don't run as root** — add a non-root USER in your Dockerfile +3. Combine `RUN` commands to **reduce layers** +4. Use **specific tags** for base images (not `latest`) + +Check the size before and after. + +--- + +## Hints +- Multi-stage: use `FROM ... AS builder` then `COPY --from=builder` +- Login: `docker login` +- Tag: `docker tag local-image:tag username/repo:tag` +- Push: `docker push username/repo:tag` +- Non-root user: `RUN adduser` + `USER` + +--- + +## Submission +1. Add your Dockerfiles and `day-35-multistage-hub.md` to `2026/day-35/` +2. Include the link to your Docker Hub repo +3. Commit and push to your fork + +--- + +## Learn in Public +Share your before/after image sizes on LinkedIn — the difference is always impressive. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-35/day-35-multistage-hub.md b/2026/day-35/day-35-multistage-hub.md new file mode 100644 index 0000000000..e7815fb569 --- /dev/null +++ b/2026/day-35/day-35-multistage-hub.md @@ -0,0 +1,6 @@ +# Why is the multi-stage image so much smaller? + +Stage 1 (build) → has everything needed to build the app — Node.js, npm, package files, build tools. This is heavy! +Stage 2 (deployer) → only copies the final built app — no npm, no build tools, no unnecessary files + +So the final image only contains what's needed to run the app, not what was needed to build it! diff --git a/2026/day-35/node-app/Dockerfile b/2026/day-35/node-app/Dockerfile new file mode 100644 index 0000000000..0ca49f482c --- /dev/null +++ b/2026/day-35/node-app/Dockerfile @@ -0,0 +1,31 @@ +# Base image +FROM node:20.11.0-bookworm-slim AS build + +# Working directory +WORKDIR /app + +# Copy files +COPY package*.json ./ + +# Installing dependencies + +RUN npm install + +COPY . . + +RUN useradd -m appuser + +FROM gcr.io/distroless/nodejs20-debian12 AS deployer + +COPY --from=build /app /app + +WORKDIR /app + +EXPOSE 3000 + +COPY --from=build /etc/passwd /etc/passwd +USER appuser + + +CMD ["app.js"] + diff --git a/2026/day-35/node-app/app.js b/2026/day-35/node-app/app.js new file mode 100644 index 0000000000..167681aaa4 --- /dev/null +++ b/2026/day-35/node-app/app.js @@ -0,0 +1,10 @@ +const express = require('express') +const app = express() + +app.get('/', (req, res) => { + res.send('Hello from Node.js!') +}) + +app.listen(3000, () => { + console.log('Server running on port 3000') +}) diff --git a/2026/day-35/node-app/package-lock.json b/2026/day-35/node-app/package-lock.json new file mode 100644 index 0000000000..25230c193d --- /dev/null +++ b/2026/day-35/node-app/package-lock.json @@ -0,0 +1,758 @@ +{ + "name": "node-app", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "dependencies": { + "express": "^5.2.1" + } + }, + "node_modules/accepts": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz", + "integrity": "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==", + "dependencies": { + "mime-types": "^3.0.0", + "negotiator": "^1.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/body-parser": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.2.tgz", + "integrity": "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA==", + "dependencies": { + "bytes": "^3.1.2", + "content-type": "^1.0.5", + "debug": "^4.4.3", + "http-errors": "^2.0.0", + "iconv-lite": "^0.7.0", + "on-finished": "^2.4.1", + "qs": "^6.14.1", + "raw-body": "^3.0.1", + "type-is": "^2.0.1" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/bytes": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", + "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "dependencies": { + "es-errors": "^1.3.0", + 
"function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/call-bound": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/content-disposition": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-1.0.1.tgz", + "integrity": "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q==", + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/content-type": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", + "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie": { + "version": "0.7.2", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz", + "integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/cookie-signature": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.2.2.tgz", + "integrity": "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==", + "engines": { + "node": ">=6.6.0" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/depd": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", + "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/dunder-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "dependencies": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/ee-first": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", + "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==" + }, + "node_modules/encodeurl": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", + "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/es-define-property": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", + 
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-object-atoms": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "dependencies": { + "es-errors": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/escape-html": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", + "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==" + }, + "node_modules/etag": { + "version": "1.8.1", + "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", + "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/express": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/express/-/express-5.2.1.tgz", + "integrity": "sha512-hIS4idWWai69NezIdRt2xFVofaF4j+6INOpJlVOLDO8zXGpUVEVzIYk12UUi2JzjEzWL3IOAxcTubgz9Po0yXw==", + "dependencies": { + "accepts": "^2.0.0", + "body-parser": "^2.2.1", + "content-disposition": "^1.0.0", + "content-type": "^1.0.5", + "cookie": "^0.7.1", + "cookie-signature": "^1.2.1", + "debug": "^4.4.0", + "depd": "^2.0.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "finalhandler": "^2.1.0", + "fresh": "^2.0.0", + "http-errors": "^2.0.0", + "merge-descriptors": "^2.0.0", + "mime-types": "^3.0.0", + "on-finished": "^2.4.1", + "once": "^1.4.0", + "parseurl": "^1.3.3", + "proxy-addr": "^2.0.7", + "qs": "^6.14.0", + "range-parser": "^1.2.1", + "router": "^2.2.0", + "send": "^1.1.0", + "serve-static": "^2.2.0", + "statuses": "^2.0.1", + "type-is": "^2.0.1", + "vary": "^1.1.2" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/finalhandler": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.1.tgz", + "integrity": "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA==", + "dependencies": { + "debug": "^4.4.0", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "on-finished": "^2.4.1", + "parseurl": "^1.3.3", + "statuses": "^2.0.1" + }, + "engines": { + "node": ">= 18.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/forwarded": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", + "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz", + "integrity": "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==", + "engines": { + "node": ">= 0.8" + } + }, + 
"node_modules/function-bind": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-intrinsic": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/http-errors": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.1.tgz", + "integrity": "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ==", + "dependencies": { + "depd": "~2.0.0", + "inherits": "~2.0.4", + "setprototypeof": "~1.2.0", + "statuses": "~2.0.2", + "toidentifier": "~1.0.1" + }, + "engines": { + "node": ">= 0.8" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/iconv-lite": { + "version": "0.7.2", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.7.2.tgz", + "integrity": "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw==", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": 
"sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==" + }, + "node_modules/ipaddr.js": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", + "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/is-promise": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-4.0.0.tgz", + "integrity": "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==" + }, + "node_modules/math-intrinsics": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/media-typer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-1.1.0.tgz", + "integrity": "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/merge-descriptors": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-2.0.0.tgz", + "integrity": "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mime-db": { + "version": "1.54.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz", + "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/mime-types": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.2.tgz", + "integrity": "sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A==", + "dependencies": { + "mime-db": "^1.54.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==" + }, + "node_modules/negotiator": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-1.0.0.tgz", + "integrity": "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/object-inspect": { + "version": "1.13.4", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/on-finished": { + "version": "2.4.1", + "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", + "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", + "dependencies": { + "ee-first": "1.1.1" + }, + "engines": { + "node": ">= 0.8" + } + }, + 
"node_modules/once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/parseurl": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", + "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/path-to-regexp": { + "version": "8.3.0", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-8.3.0.tgz", + "integrity": "sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/proxy-addr": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", + "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", + "dependencies": { + "forwarded": "0.2.0", + "ipaddr.js": "1.9.1" + }, + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/qs": { + "version": "6.15.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.15.0.tgz", + "integrity": "sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==", + "dependencies": { + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/range-parser": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", + "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/raw-body": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-3.0.2.tgz", + "integrity": "sha512-K5zQjDllxWkf7Z5xJdV0/B0WTNqx6vxG70zJE4N0kBs4LovmEYWJzQGxC9bS9RAKu3bgM40lrd5zoLJ12MQ5BA==", + "dependencies": { + "bytes": "~3.1.2", + "http-errors": "~2.0.1", + "iconv-lite": "~0.7.0", + "unpipe": "~1.0.0" + }, + "engines": { + "node": ">= 0.10" + } + }, + "node_modules/router": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/router/-/router-2.2.0.tgz", + "integrity": "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ==", + "dependencies": { + "debug": "^4.4.0", + "depd": "^2.0.0", + "is-promise": "^4.0.0", + "parseurl": "^1.3.3", + "path-to-regexp": "^8.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" + }, + "node_modules/send": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/send/-/send-1.2.1.tgz", + "integrity": "sha512-1gnZf7DFcoIcajTjTwjwuDjzuz4PPcY2StKPlsGAQ1+YH20IRVrBaXSWmdjowTJ6u8Rc01PoYOGHXfP1mYcZNQ==", + "dependencies": { + "debug": "^4.4.3", + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "etag": "^1.8.1", + "fresh": "^2.0.0", + "http-errors": "^2.0.1", + "mime-types": "^3.0.2", + "ms": "^2.1.3", + "on-finished": "^2.4.1", + "range-parser": "^1.2.1", + "statuses": "^2.0.2" + }, + "engines": { + 
"node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/serve-static": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.1.tgz", + "integrity": "sha512-xRXBn0pPqQTVQiC8wyQrKs2MOlX24zQ0POGaj0kultvoOCstBQM5yvOhAVSUwOMjQtTvsPWoNCHfPGwaaQJhTw==", + "dependencies": { + "encodeurl": "^2.0.0", + "escape-html": "^1.0.3", + "parseurl": "^1.3.3", + "send": "^1.2.0" + }, + "engines": { + "node": ">= 18" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/express" + } + }, + "node_modules/setprototypeof": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", + "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==" + }, + "node_modules/side-channel": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-list": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", + "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-map": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-weakmap": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/statuses": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.2.tgz", + "integrity": "sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/toidentifier": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", + "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", + "engines": { + "node": ">=0.6" + } + }, + "node_modules/type-is": { + 
"version": "2.0.1", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-2.0.1.tgz", + "integrity": "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw==", + "dependencies": { + "content-type": "^1.0.5", + "media-typer": "^1.1.0", + "mime-types": "^3.0.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/unpipe": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", + "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/vary": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", + "integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==", + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==" + } + } +} diff --git a/2026/day-35/node-app/package.json b/2026/day-35/node-app/package.json new file mode 100644 index 0000000000..0e37769612 --- /dev/null +++ b/2026/day-35/node-app/package.json @@ -0,0 +1,16 @@ +{ + "dependencies": { + "express": "^5.2.1" + }, + "name": "app", + "version": "1.0.0", + "main": "index.js", + "devDependencies": {}, + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "keywords": [], + "author": "", + "license": "ISC", + "description": "" +} diff --git a/2026/day-36/README.md b/2026/day-36/README.md new file mode 100644 index 0000000000..3d567897a5 --- /dev/null +++ b/2026/day-36/README.md @@ -0,0 +1,94 @@ +# Day 36 – Docker Project: Dockerize a Full Application + +## Task +Today's goal is to **take a real application and Dockerize it end-to-end**. + +No tutorials. No hand-holding. Pick an app, write the Dockerfile, set up Compose, and ship it. This is what you'll do on the job. + +--- + +## Expected Output +- A markdown file: `day-36-docker-project.md` +- Complete project with Dockerfile, docker-compose.yml, and app code +- Image pushed to Docker Hub + +--- + +## Challenge Tasks + +### Task 1: Pick Your App +Choose **one** of these (or use your own project): +- A **Python Flask/Django** app with a database +- A **Node.js Express** app with MongoDB +- A **static website** served by Nginx with a backend API +- Any app from your GitHub that doesn't have Docker yet + +If you don't have an app, clone a simple open-source one and Dockerize it. + +--- + +### Task 2: Write the Dockerfile +1. Create a Dockerfile for your application +2. Use a **multi-stage build** if applicable +3. Use a **non-root user** +4. Keep the image **small** — use alpine or slim base images +5. Add a `.dockerignore` file + +Build and test it locally. + +--- + +### Task 3: Add Docker Compose +Write a `docker-compose.yml` that includes: +1. Your **app** service (built from Dockerfile) +2. A **database** service (Postgres, MySQL, MongoDB — whatever your app needs) +3. **Volumes** for database persistence +4. A **custom network** +5. **Environment variables** for configuration (use `.env` file) +6. **Healthchecks** on the database + +Run `docker compose up` and verify everything works together. + +--- + +### Task 4: Ship It +1. Tag your app image +2. Push it to Docker Hub +3. Share the Docker Hub link +4. 
Write a `README.md` in your project with: + - What the app does + - How to run it with Docker Compose + - Any environment variables needed + +--- + +### Task 5: Test the Whole Flow +1. Remove all local images and containers +2. Pull from Docker Hub and run using only your compose file +3. Does it work fresh? If not — fix it until it does + +--- + +## Documentation +Create `day-36-docker-project.md` with: +- What app you chose and why +- Your Dockerfile (with comments explaining each line) +- Challenges you faced and how you solved them +- Final image size +- Docker Hub link + +--- + +## Submission +1. Add all project files and `day-36-docker-project.md` to `2026/day-36/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your Dockerized project on LinkedIn — include the Docker Hub link so others can pull and run it. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-36/flask-todo/Dockerfile b/2026/day-36/flask-todo/Dockerfile new file mode 100644 index 0000000000..1aef666a9d --- /dev/null +++ b/2026/day-36/flask-todo/Dockerfile @@ -0,0 +1,27 @@ + +# Base image +FROM python:3.9-slim AS builder + +# Working directory + +WORKDIR /app + +# Copying data + +COPY . . + +# Adding non-root user +RUN useradd -m appuser + +# running python command + +RUN pip install -r requirements.txt + +# Defining user +USER appuser + +# Running command +CMD ["python","app.py"] + +# Exposing port +EXPOSE 5000 diff --git a/2026/day-36/flask-todo/README.md b/2026/day-36/flask-todo/README.md new file mode 100644 index 0000000000..4296353095 --- /dev/null +++ b/2026/day-36/flask-todo/README.md @@ -0,0 +1,191 @@ +# 🐳 Dockerized Flask Todo + +A fully containerized Todo web application built with Python Flask and MySQL, orchestrated with Docker Compose. + +> Built by **Uttam Tripathi** as part of a DevOps learning journey. + +--- + +## Preview + +A glassmorphism-themed Todo app where you can: +- ✅ Add todos +- ❌ Delete todos +- 💾 Data persists in MySQL even after container restarts + +--- + +## Tech Stack + +| Layer | Technology | +|-------|------------| +| Web App | Python Flask | +| Database | MySQL 8.0 | +| Containerization | Docker | +| Orchestration | Docker Compose | +| Styling | Glassmorphism CSS | + +--- + +## Project Structure + +``` +flask-todo/ +├── app.py # Flask application +├── requirements.txt # Python dependencies +├── Dockerfile # Docker image for Flask app +├── docker-compose.yml # Multi-container setup +└── .env # Environment variables (not committed) +``` + +--- + +## Getting Started + +### 1. Clone the repo +```bash +git clone +cd flask-todo +``` + +### 2. Create .env file +``` +MYSQL_ROOT_PASSWORD=your_root_password +MYSQL_USER=your_user +MYSQL_PASSWORD=your_password +MYSQL_DATABASE=flaskdb +``` + +### 3. Start the stack +```bash +docker-compose up -d +``` + +### 4. Create the todos table +```bash +docker exec -it mysql mysql -u root -p +``` + +Then inside MySQL: +```sql +USE flaskdb; +CREATE TABLE todos ( + id INT AUTO_INCREMENT PRIMARY KEY, + task VARCHAR(255) NOT NULL +); +``` + +### 5. Access the app +Open your browser at: +``` +http://localhost:8080 +``` + +--- + +## Dockerfile + +```dockerfile +# Base image +FROM python:3.9-slim + +# Working directory +WORKDIR /app + +# Copy and install dependencies +COPY requirements.txt . +RUN pip install -r requirements.txt + +# Copy app files +COPY . . + +# Run the app +CMD ["python", "app.py"] +``` + +--- + +## docker-compose.yml + +```yaml +services: + web: + build: . 
+    container_name: python-flask
+    ports:
+      - "8080:5000"
+    environment:
+      DB_HOST: db
+      DB_USER: ${MYSQL_USER}
+      DB_PASSWORD: ${MYSQL_PASSWORD}
+      DB_NAME: ${MYSQL_DATABASE}
+    networks:
+      - mynetwork
+    depends_on:
+      db:
+        condition: service_healthy
+
+  db:
+    image: mysql:8.0
+    container_name: mysql
+    environment:
+      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
+      MYSQL_DATABASE: ${MYSQL_DATABASE}
+    volumes:
+      - myvolume:/var/lib/mysql
+    networks:
+      - mynetwork
+    healthcheck:
+      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+      start_period: 30s
+    restart: on-failure
+
+volumes:
+  myvolume:
+
+networks:
+  mynetwork:
+```
+
+---
+
+## Docker Hub
+
+Pull the image directly:
+```bash
+docker pull uttamtripathi-p/flask-todo:v1.0
+```
+
+---
+
+## Key Concepts Applied
+
+- **Multi-container setup** with Docker Compose
+- **Custom Dockerfile** for Flask app
+- **Named volumes** for MySQL data persistence
+- **Environment variables** via .env file
+- **Healthchecks** — app waits for DB to be truly ready
+- **Restart policy** — MySQL restarts on failure
+- **Custom network** for service isolation
+
+---
+
+## Commands
+
+```bash
+docker-compose up -d            # Start stack
+docker-compose down             # Stop and remove containers
+docker-compose up --build -d    # Rebuild after code changes
+docker-compose logs web         # View Flask logs
+docker-compose ps               # View running services
+docker system df                # Check disk usage
+```
+
+---
+
+*Built as part of a DevOps learning journey 🚀*
diff --git a/2026/day-36/flask-todo/app.py b/2026/day-36/flask-todo/app.py
new file mode 100644
index 0000000000..4685ff7275
--- /dev/null
+++ b/2026/day-36/flask-todo/app.py
@@ -0,0 +1,157 @@
+from flask import Flask, jsonify, request, render_template_string
+import mysql.connector
+import os
+
+app = Flask(__name__)
+
+def get_db():
+    return mysql.connector.connect(
+        host=os.getenv('DB_HOST', 'db'),
+        user=os.getenv('DB_USER', 'flaskuser'),
+        password=os.getenv('DB_PASSWORD', 'password'),
+        database=os.getenv('DB_NAME', 'flaskdb')
+    )
+
+
+HTML = '''
+<!DOCTYPE html>
+<html>
+<head>
+  <title>Dockerized Flask Todo</title>
+  <!-- head markup reconstructed; the original inline glassmorphism CSS was lost in extraction -->
+</head>
+<body>
+  <div class="container">
+    <h1>🐳 Dockerized Flask Todo</h1>
+    <p class="subtitle">by Uttam Tripathi • Flask + MySQL + Docker Compose</p>
+
+    <div class="card">
+      <h2>New Todo</h2>
+      <form method="POST" action="/todos">
+        <input type="text" name="task" required>
+        <button type="submit">Add</button>
+      </form>
+    </div>
+
+    <div class="card">
+      <h2>My Todos <span class="count">{{ todos|length }}</span></h2>
+      {% if todos %}
+      {% for todo in todos %}
+      <div class="todo-item">
+        <span>{{ todo[1] }}</span>
+        <form method="POST" action="/todos/{{ todo[0] }}/delete">
+          <button type="submit">✕</button>
+        </form>
+      </div>
+      {% endfor %}
+      {% else %}
+      <p class="empty">No todos yet — add one above!</p>
+      {% endif %}
+    </div>
+  </div>
+</body>
+</html>
+'''
+
+@app.route('/')
+def home():
+    conn = get_db()
+    cursor = conn.cursor()
+    cursor.execute("SELECT * FROM todos")
+    todos = cursor.fetchall()
+    return render_template_string(HTML, todos=todos)
+
+@app.route('/todos', methods=['POST'])
+def add_todo():
+    task = request.form.get('task')
+    conn = get_db()
+    cursor = conn.cursor()
+    cursor.execute("INSERT INTO todos (task) VALUES (%s)", (task,))
+    conn.commit()
+    return home()
+
+@app.route('/todos/<int:id>/delete', methods=['POST'])
+def delete_todo(id):
+    conn = get_db()
+    cursor = conn.cursor()
+    cursor.execute("DELETE FROM todos WHERE id = %s", (id,))
+    conn.commit()
+    return home()
+
+if __name__ == '__main__':
+    app.run(host='0.0.0.0', port=5000)
diff --git a/2026/day-36/flask-todo/docker-compose.yml b/2026/day-36/flask-todo/docker-compose.yml
new file mode 100644
index 0000000000..175d244405
--- /dev/null
+++ b/2026/day-36/flask-todo/docker-compose.yml
@@ -0,0 +1,41 @@
+services:
+  web:
+    build: .
+    container_name: python-flask
+    environment:
+      DB_HOST: db
+      DB_USER: ${MYSQL_USER}
+      DB_PASSWORD: ${MYSQL_PASSWORD}
+      DB_NAME: ${MYSQL_DATABASE}
+    ports:
+      - "8080:5000"
+    networks:
+      - mynetwork
+
+  db:
+    image: mysql:8.0
+    container_name: mysql
+    environment:
+      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
+      MYSQL_DATABASE: ${MYSQL_DATABASE}
+    volumes:
+      - myvolume:/var/lib/mysql
+    networks:
+      - mynetwork
+    healthcheck:
+      # Read the real root password from the environment instead of a hardcoded placeholder
+      test: ["CMD-SHELL", "mysql --user=root --password=$$MYSQL_ROOT_PASSWORD --silent --execute 'SELECT 1;'"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+      start_period: 30s
+    restart: on-failure
+
+volumes:
+  myvolume:
+
+networks:
+  mynetwork:
diff --git a/2026/day-36/flask-todo/requirements.txt b/2026/day-36/flask-todo/requirements.txt
new file mode 100644
index 0000000000..12bbdcbccf
--- /dev/null
+++ b/2026/day-36/flask-todo/requirements.txt
@@ -0,0 +1,2 @@
+flask
+mysql-connector-python
diff --git a/2026/day-37/README.md b/2026/day-37/README.md
new file mode 100644
index 0000000000..7fd69a1ebe
--- /dev/null
+++ b/2026/day-37/README.md
@@ -0,0 +1,84 @@
+# Day 37 – Docker Revision & Cheat Sheet
+
+## Goal
+Take a **one-day pause** to consolidate everything from Days 29–36 so Docker actually sticks.
+
+## Expected Output
+- A markdown file: `docker-cheatsheet.md`
+- A markdown file: `day-37-revision.md` with self-check answers
+
+---
+
+## Self-Assessment Checklist
+Mark yourself honestly — **can do**, **shaky**, or **haven't done**:
+
+- [ ] Run a container from Docker Hub (interactive + detached)
+- [ ] List, stop, remove containers and images
+- [ ] Explain image layers and how caching works
+- [ ] Write a Dockerfile from scratch with FROM, RUN, COPY, WORKDIR, CMD
+- [ ] Explain CMD vs ENTRYPOINT
+- [ ] Build and tag a custom image
+- [ ] Create and use named volumes
+- [ ] Use bind mounts
+- [ ] Create custom networks and connect containers
+- [ ] Write a docker-compose.yml for a multi-container app
+- [ ] Use environment variables and .env files in Compose
+- [ ] Write a multi-stage Dockerfile
+- [ ] Push an image to Docker Hub
+- [ ] Use healthchecks and depends_on
+
+---
+
+## Quick-Fire Questions
+Answer from memory, then verify:
+1. What is the difference between an image and a container?
+2. What happens to data inside a container when you remove it?
+3. How do two containers on the same custom network communicate?
+4. What does `docker compose down -v` do differently from `docker compose down`?
+5. Why are multi-stage builds useful?
+6. What is the difference between `COPY` and `ADD`?
+7. What does `-p 8080:80` mean?
+8. How do you check how much disk space Docker is using?
+
+---
+
+## Build Your Docker Cheat Sheet
+Create `docker-cheatsheet.md` organized by category:
+- **Container commands** — run, ps, stop, rm, exec, logs
+- **Image commands** — build, pull, push, tag, ls, rm
+- **Volume commands** — create, ls, inspect, rm
+- **Network commands** — create, ls, inspect, connect
+- **Compose commands** — up, down, ps, logs, build
+- **Cleanup commands** — prune, system df
+- **Dockerfile instructions** — FROM, RUN, COPY, WORKDIR, EXPOSE, CMD, ENTRYPOINT
+
+Keep it short — one line per command, something you'd actually reference on the job.
+
+---
+
+## Revisit Weak Spots
+Pick **2 topics** you marked as shaky and redo the hands-on tasks from that day.
+
+---
+
+## Suggested Flow (45–60 minutes)
+- 10 min: go through the checklist honestly
+- 10 min: answer quick-fire questions
+- 20 min: build your cheat sheet
+- 10 min: redo one weak area
+
+---
+
+## Submission
+1. Add `docker-cheatsheet.md` and `day-37-revision.md` to `2026/day-37/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your Docker cheat sheet on LinkedIn — help others revise too.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-37/day-37-revision.md b/2026/day-37/day-37-revision.md
new file mode 100644
index 0000000000..95d705e079
--- /dev/null
+++ b/2026/day-37/day-37-revision.md
@@ -0,0 +1,52 @@
+## Container commands
+
+run= starts a container from an image
+ps= lists all running containers
+stop= stops the specified container
+rm= removes the container
+exec= lets you run a command or get a shell inside a running container
+logs= shows logs of the specified container (docker logs <container>)
+
+## Image commands —
+
+build= used to build an image from a Dockerfile
+pull= used to pull an image from Docker Hub
+push= used to push an image from your local machine to Docker Hub
+tag= gives a tag to an image before pushing it to Docker Hub
+ls= shows the images available locally
+rmi= used to remove the specified image (docker rmi <image>)
+
+## Volume commands —
+
+create= used to create a new volume
+ls= shows all volumes
+inspect= inspects a volume; shows info like created-on, mountpoint, etc.
+rm= removes an existing volume
+
+## Network commands —
+
+create= creates a new network
+ls= shows all available networks
+inspect= inspects a network; shows info like attached containers, created-on, and config info
+connect= connects a container to the specified network
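+
+A quick usage sketch tying these network commands together (the network and container names below are made up for illustration):
+
+```bash
+# Create a user-defined bridge network
+docker network create mynet
+
+# Start a container already attached to it
+docker run -d --name web --network mynet nginx:alpine
+
+# Attach another, already-running container to the same network
+docker network connect mynet mysql-db
+
+# Verify: the "Containers" section of the output should list both
+docker network inspect mynet
+```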
+
+## Compose commands —
+
+up= builds images (if needed) and starts containers for the services defined in the compose file
+down= stops and removes the containers and networks that compose created
+ps= shows all running containers started from the compose file
+logs= shows logs for the compose services
+build= only builds the images for the services and doesn't start the containers
+
+## Cleanup commands —
+
+prune= removes all unused objects of one type, whether image, container, network, or volume, e.g. (docker image prune; removes dangling/unused images); a single command to remove all unused objects is (docker system prune)
+system df= displays how much disk space the Docker daemon is consuming
+
+## Dockerfile instructions —
+
+FROM= tells our Dockerfile which base image to use
+RUN= a command executed during the image build process
+WORKDIR= sets the working directory (the directory you land in when you enter the container)
+EXPOSE= documents which port the app listens on (it doesn't actually publish the port)
+COPY= copies files from the local system into the image
+CMD= sets the default command that runs when the container starts, and it can be overridden at runtime (written as a JSON array of pieces)
+ENTRYPOINT= you can write the full command here, but the key difference from CMD is that ENTRYPOINT is harder to override at runtime, making it better for defining the main executable of a container
diff --git a/2026/day-37/docker-cheatsheet.md b/2026/day-37/docker-cheatsheet.md
new file mode 100644
index 0000000000..4b86804e8c
--- /dev/null
+++ b/2026/day-37/docker-cheatsheet.md
@@ -0,0 +1,102 @@
+# 🐳 Docker Revision Notes
+
+---
+
+## 🔲 Container Commands
+
+| Command | Description |
+|--------|-------------|
+| `docker run` | Starts a container from an image |
+| `docker ps` | Lists all running containers |
+| `docker stop <container>` | Stops the specified container |
+| `docker rm <container>` | Removes the specified container |
+| `docker exec` | Enter inside a running container |
+| `docker logs <container>` | Shows logs of the specified container |
+
+---
+
+## 🖼️ Image Commands
+
+| Command | Description |
+|--------|-------------|
+| `docker build` | Builds an image from a Dockerfile |
+| `docker pull` | Pulls an image from Docker Hub |
+| `docker push` | Pushes a local image to Docker Hub |
+| `docker tag` | Tags an image before pushing to Docker Hub |
+| `docker image ls` | Shows all locally available images |
+| `docker rmi <image>` | Removes the specified image |
+
+---
+
+## 💾 Volume Commands
+
+| Command | Description |
+|--------|-------------|
+| `docker volume create` | Creates a new volume |
+| `docker volume ls` | Lists all volumes |
+| `docker volume inspect` | Shows volume info (created-on, mountpoint, etc.) |
+| `docker volume rm` | Removes an existing volume |
+
+---
+
+## 🌐 Network Commands
+
+| Command | Description |
+|--------|-------------|
+| `docker network create` | Creates a new network |
+| `docker network ls` | Lists all available networks |
+| `docker network inspect` | Shows network info (attached containers, config, etc.) |
+| `docker network connect` | Connects a container to a specified network |
+
+---
+
+## 🧩 Compose Commands
+
+| Command | Description |
+|--------|-------------|
+| `docker compose up` | Builds images and starts containers for all services in the compose file |
+| `docker compose down` | Stops and removes all compose containers and networks |
+| `docker compose ps` | Lists all running containers started from the compose file |
+| `docker compose logs` | Shows logs for the compose file and its services |
+| `docker compose build` | Only builds images for services — does NOT start containers |
+
+---
+
+## 🧹 Cleanup Commands
+
+| Command | Description |
+|--------|-------------|
+| `docker image prune` | Removes all unused images |
+| `docker container prune` | Removes all stopped containers |
+| `docker network prune` | Removes all unused networks |
+| `docker volume prune` | Removes all unused volumes |
+| `docker system prune` | Removes ALL unused objects (images, containers, networks, volumes) in one command |
+| `docker system df` | Shows disk space consumed by the Docker daemon |
+
+---
+
+## 📄 Dockerfile Instructions
+
+| Instruction | Description |
+|------------|-------------|
+| `FROM` | Specifies the base image to use |
+| `RUN` | Executes a command during the **image build** process |
+| `COPY` | Copies files from your local system into the image |
+| `WORKDIR` | Sets the working directory (opened by default when you enter a container) |
+| `EXPOSE` | Documents which port the app listens on — does **not** actually publish the port (use `-p` in `docker run` for that) |
+| `CMD` | Sets the **default command** when the container starts — can be **overridden** at runtime — written as a JSON array e.g. `["node", "app.js"]` |
+| `ENTRYPOINT` | Defines the **main executable** — harder to override at runtime — better for fixed entrypoints |
+
+### CMD vs ENTRYPOINT
+
+| | `CMD` | `ENTRYPOINT` |
+|--|-------|--------------|
+| Purpose | Default command | Main executable |
+| Overridable at runtime? | ✅ Yes, easily | ❌ Only with `--entrypoint` flag |
+| Often used together? | ✅ Yes | ✅ Yes |
+
+> **Tip:** Use `ENTRYPOINT` for the fixed command and `CMD` for default arguments that can be overridden.
+
+---
+
+*Happy Dockering! 🚀*
diff --git a/2026/day-38/README.md b/2026/day-38/README.md
new file mode 100644
index 0000000000..39c95c996f
--- /dev/null
+++ b/2026/day-38/README.md
@@ -0,0 +1,116 @@
+# Day 38 – YAML Basics
+
+## Task
+Before writing a single CI/CD pipeline, you need to get comfortable with **YAML** — the language every pipeline is written in.
+
+You will:
+- Understand YAML syntax and rules
+- Write YAML files by hand
+- Validate them
+
+---
+
+## Expected Output
+- A markdown file: `day-38-yaml.md`
+- YAML files you create during the tasks
+
+---
+
+## Challenge Tasks
+
+### Task 1: Key-Value Pairs
+Create `person.yaml` that describes yourself with:
+- `name`
+- `role`
+- `experience_years`
+- `learning` (a boolean)
+
+**Verify:** Run `cat person.yaml` — does it look clean? No tabs?
+
+---
+
+### Task 2: Lists
+Add to `person.yaml`:
+- `tools` — a list of 5 DevOps tools you know or are learning
+- `hobbies` — a list using the inline format `[item1, item2]`
+
+Write in your notes: What are the two ways to write a list in YAML?
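+
+For reference, here is a minimal sketch of both styles (the values are placeholders):
+
+```yaml
+# Block style: one item per line, introduced by a dash and a space
+tools:
+  - docker
+  - kubernetes
+
+# Inline (flow) style: comma-separated inside square brackets
+hobbies: [reading, cycling]
+```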
+
+---
+
+### Task 3: Nested Objects
+Create `server.yaml` that describes a server:
+- `server` with nested keys: `name`, `ip`, `port`
+- `database` with nested keys: `host`, `name`, `credentials` (nested further: `user`, `password`)
+
+**Verify:** Try adding a tab instead of spaces — what happens when you validate it?
+
+---
+
+### Task 4: Multi-line Strings
+In `server.yaml`, add a `startup_script` field using:
+1. The `|` block style (preserves newlines)
+2. The `>` fold style (folds into one line)
+
+Write in your notes: When would you use `|` vs `>`?
+
+---
+
+### Task 5: Validate Your YAML
+1. Install `yamllint` or use an online validator
+2. Validate both your YAML files
+3. Intentionally break the indentation — what error do you get?
+4. Fix it and validate again
+
+---
+
+### Task 6: Spot the Difference
+Read both blocks and write what's wrong with the second one:
+
+```yaml
+# Block 1 - correct
+name: devops
+tools:
+  - docker
+  - kubernetes
+```
+
+```yaml
+# Block 2 - broken
+name: devops
+tools:
+- docker
+  - kubernetes
+```
+
+---
+
+## Hints
+- YAML uses **spaces only** — never tabs
+- Indentation is everything — 2 spaces is standard
+- Strings don't need quotes unless they contain special characters (`:`, `#`, etc.)
+- `true`/`false` are booleans, `"true"` is a string
+- Validate online: yamllint.com
+
+---
+
+## Documentation
+Create `day-38-yaml.md` with:
+- Your YAML files
+- What you learned (3 key points)
+
+---
+
+## Submission
+1. Add your YAML files and `day-38-yaml.md` to `2026/day-38/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your YAML "aha moment" on LinkedIn — the tab vs space mistake gets everyone.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-38/day-38-yaml.md b/2026/day-38/day-38-yaml.md
new file mode 100644
index 0000000000..9f2cb2f131
--- /dev/null
+++ b/2026/day-38/day-38-yaml.md
@@ -0,0 +1,16 @@
+# Two ways to write a list in YAML
+
+## Block style (multi-line)
+- Each item on its own line
+- Starts with a dash `-` and a space
+- More readable; preferred for longer lists
+
+## Inline / flow style
+- All items on a single line inside `[ ]`
+- Separated by commas
+- More compact; preferred for short lists
+
+
+# When would you use | vs >?
+## `|` is used when you want each line kept separate, e.g. a script where every command must stay on its own line
+## `>` is used for longer prose, such as a description or summary, that should fold into a single line
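+
+A small example of how the two block scalars actually render (the keys are placeholders):
+
+```yaml
+# '|' (literal) keeps the newlines:
+# the value is "echo one\necho two\n"
+script: |
+  echo one
+  echo two
+
+# '>' (folded) joins the lines with spaces:
+# the value is "a long sentence folded into one line\n"
+summary: >
+  a long sentence
+  folded into one line
+```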
diff --git a/2026/day-38/person.yml b/2026/day-38/person.yml new file mode 100644 index 0000000000..c8c5b159fe --- /dev/null +++ b/2026/day-38/person.yml @@ -0,0 +1,12 @@ +name: uttam +role: linux and devops roles +experience_years: fresher +learning: true +tools: + - docker containerization and compose + - linux + - kubernetes + - Github actions CI/CD + - git/github +hobbies: [Debugging, Teaching] + diff --git a/2026/day-38/server.yml b/2026/day-38/server.yml new file mode 100644 index 0000000000..8144da1d98 --- /dev/null +++ b/2026/day-38/server.yml @@ -0,0 +1,34 @@ +server: + name: + - nginx + - apache + ip: + - 172.273.232.0 + - 282.211.322.3 + port: + - 80 + - 5000 +database: + host: + - mysql + - flask + name: + - etcd + - sqld + credentials: + user: + - uttam + - tripathi + password: + - uttam123 + - tripathi123 + +startup_script: + run: | + It will keep + every line separate + runs: > + It will print + whole output + at once + diff --git a/2026/day-39/README.md b/2026/day-39/README.md new file mode 100644 index 0000000000..99ad74bd42 --- /dev/null +++ b/2026/day-39/README.md @@ -0,0 +1,96 @@ +# Day 39 – What is CI/CD? + +## Task +Before writing a single pipeline, understand **why CI/CD exists** and what it actually does. + +Today is a research and diagram day — no pipelines yet. Get the concepts right first. + +--- + +## Expected Output +- A markdown file: `day-39-cicd-concepts.md` +- A pipeline diagram (hand-drawn or text-based) + +--- + +## Challenge Tasks + +### Task 1: The Problem +Think about a team of 5 developers all pushing code to the same repo manually deploying to production. + +Write in your notes: +1. What can go wrong? +2. What does "it works on my machine" mean and why is it a real problem? +3. How many times a day can a team safely deploy manually? + +--- + +### Task 2: CI vs CD +Research and write short definitions (2-3 lines each): +1. **Continuous Integration** — what happens, how often, what it catches +2. **Continuous Delivery** — how it's different from CI, what "delivery" means +3. **Continuous Deployment** — how it differs from Delivery, when teams use it + +Write one real-world example for each. + +--- + +### Task 3: Pipeline Anatomy +A pipeline has these parts — write what each one does: +- **Trigger** — what starts the pipeline +- **Stage** — a logical phase (build, test, deploy) +- **Job** — a unit of work inside a stage +- **Step** — a single command or action inside a job +- **Runner** — the machine that executes the job +- **Artifact** — output produced by a job + +--- + +### Task 4: Draw a Pipeline +Draw a CI/CD pipeline for this scenario: +> A developer pushes code to GitHub. The app is tested, built into a Docker image, and deployed to a staging server. + +Include at least 3 stages. Hand-drawn and photographed is perfectly fine. + +--- + +### Task 5: Explore in the Wild +1. Open any popular open-source repo on GitHub (Kubernetes, React, FastAPI — pick one you know) +2. Find their `.github/workflows/` folder +3. Open one workflow YAML file +4. Write in your notes: + - What triggers it? + - How many jobs does it have? + - What does it do? 
(best guess)
+
+---
+
+## Hints
+- CI/CD is a practice, not just a tool
+- GitHub Actions, Jenkins, GitLab CI, CircleCI — all are tools that implement CI/CD
+- A pipeline failing is not a problem — it's CI/CD doing its job
+
+---
+
+## Documentation
+Create `day-39-cicd-concepts.md` with:
+- Your CI vs CD vs CD definitions
+- Pipeline anatomy notes
+- Your pipeline diagram
+- What you found in the open-source repo
+
+---
+
+## Submission
+1. Add your `day-39-cicd-concepts.md` to `2026/day-39/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your pipeline diagram on LinkedIn — even a rough hand-drawn one gets engagement.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-39/cicdpipeline.png b/2026/day-39/cicdpipeline.png
new file mode 100644
index 0000000000..350bdef897
Binary files /dev/null and b/2026/day-39/cicdpipeline.png differ
diff --git a/2026/day-39/day-39-cicd-concepts.md b/2026/day-39/day-39-cicd-concepts.md
new file mode 100644
index 0000000000..73673ba03c
--- /dev/null
+++ b/2026/day-39/day-39-cicd-concepts.md
@@ -0,0 +1,27 @@
+# Think about a team of 5 developers all pushing code to the same repo and manually deploying to production.
+## What can go wrong?
+One developer's broken change can take down the whole project for everyone, and simultaneous pushes can cause merge conflicts.
+
+## What does "it works on my machine" mean and why is it a real problem?
+The code runs on the developer's machine but fails elsewhere, usually because other machines have different dependencies, versions, or configuration. It is a real problem because production is not the developer's machine.
+
+## How many times a day can a team safely deploy manually?
+At max, 2-3 times.
+
+# Pipeline Anatomy
+A pipeline has these parts —
+
+## Trigger — tells the pipeline when to start (e.g. someone pushes code or opens a pull request)
+## Stage — a logical phase where build, test, and deployment happen
+## Job — the unit of task/work to be executed
+## Step — a single command or action inside a job
+## Runner — the machine that executes the job (a virtual or local machine)
+## Artifact — output produced by a job
+
+
+## CI/CD/CD refers to three related but distinct practices in modern software development:
+## Continuous Integration (CI) is the practice of frequently merging developer code changes into a shared repository — often multiple times a day. Each merge triggers an automated build and test pipeline to catch integration bugs early. The goal is to detect problems as soon as they're introduced rather than at the end of a long development cycle.
+
+## Continuous Delivery (CD) extends CI by automatically preparing every passing build for release to a staging or production-like environment. The code is always in a deployable state, but an actual deployment to production requires a manual approval step. This gives teams control over when to release while ensuring the software is ready to release at any time.
+
+## Continuous Deployment (CD) goes one step further — every change that passes all automated tests is deployed to production automatically, with no human intervention. This is the most advanced practice and requires a very mature test suite and high confidence in automation.
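+
+To connect the anatomy terms to real syntax, here is a minimal GitHub Actions sketch (the workflow name and commands are illustrative; GitHub Actions has no separate "stage" keyword, so jobs play that role):
+
+```yaml
+name: anatomy-demo
+on: push                     # trigger: what starts the pipeline
+jobs:
+  build:                     # job: a unit of work (acts as the "stage" here)
+    runs-on: ubuntu-latest   # runner: the machine that executes the job
+    steps:
+      - uses: actions/checkout@v4          # step: a reusable action
+      - name: Run a test command
+        run: echo "tests would run here"   # step: a single shell command
+```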
diff --git a/2026/day-39/pipeline-dev.jpeg b/2026/day-39/pipeline-dev.jpeg new file mode 100644 index 0000000000..850ac5e419 Binary files /dev/null and b/2026/day-39/pipeline-dev.jpeg differ diff --git a/2026/day-40/.github/workflows/hello.yml b/2026/day-40/.github/workflows/hello.yml new file mode 100644 index 0000000000..ddeb51208d --- /dev/null +++ b/2026/day-40/.github/workflows/hello.yml @@ -0,0 +1,25 @@ +name: hello +on: + push: + branch: +` - master +jobs: + greet: + runs-on: ubuntu-latest + steps: + - name: checkout code + uses: actions/checkout@v4 + - name: to print hello + run: | + echo "Hello from github actions" + echo "$(date)" + echo "Branch name: ${{ github.ref_name }}" + - name: Checkout repo + uses: actions/checkout@v3 + - name: List files + run: ls -la + - name: printing operating system + run: hostnamectl + + + diff --git a/2026/day-40/README.md b/2026/day-40/README.md new file mode 100644 index 0000000000..acc6eaf44b --- /dev/null +++ b/2026/day-40/README.md @@ -0,0 +1,102 @@ +# Day 40 – Your First GitHub Actions Workflow + +## Task +Today you write your **first GitHub Actions pipeline** and watch it run in the cloud. + +This is the moment CI/CD stops being a concept and becomes real. + +--- + +## Expected Output +- A workflow file: `.github/workflows/hello.yml` +- A markdown file: `day-40-first-workflow.md` +- Screenshot of your first green pipeline run + +--- + +## Challenge Tasks + +### Task 1: Set Up +1. Create a new **public** GitHub repository called `github-actions-practice` +2. Clone it locally +3. Create the folder structure: `.github/workflows/` + +--- + +### Task 2: Hello Workflow +Create `.github/workflows/hello.yml` with a workflow that: +1. Triggers on every `push` +2. Has one job called `greet` +3. Runs on `ubuntu-latest` +4. Has two steps: + - Step 1: Check out the code using `actions/checkout` + - Step 2: Print `Hello from GitHub Actions!` + +Push it. Go to the **Actions** tab on GitHub and watch it run. + +**Verify:** Is it green? Click into the job and read every step. + +--- + +### Task 3: Understand the Anatomy +Look at your workflow file and write in your notes what each key does: +- `on:` +- `jobs:` +- `runs-on:` +- `steps:` +- `uses:` +- `run:` +- `name:` (on a step) + +--- + +### Task 4: Add More Steps +Update `hello.yml` to also: +1. Print the current date and time +2. Print the name of the branch that triggered the run (hint: GitHub provides this as a variable) +3. List the files in the repo +4. Print the runner's operating system + +Push again — watch the new run. + +--- + +### Task 5: Break It On Purpose +1. Add a step that runs a command that will **fail** (e.g., `exit 1` or a misspelled command) +2. Push and observe what happens in the Actions tab +3. Fix it and push again + +Write in your notes: What does a failed pipeline look like? How do you read the error? + +--- + +## Hints +- Workflow files live in `.github/workflows/` and must end in `.yml` +- `uses: actions/checkout@v4` checks out your code onto the runner +- `run:` executes shell commands +- GitHub provides built-in variables like `${{ github.ref_name }}` for branch name +- Every push triggers a new run — check the Actions tab + +--- + +## Documentation +Create `day-40-first-workflow.md` with: +- Your workflow YAML +- Screenshot of the green run +- What each `on:`, `jobs:`, `steps:` key does (your own words) + +--- + +## Submission +1. Add `day-40-first-workflow.md` to `2026/day-40/` +2. 
Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your first green pipeline screenshot on LinkedIn. That green checkmark hits different.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham** diff --git a/2026/day-40/day-40-first-workflow.md b/2026/day-40/day-40-first-workflow.md new file mode 100644 index 0000000000..3775b8ac77 --- /dev/null +++ b/2026/day-40/day-40-first-workflow.md @@ -0,0 +1,11 @@
+# Why a pipeline fails
+## A pipeline fails when any step returns a non-zero exit code. The Linux/bash convention is simple:
+
+## 0 = success
+## anything else = failure
+## When a step fails, GitHub Actions stops that job immediately and marks everything after it as skipped.
+
+# How do you read the error?
+## The mental order — always read bottom to top.
+## The last line tells you what failed, but the lines above tell you why.
+## Most people stare at the bottom and miss the actual reason sitting a few lines up. diff --git a/2026/day-40/first-pipeline.png b/2026/day-40/first-pipeline.png new file mode 100644 index 0000000000..71aa24ffb3 Binary files /dev/null and b/2026/day-40/first-pipeline.png differ diff --git a/2026/day-40/gh-ac-key.jpeg b/2026/day-40/gh-ac-key.jpeg new file mode 100644 index 0000000000..4c814664db Binary files /dev/null and b/2026/day-40/gh-ac-key.jpeg differ diff --git a/2026/day-40/pipeline-dev.jpeg b/2026/day-40/pipeline-dev.jpeg new file mode 100644 index 0000000000..850ac5e419 Binary files /dev/null and b/2026/day-40/pipeline-dev.jpeg differ diff --git a/2026/day-41/.github/workflows/hello.yml b/2026/day-41/.github/workflows/hello.yml new file mode 100644 index 0000000000..3e08b835a0 --- /dev/null +++ b/2026/day-41/.github/workflows/hello.yml @@ -0,0 +1,23 @@
+name: hello
+on:
+  schedule:
+    - cron: "0 0 * * *"
+jobs:
+  greet:
+    runs-on: ubuntu-latest
+    steps:
+      - name: checkout code
+        uses: actions/checkout@v4
+      - name: to print hello
+        run: |
+          echo "Hello from GitHub Actions!"
+          echo "$(date)"
+          echo "Branch name: ${{ github.ref_name }}"
+
+      - name: List files
+        run: ls -la
+      - name: printing operating system
+        run: hostnamectl
 diff --git a/2026/day-41/.github/workflows/manual.yml b/2026/day-41/.github/workflows/manual.yml new file mode 100644 index 0000000000..244b339080 --- /dev/null +++ b/2026/day-41/.github/workflows/manual.yml @@ -0,0 +1,19 @@
+name: manual trigger
+
+on:
+  workflow_dispatch:
+    inputs:
+      environment:
+        description: 'Select environment to deploy to'
+        required: true
+        type: choice
+        options:
+          - staging
+          - production
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Print selected environment
+        run: echo "Deploying to ${{ github.event.inputs.environment }}" diff --git a/2026/day-41/.github/workflows/matrix.yml b/2026/day-41/.github/workflows/matrix.yml new file mode 100644 index 0000000000..0652907e4d --- /dev/null +++ b/2026/day-41/.github/workflows/matrix.yml @@ -0,0 +1,25 @@
+name: matrix build
+on:
+  push:
+    branches: [main]
+
+jobs:
+  py_matrix:
+    runs-on: ${{ matrix.os }}
+    strategy:
+      fail-fast: false
+      matrix:
+        os: [ubuntu-latest]
+        python-version: ["3.10", "3.11", "3.12"]
+        exclude:
+          - os: ubuntu-latest
+            python-version: "3.10"
+
+    steps:
+      - uses: actions/checkout@v4
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Print Python version
+        run: python --version
 diff --git a/2026/day-41/.github/workflows/pr_check.yml b/2026/day-41/.github/workflows/pr_check.yml new file mode 100644 index 0000000000..8a5279c024
--- /dev/null +++ b/2026/day-41/.github/workflows/pr_check.yml @@ -0,0 +1,16 @@ +name: pull request check +on: + pull_request: + branches: + - main + types: + - opened + - synchronize + +jobs: + pr_check: + runs-on: ubuntu-latest + steps: + - name: pr check running + run: | + echo "PR check running for branch: ${{ github.head_ref }}" diff --git a/2026/day-41/README.md b/2026/day-41/README.md new file mode 100644 index 0000000000..d7852edec4 --- /dev/null +++ b/2026/day-41/README.md @@ -0,0 +1,91 @@ +# Day 41 – Triggers & Matrix Builds + +## Task +Your pipeline runs on push. Today you learn **every way to trigger a workflow** and how to run jobs across multiple environments at once. + +--- + +## Expected Output +- New workflow files in your `github-actions-practice` repo +- A markdown file: `day-41-triggers.md` + +--- + +## Challenge Tasks + +### Task 1: Trigger on Pull Request +1. Create `.github/workflows/pr-check.yml` +2. Trigger it only when a pull request is **opened or updated** against `main` +3. Add a step that prints: `PR check running for branch: ` +4. Create a new branch, push a commit, and open a PR +5. Watch the workflow run automatically + +**Verify:** Does it show up on the PR page? + +--- + +### Task 2: Scheduled Trigger +1. Add a `schedule:` trigger to any workflow using cron syntax +2. Set it to run every day at midnight UTC +3. Write in your notes: What is the cron expression for every Monday at 9 AM? + +--- + +### Task 3: Manual Trigger +1. Create `.github/workflows/manual.yml` with a `workflow_dispatch:` trigger +2. Add an **input** that asks for an `environment` name (staging/production) +3. Print the input value in a step +4. Go to the **Actions** tab → find the workflow → click **Run workflow** + +**Verify:** Can you trigger it manually and see your input printed? + +--- + +### Task 4: Matrix Builds +Create `.github/workflows/matrix.yml` that: +1. Uses a matrix strategy to run the same job across: + - Python versions: `3.10`, `3.11`, `3.12` +2. Each job installs Python and prints the version +3. Watch all 3 run in parallel + +Then extend the matrix to also include 2 operating systems — how many total jobs run now? + +--- + +### Task 5: Exclude & Fail-Fast +1. In your matrix, **exclude** one specific combination (e.g., Python 3.10 on Windows) +2. Set `fail-fast: false` — trigger a failure in one job and observe what happens to the rest +3. Write in your notes: What does `fail-fast: true` (the default) do vs `false`? + +--- + +## Hints +- PR trigger: `on: pull_request: branches: [main]` +- Cron trigger: `on: schedule: - cron: '0 0 * * *'` +- Manual trigger: `on: workflow_dispatch: inputs:` +- Matrix: `strategy: matrix: python-version: [...]` +- Exclude: `exclude: - os: windows-latest python-version: "3.10"` + +--- + +## Documentation +Create `day-41-triggers.md` with: +- Each workflow YAML +- Screenshots of runs +- The cron expression answer from Task 2 + +--- + +## Submission +1. Add `day-41-triggers.md` to `2026/day-41/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your matrix build screenshot — seeing multiple jobs run in parallel for the first time is a great moment. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-41/day-41-(1).png b/2026/day-41/day-41-(1).png new file mode 100644 index 0000000000..67102ce72d Binary files /dev/null and b/2026/day-41/day-41-(1).png differ diff --git a/2026/day-41/day-41-(2).png b/2026/day-41/day-41-(2).png new file mode 100644 index 0000000000..10287b77fd Binary files /dev/null and b/2026/day-41/day-41-(2).png differ diff --git a/2026/day-41/day-41-triggers.md b/2026/day-41/day-41-triggers.md new file mode 100644 index 0000000000..ecaa7a8bd7 --- /dev/null +++ b/2026/day-41/day-41-triggers.md @@ -0,0 +1,8 @@
+# What does fail-fast: true (the default) do vs false?
+## fail-fast: true
+### Cancels the remaining matrix jobs as soon as one of them fails
+## fail-fast: false
+### Lets the remaining matrix jobs keep running even when one fails
+
+# What is the cron expression for every Monday at 9 AM?
+## 0 9 * * 1 diff --git a/2026/day-42/README.md b/2026/day-42/README.md new file mode 100644 index 0000000000..fe2cf8db23 --- /dev/null +++ b/2026/day-42/README.md @@ -0,0 +1,120 @@
+# Day 42 – Runners: GitHub-Hosted & Self-Hosted
+
+## Task
+Every job needs a machine to run on. Today you understand **runners** — GitHub's hosted ones and how to set up your own self-hosted runner on a real server.
+
+---
+
+## Expected Output
+- A self-hosted runner registered to your GitHub repo
+- A workflow that runs a job on your self-hosted runner
+- A markdown file: `day-42-runners.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: GitHub-Hosted Runners
+1. Create a workflow with 3 jobs, each on a different OS:
+   - `ubuntu-latest`
+   - `windows-latest`
+   - `macos-latest`
+2. In each job, print:
+   - The OS name
+   - The runner's hostname
+   - The current user running the job
+3. Watch all 3 run in parallel
+
+Write in your notes: What is a GitHub-hosted runner? Who manages it?
+
+---
+
+### Task 2: Explore What's Pre-installed
+1. On the `ubuntu-latest` runner, run a step that prints:
+   - Docker version
+   - Python version
+   - Node version
+   - Git version
+2. Look up the GitHub docs for the full list of pre-installed software on `ubuntu-latest`
+
+Write in your notes: Why does it matter that runners come with tools pre-installed?
+
+---
+
+### Task 3: Set Up a Self-Hosted Runner
+1. Go to your GitHub repo → Settings → Actions → Runners → **New self-hosted runner**
+2. Choose Linux as the OS
+3. Follow the instructions to download and configure the runner on:
+   - Your local machine, OR
+   - A cloud VM (EC2, Utho, or any VPS)
+4. Start the runner — verify it shows as **Idle** in GitHub
+
+**Verify:** Your runner appears in the Runners list with a green dot.
+
+---
+
+### Task 4: Use Your Self-Hosted Runner
+1. Create `.github/workflows/self-hosted.yml`
+2. Set `runs-on: self-hosted`
+3. Add steps that:
+   - Print the hostname of the machine (it should be YOUR machine/VM)
+   - Print the working directory
+   - Create a file and verify it exists on your machine after the run
+4. Trigger it and watch it run on your own hardware
+
+**Verify:** Check your machine — is the file there?
+
+---
+
+### Task 5: Labels
+1. Add a **label** to your self-hosted runner (e.g., `my-linux-runner`)
+2. Update your workflow to use `runs-on: [self-hosted, my-linux-runner]`
+3. Trigger it — does it still pick up the job?
+
+Write in your notes: Why are labels useful when you have multiple self-hosted runners?
+
+---
+
+### Task 6: GitHub-Hosted vs Self-Hosted
+Fill this in your notes:
+
+| | GitHub-Hosted | Self-Hosted |
+|---|---|---|
+| Who manages it? | ? | ? |
+| Cost | ? | ? |
+| Pre-installed tools | ? | ?
|
+| Good for | ? | ? |
+| Security concern | ? | ? |
+
+---
+
+## Hints
+- Runner setup script is generated by GitHub — just copy and run it
+- Self-hosted runner runs as a background service: `./run.sh`
+- To run as a service (persistent): `sudo ./svc.sh install && sudo ./svc.sh start`
+- `runs-on: self-hosted` targets any self-hosted runner
+- `runs-on: [self-hosted, linux, my-label]` targets specific ones
+
+---
+
+## Documentation
+Create `day-42-runners.md` with:
+- Screenshot of your self-hosted runner showing as Idle in GitHub
+- Screenshot of a job running on your self-hosted runner
+- The comparison table from Task 6
+
+---
+
+## Submission
+1. Add `day-42-runners.md` to `2026/day-42/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your self-hosted runner screenshot on LinkedIn — running CI on your own machine is a cool flex.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham** diff --git a/2026/day-42/day-42(2).png b/2026/day-42/day-42(2).png new file mode 100644 index 0000000000..40600f2dcb Binary files /dev/null and b/2026/day-42/day-42(2).png differ diff --git a/2026/day-42/day-42-runners.md b/2026/day-42/day-42-runners.md new file mode 100644 index 0000000000..d757e56dd3 --- /dev/null +++ b/2026/day-42/day-42-runners.md @@ -0,0 +1,11 @@
+# Why does it matter that runners come with tools pre-installed?
+## It saves setup time — tools like Docker, Node.js, and Git are ready to use directly in your workflow without any installation steps, so you can even run a container on the runner with zero setup.
+
+# Why are labels useful when you have multiple self-hosted runners?
+## When you have many self-hosted runners, labels let you target the right machine for the right job.
+
+# What is a GitHub-hosted runner? Who manages it?
+## A GitHub-hosted runner is a virtual machine provided and fully managed by GitHub — including maintenance, updates, and scaling. \ No newline at end of file diff --git a/2026/day-42/day-42.png b/2026/day-42/day-42.png new file mode 100644 index 0000000000..7ad2c23124 Binary files /dev/null and b/2026/day-42/day-42.png differ diff --git a/2026/day-42/self-hosted-runner-idle.png b/2026/day-42/self-hosted-runner-idle.png new file mode 100644 index 0000000000..3c983fb370 Binary files /dev/null and b/2026/day-42/self-hosted-runner-idle.png differ diff --git a/2026/day-43/README.md b/2026/day-43/README.md new file mode 100644 index 0000000000..eaff40d08b --- /dev/null +++ b/2026/day-43/README.md @@ -0,0 +1,95 @@
+# Day 43 – Jobs, Steps, Env Vars & Conditionals
+
+## Task
+Today you learn how to **control the flow** of your pipeline — multi-job workflows, passing data between jobs, environment variables, and running steps only when certain conditions are met.
+
+---
+
+## Expected Output
+- New workflow files in your `github-actions-practice` repo
+- A markdown file: `day-43-jobs-steps.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: Multi-Job Workflow
+Create `.github/workflows/multi-job.yml` with 3 jobs:
+- `build` — prints "Building the app"
+- `test` — prints "Running tests"
+- `deploy` — prints "Deploying"
+
+Make `test` run only **after** `build` succeeds.
+Make `deploy` run only **after** `test` succeeds.
+ +**Verify:** Check the workflow graph in the Actions tab — does it show the dependency chain? + +--- + +### Task 2: Environment Variables +In a new workflow, use environment variables at 3 levels: +1. **Workflow level** — `APP_NAME: myapp` +2. **Job level** — `ENVIRONMENT: staging` +3. **Step level** — `VERSION: 1.0.0` + +Print all three in a single step and verify each is accessible. + +Then use a **GitHub context variable** — print the commit SHA and the actor (who triggered the run). + +--- + +### Task 3: Job Outputs +1. Create a job that **sets an output** — e.g., today's date as a string +2. Create a second job that **reads that output** and prints it +3. Pass the value using `outputs:` and `needs..outputs.` + +Write in your notes: Why would you pass outputs between jobs? + +--- + +### Task 4: Conditionals +In a workflow, add: +1. A step that only runs when the branch is `main` +2. A step that only runs when the previous step **failed** +3. A job that only runs on **push** events, not on pull requests +4. A step with `continue-on-error: true` — what does this do? + +--- + +### Task 5: Putting It Together +Create `.github/workflows/smart-pipeline.yml` that: +1. Triggers on push to any branch +2. Has a `lint` job and a `test` job running in parallel +3. Has a `summary` job that runs after both, prints whether it's a `main` branch push or a feature branch push, and prints the commit message + +--- + +## Hints +- Job dependency: `needs: [job-name]` +- Set output: `echo "date=$(date)" >> $GITHUB_OUTPUT` +- Read output: `${{ needs.job-name.outputs.date }}` +- Conditionals: `if: github.ref == 'refs/heads/main'` +- Commit message: `${{ github.event.commits[0].message }}` + +--- + +## Documentation +Create `day-43-jobs-steps.md` with: +- Key workflow snippets +- What `needs:` and `outputs:` do in your own words + +--- + +## Submission +1. Add `day-43-jobs-steps.md` to `2026/day-43/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share the dependency chain diagram from your multi-job workflow on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-43/day-43(2).png b/2026/day-43/day-43(2).png new file mode 100644 index 0000000000..2c01980e45 Binary files /dev/null and b/2026/day-43/day-43(2).png differ diff --git a/2026/day-43/day-43(3).png b/2026/day-43/day-43(3).png new file mode 100644 index 0000000000..1f7f90ee31 Binary files /dev/null and b/2026/day-43/day-43(3).png differ diff --git a/2026/day-43/day-43(4).png b/2026/day-43/day-43(4).png new file mode 100644 index 0000000000..814f27d25a Binary files /dev/null and b/2026/day-43/day-43(4).png differ diff --git a/2026/day-43/day-43-runners.md b/2026/day-43/day-43-runners.md new file mode 100644 index 0000000000..5d0c9e6a95 --- /dev/null +++ b/2026/day-43/day-43-runners.md @@ -0,0 +1,7 @@ +# Why would you pass outputs between jobs? +## Because jobs run in isolated environments — they cannot directly share variables or data with each other. +## So if Job A calculates something (e.g., version number, date, build status), and Job B needs that value — you must explicitly pass it via outputs. + +# A step with continue-on-error: true — what does this do? +## By default, if a step fails → workflow stops. +## With continue-on-error: true → step can fail but workflow keeps running normally. 
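A minimal sketch of what that hand-off looks like (hypothetical job and output names):

```yaml
# Hypothetical sketch — job-a exposes a value, job-b reads it via needs
jobs:
  job-a:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.calc.outputs.version }}   # promote the step output to a job output
    steps:
      - id: calc
        run: echo "version=v1.0" >> $GITHUB_OUTPUT
  job-b:
    needs: job-a        # creates the dependency AND grants access to its outputs
    runs-on: ubuntu-latest
    steps:
      - run: echo "Got ${{ needs.job-a.outputs.version }}"
```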
\ No newline at end of file diff --git a/2026/day-43/day-43.png b/2026/day-43/day-43.png new file mode 100644 index 0000000000..4a5fec639d Binary files /dev/null and b/2026/day-43/day-43.png differ diff --git a/2026/day-43/multi-job.yml b/2026/day-43/multi-job.yml new file mode 100644 index 0000000000..fb2a2888e3 --- /dev/null +++ b/2026/day-43/multi-job.yml @@ -0,0 +1,57 @@
+name: multi-job
+on: workflow_dispatch
+env:
+  APP_NAME: myapp
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    env:
+      ENVIRONMENT: staging
+    steps:
+      - name: build
+        env:
+          VERSION: 1.0.0
+        run: |
+          echo "Building the app"
+          echo "The app name is $APP_NAME"
+          echo "The environment is $ENVIRONMENT"
+          echo "The version is $VERSION"
+
+  test:
+    runs-on: ubuntu-latest
+    needs: build
+    steps:
+      - name: test
+        run: echo "Running tests"
+      - name: Run a specific script only on main branch
+        if: github.ref == 'refs/heads/main'
+        run: |
+          echo "Running additional tests for main branch"
+  deploy:
+    runs-on: ubuntu-latest
+    needs: test
+    steps:
+      - name: deploy
+        run: echo "Deploying"
+  conditionals:
+    runs-on: ubuntu-latest
+    steps:
+      - name: set output
+        id: set_output
+        run: echo "date=$(date)" >> $GITHUB_OUTPUT
+      - name: read output
+        run: echo "The current date is ${{ steps.set_output.outputs.date }}"
+      - name: main step
+        id: main_step
+        run: echo "doing something"
+
+      - name: This runs only if previous step failed
+        if: failure()
+        run: echo "Previous step failed, nothing to do"
+      - name: This step gets ignored if it failed and other runs normally
+        continue-on-error: true
+        run: |
+          echo "This step might fail but it won't affect the rest of the workflow"
+      - name: This step only runs on push events not on pull request
+        if: github.event_name == 'push'
+        run: echo "This runs only on push events" \ No newline at end of file diff --git a/2026/day-43/smart-pipeline.yml b/2026/day-43/smart-pipeline.yml new file mode 100644 index 0000000000..ffdc42b27e --- /dev/null +++ b/2026/day-43/smart-pipeline.yml @@ -0,0 +1,38 @@
+name: smart pipeline for lint
+on:
+  push:
+
+jobs:
+  lint:
+    runs-on: ubuntu-latest
+    steps:
+      - name: check out source repository
+        uses: actions/checkout@v4
+      - name: set up python environment
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.10'
+      - name: install linter dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install flake8
+      - name: run linter
+        run: |
+          flake8 .
+          echo "Linting completed successfully"
+
+  test:
+    runs-on: ubuntu-latest
+    steps:
+      - name: This is a test step
+        run: echo "running tests"
+  summary:
+    needs: [lint, test]
+    runs-on: ubuntu-latest
+    steps:
+      - name: This is a summary step
+        run: |
+          echo "This push was made on branch ${{ github.ref }}"
+          echo "The commit message was ${{ github.event.head_commit.message }}" \ No newline at end of file diff --git a/2026/day-44/README.md b/2026/day-44/README.md new file mode 100644 index 0000000000..980e396626 --- /dev/null +++ b/2026/day-44/README.md @@ -0,0 +1,100 @@
+# Day 44 – Secrets, Artifacts & Running Real Tests in CI
+
+## Task
+Today your pipeline starts doing **real work** — storing sensitive values securely, saving build outputs, and running actual tests from your previous days.
+
+---
+
+## Expected Output
+- New workflow files in your `github-actions-practice` repo
+- A markdown file: `day-44-secrets-artifacts.md`
+- A passing test run in CI
+
+---
+
+## Challenge Tasks
+
+### Task 1: GitHub Secrets
+1. Go to your repo → Settings → Secrets and Variables → Actions
+2. 
Create a secret called `MY_SECRET_MESSAGE` +3. Create a workflow that reads it and prints: `The secret is set: true` (never print the actual value) +4. Try to print `${{ secrets.MY_SECRET_MESSAGE }}` directly — what does GitHub show? + +Write in your notes: Why should you never print secrets in CI logs? + +--- + +### Task 2: Use Secrets as Environment Variables +1. Pass a secret to a step as an environment variable +2. Use it in a shell command without ever hardcoding it +3. Add `DOCKER_USERNAME` and `DOCKER_TOKEN` as secrets (you'll need these on Day 45) + +--- + +### Task 3: Upload Artifacts +1. Create a step that generates a file — e.g., a test report or a log file +2. Use `actions/upload-artifact` to save it +3. After the workflow runs, download the artifact from the Actions tab + +**Verify:** Can you see and download it from GitHub? + +--- + +### Task 4: Download Artifacts Between Jobs +1. Job 1: generate a file and upload it as an artifact +2. Job 2: download the artifact from Job 1 and use it (print its contents) + +Write in your notes: When would you use artifacts in a real pipeline? + +--- + +### Task 5: Run Real Tests in CI +Take any script from your earlier days (Python or Shell) and run it in CI: +1. Add your script to the `github-actions-practice` repo +2. Write a workflow that: + - Checks out the code + - Installs any dependencies needed + - Runs the script + - Fails the pipeline if the script exits with a non-zero code +3. Intentionally break the script — verify the pipeline goes red +4. Fix it — verify it goes green again + +--- + +### Task 6: Caching +1. Add `actions/cache` to a workflow that installs dependencies +2. Run it twice — observe the time difference +3. Write in your notes: What is being cached and where is it stored? + +--- + +## Hints +- Secrets: `${{ secrets.SECRET_NAME }}` +- Upload artifact: `uses: actions/upload-artifact@v4` +- Download artifact: `uses: actions/download-artifact@v4` +- Cache: `uses: actions/cache@v4` +- GitHub masks secret values in logs automatically + +--- + +## Documentation +Create `day-44-secrets-artifacts.md` with: +- Screenshots of artifact download +- Screenshot of your passing test run +- What you learned about secrets management + +--- + +## Submission +1. Add `day-44-secrets-artifacts.md` to `2026/day-44/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your first real test run passing in CI on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-44/day-44(2).png b/2026/day-44/day-44(2).png new file mode 100644 index 0000000000..de632e1e2a Binary files /dev/null and b/2026/day-44/day-44(2).png differ diff --git a/2026/day-44/day-44-secrets-artifacts.md b/2026/day-44/day-44-secrets-artifacts.md new file mode 100644 index 0000000000..9ee2ee34a3 --- /dev/null +++ b/2026/day-44/day-44-secrets-artifacts.md @@ -0,0 +1,7 @@ +# Why should you never print secrets in CI logs? +## Secret information like passwords, API keys, tokens, etc. can get leaked if printed in logs. + +# When would you use artifacts in a real pipeline? +## In a real pipeline, artifacts are used to pass build outputs between jobs — for example, compiling code in a build job and passing the binary to a test or deploy job without rebuilding it. They're also useful for storing reports like test results, code coverage, or security scan outputs so you can download and review them after the pipeline finishes. +# What are Secrets? 
+## Secrets are sensitive values like passwords, API keys, tokens etc. that should never be hardcoded directly in your code or workflow files. diff --git a/2026/day-44/day-44.png b/2026/day-44/day-44.png new file mode 100644 index 0000000000..e313075fc7 Binary files /dev/null and b/2026/day-44/day-44.png differ diff --git a/2026/day-44/secrets.yml b/2026/day-44/secrets.yml new file mode 100644 index 0000000000..c08d1bcd9e --- /dev/null +++ b/2026/day-44/secrets.yml @@ -0,0 +1,47 @@
+name: secrets
+on: workflow_dispatch
+
+jobs:
+  secret:
+    runs-on: ubuntu-latest
+    steps:
+      - name: This step tells if secret exists or not
+        run: |
+          echo "The secret is set: ${{ secrets.MY_SECRET_MESSAGE != '' }}"
+      - name: passing secret in an environment variable
+        env:
+          MY_SECRET: ${{ secrets.MY_SECRET_MESSAGE }}
+        run: |
+          echo $MY_SECRET   # GitHub masks the value as *** in the logs
+      - name: Login to Docker Hub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKER_USERNAME }}
+          password: ${{ secrets.DOCKER_TOKEN }}
+
+  artifact:
+    runs-on: ubuntu-latest
+    steps:
+      - name: file creation step
+        run: |
+          echo "The test report is successful" >> main.test
+
+      - name: saving my test file
+        uses: actions/upload-artifact@v4
+        with:
+          name: main-test
+          path: main.test
+
+  print:
+    runs-on: ubuntu-latest
+    needs: artifact
+    steps:
+      - name: Download artifact
+        uses: actions/download-artifact@v4
+        with:
+          name: main-test
+          path: main.test
+
+      - name: print the artifact
+        run: |
+          cat main.test/main.test diff --git a/2026/day-44/tests_ci.yml b/2026/day-44/tests_ci.yml new file mode 100644 index 0000000000..6dc4056a4f --- /dev/null +++ b/2026/day-44/tests_ci.yml @@ -0,0 +1,34 @@
+name: real tests
+on:
+  push:
+    branches: [main]
+
+jobs:
+  running_script:
+    runs-on: ubuntu-latest
+    steps:
+      - name: checkout code
+        uses: actions/checkout@v4
+
+      - name: Running the code
+        run: |
+          chmod +x ./disk_check.sh
+          ./disk_check.sh
+
+  caching:
+    runs-on: ubuntu-latest
+    steps:
+      - name: checkout code
+        uses: actions/checkout@v4
+      - name: cache node modules
+        uses: actions/cache@v4
+        with:
+          path: ~/.npm
+          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
+          restore-keys: |
+            ${{ runner.os }}-node-
+
+      - name: Install dependencies
+        run: npm install
 diff --git a/2026/day-45/Dockerfile b/2026/day-45/Dockerfile new file mode 100644 index 0000000000..7f13fd54a0 --- /dev/null +++ b/2026/day-45/Dockerfile @@ -0,0 +1,13 @@
+FROM python:3.11-slim
+
+WORKDIR /app
+
+COPY . .
+
+RUN pip install -r requirements.txt
+
+EXPOSE 5000
+
+CMD ["python","app.py"]
 diff --git a/2026/day-45/README.md b/2026/day-45/README.md new file mode 100644 index 0000000000..11de33cd4b --- /dev/null +++ b/2026/day-45/README.md @@ -0,0 +1,99 @@
+# Day 45 – Docker Build & Push in GitHub Actions
+
+## Task
+Today you build a **complete CI/CD pipeline** — code pushed to GitHub automatically builds a Docker image and ships it to Docker Hub. No manual steps.
+
+This is exactly what happens in real production pipelines.
+
+---
+
+## Expected Output
+- A complete workflow: `.github/workflows/docker-publish.yml`
+- Your Docker image live on Docker Hub
+- A status badge in your repo README
+- A markdown file: `day-45-docker-cicd.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: Prepare
+1. Use the app you Dockerized on Day 36 (or any simple Dockerfile)
+2. Add the Dockerfile to your `github-actions-practice` repo (or create a minimal one)
+3. 
Make sure `DOCKER_USERNAME` and `DOCKER_TOKEN` secrets are set from Day 44 + +--- + +### Task 2: Build the Docker Image in CI +Create `.github/workflows/docker-publish.yml` that: +1. Triggers on push to `main` +2. Checks out the code +3. Builds the Docker image and tags it + +**Verify:** Check the build step logs — does the image build successfully? + +--- + +### Task 3: Push to Docker Hub +Add steps to: +1. Log in to Docker Hub using your secrets +2. Tag the image as `username/repo:latest` and also `username/repo:sha-` +3. Push both tags + +**Verify:** Go to Docker Hub — is your image there with both tags? + +--- + +### Task 4: Only Push on Main +Add a condition so the push step only runs on the `main` branch — not on feature branches or PRs. + +Test it: push to a feature branch and verify the image is built but NOT pushed. + +--- + +### Task 5: Add a Status Badge +1. Get the badge URL for your `docker-publish` workflow from the Actions tab +2. Add it to your `README.md` +3. Push — the badge should show green + +--- + +### Task 6: Pull and Run It +1. On your local machine (or a cloud server), pull the image you just pushed +2. Run it +3. Confirm it works + +Write in your notes: What is the full journey from `git push` to a running container? + +--- + +## Hints +- Docker login: `uses: docker/login-action@v3` +- Build and push: `uses: docker/build-push-action@v5` +- Short SHA: `${{ github.sha }}` (use `cut` or `slice` to get first 7 chars) +- Badge URL format: `https://github.com///actions/workflows/.yml/badge.svg` + +--- + +## Documentation +Create `day-45-docker-cicd.md` with: +- Your complete workflow YAML +- Docker Hub link to your image +- Screenshot of the pipeline run +- The full journey described in Task 6 + +--- + +## Submission +1. Add `day-45-docker-cicd.md` to `2026/day-45/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your Docker Hub image link and the green badge on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-45/app.py b/2026/day-45/app.py new file mode 100644 index 0000000000..7a814f7774 --- /dev/null +++ b/2026/day-45/app.py @@ -0,0 +1,40 @@
+from flask import Flask, jsonify
+import psutil
+import platform
+from datetime import datetime
+
+app = Flask(__name__)
+
+@app.route("/")
+def home():
+    return jsonify({
+        "message": "System Stats API is running",
+        "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+    })
+
+@app.route("/health")
+def health():
+    return jsonify({"status": "healthy"}), 200
+
+@app.route("/stats")
+def stats():
+    return jsonify({
+        "cpu_percent": psutil.cpu_percent(interval=1),
+        "memory": {
+            "total_mb": round(psutil.virtual_memory().total / 1024 / 1024, 2),
+            "used_mb": round(psutil.virtual_memory().used / 1024 / 1024, 2),
+            "percent": psutil.virtual_memory().percent
+        },
+        "disk": {
+            "total_gb": round(psutil.disk_usage('/').total / 1024 / 1024 / 1024, 2),
+            "used_gb": round(psutil.disk_usage('/').used / 1024 / 1024 / 1024, 2),
+            "percent": psutil.disk_usage('/').percent
+        },
+        "platform": platform.system(),
+        "python_version": platform.python_version()
+    })
+
+if __name__ == "__main__":
+    app.run(host="0.0.0.0", port=5000)
 diff --git a/2026/day-45/docker_build-push.yml b/2026/day-45/docker_build-push.yml new file mode 100644 index 0000000000..5e0ebaf024 --- /dev/null +++ b/2026/day-45/docker_build-push.yml @@ -0,0 +1,30 @@
+name: Docker build and push
+on:
+  push:
+    branches: [main]
+
+jobs:
+  build_and_push:
+    runs-on: ubuntu-latest
+    steps:
+      - name: code checkout
+        uses: actions/checkout@v4
+
+      - name: Set up Docker Buildx   # Buildx setup, as used with build-push-action
+        uses: docker/setup-buildx-action@v3
+
+      - name: Logging in docker hub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKER_USERNAME }}
+          password: ${{ secrets.DOCKER_TOKEN }}
+
+      - name: Build and push
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          push: ${{ github.ref == 'refs/heads/main' }}
+          tags: |
+            uttamtripathi/auto_build_push:latest
+            uttamtripathi/auto_build_push:${{ github.sha }}
 diff --git a/2026/day-45/requirements.txt b/2026/day-45/requirements.txt new file mode 100644 index 0000000000..4a930b6379 --- /dev/null +++ b/2026/day-45/requirements.txt @@ -0,0 +1,2 @@
+flask
+psutil diff --git a/2026/day-46/README.md b/2026/day-46/README.md new file mode 100644 index 0000000000..27f643e1df --- /dev/null +++ b/2026/day-46/README.md @@ -0,0 +1,132 @@
+# Day 46 – Reusable Workflows & Composite Actions
+
+## Task
+You've been writing workflows from scratch every time. In the real world, teams **don't repeat themselves** — they create reusable workflows that any repo can call like a function. Today you learn `workflow_call` and composite actions.
+
+---
+
+## Expected Output
+- A reusable workflow and a caller workflow in your `github-actions-practice` repo
+- A custom composite action
+- A markdown file: `day-46-reusable-workflows.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: Understand `workflow_call`
+Before writing any code, research and answer in your notes:
+1. What is a **reusable workflow**?
+2. What is the `workflow_call` trigger?
+3. How is calling a reusable workflow different from using a regular action (`uses:`)?
+4. Where must a reusable workflow file live?
+
+---
+
+### Task 2: Create Your First Reusable Workflow
+Create `.github/workflows/reusable-build.yml`:
+1. Set the trigger to `workflow_call`
+2. Add an `inputs:` section with:
+   - `app_name` (string, required)
+   - `environment` (string, required, default: `staging`)
+3. 
Add a `secrets:` section with:
+   - `docker_token` (required)
+4. Create a job that:
+   - Checks out the code
+   - Prints `Building for `
+   - Prints `Docker token is set: true` (never print the actual secret)
+
+**Verify:** This file alone won't run — it needs a caller. That's next.
+
+---
+
+### Task 3: Create a Caller Workflow
+Create `.github/workflows/call-build.yml`:
+1. Trigger on push to `main`
+2. Add a job that uses your reusable workflow:
+   ```yaml
+   jobs:
+     build:
+       uses: ./.github/workflows/reusable-build.yml
+       with:
+         app_name: "my-web-app"
+         environment: "production"
+       secrets:
+         docker_token: ${{ secrets.DOCKER_TOKEN }}
+   ```
+3. Push to `main` and watch it run
+
+**Verify:** In the Actions tab, do you see the caller triggering the reusable workflow? Click into the job — can you see the inputs printed?
+
+---
+
+### Task 4: Add Outputs to the Reusable Workflow
+Extend `reusable-build.yml`:
+1. Add an `outputs:` section that exposes a `build_version` value
+2. Inside the job, generate a version string (e.g., `v1.0-`) and set it as output
+3. In your caller workflow, add a second job that:
+   - Depends on the build job (`needs:`)
+   - Reads and prints the `build_version` output
+
+**Verify:** Does the second job print the version from the reusable workflow?
+
+---
+
+### Task 5: Create a Composite Action
+Create a **custom composite action** in your repo at `.github/actions/setup-and-greet/action.yml`:
+1. Define inputs: `name` and `language` (default: `en`)
+2. Add steps that:
+   - Print a greeting in the specified language
+   - Print the current date and runner OS
+   - Set an output called `greeted` with value `true`
+3. Use the composite action in a new workflow with `uses: ./.github/actions/setup-and-greet`
+
+**Verify:** Does your custom action run and print the greeting?
+
+---
+
+### Task 6: Reusable Workflow vs Composite Action
+Fill this in your notes:
+
+| | Reusable Workflow | Composite Action |
+|---|---|---|
+| Triggered by | `workflow_call` | `uses:` in a step |
+| Can contain jobs? | ? | ? |
+| Can contain multiple steps? | ? | ? |
+| Lives where? | ? | ? |
+| Can accept secrets directly? | ? | ? |
+| Best for | ? | ? |
+
+---
+
+## Hints
+- Reusable workflows must be in `.github/workflows/` directory
+- Caller syntax: `uses: ./.github/workflows/file.yml` (same repo) or `uses: org/repo/.github/workflows/file.yml@main` (cross-repo)
+- Composite action: `action.yml` with `runs: using: "composite"`
+- Reusable workflow outputs: `on: workflow_call: outputs: name: value: ${{ jobs.job-id.outputs.name }}`
+- A single workflow run can call at most 20 unique reusable workflows (nested calls count toward the limit)
+
+---
+
+## Documentation
+Create `day-46-reusable-workflows.md` with:
+- Your reusable workflow and caller workflow YAML
+- Your composite action YAML
+- The comparison table from Task 6
+- Screenshot of the caller workflow triggering the reusable one
+
+---
+
+## Submission
+1. Add `day-46-reusable-workflows.md` to `2026/day-46/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share how you built your first reusable workflow on LinkedIn — this is a real production skill.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham** diff --git a/2026/day-46/action.yml b/2026/day-46/action.yml new file mode 100644 index 0000000000..043660b8a1 --- /dev/null +++ b/2026/day-46/action.yml @@ -0,0 +1,40 @@ +name: Setup and Greet +description: A composite action that greets a user + +inputs: + name: + required: true + description: "Name of the person to greet" + language: + default: "en" + description: "Language of the greeting" + +outputs: + greeted: + description: "Whether the greeting was done" + value: ${{ steps.set-output.outputs.greeted }} + +runs: + using: composite + steps: + - name: Print greeting + shell: bash + run: | + if [ "${{ inputs.language }}" == "en" ]; then + echo "Hello, ${{ inputs.name }}!" + elif [ "${{ inputs.language }}" == "hi" ]; then + echo "Namaste, ${{ inputs.name }}!" + else + echo "Hey, ${{ inputs.name }}!" + fi + + - name: Print date and OS + shell: bash + run: | + echo "Current date: $(date)" + echo "Runner OS: ${{ runner.os }}" + + - name: Set greeted output + id: set-output + shell: bash + run: echo "greeted=true" >> $GITHUB_OUTPUT diff --git a/2026/day-46/call-build.yml b/2026/day-46/call-build.yml new file mode 100644 index 0000000000..e63853ebd8 --- /dev/null +++ b/2026/day-46/call-build.yml @@ -0,0 +1,19 @@ +name: call build +on: + push: + branches: [main] + +jobs: + build: + uses: ./.github/workflows/reusable-build.yml + with: + app_name: "my-webapp" + environment: "production" + secrets: + docker_token: ${{ secrets.DOCKER_TOKEN }} + test: + needs: build + runs-on: ubuntu-latest + steps: + - name: Print build version + run: echo ${{ needs.build.outputs.VERSION }} diff --git a/2026/day-46/day-46-notes.md b/2026/day-46/day-46-notes.md new file mode 100644 index 0000000000..72de08078b --- /dev/null +++ b/2026/day-46/day-46-notes.md @@ -0,0 +1,12 @@ +# What is a reusable workflow? +## A workflow that can be called and reused by other workflows instead of repeating the same steps in every workflow file. + +# What is the workflow_call trigger? +## It's the trigger that marks a workflow as reusable — meaning it can be called by another workflow instead of running on its own like push or pull request. + +# How is calling a reusable workflow different from using a regular action (uses:)? +## A regular action runs a single step — like login, build, checkout. A reusable workflow runs an entire job with multiple steps inside it. Think of action as one task and reusable workflow as a full pipeline. + +# Where must a reusable workflow file live? +## It must be inside the .github/workflows/ folder. And the repository must be either public or in the same organization to be called from another workflow. 
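To make the cross-repo case concrete, a hedged sketch of the caller syntax (the `my-org/shared-workflows` repo is a placeholder):

```yaml
# Hypothetical caller — invokes a reusable workflow that lives in another repo
jobs:
  build:
    uses: my-org/shared-workflows/.github/workflows/build.yml@main  # placeholder org/repo
    with:
      app_name: "my-web-app"
    secrets: inherit   # or pass specific secrets explicitly
```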
+ diff --git a/2026/day-46/day-46.png b/2026/day-46/day-46.png new file mode 100644 index 0000000000..0bb0d33fa1 Binary files /dev/null and b/2026/day-46/day-46.png differ diff --git a/2026/day-46/reusable-build.yml b/2026/day-46/reusable-build.yml new file mode 100644 index 0000000000..534ca7d5d7 --- /dev/null +++ b/2026/day-46/reusable-build.yml @@ -0,0 +1,37 @@ +name: reusable workflow +on: + workflow_call: + inputs: + app_name: + description: "App name" + required: true + type: string + environment: + description: "Environment" + required: true + default: "staging" + type: string + secrets: + docker_token: + required: true + outputs: + VERSION: + description: "output" + value: ${{ jobs.reusable.outputs.VERSION }} + +jobs: + reusable: + runs-on: ubuntu-latest + outputs: + VERSION: ${{ steps.version.outputs.VERSION }} + steps: + - name: code checkout + uses: actions/checkout@v4 + - name: building app + run: | + echo "building ${{ inputs.app_name }} for ${{ inputs.environment }}" + echo "docker_token is set: ${{ secrets.docker_token != ''}}" + - name: Generate version + id: version + run: echo "VERSION=v1.0-$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT + diff --git a/2026/day-46/test-composite.yml b/2026/day-46/test-composite.yml new file mode 100644 index 0000000000..0fcea41f5f --- /dev/null +++ b/2026/day-46/test-composite.yml @@ -0,0 +1,21 @@ +name: Test Composite Action +on: + push: + branches: [main] + +jobs: + greet: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + + - name: Run composite action + id: greet + uses: ./.github/actions/setup-and-greet + with: + name: "Uttam" + language: "hi" + + - name: Print output + run: echo "Greeted status = ${{ steps.greet.outputs.greeted }}" diff --git a/2026/day-47/README.md b/2026/day-47/README.md new file mode 100644 index 0000000000..6279290536 --- /dev/null +++ b/2026/day-47/README.md @@ -0,0 +1,151 @@ +# Day 47 – Advanced Triggers: PR Events, Cron Schedules & Event-Driven Pipelines + +## Task +You've used `push` and basic `pull_request` triggers. But GitHub Actions supports **dozens of event types** — today you go deep into PR lifecycle events, scheduled cron jobs, and chaining workflows together. + +--- + +## Expected Output +- Multiple workflow files demonstrating advanced triggers +- A markdown file: `day-47-advanced-triggers.md` +- At least one scheduled workflow running on your repo + +--- + +## Challenge Tasks + +### Task 1: Pull Request Event Types +Create `.github/workflows/pr-lifecycle.yml` that triggers on `pull_request` with **specific activity types**: +1. Trigger on: `opened`, `synchronize`, `reopened`, `closed` +2. Add steps that: + - Print which event type fired: `${{ github.event.action }}` + - Print the PR title: `${{ github.event.pull_request.title }}` + - Print the PR author: `${{ github.event.pull_request.user.login }}` + - Print the source branch and target branch +3. Add a conditional step that only runs when the PR is **merged** (closed + merged = true) + +Test it: create a PR, push an update to it, then merge it. Watch the workflow fire each time with a different event type. + +--- + +### Task 2: PR Validation Workflow +Create `.github/workflows/pr-checks.yml` — a real-world PR gate: +1. Trigger on `pull_request` to `main` +2. Add a job `file-size-check` that: + - Checks out the code + - Fails if any file in the PR is larger than 1 MB +3. 
Add a job `branch-name-check` that: + - Reads the branch name from `${{ github.head_ref }}` + - Fails if it doesn't follow the pattern `feature/*`, `fix/*`, or `docs/*` +4. Add a job `pr-body-check` that: + - Reads the PR body: `${{ github.event.pull_request.body }}` + - Warns (but doesn't fail) if the PR description is empty + +**Verify:** Open a PR from a badly named branch — does the check fail? + +--- + +### Task 3: Scheduled Workflows (Cron Deep Dive) +Create `.github/workflows/scheduled-tasks.yml`: +1. Add a `schedule` trigger with cron: `'30 2 * * 1'` (every Monday at 2:30 AM UTC) +2. Add **another** cron entry: `'0 */6 * * *'` (every 6 hours) +3. In the job, print which schedule triggered using `${{ github.event.schedule }}` +4. Add a step that acts as a **health check** — curl a URL and check the response code + +Write in your notes: +- The cron expression for: every weekday at 9 AM IST +- The cron expression for: first day of every month at midnight +- Why GitHub says scheduled workflows may be delayed or skipped on inactive repos + +**Important:** Also add `workflow_dispatch` so you can test it manually without waiting for the schedule. + +--- + +### Task 4: Path & Branch Filters +Create `.github/workflows/smart-triggers.yml`: +1. Trigger on push but **only** when files in `src/` or `app/` change: + ```yaml + on: + push: + paths: + - 'src/**' + - 'app/**' + ``` +2. Add `paths-ignore` in a second workflow that skips runs when only docs change: + ```yaml + paths-ignore: + - '*.md' + - 'docs/**' + ``` +3. Add branch filters to only trigger on `main` and `release/*` branches +4. Test it: push a change to a `.md` file — does the workflow skip? + +Write in your notes: When would you use `paths` vs `paths-ignore`? + +--- + +### Task 5: `workflow_run` — Chain Workflows Together +Create two workflows: +1. `.github/workflows/tests.yml` — runs tests on every push +2. `.github/workflows/deploy-after-tests.yml` — triggers **only after** `tests.yml` completes successfully: + ```yaml + on: + workflow_run: + workflows: ["Run Tests"] + types: [completed] + ``` +3. In the deploy workflow, add a conditional: + - Only proceed if the triggering workflow **succeeded** (`${{ github.event.workflow_run.conclusion == 'success' }}`) + - Print a warning and exit if it failed + +**Verify:** Push a commit — does the test workflow run first, then trigger the deploy workflow? + +--- + +### Task 6: `repository_dispatch` — External Event Triggers +1. Create `.github/workflows/external-trigger.yml` with trigger `repository_dispatch` +2. Set it to respond to event type: `deploy-request` +3. Print the client payload: `${{ github.event.client_payload.environment }}` +4. Trigger it using `curl` or `gh`: + ```bash + gh api repos///dispatches \ + -f event_type=deploy-request \ + -f client_payload='{"environment":"production"}' + ``` + +Write in your notes: When would an external system (like a Slack bot or monitoring tool) trigger a pipeline? 
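If you prefer raw `curl` over `gh`, an equivalent call looks roughly like this (OWNER/REPO are placeholders; the token needs `repo` scope):

```bash
# Hypothetical owner/repo — adjust to your repository
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/dispatches \
  -d '{"event_type":"deploy-request","client_payload":{"environment":"production"}}'
```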
+
+---
+
+## Hints
+- PR merge check: `if: github.event.pull_request.merged == true`
+- Cron syntax: `minute hour day-of-month month day-of-week`
+- Scheduled workflows only run on the **default branch**
+- `workflow_run` gives you access to the triggering workflow's conclusion and artifacts
+- `repository_dispatch` requires a personal access token with `repo` scope
+- Path filters use glob patterns — `**` matches nested directories
+
+---
+
+## Documentation
+Create `day-47-advanced-triggers.md` with:
+- Your workflow YAML files
+- The cron expressions from Task 3
+- Screenshot of the PR checks running on a pull request
+- Explanation of `workflow_run` vs `workflow_call` in your own words
+
+---
+
+## Submission
+1. Add `day-47-advanced-triggers.md` to `2026/day-47/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your PR validation workflow on LinkedIn — automated PR gates are a real DevOps flex.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham** diff --git a/2026/day-47/day-47-advanced-triggers.md b/2026/day-47/day-47-advanced-triggers.md new file mode 100644 index 0000000000..f3673272fa --- /dev/null +++ b/2026/day-47/day-47-advanced-triggers.md @@ -0,0 +1,23 @@
+# Why does GitHub say scheduled workflows may be delayed or skipped on inactive repos?
+
+## Because GitHub runs scheduled workflows on shared infrastructure — millions of repos use it.
+## So GitHub prioritizes active repos over inactive ones.
+
+# When would you use paths vs paths-ignore?
+## Use `paths` when you want the workflow to run only when files matching a pattern change; use `paths-ignore` when you want to skip runs for changes that only touch files matching a pattern.
+
+# When would an external system (like a Slack bot or monitoring tool) trigger a pipeline?
+
+## Whenever it hits the repo's API:
+## A Slack bot has a /deploy command → someone types it → bot hits GitHub API → workflow runs
+## A monitoring tool detects the server is down → automatically hits GitHub API → workflow runs to restart the server
+
+# Explanation of workflow_run vs workflow_call
+## workflow_run triggers one workflow after another workflow completes, so you can chain pipelines and check the first one's conclusion.
+## workflow_call makes a workflow reusable — other workflows can call it like a function.
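A side-by-side sketch of the two triggers (two separate hypothetical workflow files shown in one block; the names are illustrative):

```yaml
# File 1 — chained pipeline: fires AFTER the "Run Tests" workflow finishes
on:
  workflow_run:
    workflows: ["Run Tests"]
    types: [completed]

---
# File 2 — reusable pipeline: other workflows call this one like a function
on:
  workflow_call:
    inputs:
      app_name:
        type: string
        required: true
```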
+
+# The cron expression for: every weekday at 9 AM IST
+## '30 3 * * 1-5'
+
+# The cron expression for: first day of every month at midnight
+## 0 0 1 * * diff --git a/2026/day-47/day-47.png b/2026/day-47/day-47.png new file mode 100644 index 0000000000..57a415eaf7 Binary files /dev/null and b/2026/day-47/day-47.png differ diff --git a/2026/day-47/deploy-after-tests.yml b/2026/day-47/deploy-after-tests.yml new file mode 100644 index 0000000000..19e3d3d426 --- /dev/null +++ b/2026/day-47/deploy-after-tests.yml @@ -0,0 +1,21 @@
+name: deploy
+on:
+  workflow_run:
+    workflows: ["Run Tests"]
+    types: [completed]
+
+jobs:
+  condition:
+    runs-on: ubuntu-latest
+    steps:
+      - name: condition to fulfill
+        run: |
+          if [ "${{ github.event.workflow_run.conclusion }}" == 'success' ]; then
+            echo "The workflow is running successfully"
+          else
+            echo "Trigger Workflow failed"
+            exit 1
+          fi
+      - name: deploy-tests
+        run: echo "This message is printed when the workflow is triggered after successful completion of another workflow" \ No newline at end of file diff --git a/2026/day-47/external-trigger.yml b/2026/day-47/external-trigger.yml new file mode 100644 index 0000000000..3c5c0a17ac --- /dev/null +++ b/2026/day-47/external-trigger.yml @@ -0,0 +1,20 @@
+name: Repository dispatch trigger
+on:
+  repository_dispatch:
+    types: ["deploy-request"]
+
+jobs:
+  external_trigger:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Respond to event type
+        run: echo "${{ github.event.client_payload.environment }}"
 diff --git a/2026/day-47/pr-checks.yml b/2026/day-47/pr-checks.yml new file mode 100644 index 0000000000..545063fb37 --- /dev/null +++ b/2026/day-47/pr-checks.yml @@ -0,0 +1,60 @@
+name: Pr validation workflow
+on:
+  pull_request:
+    branches: [main]
+
+jobs:
+  file-size-check:
+    runs-on: ubuntu-latest
+    steps:
+      - name: checkout code
+        uses: actions/checkout@v4
+      - name: fail if any file > 1MB
+        run: |
+          echo "Checking file sizes in PR"
+
+          # skip .git internals so packfiles don't trigger false failures
+          for file in $(find . -path ./.git -prune -o -type f -print); do
+            size=$(stat -c%s "$file")
+            if [ "$size" -gt 1048576 ]; then
+              echo "❌ $file is larger than 1MB"
+              exit 1
+            fi
+          done
+
+          echo "✅ All files are less than 1MB"
+
+  branch-name-check:
+    runs-on: ubuntu-latest
+    steps:
+      - name: branch name check
+        run: echo "${{ github.head_ref }}"
+      - name: failing if branch is not in recognized pattern
+        env:
+          BRANCH: "${{ github.head_ref }}"
+        run: |
+          if [[ "$BRANCH" != feature/* && \
+                "$BRANCH" != fix/* && \
+                "$BRANCH" != docs/* ]]; then
+            echo "((FAILED)) '$BRANCH' does not match allowed patterns."
+            echo "Allowed= feature/*, fix/*, docs/*"
+            exit 1
+          fi
+
+          echo "✅ branch pattern is verified"
+
+  pr-body-check:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Reads the PR body
+        run: |
+          if [ -z "${{ github.event.pull_request.body }}" ]; then
+            echo "<< WARNING >>"
+            echo "The PR description is empty"
+          else
+            echo "${{ github.event.pull_request.body }}"
+          fi
 diff --git a/2026/day-47/pr-lifecycle.yml b/2026/day-47/pr-lifecycle.yml new file mode 100644 index 0000000000..341f5ba32e --- /dev/null +++ b/2026/day-47/pr-lifecycle.yml @@ -0,0 +1,20 @@
+name: pull request events
+on:
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+
+jobs:
+  log-event:
+    runs-on: ubuntu-latest
+    steps:
+      - name: print trigger action
+        run: echo "${{ github.event.action }}"
+      - name: print pr title
+        run: echo "${{ github.event.pull_request.title }}"
+      - name: print pull request author
+        run: echo "${{ github.event.pull_request.user.login }}"
+      - name: print source branch
+        run: echo "${{ github.head_ref }}"
+      - name: print target branch
+        run: echo "${{ github.base_ref }}"
+      - name: runs only when the PR is merged
+        if: github.event.pull_request.merged == true
+        run: echo "PR was merged into ${{ github.base_ref }}" diff --git a/2026/day-47/scheduled-tasks.yml b/2026/day-47/scheduled-tasks.yml new file mode 100644 index 0000000000..4decbf6476 --- /dev/null +++ b/2026/day-47/scheduled-tasks.yml @@ -0,0 +1,28 @@
+name: scheduled workflows with cronjob
+on:
+  workflow_dispatch:
+  schedule:
+    - cron: '30 2 * * 1'
+    - cron: '0 */6 * * *'
+
+jobs:
+  schedule_triggered:
+    runs-on: ubuntu-latest
+    steps:
+      - name: schedule trigger used
+        run: |
+          if [ -z "${{ github.event.schedule }}" ]; then
+            echo "Triggered manually via workflow_dispatch"
+          else
+            echo "Triggered by cron: ${{ github.event.schedule }}"
+          fi
+      - name: Health check
+        run: |
+          response=$(curl -s -o /dev/null -w "%{http_code}" -L https://google.com)
+          if [ "$response" -ne 200 ]; then
+            echo "Health check failed! Response code: $response"
+            exit 1
+          else
+            echo "Health check passed!"
+          fi
 diff --git a/2026/day-47/smart-triggers-ignore.yml b/2026/day-47/smart-triggers-ignore.yml new file mode 100644 index 0000000000..fff3c7ae86 --- /dev/null +++ b/2026/day-47/smart-triggers-ignore.yml @@ -0,0 +1,17 @@
+name: files to ignore
+on:
+  push:
+    branches:
+      - main
+      - release/*
+    paths-ignore:
+      - '*.md'
+      - 'docs/**'
+jobs:
+  path_to_ignore:
+    runs-on: ubuntu-latest
+    steps:
+      - name: branch where is workflow
+        run: echo "${{ github.ref_name }}" \ No newline at end of file diff --git a/2026/day-47/smart-triggers.yml b/2026/day-47/smart-triggers.yml new file mode 100644 index 0000000000..17cbee2916 --- /dev/null +++ b/2026/day-47/smart-triggers.yml @@ -0,0 +1,15 @@
+name: smart triggers
+on:
+  push:
+    branches:
+      - main
+      - release/*
+    paths:
+      - 'src/**'
+      - 'app/**'
+jobs:
+  path_to_trigger:
+    runs-on: ubuntu-latest
+    steps:
+      - name: branch where is workflow
+        run: echo "${{ github.ref_name }}" \ No newline at end of file diff --git a/2026/day-47/tests.yml b/2026/day-47/tests.yml new file mode 100644 index 0000000000..502843bf89 --- /dev/null +++ b/2026/day-47/tests.yml @@ -0,0 +1,11 @@
+name: Run Tests
+on:
+  push:
+
+jobs:
+  tests:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Running test
+        run: echo "The tests are running" \ No newline at end of file diff --git a/2026/day-48/README.md b/2026/day-48/README.md new file mode 100644 index 0000000000..d3eae5a38b --- /dev/null +++ b/2026/day-48/README.md @@ -0,0 +1,161 @@
+# Day 48 – GitHub Actions Project: End-to-End CI/CD Pipeline
+
+## Task
+You've learned workflows, triggers, secrets, Docker builds, reusable workflows, and advanced events. Today you **put it all together** in one project — a complete, production-style CI/CD pipeline that builds, tests, and deploys using everything you've learned from Day 40 to Day 47.
+
+This is your GitHub Actions capstone.
+
+---
+
+## Expected Output
+- A GitHub repo with a working app, Dockerfile, and complete CI/CD pipeline
+- At least 3 workflow files working together
+- A markdown file: `day-48-actions-project.md`
+- Screenshot of your full pipeline in action
+
+---
+
+## Challenge Tasks
+
+### Task 1: Set Up the Project Repo
+1. Create a new repo called `github-actions-capstone` (or use your existing `github-actions-practice`)
+2. Add a simple app — pick any one:
+   - A Python Flask/FastAPI app with one endpoint
+   - A Node.js Express app with one endpoint
+   - Your Dockerized app from Day 36
+3. Add a `Dockerfile` and a basic test (even a script that curls the health endpoint counts)
+4. Add a `README.md` with a project description
+
+---
+
+### Task 2: Reusable Workflow — Build & Test
+Create `.github/workflows/reusable-build-test.yml`:
+1. Trigger: `workflow_call`
+2. Inputs: `python_version` (or `node_version`), `run_tests` (boolean, default: true)
+3. Steps:
+   - Check out code
+   - Set up the language runtime
+   - Install dependencies
+   - Run tests (only if `run_tests` is true)
+   - Set output: `test_result` with value `passed` or `failed`
+
+This workflow does NOT deploy — it only builds and tests.
+
+---
+
+### Task 3: Reusable Workflow — Docker Build & Push
+Create `.github/workflows/reusable-docker.yml`:
+1. Trigger: `workflow_call`
+2. Inputs: `image_name` (string), `tag` (string)
+3. Secrets: `docker_username`, `docker_token`
+4. Steps:
+   - Check out code
+   - Log in to Docker Hub
+   - Build and push the image with the given tag
+   - Set output: `image_url` with the full image path
+
+---
+
+### Task 4: PR Pipeline
+Create `.github/workflows/pr-pipeline.yml`:
+1. 
Trigger: `pull_request` to `main` (types: `opened`, `synchronize`) +2. Call the reusable build-test workflow: + - Run tests: `true` +3. Add a standalone job `pr-comment` that: + - Runs after the build-test job + - Prints a summary: "PR checks passed for branch: ``" +4. Do **NOT** build or push Docker images on PRs + +**Verify:** Open a PR — does it run tests only (no Docker push)? + +--- + +### Task 5: Main Branch Pipeline +Create `.github/workflows/main-pipeline.yml`: +1. Trigger: `push` to `main` +2. Job 1: Call the reusable build-test workflow +3. Job 2 (depends on Job 1): Call the reusable Docker workflow + - Tag: `latest` and `sha-` +4. Job 3 (depends on Job 2): `deploy` job that: + - Prints "Deploying image: `` to production" + - Uses `environment: production` (set this up in repo Settings → Environments) + - Requires manual approval if you've set up environment protection rules + +**Verify:** Merge a PR to `main` — does it run tests → build Docker → deploy in sequence? + +--- + +### Task 6: Scheduled Health Check +Create `.github/workflows/health-check.yml`: +1. Trigger: `schedule` with cron `'0 */12 * * *'` (every 12 hours) + `workflow_dispatch` for manual testing +2. Steps: + - Pull your latest Docker image + - Run the container in detached mode + - Wait 5 seconds, then curl the health endpoint + - Print pass/fail based on the response + - Stop and remove the container +3. Add a step that creates a summary using `$GITHUB_STEP_SUMMARY`: + ```bash + echo "## Health Check Report" >> $GITHUB_STEP_SUMMARY + echo "- Image: myapp:latest" >> $GITHUB_STEP_SUMMARY + echo "- Status: PASSED" >> $GITHUB_STEP_SUMMARY + echo "- Time: $(date)" >> $GITHUB_STEP_SUMMARY + ``` + +--- + +### Task 7: Add Badges & Documentation +1. Add status badges for all your workflows to the repo `README.md` +2. Add a **pipeline architecture diagram** in your notes — draw (or describe) the flow: + ``` + PR opened → build & test → PR checks pass + Merge to main → build & test → Docker build & push → deploy + Every 12 hours → health check + ``` +3. Fill in your notes: What would you add next? (Slack notifications? Multi-environment? Rollback?) + +--- + +## Brownie Points: Add Security to Your Pipeline +Want to go above and beyond? Add a **DevSecOps** step to your main pipeline: +1. Add `aquasecurity/trivy-action` after the Docker build step to scan your image for vulnerabilities +2. Fail the pipeline if any **CRITICAL** severity CVE is found +3. Upload the scan report as an artifact + +This is a preview of what you'll do in depth on **Day 49**. If you get this working today, you're already thinking like a DevSecOps engineer. + +--- + +## Hints +- Environment protection: Repo Settings → Environments → Add `production` → enable "Required reviewers" +- `$GITHUB_STEP_SUMMARY` renders markdown in the Actions run summary page +- Short SHA for tags: `$(echo ${{ github.sha }} | cut -c1-7)` +- Reusable workflow outputs: accessed via `${{ needs..outputs. }}` +- Use `actions/github-script` if you want to post PR comments programmatically + +--- + +## Documentation +Create `day-48-actions-project.md` with: +- Your pipeline architecture (the flow diagram from Task 7) +- All workflow YAML files +- Screenshot of a PR running the test-only pipeline +- Screenshot of a main branch push running the full pipeline +- Docker Hub link to your pushed image +- What you'd improve next + +--- + +## Submission +1. Add `day-48-actions-project.md` to `2026/day-48/` +2. 
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share your complete pipeline architecture on LinkedIn — you just built production-grade CI/CD from scratch using only GitHub Actions. That's serious DevOps skill.
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-48/day-48(1).png b/2026/day-48/day-48(1).png
new file mode 100644
index 0000000000..9f043219b0
Binary files /dev/null and b/2026/day-48/day-48(1).png differ
diff --git a/2026/day-48/day-48-actions-project.md b/2026/day-48/day-48-actions-project.md
new file mode 100644
index 0000000000..ae41cb0b09
--- /dev/null
+++ b/2026/day-48/day-48-actions-project.md
@@ -0,0 +1,270 @@
+# pipeline architecture
+
+# workflow files (.yml)
+## main-pipeline.yml
+```yaml
+name: main branch pipeline
+
+on:
+  push:
+    branches: [master]
+
+jobs:
+  build-test:
+    uses: ./.github/workflows/reusable-build-test.yml
+    with:
+      run_tests: true
+
+  prep:
+    runs-on: ubuntu-latest
+    outputs:
+      short_sha: ${{ steps.vars.outputs.short_sha }}
+    steps:
+      - id: vars
+        run: echo "short_sha=$(echo $GITHUB_SHA | cut -c1-7)" >> $GITHUB_OUTPUT
+
+  build-push:
+    uses: ./.github/workflows/reusable-docker.yml
+    needs: [prep, build-test]
+    with:
+      image_name: ${{ github.event.repository.name }}
+      tag: ${{ needs.prep.outputs.short_sha }}
+    secrets:
+      docker_username: ${{ secrets.DOCKER_USERNAME }}
+      docker_token: ${{ secrets.DOCKER_TOKEN }}
+
+  deploy:
+    runs-on: ubuntu-latest
+    needs: [build-push, prep]
+    environment: environment
+    steps:
+      - name: deploy message
+        run: |
+          echo "Deploying image: ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:${{ needs.prep.outputs.short_sha }}"
+      - name: environment info
+        run: |
+          echo "The environment being used is : ${{ vars.SITE }}"
+      - name: success message
+        if: success()
+        run: echo "SUCCESSFUL"
+```
+## health-check.yml
+
+```yaml
+name: health check
+on:
+  workflow_dispatch:
+  schedule:
+    - cron: '0 */12 * * *'
+
+jobs:
+  pull_image:
+    runs-on: ubuntu-latest
+    outputs:
+      health_check_result: ${{ steps.summary.outputs.github_output }}
+    steps:
+      - name: pull image
+        run: |
+          docker pull ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:latest
+
+      - name: run container
+        run: |
+          docker rm -f health_check_container || true
+          docker run -d -p 5000:5000 --name health_check_container \
+            ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:latest
+
+      - name: healthcheck after waiting 5 seconds
+        run: |
+          sleep 5
+          if curl -sf http://localhost:5000/health; then
+            echo "Health check passed"
+          else
+            echo "Health check failed"
+            exit 1
+          fi
+
+      - name: cleanup
+        if: always() # ✅ always runs
+        run: |
+          docker rm -f health_check_container || true
+
+      - name: summary step
+        id: summary
+        if: always() # ✅ always runs
+        run: |
+          if [ "${{ job.status }}" == "success" ]; then
+            STATUS="PASSED ✅"
+          else
+            STATUS="FAILED ❌"
+          fi
+          echo "## Health Check Report" >> $GITHUB_STEP_SUMMARY
+          echo "- Image: ${{ secrets.DOCKER_USERNAME }}/github-actions-capstone:latest" >> $GITHUB_STEP_SUMMARY
+          echo "- Status: $STATUS" >> $GITHUB_STEP_SUMMARY
+          echo "- Time: $(date)" >> $GITHUB_STEP_SUMMARY
+          echo "github_output=$STATUS" >> $GITHUB_OUTPUT
+```
+## pr-pipeline.yml
+
+```yaml
+name: pull requests pipeline
+on:
+  pull_request:
+    branches: [master]
+    types: [opened, synchronize]
+
+jobs:
+  pr-pipeline:
+    uses: ./.github/workflows/reusable-build-test.yml
+    with:
+      run_tests: true
+  pr-comment:
+    runs-on: ubuntu-latest
+    needs: pr-pipeline
+    steps:
+      - name: pr checks
+        run: |
+          echo "PR checks passed for branch: ${{ github.ref }}"
+```
+
+## reusable-build-test.yml
+
+```yaml
+name: reusable workflow build & test
+on:
+  workflow_call:
+    inputs:
+      python_version:
+        description: "python version to use"
+        default: "3.13"
+        required: false
+        type: string
+      run_tests:
+        description: "Tests to run"
+        type: boolean
+        default: true
+        required: false
+    outputs:
+      test-result:
+        description: "Test value passed or failed"
+        value: ${{ jobs.build-and-test.outputs.test_result }}
+
+jobs:
+  build-and-test:
+    runs-on: ubuntu-latest
+    outputs:
+      test_result: ${{ steps.set_result.outputs.test_result }}
+    steps:
+      - name: code checkout
+        uses: actions/checkout@v4
+      - name: setup language runtime
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ inputs.python_version }}
+      - name: installing dependencies
+        run: |
+          pip install -r requirements.txt
+          pip install -r requirements-cicd.txt
+      - name: run tests
+        id: run_tests
+        if: ${{ inputs.run_tests }}
+        run: |
+          flake8 app.py
+      - name: set output
+        id: set_result
+        if: always()
+        run: |
+          if [[ "${{ steps.run_tests.outcome }}" == "success" || "${{ steps.run_tests.outcome }}" == "skipped" ]]; then
+            echo "test_result=passed" >> $GITHUB_OUTPUT
+          else
+            echo "test_result=failed" >> $GITHUB_OUTPUT
+          fi
+```
+## reusable-docker.yml
+```yaml
+name: reusable workflow docker build & push
+on:
+  workflow_call:
+    inputs:
+      image_name:
+        description: "name of image"
+        required: true
+        type: string
+      tag:
+        description: "tag of the image"
+        required: true
+        type: string
+    outputs:
+      image_url:
+        description: "full image path"
+        value: ${{ jobs.build-and-push.outputs.image_url }}
+
+    secrets:
+      docker_username:
+        description: "dockerhub username"
+        required: true
+      docker_token:
+        description: "dockerhub secret token"
+        required: true
+
+
+jobs:
+  build-and-push:
+    runs-on: ubuntu-latest
+    outputs:
+      image_url: ${{ steps.image_url.outputs.image_url }}
+    steps:
+      - name: checkout code
+        uses: actions/checkout@v4
+
+      - name: set lowercase image name # ✅ add this step
+        id: image
+        run: |
+          echo "name=$(echo '${{ inputs.image_name }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
+
+      - name: login to docker hub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.docker_username }}
+          password: ${{ secrets.docker_token }}
+
+      - name: build and push
+        uses: docker/build-push-action@v6
+        with:
+          context: .
+          push: true
+          tags: |
+            ${{ secrets.docker_username }}/${{ steps.image.outputs.name }}:latest
+            ${{ secrets.docker_username }}/${{ steps.image.outputs.name }}:${{ inputs.tag }}
+
+      - name: image url output
+        id: image_url
+        if: success()
+        run: |
+          echo "image_url=${{ secrets.docker_username }}/${{ steps.image.outputs.name }}:${{ inputs.tag }}" >> $GITHUB_OUTPUT
+```
+
+# Screenshot of a PR running the test-only pipeline
+
+
+![alt text](day-48(1).png)
+
+
+
+# Screenshot of a main branch push running the full pipeline
+
+
+![alt text](day-48.png)
+
+
+# Docker Hub link to my pushed image
+## https://hub.docker.com/repository/docker/uttamtripathi/github-actions-capstone
+
+
+# What I'll improve next
+
+## Currently my app deploys even without manual approval, so the next improvement is to add environment protection rules that require a reviewer before the deploy job runs.
+
+
+
+
diff --git a/2026/day-48/day-48.png b/2026/day-48/day-48.png
new file mode 100644
index 0000000000..695de0d087
Binary files /dev/null and b/2026/day-48/day-48.png differ
diff --git a/2026/day-49/README.md b/2026/day-49/README.md
new file mode 100644
index 0000000000..39fabbdeb9
--- /dev/null
+++ b/2026/day-49/README.md
@@ -0,0 +1,219 @@
+# Day 49 – DevSecOps: Add Security to Your CI/CD Pipeline
+
+## Task
+You can build and deploy automatically. But what if your Docker image has a known vulnerability? What if someone accidentally commits a password? Today you learn **DevSecOps** — adding simple, automated security checks to your pipeline so problems are caught **before** they reach production.
+
+Don't worry — this isn't a security course. You're just adding a few smart steps to the pipeline you already built.
+
+---
+
+## Expected Output
+- Security scanning added to your `github-actions-capstone` repo (from Day 48)
+- A markdown file: `day-49-devsecops.md`
+- Screenshot of a security scan running in your pipeline
+
+---
+
+## What is DevSecOps?
+
+Think of it like this:
+
+**Without DevSecOps:**
+> You build the app → deploy it → a security team finds a vulnerability weeks later → you scramble to fix it
+
+**With DevSecOps:**
+> You open a PR → the pipeline automatically checks for vulnerabilities → you fix it before it ever gets merged
+
+**That's it.** DevSecOps = adding security checks to the pipeline you already have. Not a separate process — just a few extra steps.
+
+---
+
+## Key Principles (Keep These in Mind)
+
+1. **Catch problems early** — A vulnerability found in a PR takes 5 minutes to fix. The same vulnerability found in production takes days.
+
+2. **Automate the checks** — Don't rely on someone remembering to check. Let the pipeline do it every time.
+
+3. **Block on critical issues** — If a scan finds a serious vulnerability, the pipeline should fail — just like a failing test.
+
+4. **Never put secrets in code** — Use GitHub Secrets (you learned this on Day 44). No `.env` files, no hardcoded API keys.
+
+5. **Give only the access needed** — Your workflow doesn't need write access to everything. Limit permissions.
+
+---
+
+## Challenge Tasks
+
+### Task 1: Scan Your Docker Image for Vulnerabilities
+Your Docker image might use a base image with known security issues. Let's find out.
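+
+If you want to see results before wiring anything into CI, you can run the same scanner locally. A minimal sketch, assuming the Trivy CLI is installed (the image name is a placeholder):
+
+```bash
+# Scan a local or remote image for CRITICAL/HIGH CVEs
+trivy image --severity CRITICAL,HIGH your-username/your-app:latest
+```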
+ +Add this step to your main branch pipeline (after Docker build, before deploy): +```yaml +- name: Scan Docker Image for Vulnerabilities + uses: aquasecurity/trivy-action@master + with: + image-ref: 'your-username/your-app:latest' + format: 'table' + exit-code: '1' + severity: 'CRITICAL,HIGH' +``` + +What this does: +- `trivy` scans your Docker image for known CVEs (Common Vulnerabilities and Exposures) +- `format: 'table'` prints a readable table in the logs +- `exit-code: '1'` means **fail the pipeline** if CRITICAL or HIGH vulnerabilities are found +- If it passes, your image is clean — proceed to push and deploy + +Push and check the Actions tab. Read the scan output. + +**Verify:** Can you see the vulnerability table in the logs? Did it pass or fail? + +Write in your notes: What CVEs (if any) were found? What base image are you using? + +--- + +### Task 2: Enable GitHub's Built-in Secret Scanning +GitHub can automatically detect if someone pushes a secret (API key, token, password) to your repo. + +1. Go to your repo → Settings → **Code security and analysis** +2. Enable **Secret scanning** +3. If available, also enable **Push protection** — this blocks the push entirely if a secret is detected + +That's it — no workflow changes needed. GitHub does this automatically. + +Write in your notes: +- What is the difference between secret scanning and push protection? +- What happens if GitHub detects a leaked AWS key in your repo? + +--- + +### Task 3: Scan Dependencies for Known Vulnerabilities +If your app uses packages (pip, npm, etc.), those packages might have known vulnerabilities. + +Add this to your **PR pipeline** (not the main pipeline): +```yaml +- name: Check Dependencies for Vulnerabilities + uses: actions/dependency-review-action@v4 + with: + fail-on-severity: critical +``` + +This checks any **new** dependencies added in the PR against a vulnerability database. If a dependency has a critical CVE, the PR check fails. + +Test it: +1. Open a PR that adds a package to your app +2. Check the Actions tab — did the dependency review run? + +**Verify:** Does the dependency review show up as a check on your PR? + +--- + +### Task 4: Add Permissions to Your Workflows +By default, workflows get broad permissions. Lock them down. + +Add this block near the top of your workflow files (after `on:`): +```yaml +permissions: + contents: read +``` + +If a workflow needs to comment on PRs, add: +```yaml +permissions: + contents: read + pull-requests: write +``` + +Update at least 2 of your existing workflow files with a `permissions` block. + +Write in your notes: Why is it a good practice to limit workflow permissions? What could go wrong if a compromised action has write access to your repo? + +--- + +### Task 5: See the Full Secure Pipeline +Look at what your pipeline does now: + +``` +PR opened + → build & test + → dependency vulnerability check ← NEW (Day 49) + → PR checks pass or fail + +Merge to main + → build & test + → Docker build + → Trivy image scan (fail on CRITICAL) ← NEW (Day 49) + → Docker push (only if scan passes) + → deploy + +Always active + → GitHub secret scanning ← NEW (Day 49) + → push protection for secrets ← NEW (Day 49) +``` + +Draw this diagram in your notes. You just built a **DevSecOps pipeline** — security is now part of your automation, not an afterthought. + +--- + +## Brownie Points (Optional — For the Curious) + +### Pin Actions to Commit SHAs +Tags like `@v4` can be moved by the action author. 
For extra security, pin to the exact commit: +```yaml +# Instead of this: +uses: actions/checkout@v4 + +# Use this: +uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1 +``` +This protects against supply chain attacks where a tag is silently changed. + +### Upload Scan Results to GitHub Security Tab +Add SARIF output to Trivy and upload it — your scan results will appear in the repo's **Security** tab: +```yaml +- uses: aquasecurity/trivy-action@master + with: + image-ref: 'your-username/your-app:latest' + format: 'sarif' + output: 'trivy-results.sarif' +- uses: github/codeql-action/upload-sarif@v3 + with: + sarif_file: 'trivy-results.sarif' +``` + +### Learn About OIDC (Keyless Authentication) +Instead of storing cloud credentials as long-lived secrets, GitHub Actions can use OIDC to get short-lived tokens automatically. Research: "GitHub Actions OIDC" — it's how production pipelines authenticate to AWS, GCP, and Azure without storing any keys. + +--- + +## Hints +- Trivy action docs: look up `aquasecurity/trivy-action` on GitHub +- `exit-code: '1'` = fail the step, `exit-code: '0'` = just warn +- Dependency review only works on `pull_request` events (not on push) +- Permissions block goes at the workflow level or the job level +- GitHub secret scanning is free for public repos + +--- + +## Documentation +Create `day-49-devsecops.md` with: +- What DevSecOps means in your own words (2-3 sentences) +- Screenshot of Trivy scan output in your pipeline +- Your updated pipeline diagram with security steps +- What you learned about secret scanning and dependency review + +--- + +## Submission +1. Add `day-49-devsecops.md` to `2026/day-49/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your pipeline diagram on LinkedIn — "My CI/CD pipeline now scans for vulnerabilities automatically." Simple, powerful, and impressive. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-50/.gitignore b/2026/day-50/.gitignore new file mode 100644 index 0000000000..4680e38f01 --- /dev/null +++ b/2026/day-50/.gitignore @@ -0,0 +1,2 @@ +minikube-linux-amd64 +kubectl diff --git a/2026/day-50/README.md b/2026/day-50/README.md new file mode 100644 index 0000000000..a066eac943 --- /dev/null +++ b/2026/day-50/README.md @@ -0,0 +1,214 @@ +# Day 50 – Kubernetes Architecture and Cluster Setup + +## Task +You have been building and shipping containers with Docker. But what happens when you need to run hundreds of containers across multiple servers? You need an orchestrator. Today you start your Kubernetes journey — understand the architecture, set up a local cluster, and run your first `kubectl` commands. + +This is where things get real. + +--- + +## Expected Output +- A running local Kubernetes cluster (kind or minikube) +- A markdown file: `day-50-k8s-setup.md` +- Screenshot of `kubectl get nodes` showing your cluster is ready + +--- + +## Challenge Tasks + +### Task 1: Recall the Kubernetes Story +Before touching a terminal, write down from memory: + +1. Why was Kubernetes created? What problem does it solve that Docker alone cannot? +2. Who created Kubernetes and what was it inspired by? +3. What does the name "Kubernetes" mean? + +Do not look anything up yet. Write what you remember from the session, then verify against the official docs. + +--- + +### Task 2: Draw the Kubernetes Architecture +From memory, draw or describe the Kubernetes architecture. 
Your diagram should include: + +**Control Plane (Master Node):** +- API Server — the front door to the cluster, every command goes through it +- etcd — the database that stores all cluster state +- Scheduler — decides which node a new pod should run on +- Controller Manager — watches the cluster and makes sure the desired state matches reality + +**Worker Node:** +- kubelet — the agent on each node that talks to the API server and manages pods +- kube-proxy — handles networking rules so pods can communicate +- Container Runtime — the engine that actually runs containers (containerd, CRI-O) + +After drawing, verify your understanding: +- What happens when you run `kubectl apply -f pod.yaml`? Trace the request through each component. +- What happens if the API server goes down? +- What happens if a worker node goes down? + +--- + +### Task 3: Install kubectl +`kubectl` is the CLI tool you will use to talk to your Kubernetes cluster. + +Install it: +```bash +# macOS +brew install kubectl + +# Linux (amd64) +curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" +chmod +x kubectl +sudo mv kubectl /usr/local/bin/ + +# Windows (with chocolatey) +choco install kubernetes-cli +``` + +Verify: +```bash +kubectl version --client +``` + +--- + +### Task 4: Set Up Your Local Cluster +Choose **one** of the following. Both give you a fully functional Kubernetes cluster on your machine. + +**Option A: kind (Kubernetes in Docker)** +```bash +# Install kind +# macOS +brew install kind + +# Linux +curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64 +chmod +x ./kind +sudo mv ./kind /usr/local/bin/kind + +# Create a cluster +kind create cluster --name devops-cluster + +# Verify +kubectl cluster-info +kubectl get nodes +``` + +**Option B: minikube** +```bash +# Install minikube +# macOS +brew install minikube + +# Linux +curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 +sudo install minikube-linux-amd64 /usr/local/bin/minikube + +# Start a cluster +minikube start + +# Verify +kubectl cluster-info +kubectl get nodes +``` + +Write down: Which one did you choose and why? + +--- + +### Task 5: Explore Your Cluster +Now that your cluster is running, explore it: + +```bash +# See cluster info +kubectl cluster-info + +# List all nodes +kubectl get nodes + +# Get detailed info about your node +kubectl describe node + +# List all namespaces +kubectl get namespaces + +# See ALL pods running in the cluster (across all namespaces) +kubectl get pods -A +``` + +Look at the pods running in the `kube-system` namespace: +```bash +kubectl get pods -n kube-system +``` + +You should see pods like `etcd`, `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, `coredns`, and `kube-proxy`. These are the architecture components you drew in Task 2 — running as pods inside the cluster. + +**Verify:** Can you match each running pod in `kube-system` to a component in your architecture diagram? 
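+
+One quick way to line the running pods up with your diagram is to print the control-plane component label. This is a sketch that assumes a kind/kubeadm-style cluster, where the static control-plane pods carry a `component` label; other distributions may label things differently:
+
+```bash
+# Adds a COMPONENT column (kube-apiserver, etcd, kube-scheduler, ...)
+# for pods that carry the label; CoreDNS and others show an empty value
+kubectl get pods -n kube-system -L component
+```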
+ +--- + +### Task 6: Practice Cluster Lifecycle +Build muscle memory with cluster operations: + +```bash +# Delete your cluster +kind delete cluster --name devops-cluster +# (or: minikube delete) + +# Recreate it +kind create cluster --name devops-cluster +# (or: minikube start) + +# Verify it is back +kubectl get nodes +``` + +Try these useful commands: +```bash +# Check which cluster kubectl is connected to +kubectl config current-context + +# List all available contexts (clusters) +kubectl config get-contexts + +# See the full kubeconfig +kubectl config view +``` + +Write down: What is a kubeconfig? Where is it stored on your machine? + +--- + +## Hints +- kind requires Docker to be running (it creates clusters using containers) +- minikube can use Docker, VirtualBox, or other drivers +- The default kubeconfig file is at `~/.kube/config` +- `kubectl get pods -A` is short for `kubectl get pods --all-namespaces` +- If `kubectl` cannot connect, check if your cluster is running: `kind get clusters` or `minikube status` +- `-o wide` flag gives extra details: `kubectl get nodes -o wide` + +--- + +## Documentation +Create `day-50-k8s-setup.md` with: +- Kubernetes history in your own words (3-4 sentences) +- Your architecture diagram (text-based or image) +- Which tool you chose (kind/minikube) and why +- Screenshot of `kubectl get nodes` and `kubectl get pods -n kube-system` +- What each kube-system pod does + +--- + +## Submission +1. Add `day-50-k8s-setup.md` to `2026/day-50/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Started my Kubernetes journey today. Set up a local cluster, explored the architecture, and saw the control plane components running as actual pods. The orchestration chapter begins." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-50/day-50-k8s-setup.md b/2026/day-50/day-50-k8s-setup.md new file mode 100644 index 0000000000..0297afb0a0 --- /dev/null +++ b/2026/day-50/day-50-k8s-setup.md @@ -0,0 +1,16 @@ +# Architecture diagram +## ![alt text](k8s-arch.jpeg) + +# kubectl get nodes & kubectl get pods +## ![alt text](kubectl.png) + +# Why was Kubernetes created? What problem does it solve that Docker alone cannot? +## Kubernetes was created because Docker alone cannot manage hundreds of containers across multiple hosts, cannot auto-heal failed containers, cannot do load balancing, and cannot scale them automatically based on traffic/load. + +# Who created Kubernetes and what was it inspired by? +## Kubernetes was created by Google, inspired by their internal system called Borg (which Google used to manage their own infrastructure), and was open-sourced in 2014 so that others could also contribute. + +# What does the name "Kubernetes" mean? +## It means Helmsman or Pilot (the person who steers a ship) in Greek. That's why the Kubernetes logo is a ship's wheel. 
+ + diff --git a/2026/day-50/k8s-arch.jpeg b/2026/day-50/k8s-arch.jpeg new file mode 100644 index 0000000000..7981482634 Binary files /dev/null and b/2026/day-50/k8s-arch.jpeg differ diff --git a/2026/day-50/kubectl.png b/2026/day-50/kubectl.png new file mode 100644 index 0000000000..aba3c6dd29 Binary files /dev/null and b/2026/day-50/kubectl.png differ diff --git a/2026/day-51/README.md b/2026/day-51/README.md new file mode 100644 index 0000000000..ee93b6e8db --- /dev/null +++ b/2026/day-51/README.md @@ -0,0 +1,247 @@ +# Day 51 – Kubernetes Manifests and Your First Pods + +## Task +Yesterday you set up a cluster. Today you actually deploy something. You will learn the structure of a Kubernetes manifest file and use it to create Pods — the smallest deployable unit in Kubernetes. By the end of today, you should be able to write a Pod definition from scratch without looking at docs. + +--- + +## Expected Output +- At least 3 Pod manifests written by hand +- A markdown file: `day-51-pods.md` +- Screenshot of `kubectl get pods` showing your running pods + +--- + +## The Anatomy of a Kubernetes Manifest + +Every Kubernetes resource is defined using a YAML manifest with four required top-level fields: + +```yaml +apiVersion: v1 # Which API version to use +kind: Pod # What type of resource +metadata: # Name, labels, namespace + name: my-pod + labels: + app: my-app +spec: # The actual specification (what you want) + containers: + - name: my-container + image: nginx:latest + ports: + - containerPort: 80 +``` + +- `apiVersion` — tells Kubernetes which API group to use. For Pods, it is `v1`. +- `kind` — the resource type. Today it is `Pod`. Later you will use `Deployment`, `Service`, etc. +- `metadata` — the identity of your resource. `name` is required. `labels` are key-value pairs used for organization and selection. +- `spec` — the desired state. For a Pod, this means which containers to run, which images, which ports, etc. + +--- + +## Challenge Tasks + +### Task 1: Create Your First Pod (Nginx) +Create a file called `nginx-pod.yaml`: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: nginx-pod + labels: + app: nginx +spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 +``` + +Apply it: +```bash +kubectl apply -f nginx-pod.yaml +``` + +Verify: +```bash +kubectl get pods +kubectl get pods -o wide +``` + +Wait until the STATUS shows `Running`. Then explore: +```bash +# Detailed info about the pod +kubectl describe pod nginx-pod + +# Read the logs +kubectl logs nginx-pod + +# Get a shell inside the container +kubectl exec -it nginx-pod -- /bin/bash + +# Inside the container, run: +curl localhost:80 +exit +``` + +**Verify:** Can you see the Nginx welcome page when you curl from inside the pod? + +--- + +### Task 2: Create a Custom Pod (BusyBox) +Write a new manifest `busybox-pod.yaml` from scratch (do not copy-paste the nginx one): + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: busybox-pod + labels: + app: busybox + environment: dev +spec: + containers: + - name: busybox + image: busybox:latest + command: ["sh", "-c", "echo Hello from BusyBox && sleep 3600"] +``` + +Apply and verify: +```bash +kubectl apply -f busybox-pod.yaml +kubectl get pods +kubectl logs busybox-pod +``` + +Notice the `command` field — BusyBox does not run a long-lived server like Nginx. Without a command that keeps it running, the container would exit immediately and the pod would go into `CrashLoopBackOff`. + +**Verify:** Can you see "Hello from BusyBox" in the logs? 
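+
+If you want to see that failure mode for yourself, a quick experiment (the pod name here is just a placeholder):
+
+```bash
+# This container exits immediately; the default restart policy (Always)
+# keeps restarting it until the STATUS column shows CrashLoopBackOff
+kubectl run crash-demo --image=busybox:latest -- echo "bye"
+kubectl get pod crash-demo --watch   # Ctrl+C to stop watching
+kubectl delete pod crash-demo        # clean up
+```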
+ +--- + +### Task 3: Imperative vs Declarative +You have been using the declarative approach (writing YAML, then `kubectl apply`). Kubernetes also supports imperative commands: + +```bash +# Create a pod without a YAML file +kubectl run redis-pod --image=redis:latest + +# Check it +kubectl get pods +``` + +Now extract the YAML that Kubernetes generated: +```bash +kubectl get pod redis-pod -o yaml +``` + +Compare this output with your hand-written manifests. Notice how much extra metadata Kubernetes adds automatically (status, timestamps, uid, resource version). + +You can also use dry-run to generate YAML without creating anything: +```bash +kubectl run test-pod --image=nginx --dry-run=client -o yaml +``` + +This is a powerful trick — use it to quickly scaffold a manifest, then customize it. + +**Verify:** Save the dry-run output to a file and compare its structure with your nginx-pod.yaml. What fields are the same? What is different? + +--- + +### Task 4: Validate Before Applying +Before applying a manifest, you can validate it: + +```bash +# Check if the YAML is valid without actually creating the resource +kubectl apply -f nginx-pod.yaml --dry-run=client + +# Validate against the cluster's API (server-side validation) +kubectl apply -f nginx-pod.yaml --dry-run=server +``` + +Now intentionally break your YAML (remove the `image` field or add an invalid field) and run dry-run again. See what error you get. + +**Verify:** What error does Kubernetes give when the image field is missing? + +--- + +### Task 5: Pod Labels and Filtering +Labels are how Kubernetes organizes and selects resources. You added labels in your manifests — now use them: + +```bash +# List all pods with their labels +kubectl get pods --show-labels + +# Filter pods by label +kubectl get pods -l app=nginx +kubectl get pods -l environment=dev + +# Add a label to an existing pod +kubectl label pod nginx-pod environment=production + +# Verify +kubectl get pods --show-labels + +# Remove a label +kubectl label pod nginx-pod environment- +``` + +Write a manifest for a third pod with at least 3 labels (app, environment, team). Apply it and practice filtering. + +--- + +### Task 6: Clean Up +Delete all the pods you created: + +```bash +# Delete by name +kubectl delete pod nginx-pod +kubectl delete pod busybox-pod +kubectl delete pod redis-pod + +# Or delete using the manifest file +kubectl delete -f nginx-pod.yaml + +# Verify everything is gone +kubectl get pods +``` + +Notice that when you delete a standalone Pod, it is gone forever. There is no controller to recreate it. This is why in production you use Deployments (coming on Day 52) instead of bare Pods. 
+
+---
+
+## Hints
+- `kubectl apply -f` creates or updates a resource from a file
+- `kubectl get pods -o wide` shows the node and IP address
+- `kubectl describe pod <pod-name>` shows events — very useful for debugging
+- `kubectl logs <pod-name>` shows container stdout/stderr
+- `kubectl exec -it <pod-name> -- /bin/sh` gives you a shell (use `/bin/sh` if `/bin/bash` is not available)
+- Labels are just key-value pairs — they have no meaning to Kubernetes itself, only to selectors
+- `--dry-run=client -o yaml` is your best friend for generating manifest templates
+
+---
+
+## Documentation
+Create `day-51-pods.md` with:
+- The four required fields of a Kubernetes manifest and what each does
+- Your nginx, busybox, and third pod manifests
+- Difference between imperative (`kubectl run`) and declarative (`kubectl apply -f`)
+- Screenshot of your pods running
+- What happens when you delete a standalone Pod?
+
+---
+
+## Submission
+1. Add `day-51-pods.md` and your YAML files to `2026/day-51/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share on LinkedIn: "Wrote my first Kubernetes Pod manifests from scratch today. Created pods, got a shell inside them, and learned the difference between imperative and declarative approaches."
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-51/day-51-pods.md b/2026/day-51/day-51-pods.md
new file mode 100644
index 0000000000..e7b2d9f1f5
--- /dev/null
+++ b/2026/day-51/day-51-pods.md
@@ -0,0 +1,53 @@
+# The four required fields of a Kubernetes manifest and what each does?
+## apiVersion — which version of the Kubernetes API to use (e.g. apps/v1, v1)
+## kind — the type of resource to create (e.g. Deployment, Service, Pod)
+## metadata — identifying info about the object, at minimum a name
+## spec — the desired state of the object; what you want Kubernetes to create/maintain
+
+## nginx pod:
+```yml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-pod
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+```
+
+## busybox pod:
+```yml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-pod
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    command: ["sleep", "3600"]
+```
+
+## third pod (alpine):
+
+```yml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: third-pod
+spec:
+  containers:
+  - name: alpine
+    image: alpine
+    command: ["sleep", "3600"]
+```
+
+## Difference between imperative (kubectl run) and declarative (kubectl apply -f)
+### Imperative (kubectl run) — you tell Kubernetes what to do directly via a command, quick but not reproducible. Declarative (kubectl apply -f) — you define the desired state in a YAML file and Kubernetes figures out how to get there, repeatable and version-controllable.
+
+## What happens when you delete a standalone Pod?
+### It's gone permanently. Unlike a Pod managed by a Deployment or ReplicaSet, there's no controller watching it — so Kubernetes does not reschedule or recreate it.
+
+
diff --git a/2026/day-52/README.md b/2026/day-52/README.md
new file mode 100644
index 0000000000..e000fb2044
--- /dev/null
+++ b/2026/day-52/README.md
@@ -0,0 +1,269 @@
+# Day 52 – Kubernetes Namespaces and Deployments
+
+## Task
+Yesterday you created standalone Pods. The problem? Delete a Pod and it is gone forever — no one recreates it. Today you fix that with Deployments, the real way to run applications in Kubernetes. You will also learn Namespaces, which let you organize and isolate resources inside a cluster.
+ +--- + +## Expected Output +- At least 2 namespaces created and used +- A Deployment running with multiple replicas +- A scaled Deployment and a rolling update performed +- A markdown file: `day-52-namespaces-deployments.md` +- Screenshot of `kubectl get deployments` and `kubectl get pods` across namespaces + +--- + +## Challenge Tasks + +### Task 1: Explore Default Namespaces +Kubernetes comes with built-in namespaces. List them: + +```bash +kubectl get namespaces +``` + +You should see at least: +- `default` — where your resources go if you do not specify a namespace +- `kube-system` — Kubernetes internal components (API server, scheduler, etc.) +- `kube-public` — publicly readable resources +- `kube-node-lease` — node heartbeat tracking + +Check what is running inside `kube-system`: +```bash +kubectl get pods -n kube-system +``` + +These are the control plane components keeping your cluster alive. Do not touch them. + +**Verify:** How many pods are running in `kube-system`? + +--- + +### Task 2: Create and Use Custom Namespaces +Create two namespaces — one for a development environment and one for staging: + +```bash +kubectl create namespace dev +kubectl create namespace staging +``` + +Verify they exist: +```bash +kubectl get namespaces +``` + +You can also create a namespace from a manifest: +```yaml +# namespace.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: production +``` + +```bash +kubectl apply -f namespace.yaml +``` + +Now run a pod in a specific namespace: +```bash +kubectl run nginx-dev --image=nginx:latest -n dev +kubectl run nginx-staging --image=nginx:latest -n staging +``` + +List pods across all namespaces: +```bash +kubectl get pods -A +``` + +Notice that `kubectl get pods` without `-n` only shows the `default` namespace. You must specify `-n ` or use `-A` to see everything. + +**Verify:** Does `kubectl get pods` show these pods? What about `kubectl get pods -A`? + +--- + +### Task 3: Create Your First Deployment +A Deployment tells Kubernetes: "I want X replicas of this Pod running at all times." If a Pod crashes, the Deployment controller recreates it automatically. + +Create a file `nginx-deployment.yaml`: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + namespace: dev + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.24 + ports: + - containerPort: 80 +``` + +Key differences from a standalone Pod: +- `kind: Deployment` instead of `kind: Pod` +- `apiVersion: apps/v1` instead of `v1` +- `replicas: 3` tells Kubernetes to maintain 3 identical pods +- `selector.matchLabels` connects the Deployment to its Pods +- `template` is the Pod template — the Deployment creates Pods using this blueprint + +Apply it: +```bash +kubectl apply -f nginx-deployment.yaml +``` + +Check the result: +```bash +kubectl get deployments -n dev +kubectl get pods -n dev +``` + +You should see 3 pods with names like `nginx-deployment-xxxxx-yyyyy`. + +**Verify:** What do the READY, UP-TO-DATE, and AVAILABLE columns mean in the deployment output? + +--- + +### Task 4: Self-Healing — Delete a Pod and Watch It Come Back +This is the key difference between a Deployment and a standalone Pod. 
+
+```bash
+# List pods
+kubectl get pods -n dev
+
+# Delete one of the deployment's pods (use an actual pod name from your output)
+kubectl delete pod <pod-name> -n dev
+
+# Immediately check again
+kubectl get pods -n dev
+```
+
+The Deployment controller detects that only 2 of 3 desired replicas exist and immediately creates a new one. The deleted pod is replaced within seconds.
+
+**Verify:** Is the replacement pod's name the same as the one you deleted, or different?
+
+---
+
+### Task 5: Scale the Deployment
+Change the number of replicas:
+
+```bash
+# Scale up to 5
+kubectl scale deployment nginx-deployment --replicas=5 -n dev
+kubectl get pods -n dev
+
+# Scale down to 2
+kubectl scale deployment nginx-deployment --replicas=2 -n dev
+kubectl get pods -n dev
+```
+
+Watch how Kubernetes creates or terminates pods to match the desired count.
+
+You can also scale by editing the manifest — change `replicas: 4` in your YAML file and run `kubectl apply -f nginx-deployment.yaml` again.
+
+**Verify:** When you scaled down from 5 to 2, what happened to the extra pods?
+
+---
+
+### Task 6: Rolling Update
+Update the Nginx image version to trigger a rolling update:
+
+```bash
+kubectl set image deployment/nginx-deployment nginx=nginx:1.25 -n dev
+```
+
+Watch the rollout in real time:
+```bash
+kubectl rollout status deployment/nginx-deployment -n dev
+```
+
+Kubernetes replaces pods one by one — old pods are terminated only after new ones are healthy. This means zero downtime.
+
+Check the rollout history:
+```bash
+kubectl rollout history deployment/nginx-deployment -n dev
+```
+
+Now roll back to the previous version:
+```bash
+kubectl rollout undo deployment/nginx-deployment -n dev
+kubectl rollout status deployment/nginx-deployment -n dev
+```
+
+Verify the image is back to the previous version:
+```bash
+kubectl describe deployment nginx-deployment -n dev | grep Image
+```
+
+**Verify:** What image version is running after the rollback?
+
+---
+
+### Task 7: Clean Up
+```bash
+kubectl delete deployment nginx-deployment -n dev
+kubectl delete pod nginx-dev -n dev
+kubectl delete pod nginx-staging -n staging
+kubectl delete namespace dev staging production
+```
+
+Deleting a namespace removes everything inside it. Be very careful with this in production.
+
+```bash
+kubectl get namespaces
+kubectl get pods -A
+```
+
+**Verify:** Are all your resources gone?
+
+---
+
+## Hints
+- `kubectl get <resource> -n <namespace>` — target a specific namespace
+- `kubectl get <resource> -A` — list resources across all namespaces
+- `selector.matchLabels` in a Deployment must match `template.metadata.labels` — if they do not match, the Deployment will not manage the Pods
+- `kubectl scale deployment <name> --replicas=N` — quick way to scale
+- `kubectl set image` updates a container image without editing the YAML
+- `kubectl rollout undo` rolls back to the previous revision
+- `kubectl rollout history` shows past revisions of a Deployment
+- Deployments create ReplicaSets behind the scenes — you can see them with `kubectl get replicasets -n <namespace>`
+
+---
+
+## Documentation
+Create `day-52-namespaces-deployments.md` with:
+- What namespaces are and why you would use them
+- Your Deployment manifest and an explanation of each section
+- What happens when you delete a Pod managed by a Deployment vs a standalone Pod
+- How scaling works (both imperative and declarative)
+- How rolling updates and rollbacks work
+- Screenshot of your Deployment and Pods running
+
+---
+
+## Submission
+1. Add `day-52-namespaces-deployments.md` and your YAML files to `2026/day-52/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share on LinkedIn: "Learned Kubernetes Namespaces and Deployments today. Created self-healing deployments, scaled them up and down, and performed a zero-downtime rolling update with rollback."
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-52/day-52(1).png b/2026/day-52/day-52(1).png
new file mode 100644
index 0000000000..f4746270e2
Binary files /dev/null and b/2026/day-52/day-52(1).png differ
diff --git a/2026/day-52/day-52(2).png b/2026/day-52/day-52(2).png
new file mode 100644
index 0000000000..5941f253dd
Binary files /dev/null and b/2026/day-52/day-52(2).png differ
diff --git a/2026/day-52/day-52(3).png b/2026/day-52/day-52(3).png
new file mode 100644
index 0000000000..4c1d7206cc
Binary files /dev/null and b/2026/day-52/day-52(3).png differ
diff --git a/2026/day-52/day-52-namespaces-deployments.md b/2026/day-52/day-52-namespaces-deployments.md
new file mode 100644
index 0000000000..f74184255b
--- /dev/null
+++ b/2026/day-52/day-52-namespaces-deployments.md
@@ -0,0 +1,70 @@
+# Important screenshots of today's session
+![alt text](day-52(3).png)
+
+![alt text](day-52(1).png)
+
+![alt text](day-52(2).png)
+
+![alt text](day-52.png)
+
+
+```yml
+# namespace.yml
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: production
+```
+
+
+
+
+## Deployment manifest with explanation
+```yml
+# deployment.yml
+kind: Deployment # what resource to create
+apiVersion: apps/v1 # which API group the resource belongs to
+metadata: # identity/info of the resource
+  name: nginx-deployment # what name to give it
+  namespace: dev # which namespace it will belong to
+  labels: # these are like identification marks
+    app: nginx
+spec: # the deployment's desired state, like replicas and the pod template
+  replicas: 5
+  selector:
+    matchLabels: # find and own pods that have these labels
+      app: nginx # label to use
+  template: # blueprint for creating each pod
+    metadata: # identity of each pod that gets created
+      labels:
+        app: nginx # every pod gets this label
+
+    spec: # specification of the container
+      containers:
+      - name: nginx # name of the container
+        image: nginx:1.24 # image to use
+        ports: # ports on which the container will listen
+        - containerPort: 80
+```
+
+## What namespaces are and why you would use them?
+### Namespaces are like separate environments inside the cluster, like different rooms inside a house, so that requests don't end up in the wrong room.
+### We use them to avoid name conflicts: two teams can each have a pod named 'nginx' as long as they are in different namespaces.
+
+## What happens when you delete a Pod managed by a Deployment vs a standalone Pod?
+### A pod deleted from a Deployment gets recreated, because the controller maintains the desired state declared in the manifest.
+## A Deployment creates one (or more during updates) ReplicaSet, and the ReplicaSet maintains the desired number of Pods.
+### A standalone pod, by contrast, will not be recreated.
+
+## How scaling works (both imperative and declarative)
+### Imperative scaling → you manually run a command (like kubectl scale) to change replicas right now. No automation.
+### Declarative scaling → you define desired state in YAML (replicas: 3), and Kubernetes ensures it stays that way.
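+
+### For example, with the Deployment above (same names as in the manifest):
+```bash
+# Imperative: change the replica count right now
+kubectl scale deployment nginx-deployment --replicas=5 -n dev
+
+# Declarative: edit replicas in deployment.yml, then re-apply the desired state
+kubectl apply -f deployment.yml
+```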
+
+## How rolling updates and rollbacks work
+### In a rolling update, old pods keep running while new pods are created; only after the new pods come up successfully do the old pods get deleted.
+### Rollback = going back to a previous working version of your Deployment.
+### How it actually works
+#### Every time you update a Deployment, Kubernetes keeps revision history (ReplicaSets)
+#### If the new version is broken, you can revert to a previous ReplicaSet
+
+
diff --git a/2026/day-52/day-52.png b/2026/day-52/day-52.png
new file mode 100644
index 0000000000..6224a5220d
Binary files /dev/null and b/2026/day-52/day-52.png differ
diff --git a/2026/day-53/README.md b/2026/day-53/README.md
new file mode 100644
index 0000000000..ba807cef34
--- /dev/null
+++ b/2026/day-53/README.md
@@ -0,0 +1,316 @@
+# Day 53 – Kubernetes Services
+
+## Task
+You have Deployments running multiple Pods, but how do you actually talk to them? Pods get random IP addresses that change every time they restart. Services solve this by giving your Pods a stable network endpoint. Today you will create different types of Services and understand when to use each one.
+
+---
+
+## Expected Output
+- A Deployment exposed using ClusterIP, NodePort, and LoadBalancer services
+- Verified Pod-to-Service communication from inside the cluster
+- A markdown file: `day-53-services.md`
+- Screenshot of `kubectl get services` showing your running services
+
+---
+
+## Why Services?
+
+Every Pod gets its own IP address. But there are two problems:
+1. Pod IPs are **not stable** — when a Pod restarts or gets replaced, it gets a new IP
+2. A Deployment runs **multiple Pods** — which IP do you connect to?
+
+A Service solves both problems. It provides:
+- A **stable IP and DNS name** that never changes
+- **Load balancing** across all Pods that match its selector
+
+```
+[Client] --> [Service (stable IP)] --> [Pod 1]
+                                   --> [Pod 2]
+                                   --> [Pod 3]
+```
+
+---
+
+## Challenge Tasks
+
+### Task 1: Deploy the Application
+First, create a Deployment that you will expose with Services. Create `app-deployment.yaml`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: web-app
+  labels:
+    app: web-app
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: web-app
+  template:
+    metadata:
+      labels:
+        app: web-app
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.25
+        ports:
+        - containerPort: 80
+```
+
+```bash
+kubectl apply -f app-deployment.yaml
+kubectl get pods -o wide
+```
+
+Note the individual Pod IPs. These will change if pods restart — that is the problem Services fix.
+
+**Verify:** Are all 3 pods running? Note down their IP addresses.
+
+---
+
+### Task 2: ClusterIP Service (Internal Access)
+ClusterIP is the default Service type. It gives your Pods a stable internal IP that is only reachable from within the cluster.
+
+Create `clusterip-service.yaml`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: web-app-clusterip
+spec:
+  type: ClusterIP
+  selector:
+    app: web-app
+  ports:
+    - port: 80
+      targetPort: 80
+```
+
+Key fields:
+- `selector.app: web-app` — this Service routes traffic to all Pods with the label `app: web-app`
+- `port: 80` — the port the Service listens on
+- `targetPort: 80` — the port on the Pod to forward traffic to
+
+```bash
+kubectl apply -f clusterip-service.yaml
+kubectl get services
+```
+
+You should see `web-app-clusterip` with a CLUSTER-IP address. This IP is stable — it will not change even if Pods restart.
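+
+You can also check which Pod IPs currently sit behind that stable address:
+
+```bash
+# The Endpoints object lists the Pod IPs the Service is routing to right now
+kubectl get endpoints web-app-clusterip
+```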
+
+Now test it from inside the cluster:
+```bash
+# Run a temporary pod to test connectivity
+kubectl run test-client --image=busybox:latest --rm -it --restart=Never -- sh
+
+# Inside the test pod, run:
+wget -qO- http://web-app-clusterip
+exit
+```
+
+You should see the Nginx welcome page. The Service load-balanced your request to one of the 3 Pods.
+
+**Verify:** Does the Service respond? Try running the wget command multiple times — the Service distributes traffic across all healthy Pods.
+
+---
+
+### Task 3: Discover Services with DNS
+Kubernetes has a built-in DNS server. Every Service gets a DNS entry automatically:
+
+```
+<service-name>.<namespace>.svc.cluster.local
+```
+
+Test this:
+```bash
+kubectl run dns-test --image=busybox:latest --rm -it --restart=Never -- sh
+
+# Inside the pod:
+# Short name (works within the same namespace)
+wget -qO- http://web-app-clusterip
+
+# Full DNS name
+wget -qO- http://web-app-clusterip.default.svc.cluster.local
+
+# Look up the DNS entry
+nslookup web-app-clusterip
+exit
+```
+
+Both the short name and the full DNS name resolve to the same ClusterIP. In practice, you use the short name when communicating within the same namespace and the full name when reaching across namespaces.
+
+**Verify:** What IP does `nslookup` return? Does it match the CLUSTER-IP from `kubectl get services`?
+
+---
+
+### Task 4: NodePort Service (External Access via Node)
+A NodePort Service exposes your application on a port on every node in the cluster. This lets you access the Service from outside the cluster.
+
+Create `nodeport-service.yaml`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: web-app-nodeport
+spec:
+  type: NodePort
+  selector:
+    app: web-app
+  ports:
+    - port: 80
+      targetPort: 80
+      nodePort: 30080
+```
+
+- `nodePort: 30080` — the port opened on every node (must be in range 30000-32767)
+- Traffic flow: `<node-ip>:30080` -> Service -> Pod:80
+
+```bash
+kubectl apply -f nodeport-service.yaml
+kubectl get services
+```
+
+Access the service:
+```bash
+# If using Minikube
+minikube service web-app-nodeport --url
+
+# If using Kind, get the node IP first
+kubectl get nodes -o wide
+# Then curl <node-ip>:30080
+
+# If using Docker Desktop
+curl http://localhost:30080
+```
+
+**Verify:** Can you see the Nginx welcome page from your browser or terminal using the NodePort?
+
+---
+
+### Task 5: LoadBalancer Service (Cloud External Access)
+In a cloud environment (AWS, GCP, Azure), a LoadBalancer Service provisions a real external load balancer that routes traffic to your nodes.
+
+Create `loadbalancer-service.yaml`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: web-app-loadbalancer
+spec:
+  type: LoadBalancer
+  selector:
+    app: web-app
+  ports:
+    - port: 80
+      targetPort: 80
+```
+
+```bash
+kubectl apply -f loadbalancer-service.yaml
+kubectl get services
+```
+
+On a local cluster (Minikube, Kind, Docker Desktop), the EXTERNAL-IP will show `<pending>` because there is no cloud provider to create a real load balancer. This is expected.
+
+If you are using Minikube:
+```bash
+# Minikube can simulate a LoadBalancer
+minikube tunnel
+# In another terminal, check again:
+kubectl get services
+```
+
+In a real cloud cluster, the EXTERNAL-IP would be a public IP address or hostname provisioned by the cloud provider.
+
+**Verify:** What does the EXTERNAL-IP column show? Why is it `<pending>` on a local cluster?
+
+---
+
+### Task 6: Understand the Service Types Side by Side
+Check all three services:
+
+```bash
+kubectl get services -o wide
+```
+
+Compare them:
+
+| Type | Accessible From | Use Case |
+|------|----------------|----------|
+| ClusterIP | Inside the cluster only | Internal communication between services |
+| NodePort | Outside via `<node-ip>:<node-port>` | Development, testing, direct node access |
+| LoadBalancer | Outside via cloud load balancer | Production traffic in cloud environments |
+
+Each type builds on the previous one:
+- LoadBalancer creates a NodePort, which creates a ClusterIP
+- So a LoadBalancer service also has a ClusterIP and a NodePort
+
+Verify this:
+```bash
+kubectl describe service web-app-loadbalancer
+```
+
+You should see all three: a ClusterIP, a NodePort, and the LoadBalancer configuration.
+
+**Verify:** Does the LoadBalancer service also have a ClusterIP and NodePort assigned?
+
+---
+
+### Task 7: Clean Up
+```bash
+kubectl delete -f app-deployment.yaml
+kubectl delete -f clusterip-service.yaml
+kubectl delete -f nodeport-service.yaml
+kubectl delete -f loadbalancer-service.yaml
+
+kubectl get pods
+kubectl get services
+```
+
+Only the built-in `kubernetes` service in the default namespace should remain.
+
+**Verify:** Is everything cleaned up?
+
+---
+
+## Hints
+- `selector` in a Service must match `labels` on the Pods — if they do not match, the Service routes traffic to nothing
+- `kubectl get endpoints <service-name>` shows which Pod IPs a Service is currently routing to
+- `port` is what the Service listens on; `targetPort` is what the Pod listens on — they do not have to be the same number
+- NodePort range is 30000-32767; if you do not specify `nodePort`, Kubernetes picks one automatically
+- Use `kubectl describe service <service-name>` to see the full configuration including Endpoints
+- `kubectl get services -o wide` shows the selector each service uses
+- To test ClusterIP services, you must test from inside the cluster (use a temporary pod)
+
+---
+
+## Documentation
+Create `day-53-services.md` with:
+- What problem Services solve and how they relate to Pods and Deployments
+- Your three Service manifests with an explanation of each type
+- The difference between ClusterIP, NodePort, and LoadBalancer
+- How Kubernetes DNS works for service discovery
+- What Endpoints are and how to inspect them
+- Screenshot of your services and the test output
+
+---
+
+## Submission
+1. Add `day-53-services.md` and your YAML files to `2026/day-53/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share on LinkedIn: "Learned Kubernetes Services today — ClusterIP for internal traffic, NodePort for node-level access, and LoadBalancer for production. Services give Pods a stable identity and load balancing."
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**
diff --git a/2026/day-53/day-53(1).png b/2026/day-53/day-53(1).png
new file mode 100644
index 0000000000..bf35e6dd1d
Binary files /dev/null and b/2026/day-53/day-53(1).png differ
diff --git a/2026/day-53/day-53-services.md b/2026/day-53/day-53-services.md
new file mode 100644
index 0000000000..43837dba50
--- /dev/null
+++ b/2026/day-53/day-53-services.md
@@ -0,0 +1,65 @@
+
+# What problem Services solve and how they relate to Pods and Deployments
+## Pods are ephemeral and get new IP addresses when they restart; Services provide a single, permanent IP and DNS name to act as a stable entry point. They decouple the requester from the specific backend Pods, ensuring traffic always finds a healthy instance.
+
+
+# Your three Service manifests with an explanation of each type
+```yml
+# loadbalancer-service.yml
+kind: Service
+apiVersion: v1
+metadata:
+  name: web-app-loadbalancer
+spec:
+  type: LoadBalancer
+  selector:
+    app: web-app
+  ports:
+    - port: 80
+      targetPort: 80
+```
+
+```yml
+# cluster-service.yml
+kind: Service
+apiVersion: v1
+metadata:
+  name: web-app-clusterip
+spec:
+  type: ClusterIP
+  selector:
+    app: web-app
+  ports:
+    - port: 80
+      targetPort: 80
+```
+
+```yml
+# nodeport-service.yml
+kind: Service
+apiVersion: v1
+metadata:
+  name: web-app-nodeport
+spec:
+  type: NodePort
+  selector:
+    app: web-app
+  ports:
+    - port: 80
+      targetPort: 80
+      nodePort: 30080
+```
+
+
+
+# The difference between ClusterIP, NodePort, and LoadBalancer
+## ClusterIP is for internal communication,
+## NodePort is a basic way to expose services to the outside world via the host IP, and
+## LoadBalancer is the enterprise standard for external access using a dedicated cloud IP. Think of them as levels of visibility: Internal → Host Network → Public Internet.
+
+# How Kubernetes DNS works for service discovery
+## Kubernetes runs a built-in DNS service (CoreDNS) that watches for new Services and creates a record for each (e.g., my-svc.my-namespace.svc.cluster.local). Pods can simply "call" another service by its name instead of tracking volatile IP addresses.
+
+# What Endpoints are and how to inspect them
+## Endpoints are the list of actual Pod IP addresses that match a Service's selector and are currently "Ready" to receive traffic. You can inspect them using kubectl get endpoints <service-name> or see them detailed under the "Endpoints" section of kubectl describe svc <service-name>.
+
diff --git a/2026/day-53/day-53.png b/2026/day-53/day-53.png
new file mode 100644
index 0000000000..47b98fc150
Binary files /dev/null and b/2026/day-53/day-53.png differ
diff --git a/2026/day-54/README.md b/2026/day-54/README.md
new file mode 100644
index 0000000000..29f5f8e18a
--- /dev/null
+++ b/2026/day-54/README.md
@@ -0,0 +1,112 @@
+# Day 54 – Kubernetes ConfigMaps and Secrets
+
+## Task
+Your application needs configuration — database URLs, feature flags, API keys. Hardcoding these into container images means rebuilding every time a value changes. Kubernetes solves this with ConfigMaps for non-sensitive config and Secrets for sensitive data.
+
+---
+
+## Expected Output
+- ConfigMaps created from literals and from a file
+- Secrets created and consumed in a Pod
+- A markdown file: `day-54-configmaps-secrets.md`
+
+---
+
+## Challenge Tasks
+
+### Task 1: Create a ConfigMap from Literals
+1. Use `kubectl create configmap` with `--from-literal` to create a ConfigMap called `app-config` with keys `APP_ENV=production`, `APP_DEBUG=false`, and `APP_PORT=8080`
+2. Inspect it with `kubectl describe configmap app-config` and `kubectl get configmap app-config -o yaml`
+3. Notice the data is stored as plain text — no encoding, no encryption
+
+**Verify:** Can you see all three key-value pairs?
+
+---
+
+### Task 2: Create a ConfigMap from a File
+1. Write a custom Nginx config file that adds a `/health` endpoint returning "healthy"
+2. Create a ConfigMap from this file using `kubectl create configmap nginx-config --from-file=default.conf=<path-to-file>`
+3. The key name (`default.conf`) becomes the filename when mounted into a Pod
+
+**Verify:** Does `kubectl get configmap nginx-config -o yaml` show the file contents?
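+
+If you want a starting point, here is a minimal sketch of such a config file plus the create command. It assumes the stock nginx image layout; the local filename is up to you:
+
+```bash
+cat > default.conf <<'EOF'
+server {
+    listen 80;
+    location / {
+        root /usr/share/nginx/html;
+        index index.html;
+    }
+    location /health {
+        return 200 "healthy\n";
+    }
+}
+EOF
+
+kubectl create configmap nginx-config --from-file=default.conf=default.conf
+```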
+
+---
+
+### Task 3: Use ConfigMaps in a Pod
+1. Write a Pod manifest that uses `envFrom` with `configMapRef` to inject all keys from `app-config` as environment variables. Use a busybox container that prints the values.
+2. Write a second Pod manifest that mounts `nginx-config` as a volume at `/etc/nginx/conf.d`. Use the nginx image.
+3. Test that the mounted config works: `kubectl exec <pod-name> -- curl -s http://localhost/health`
+
+Use environment variables for simple key-value settings. Use volume mounts for full config files.
+
+**Verify:** Does the `/health` endpoint respond?
+
+---
+
+### Task 4: Create a Secret
+1. Use `kubectl create secret generic db-credentials` with `--from-literal` to store `DB_USER=admin` and `DB_PASSWORD=s3cureP@ssw0rd`
+2. Inspect with `kubectl get secret db-credentials -o yaml` — the values are base64-encoded
+3. Decode a value: `echo '<base64-value>' | base64 --decode`
+
+**base64 is encoding, not encryption.** Anyone with cluster access can decode Secrets. The real advantages are RBAC separation, tmpfs storage on nodes, and optional encryption at rest.
+
+**Verify:** Can you decode the password back to plaintext?
+
+---
+
+### Task 5: Use Secrets in a Pod
+1. Write a Pod manifest that injects `DB_USER` as an environment variable using `secretKeyRef`
+2. In the same Pod, mount the entire `db-credentials` Secret as a volume at `/etc/db-credentials` with `readOnly: true`
+3. Verify: each Secret key becomes a file, and the content is the decoded plaintext value
+
+**Verify:** Are the mounted file values plaintext or base64?
+
+---
+
+### Task 6: Update a ConfigMap and Observe Propagation
+1. Create a ConfigMap `live-config` with a key `message=hello`
+2. Write a Pod that mounts this ConfigMap as a volume and reads the file in a loop every 5 seconds
+3. Update the ConfigMap: `kubectl patch configmap live-config --type merge -p '{"data":{"message":"world"}}'`
+4. Wait 30-60 seconds — the volume-mounted value updates automatically
+5. Environment variables from earlier tasks do NOT update — they are set at pod startup only
+
+**Verify:** Did the volume-mounted value change without a pod restart?
+
+---
+
+### Task 7: Clean Up
+Delete all pods, ConfigMaps, and Secrets you created.
+
+---
+
+## Hints
+- `--from-literal=KEY=VALUE` for command-line values, `--from-file=key=filename` for file contents
+- `envFrom` injects all keys; `env` with `valueFrom` injects individual keys
+- `echo -n 'value' | base64` — always use `-n` to avoid encoding a trailing newline
+- Volume-mounted ConfigMaps/Secrets auto-update; environment variables do not
+- `kubectl get secret <secret-name> -o jsonpath='{.data.KEY}' | base64 --decode` extracts and decodes a value
+
+---
+
+## Documentation
+Create `day-54-configmaps-secrets.md` with:
+- What ConfigMaps and Secrets are and when to use each
+- The difference between environment variables and volume mounts
+- Why base64 is encoding, not encryption
+- How ConfigMap updates propagate to volumes but not env vars
+
+---
+
+## Submission
+1. Add `day-54-configmaps-secrets.md` to `2026/day-54/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share on LinkedIn: "Learned Kubernetes ConfigMaps and Secrets today. Injected config as environment variables and volume mounts, and discovered that base64 encoding is not encryption."
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
diff --git a/2026/day-54/day-54(1).png b/2026/day-54/day-54(1).png new file mode 100644 index 0000000000..4d9a651959 Binary files /dev/null and b/2026/day-54/day-54(1).png differ diff --git a/2026/day-54/day-54(2).png b/2026/day-54/day-54(2).png new file mode 100644 index 0000000000..bc35d293b9 Binary files /dev/null and b/2026/day-54/day-54(2).png differ diff --git a/2026/day-54/day-54-configmaps-secrets.md b/2026/day-54/day-54-configmaps-secrets.md new file mode 100644 index 0000000000..ff0080a07a --- /dev/null +++ b/2026/day-54/day-54-configmaps-secrets.md @@ -0,0 +1,27 @@

# What ConfigMaps and Secrets are and when to use each
### A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
### ConfigMap does not provide secrecy or encryption. If the data you want to store is confidential, use a Secret rather than a ConfigMap.

# The difference between environment variables and volume mounts
### Use Environment Variables When:
#### Configuration data is small (< 1MB total)
#### Values are simple key-value pairs
#### Application expects standard environment variables
#### Configuration is truly static during pod lifetime
#### You need maximum portability across platforms

### Use Volume Mounts When:
#### Configuration files are large or complex
#### You need structured data (JSON, YAML, XML)
#### Configuration might change during runtime
#### You have binary data or certificates
#### File permissions and ownership matter
#### You need atomic updates to multiple files

# Why base64 is encoding, not encryption
### Base64 is an encoding scheme, not encryption, because it is a reversible, keyless transformation designed solely for data format compatibility rather than security.

# How ConfigMap updates propagate to volumes but not env vars
### Volumes: When a ConfigMap is mounted as a volume, the kubelet eventually updates the files in the container (delay depends on sync period and cache strategy). The application must detect and reload the file changes.
### Environment Variables: Values are injected once at pod startup. Updates do not propagate; a pod restart (e.g., via kubectl rollout restart) is required for changes to take effect.

diff --git a/2026/day-54/day-54.png b/2026/day-54/day-54.png new file mode 100644 index 0000000000..550c8c6e5d Binary files /dev/null and b/2026/day-54/day-54.png differ diff --git a/2026/day-55/README.md b/2026/day-55/README.md new file mode 100644 index 0000000000..fb47bfb946 --- /dev/null +++ b/2026/day-55/README.md @@ -0,0 +1,118 @@

# Day 55 – Persistent Volumes (PV) and Persistent Volume Claims (PVC)

## Task
Containers are ephemeral — when a Pod dies, everything inside it disappears. That is a serious problem for databases and anything that needs to survive a restart. Today you fix this with Persistent Volumes and Persistent Volume Claims.

---

## Expected Output
- Data loss demonstrated with an ephemeral Pod
- A PV and PVC created, bound, and data persisting across Pod deletions
- A markdown file: `day-55-persistent-volumes.md`

---

## Challenge Tasks

### Task 1: See the Problem — Data Lost on Pod Deletion
1. Write a Pod manifest that uses an `emptyDir` volume and writes a timestamped message to `/data/message.txt`
2. Apply it, verify the data exists with `kubectl exec`
3.
Delete the Pod, recreate it, check the file again — the old message is gone + +**Verify:** Is the timestamp the same or different after recreation? + +--- + +### Task 2: Create a PersistentVolume (Static Provisioning) +1. Write a PV manifest with `capacity: 1Gi`, `accessModes: ReadWriteOnce`, `persistentVolumeReclaimPolicy: Retain`, and `hostPath` pointing to `/tmp/k8s-pv-data` +2. Apply it and check `kubectl get pv` — status should be `Available` + +Access modes to know: +- `ReadWriteOnce (RWO)` — read-write by a single node +- `ReadOnlyMany (ROX)` — read-only by many nodes +- `ReadWriteMany (RWX)` — read-write by many nodes + +`hostPath` is fine for learning, not for production. + +**Verify:** What is the STATUS of the PV? + +--- + +### Task 3: Create a PersistentVolumeClaim +1. Write a PVC manifest requesting `500Mi` of storage with `ReadWriteOnce` access +2. Apply it and check both `kubectl get pvc` and `kubectl get pv` +3. Both should show `Bound` — Kubernetes matched them by capacity and access mode + +**Verify:** What does the VOLUME column in `kubectl get pvc` show? + +--- + +### Task 4: Use the PVC in a Pod — Data That Survives +1. Write a Pod manifest that mounts the PVC at `/data` using `persistentVolumeClaim.claimName` +2. Write data to `/data/message.txt`, then delete and recreate the Pod +3. Check the file — it should contain data from both Pods + +**Verify:** Does the file contain data from both the first and second Pod? + +--- + +### Task 5: StorageClasses and Dynamic Provisioning +1. Run `kubectl get storageclass` and `kubectl describe storageclass` +2. Note the provisioner, reclaim policy, and volume binding mode +3. With dynamic provisioning, developers only create PVCs — the StorageClass handles PV creation automatically + +**Verify:** What is the default StorageClass in your cluster? + +--- + +### Task 6: Dynamic Provisioning +1. Write a PVC manifest that includes `storageClassName: standard` (or your cluster's default) +2. Apply it — a PV should appear automatically in `kubectl get pv` +3. Use this PVC in a Pod, write data, verify it works + +**Verify:** How many PVs exist now? Which was manual, which was dynamic? + +--- + +### Task 7: Clean Up +1. Delete all pods first +2. Delete PVCs — check `kubectl get pv` to see what happened +3. The dynamic PV is gone (Delete reclaim policy). The manual PV shows `Released` (Retain policy). +4. Delete the remaining PV manually + +**Verify:** Which PV was auto-deleted and which was retained? Why? + +--- + +## Hints +- PVs are cluster-wide (not namespaced), PVCs are namespaced +- PV status: `Available` -> `Bound` -> `Released` +- If a PVC stays `Pending`, check for matching capacity and access modes +- `hostPath` data is lost if the Pod moves to a different node +- `storageClassName: ""` disables dynamic provisioning +- Reclaim policies: `Retain` (keep data) vs `Delete` (remove data) + +--- + +## Documentation +Create `day-55-persistent-volumes.md` with: +- Why containers need persistent storage +- What PVs and PVCs are and how they relate +- Static vs dynamic provisioning +- Access modes and reclaim policies + +--- + +## Submission +1. Add `day-55-persistent-volumes.md` to `2026/day-55/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Kubernetes Persistent Volumes and PVCs today. Proved container data is ephemeral, then fixed it with PVs. Also explored dynamic provisioning with StorageClasses." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-55/day-55_PV__PVC/PersisentVolume.yml b/2026/day-55/day-55_PV__PVC/PersisentVolume.yml new file mode 100644 index 0000000000..380f394f7e --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/PersisentVolume.yml @@ -0,0 +1,15 @@ +kind: PersistentVolume +apiVersion: v1 +metadata: + name: static-provision-volume + labels: + day: day-55 +spec: + storageClassName: manual + capacity: + storage: 1Gi + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + hostPath: + path: /tmp/k8s-pv-data diff --git a/2026/day-55/day-55_PV__PVC/day-55-persistent-volumes.md b/2026/day-55/day-55_PV__PVC/day-55-persistent-volumes.md new file mode 100644 index 0000000000..e2ac109b96 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/day-55-persistent-volumes.md @@ -0,0 +1,75 @@ +# Kubernetes Persistent Storage + +--- + +## Why Containers Need Persistent Storage + +- Containers are ephemeral — when a container restarts or dies, all data inside is lost +- Default container storage is tied to the container lifecycle +- Apps like databases, file uploads, logs need data to survive restarts +- Multiple containers may need to share the same data +- Without persistent storage, stateful apps cannot run reliably in Kubernetes + +--- + +## What PVs and PVCs Are and How They Relate + +### PersistentVolume (PV) + +- A piece of actual storage provisioned in the cluster +- Created by the cluster admin (or dynamically by a provisioner) +- Lives independently of any Pod +- Has its own lifecycle — not tied to a Pod or namespace + +### PersistentVolumeClaim (PVC) + +- A request for storage made by a user/app +- You specify how much storage you need and what access mode +- Kubernetes finds a matching PV and binds them together +- Pod uses the PVC, not the PV directly + +### How They Relate + +- PVC is like a ticket — PV is the actual storage +- Kubernetes matches a PVC to a suitable PV based on size, access mode, and StorageClass +- Once bound, that PV is exclusively reserved for that PVC + +--- + +## Static vs Dynamic Provisioning + +### Static Provisioning + +- Admin manually creates PVs in advance +- PVCs then bind to one of the available pre-created PVs +- Admin must know storage needs ahead of time +- If no matching PV exists, PVC stays Pending + +### Dynamic Provisioning + +- No need to pre-create PVs manually +- PVC references a StorageClass +- Kubernetes automatically provisions a PV when the PVC is created +- Needs a provisioner running in the cluster (e.g. 
local-path, AWS EBS, GCE PD)
- More flexible and scalable than static

---

## Access Modes

- `ReadWriteOnce (RWO)` — mounted as read-write by a single node only
- `ReadOnlyMany (ROX)` — mounted as read-only by many nodes simultaneously
- `ReadWriteMany (RWX)` — mounted as read-write by many nodes simultaneously
- Not all storage backends support all access modes
- For example, AWS EBS only supports RWO, NFS supports RWX

---

## Reclaim Policies

- `Retain` — PV is kept after PVC is deleted, data is preserved, admin must manually clean up
- `Delete` — PV and the underlying storage are automatically deleted when PVC is deleted
- `Recycle` — deprecated, used to do a basic scrub and make PV available again
- Default policy depends on the StorageClass being used
- Use `Retain` when data must not be lost accidentally
- Use `Delete` for temporary or dev workloads where cleanup should be automatic

diff --git a/2026/day-55/day-55_PV__PVC/pod.yml b/2026/day-55/day-55_PV__PVC/pod.yml new file mode 100644 index 0000000000..e042b2c0a7 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pod.yml @@ -0,0 +1,27 @@

# Problem - Data lost on pod recreation

kind: Pod
apiVersion: v1
metadata:
  name: ephemeral-pod
  namespace: volumes
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sh"]
    args:
      - "-c"
      - |
        mkdir -p /data
        MSG="[$(date '+%Y-%m-%d %H:%M:%S')] Message written"
        echo "$MSG" > /data/message.txt
        echo "$MSG"
        tail -f /dev/null
    volumeMounts:
      - mountPath: /data   # must match the path the script writes to
        name: empty-volume
  volumes:
    - name: empty-volume
      emptyDir: {}

diff --git a/2026/day-55/day-55_PV__PVC/pvc-dynamic.yml b/2026/day-55/day-55_PV__PVC/pvc-dynamic.yml new file mode 100644 index 0000000000..c4b77aad13 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc-dynamic.yml @@ -0,0 +1,13 @@

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-dynamic   # unique name so it can coexist with the static claim "myclaim"
  namespace: volumes
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: standard

diff --git a/2026/day-55/day-55_PV__PVC/pvc-pod-dynamic.yml b/2026/day-55/day-55_PV__PVC/pvc-pod-dynamic.yml new file mode 100644 index 0000000000..a38ba9e479 --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc-pod-dynamic.yml @@ -0,0 +1,17 @@

kind: Pod
apiVersion: v1
metadata:
  name: pvc-consumer
  namespace: volumes
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh", "-c", "while true; do echo writing; echo hello >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: storage
      mountPath: /data
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: myclaim-dynamic   # binds to the dynamically provisioned claim above

diff --git a/2026/day-55/day-55_PV__PVC/pvc-pod.yml b/2026/day-55/day-55_PV__PVC/pvc-pod.yml new file mode 100644 index 0000000000..32a5b1966a --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc-pod.yml @@ -0,0 +1,25 @@

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: volumes
spec:
  containers:
  - name: busybox
    image: busybox:latest
    volumeMounts:
    - mountPath: "/data"
      name: mypvc
    command: ["/bin/sh"]
    args:
      - "-c"
      - |
        mkdir -p /data
        MSG="[$(date '+%Y-%m-%d %H:%M:%S')] Message written"
        echo "$MSG" > /data/message.txt
        echo "$MSG"
        tail -f /dev/null
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: myclaim

diff --git a/2026/day-55/day-55_PV__PVC/pvc.yml b/2026/day-55/day-55_PV__PVC/pvc.yml new file mode 100644 index 0000000000..4eb53bd51e --- /dev/null +++ b/2026/day-55/day-55_PV__PVC/pvc.yml @@ -0,0 +1,17 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  namespace: volumes
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: manual
  selector:
    matchLabels:
      day: day-55

diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Day-55-stateful-stes-notes.md" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Day-55-stateful-stes-notes.md" new file mode 100644 index 0000000000..430d4d3621 --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Day-55-stateful-stes-notes.md" @@ -0,0 +1,245 @@

# Kubernetes StatefulSets — Complete Notes

---

## 1. What is a StatefulSet?

A **StatefulSet** is a Kubernetes workload API object used to manage **stateful applications**. Unlike Deployments, StatefulSets give each pod a **stable, unique identity** that persists across rescheduling.

Each pod in a StatefulSet gets:
- A **stable hostname**: `nginx-stats-0`, `nginx-stats-1`, `nginx-stats-2`
- A **stable DNS name**: `<pod-name>.<service-name>.<namespace>.svc.cluster.local`
- Its **own PersistentVolumeClaim (PVC)** — data is NOT shared between pods

---

## 2. StatefulSet vs Deployment

| Feature | StatefulSet | Deployment |
|---|---|---|
| Pod identity | Stable, unique (`pod-0`, `pod-1`) | Random (`pod-abc123`) |
| Pod DNS name | Stable per pod | Not stable |
| Storage | Each pod gets its own PVC | Shared or no persistent storage |
| Scaling order | Ordered (0 → 1 → 2) | Random/parallel |
| Use case | Databases, queues, stateful apps | Stateless apps (web servers, APIs) |
| Pod restart | Same name and storage retained | New random name |

### When to use StatefulSet
- Databases (MySQL, PostgreSQL, MongoDB)
- Message queues (Kafka, RabbitMQ)
- Distributed systems (Elasticsearch, Zookeeper)
- Any app that needs **stable network identity** or **per-pod storage**

### When to use Deployment
- Stateless web servers
- REST APIs
- Frontend apps
- Any app where pods are interchangeable

---

## 3. StatefulSet YAML

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-stats
  namespace: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "my-service"   # Must match the headless service name
  replicas: 3
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
```

---

## 4. Headless Service

A **Headless Service** has `clusterIP: None`. Instead of load balancing traffic to a virtual IP, it returns the **actual pod IPs** directly via DNS.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: nginx
spec:
  clusterIP: None   # ← This makes it headless
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

### Regular Service vs Headless Service

| | Regular Service | Headless Service |
|---|---|---|
| `clusterIP` | Virtual IP (e.g. `10.96.x.x`) | `None` |
| DNS resolution | Returns ClusterIP (load balanced) | Returns individual pod IPs |
| Use with | Deployments | StatefulSets |
| Pod addressable? | No | Yes (each pod has DNS) |

---

## 5. Stable DNS Names
Each StatefulSet pod gets a DNS entry in the format:

```
<pod-name>.<service-name>.<namespace>.svc.cluster.local
```

For our setup:

```
nginx-stats-0.my-service.nginx.svc.cluster.local → 10.244.1.x
nginx-stats-1.my-service.nginx.svc.cluster.local → 10.244.1.9
nginx-stats-2.my-service.nginx.svc.cluster.local → 10.244.1.11
```

This DNS name is **stable** — even if the pod is deleted and recreated, it gets the same DNS name and reconnects to its own storage.

---

## 6. volumeClaimTemplates

`volumeClaimTemplates` automatically creates a **separate PVC for each pod**. This is the key feature that enables per-pod storage isolation.

```yaml
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 100Mi
```

This creates:

```
NAME                STATUS   CAPACITY
www-nginx-stats-0   Bound    100Mi
www-nginx-stats-1   Bound    100Mi
www-nginx-stats-2   Bound    100Mi
```

Each pod mounts **only its own PVC**. Data written by `nginx-stats-0` is NOT visible to `nginx-stats-1`.

---

## 7. DNS Resolution — Lab Verification

### Step 1: Launch a busybox pod

```bash
kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- /bin/sh
```

### Step 2: Run nslookup inside busybox

```bash
nslookup nginx-stats-0.my-service.nginx.svc.cluster.local
nslookup nginx-stats-1.my-service.nginx.svc.cluster.local
nslookup nginx-stats-2.my-service.nginx.svc.cluster.local
```

### Successful Output

```
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-stats-1.my-service.nginx.svc.cluster.local
Address 1: 10.244.1.9 nginx-stats-1.my-service.nginx.svc.cluster.local

Name:      nginx-stats-2.my-service.nginx.svc.cluster.local
Address 1: 10.244.1.11 nginx-stats-2.my-service.nginx.svc.cluster.local
```

> ✅ Each pod resolves to its own unique IP — DNS is working correctly.

---

## 8. Per-Pod Storage — Lab Verification

### Write unique data to each pod

```bash
kubectl exec nginx-stats-0 -n nginx -- sh -c "echo 'Data from web-0' > /usr/share/nginx/html/index.html"
kubectl exec nginx-stats-1 -n nginx -- sh -c "echo 'Data from web-1' > /usr/share/nginx/html/index.html"
kubectl exec nginx-stats-2 -n nginx -- sh -c "echo 'Data from web-2' > /usr/share/nginx/html/index.html"
```

### Verify each pod has isolated data

```bash
kubectl exec nginx-stats-0 -n nginx -- cat /usr/share/nginx/html/index.html   # → Data from web-0
kubectl exec nginx-stats-1 -n nginx -- cat /usr/share/nginx/html/index.html   # → Data from web-1
kubectl exec nginx-stats-2 -n nginx -- cat /usr/share/nginx/html/index.html   # → Data from web-2
```

Each pod returning **different data** confirms that `volumeClaimTemplates` created separate PVCs per pod.

---

## 9. Useful Commands

```bash
# Get all pods in nginx namespace
kubectl get pods -n nginx

# Watch pods in real time
kubectl get pods -n nginx -l app=nginx -w

# Check PVCs
kubectl get pvc -n nginx

# Check headless service
kubectl get svc -n nginx

# Describe service (verify selector)
kubectl describe svc my-service -n nginx

# Check DNS from inside cluster
kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- /bin/sh
```

---

## 10. Key Takeaways
- StatefulSets give pods **stable identity** — name, DNS, and storage survive restarts
- **Headless Services** (`clusterIP: None`) enable per-pod DNS resolution
- **`volumeClaimTemplates`** auto-creates one PVC per pod — storage is isolated
- Pod DNS format: `<pod-name>.<service-name>.<namespace>.svc.cluster.local`
- Use StatefulSets for **databases and stateful apps**; use Deployments for **stateless apps**

diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Deployment.yml" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Deployment.yml" new file mode 100644 index 0000000000..2da107dff4 --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Deployment.yml" @@ -0,0 +1,22 @@

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-deployment
  namespace: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Service.yml" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Service.yml" new file mode 100644 index 0000000000..ac175e200f --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/Service.yml" @@ -0,0 +1,13 @@

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  clusterIP: None

diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/StatefulSet.yml" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/StatefulSet.yml" new file mode 100644 index 0000000000..0948fafaf8 --- /dev/null +++ "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/StatefulSet.yml" @@ -0,0 +1,34 @@

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-stats
  namespace: nginx
spec:
  selector:
    matchLabels:
      app: nginx   # has to match .spec.template.metadata.labels
  serviceName: "my-service"
  replicas: 3   # by default is 1
  minReadySeconds: 10   # by default is 0
  template:
    metadata:
      labels:
        app: nginx   # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi

diff --git "a/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/day-56.png" "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/day-56.png" new file mode 100644 index 0000000000..121ff45397 Binary files /dev/null and "b/2026/day-56/Day-56\342\200\223Kubernetes-StatefulSet/day-56.png" differ diff --git a/2026/day-56/README.md b/2026/day-56/README.md new file mode 100644 index 0000000000..4265b5bc32 --- /dev/null +++ b/2026/day-56/README.md @@ -0,0 +1,135 @@

# Day 56 – Kubernetes StatefulSets

## Task
Deployments work great for stateless apps, but what about databases? You need stable pod names, ordered startup, and persistent storage per replica. Today you learn StatefulSets — the workload designed for stateful applications like MySQL, PostgreSQL, and Kafka.
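If you want to watch that contrast yourself before starting, a quick sketch of the commands, assuming the `app: nginx` label used by the manifests in this folder (the pod names in the comments are examples, not fixed values):

```bash
# Deployment pods: random hash suffixes, a new name after every replacement
kubectl get pods -n nginx -l app=nginx
# e.g. nginx-deployment-7c79c4bf97-x2x5k

# StatefulSet pods: stable ordinal names, created strictly in order
kubectl get pods -n nginx -l app=nginx -w
# e.g. nginx-stats-0, then nginx-stats-1 once -0 is Ready, then nginx-stats-2
```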
---

## Expected Output
- A StatefulSet with 3 replicas and stable pod names
- DNS resolution tested for individual pods
- Data persistence verified across pod deletion
- A markdown file: `day-56-statefulsets.md`

---

## Challenge Tasks

### Task 1: Understand the Problem
1. Create a Deployment with 3 replicas using nginx
2. Check the pod names — they are random (`app-xyz-abc`)
3. Delete a pod and notice the replacement gets a different random name

This is fine for web servers but not for databases where you need stable identity.

| Feature | Deployment | StatefulSet |
|---|---|---|
| Pod names | Random | Stable, ordered (`app-0`, `app-1`) |
| Startup order | All at once | Ordered: pod-0, then pod-1, then pod-2 |
| Storage | Shared PVC | Each pod gets its own PVC |
| Network identity | No stable hostname | Stable DNS per pod |

Delete the Deployment before moving on.

**Verify:** Why would random pod names be a problem for a database cluster?

---

### Task 2: Create a Headless Service
1. Write a Service manifest with `clusterIP: None` — this is a Headless Service
2. Set the selector to match the labels you will use on your StatefulSet pods
3. Apply it and confirm CLUSTER-IP shows `None`

A Headless Service creates individual DNS entries for each pod instead of load-balancing to one IP. StatefulSets require this.

**Verify:** What does the CLUSTER-IP column show?

---

### Task 3: Create a StatefulSet
1. Write a StatefulSet manifest with `serviceName` pointing to your Headless Service
2. Set replicas to 3, use the nginx image
3. Add a `volumeClaimTemplates` section requesting 100Mi of ReadWriteOnce storage
4. Apply and watch: `kubectl get pods -l <key>=<value> -w`

Observe ordered creation — `web-0` first, then `web-1` after `web-0` is Ready, then `web-2`.

Check the PVCs: `kubectl get pvc` — you should see `web-data-web-0`, `web-data-web-1`, `web-data-web-2` (names follow the pattern `<volumeClaimTemplate-name>-<pod-name>`).

**Verify:** What are the exact pod names and PVC names?

---

### Task 4: Stable Network Identity
Each StatefulSet pod gets a DNS name: `<pod-name>.<service-name>.<namespace>.svc.cluster.local`

1. Run a temporary busybox pod and use `nslookup` to resolve `web-0.<service-name>.default.svc.cluster.local`
2. Do the same for `web-1` and `web-2`
3. Confirm the IPs match `kubectl get pods -o wide`

**Verify:** Does the nslookup IP match the pod IP?

---

### Task 5: Stable Storage — Data Survives Pod Deletion
1. Write unique data to each pod: `kubectl exec web-0 -- sh -c "echo 'Data from web-0' > /usr/share/nginx/html/index.html"`
2. Delete `web-0`: `kubectl delete pod web-0`
3. Wait for it to come back, then check the data — it should still be "Data from web-0"

The new pod reconnected to the same PVC.

**Verify:** Is the data identical after pod recreation?

---

### Task 6: Ordered Scaling
1. Scale up to 5: `kubectl scale statefulset web --replicas=5` — pods create in order (web-3, then web-4)
2. Scale down to 3 — pods terminate in reverse order (web-4, then web-3)
3. Check `kubectl get pvc` — all five PVCs still exist. Kubernetes keeps them on scale-down so data is preserved if you scale back up.

**Verify:** After scaling down, how many PVCs exist?

---

### Task 7: Clean Up
1. Delete the StatefulSet and the Headless Service (a possible command sequence is sketched below)
2. Check `kubectl get pvc` — PVCs are still there (safety feature)
3. Delete PVCs manually

**Verify:** Were PVCs auto-deleted with the StatefulSet?
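One way this cleanup could look, assuming the StatefulSet is named `web` and the Headless Service `web-svc` (swap in whatever names you actually used):

```bash
# delete the workload and its service; the PVCs deliberately survive
kubectl delete statefulset web
kubectl delete service web-svc

# the claims, and the data behind them, are still there
kubectl get pvc

# remove them explicitly once the data is disposable
kubectl delete pvc web-data-web-0 web-data-web-1 web-data-web-2
```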
---

## Hints
- `sts` is the short name for StatefulSet: `kubectl get sts`
- `serviceName` must match an existing Headless Service
- Pod DNS: `<pod-name>.<service-name>.<namespace>.svc.cluster.local`
- PVC naming: `<volumeClaimTemplate-name>-<statefulset-name>-<ordinal>`
- Pods create in order (0, 1, 2) and terminate in reverse (2, 1, 0)
- Scaling down does not delete PVCs — data is preserved
- Deleting a StatefulSet does not delete PVCs — clean up separately

---

## Documentation
Create `day-56-statefulsets.md` with:
- What StatefulSets are and when to use them vs Deployments
- The comparison table
- How Headless Services, stable DNS, and volumeClaimTemplates work
- Screenshots of pods, PVCs, and DNS resolution

---

## Submission
1. Add `day-56-statefulsets.md` to `2026/day-56/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Learned Kubernetes StatefulSets today. Stable pod names, per-pod DNS, and persistent storage that survives deletion — now I understand why databases need StatefulSets."

`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-liveness-probe.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-liveness-probe.yml" new file mode 100644 index 0000000000..7771bcc392 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-liveness-probe.yml" @@ -0,0 +1,24 @@

kind: Pod
apiVersion: v1
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:latest
    # keep running after the flag file is removed, so the liveness probe
    # (not a normal process exit) is what triggers the restart
    command: ["sh","-c","touch /tmp/healthy && sleep 30 && rm -f /tmp/healthy && sleep 600"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "250m"
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-startup-probe.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-startup-probe.yml" new file mode 100644 index 0000000000..a9d27311aa --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/busybox-startup-probe.yml" @@ -0,0 +1,34 @@

kind: Pod
apiVersion: v1
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh","-c","sleep 20 && touch /tmp/started && touch /tmp/healthy && sleep 600"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "250m"
    startupProbe:
      exec:
        command:
        - cat
        - /tmp/started
      periodSeconds: 5       # check every 5s
      failureThreshold: 12   # allow up to 60s for startup (5 × 12)
      timeoutSeconds: 1
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 0   # no extra delay — startup probe handles the wait
      periodSeconds: 5
      failureThreshold: 3
      timeoutSeconds: 1

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(1).png" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(1).png" new file mode 100644 index 0000000000..5f509d7720 Binary files /dev/null and "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(1).png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(2).png" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(2).png" new file mode 100644 index 0000000000..e590a316bd Binary files /dev/null and
"b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57(2).png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57-resources-probes.md" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57-resources-probes.md" new file mode 100644 index 0000000000..f631bfeef4 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57-resources-probes.md" @@ -0,0 +1,518 @@ +# Day 57 — Kubernetes Resource Management & Probes + +> 90 Days of DevOps | Uttam Tripathi | CSJMU Kanpur + +--- + +## 📌 Table of Contents + +1. [Requests vs Limits](#1-requests-vs-limits) +2. [What Happens When Limits Are Exceeded](#2-what-happens-when-limits-are-exceeded) +3. [Liveness vs Readiness vs Startup Probes](#3-liveness-vs-readiness-vs-startup-probes) +4. [Hands-on Demo Results](#4-hands-on-demo-results) +5. [Screenshots & Observations](#5-screenshots--observations) +6. [Key Takeaways](#6-key-takeaways) + +--- + +## 1. Requests vs Limits + +### What are they? + +```yaml +resources: + requests: + memory: "128Mi" # used for SCHEDULING + cpu: "100m" + limits: + memory: "256Mi" # used for ENFORCEMENT + cpu: "250m" +``` + +### Requests — Scheduling + +- Used by the **Kubernetes Scheduler** to decide **which node** to place the pod on +- The scheduler looks for a node that has **at least** this much free resource +- If no node has enough → pod stays in **Pending** state forever +- Does **not** restrict actual usage — just a reservation + +``` +Pod requests 128Mi memory + ↓ +Scheduler scans all nodes + ↓ +Node A: 100Mi free → ❌ skip +Node B: 512Mi free → ✅ schedule here +``` + +### Limits — Enforcement + +- Enforced by the **Linux kernel** (cgroups) at runtime +- Container **cannot exceed** these values +- Exceeding CPU limit → container **throttled** (slowed down) +- Exceeding memory limit → container **OOMKilled** (killed immediately) + +### Key Difference Table + +| | `requests` | `limits` | +|--|-----------|---------| +| **Used by** | Kubernetes Scheduler | Linux Kernel (cgroups) | +| **Purpose** | Node selection | Runtime enforcement | +| **Effect** | Pod placed on right node | Pod throttled or killed | +| **If not set** | Scheduler has no hint | No restriction (dangerous) | +| **CPU exceed** | N/A | Throttled (slowed) | +| **Memory exceed** | N/A | OOMKilled (exit 137) | + +### Best Practice + +``` +requests = what your app typically uses +limits = maximum your app should ever use + +requests ≤ limits (always) +``` + +--- + +## 2. What Happens When Limits Are Exceeded + +### CPU Limit Exceeded → Throttling + +``` +Container tries to use 500m CPU +Limit is set to 250m + ↓ +Kernel throttles CPU cycles + ↓ +App runs slower (NOT killed) +Container stays Running ✅ +RESTARTS: 0 +``` + +CPU is a **compressible** resource — Kubernetes throttles, never kills for CPU. + +### Memory Limit Exceeded → OOMKilled + +``` +Container tries to allocate 200Mi +Limit is set to 100Mi + ↓ +Linux OOM Killer activates + ↓ +Container killed with SIGKILL +Exit Code: 137 (128 + signal 9) +STATUS: OOMKilled ❌ +``` + +Memory is a **non-compressible** resource — Kubernetes kills immediately. 
### Exit Code 137 Explained

```
137 = 128 + 9
            ↑
       SIGKILL (signal 9 sent by OOM killer)
```

### How to Confirm OOMKill

```bash
# Check pod status
kubectl get pod <pod-name>
# STATUS: OOMKilled

# Get full details
kubectl describe pod <pod-name>
# Last State:  Terminated
#   Reason:    OOMKilled
#   Exit Code: 137

# Programmatic check
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# OOMKilled
```

### Pending Pod — Requests Too High

```
Pod requests 128Gi memory + 100 CPU cores
 ↓
Scheduler scans all nodes
 ↓
No node can satisfy request
 ↓
Pod stays PENDING forever
No OOMKill, No restart — just stuck
```

```bash
# Check why pod is pending
kubectl describe pod <pod-name> | grep -A 5 "Events"
# Warning  FailedScheduling  0/1 nodes are available:
# 1 Insufficient memory, 1 Insufficient cpu
```

### OOMKill vs Pending vs Throttle — Summary

| Situation | Status | Exit Code | Restarted? |
|-----------|--------|-----------|-----------|
| Memory limit exceeded | `OOMKilled` | 137 | ✅ Yes (if restartPolicy: Always) |
| CPU limit exceeded | `Running` | — | ❌ No (throttled) |
| Requests too high | `Pending` | — | ❌ No (never scheduled) |
| Normal exit | `Completed` | 0 | ❌ No |

---

## 3. Liveness vs Readiness vs Startup Probes

### Overview

```
Container starts
   ↓
startupProbe    ← IS APP DONE BOOTING?
   ↓ (succeeds once → stops forever)
livenessProbe   ← IS APP STILL ALIVE?
readinessProbe  ← IS APP READY FOR TRAFFIC?
```

### Probe Types Available

```yaml
# 1. exec — run a command inside container
exec:
  command: [cat, /tmp/healthy]

# 2. httpGet — HTTP request to an endpoint
httpGet:
  path: /healthz
  port: 8080

# 3. tcpSocket — check if port is open
tcpSocket:
  port: 3306
```

### startupProbe

**Question it answers:** Has the app finished starting up?

```yaml
startupProbe:
  exec:
    command:
    - cat
    - /tmp/started
  periodSeconds: 5        # check every 5s
  failureThreshold: 12    # 60s budget (5 × 12)
  timeoutSeconds: 1
```

- Runs **first**, from container start
- livenessProbe and readinessProbe are **disabled** until this passes
- Once it succeeds → **stops forever, never runs again**
- Budget formula: `periodSeconds × failureThreshold = max startup time`
- If budget exceeded → container restarted

**Use cases:**
- Java/Spring Boot apps (slow JVM startup)
- Apps running DB migrations on boot
- Apps waiting for external service connections
- Any app taking more than 30s to start

### livenessProbe

**Question it answers:** Is the app still alive and functioning?

```yaml
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 0   # startup probe handles the wait
  periodSeconds: 5
  failureThreshold: 3      # restart after 3 failures = 15s
  timeoutSeconds: 1
```

- Starts after startupProbe succeeds
- Runs **forever** throughout container lifetime
- On failure → container **restarted** (RESTARTS counter goes up)
- Container is killed with SIGTERM then SIGKILL

**Use cases:**
- Detecting deadlocked apps (running but frozen)
- Detecting memory leak causing unresponsiveness
- Auto-recovery from silent crashes
- App stuck in infinite loop

### readinessProbe

**Question it answers:** Is the app ready to receive traffic?
+ +```yaml +readinessProbe: + httpGet: + path: /ready + port: 8080 + initialDelaySeconds: 0 + periodSeconds: 5 + failureThreshold: 3 + successThreshold: 1 +``` + +- Starts after startupProbe succeeds +- Runs **forever** throughout container lifetime +- On failure → pod removed from **Service endpoints** (traffic stops) +- Container is **never restarted** — RESTARTS stays 0 +- On recovery → pod **automatically added back** to endpoints + +**Use cases:** +- DB connection temporarily lost +- App temporarily overloaded +- Rolling deployment (new pod not ready yet) +- App draining connections during graceful shutdown +- Waiting for cache to warm up + +### All 3 Probes Comparison Table + +| | `startupProbe` | `livenessProbe` | `readinessProbe` | +|--|--------------|----------------|-----------------| +| **Purpose** | App done booting? | App still alive? | App ready for traffic? | +| **Runs when** | Container start only | After startup succeeds | After startup succeeds | +| **Runs how long** | Until first success | Forever | Forever | +| **On failure** | Restart (budget exceeded) | Restart container | Remove from endpoints | +| **Container restarted?** | ✅ Yes | ✅ Yes | ❌ Never | +| **Traffic stopped?** | ✅ Yes (0/1) | ✅ Yes (during restart) | ✅ Yes (indefinitely) | +| **RESTARTS counter** | Goes up | Goes up | Stays at 0 | +| **Recovers automatically?** | N/A | ✅ After restart | ✅ Without restart | + +### What Happens if You Skip a Probe? + +| Missing Probe | Real Problem | +|--------------|-------------| +| ❌ No `startupProbe` | Liveness kills slow-starting app before it finishes booting | +| ❌ No `livenessProbe` | Deadlocked/frozen app runs forever — users get errors with no recovery | +| ❌ No `readinessProbe` | Traffic hits pod before it is ready — causes errors during deployments | + +### Production Best Practice Template + +```yaml +# Always use all 3 in production +startupProbe: + httpGet: + path: /healthz + port: 8080 + failureThreshold: 12 # 60s startup budget + periodSeconds: 5 + +livenessProbe: + httpGet: + path: /healthz # same endpoint as startup + port: 8080 + initialDelaySeconds: 0 # startup probe already handled the wait + periodSeconds: 10 + failureThreshold: 3 + +readinessProbe: + httpGet: + path: /ready # SEPARATE endpoint from liveness + port: 8080 + initialDelaySeconds: 0 + periodSeconds: 5 + failureThreshold: 3 +``` + +> `/healthz` and `/ready` are **separate endpoints** — liveness and readiness +> can fail independently. App can be alive but not ready (DB reconnecting). + +--- + +## 4. 
Hands-on Demo Results

### Demo 1 — OOMKill (polinux/stress)

**Manifest used:**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oomkill-demo
spec:
  containers:
  - name: stress
    image: polinux/stress
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]
    resources:
      limits:
        memory: "100Mi"   # container requests 200M but limit is 100Mi
  restartPolicy: Never
```

**Result observed:**
```bash
kubectl get pod oomkill-demo
# NAME           READY   STATUS      RESTARTS
# oomkill-demo   0/1     OOMKilled   0

kubectl describe pod oomkill-demo
# Last State:  Terminated
#   Reason:    OOMKilled
#   Exit Code: 137
```

---

### Demo 2 — Pending Pod

**Manifest used:**
```yaml
resources:
  requests:
    memory: "128Gi"   # no node has 128GB RAM
    cpu: "100"        # no node has 100 cores
```

**Result observed:**
```bash
kubectl get pod pending-demo
# NAME           READY   STATUS    RESTARTS
# pending-demo   0/1     Pending   0

kubectl describe pod pending-demo
# Warning  FailedScheduling
# 0/1 nodes are available: Insufficient memory, Insufficient cpu
```

---

### Demo 3 — Liveness Probe (busybox)

**Manifest used:**
```yaml
command: ["sh","-c","touch /tmp/healthy && sleep 30 && rm -f /tmp/healthy && sleep 600"]
livenessProbe:
  exec:
    command: [cat, /tmp/healthy]
  periodSeconds: 5
  failureThreshold: 3
```

**Result observed:**
```bash
# After 30s — file deleted, probe fails 3x
kubectl get pod busybox -w
# NAME      READY   STATUS    RESTARTS
# busybox   1/1     Running   0
# busybox   1/1     Running   1   ← restarted after probe failed!
```

---

### Demo 4 — Readiness Probe (nginx)

**Manifest used:**
```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80
  periodSeconds: 5
  failureThreshold: 3
```

**Steps:**
```bash
# 1. Apply pod and expose
kubectl apply -f nginx-readiness.yml
kubectl expose pod nginx-readiness --port=80 --name=readiness-svc

# 2. Confirm endpoint exists
kubectl get endpoints readiness-svc
# ENDPOINTS: 10.244.0.5:80 ✅

# 3. Break the probe
kubectl exec nginx-readiness -- rm /usr/share/nginx/html/index.html

# 4. After 15s — pod NOT READY, endpoints EMPTY
kubectl get pod nginx-readiness
# READY: 0/1   RESTARTS: 0 ✅ (not restarted — just removed from traffic)

kubectl get endpoints readiness-svc
# ENDPOINTS: <none> ✅

# 5. Restore — pod recovers without restart
kubectl exec nginx-readiness -- sh -c "echo 'back' > /usr/share/nginx/html/index.html"
kubectl get pod nginx-readiness
# READY: 1/1   RESTARTS: 0 ✅
```

---

### Demo 5 — Startup + Liveness Probe (busybox)

**Manifest used:**
```yaml
command: ["sh","-c","sleep 20 && touch /tmp/started && touch /tmp/healthy && sleep 600"]
startupProbe:
  exec:
    command: [cat, /tmp/started]
  periodSeconds: 5
  failureThreshold: 12   # 60s budget
livenessProbe:
  exec:
    command: [cat, /tmp/healthy]
  periodSeconds: 5
  failureThreshold: 3
```

**Result observed:**
```bash
kubectl get pod busybox -w
# AGE 21s → 0/1 Running 0   ← startup probe running, not ready yet
# AGE 25s → 1/1 Running 0   ← startup succeeded, liveness took over ✅
# AGE 65s → 1/1 Running 0   ← healthy, RESTARTS: 0 ✅
```

---

## 5. Screenshots & Observations

![alt text](day-57.png)
![alt text](day-57(2).png)
![alt text](day-57(1).png)

**Command to capture probe events:**
```bash
kubectl describe pod <pod-name> | grep -A 30 "Events"
```

---

## 6. Key Takeaways

```
1. requests = scheduling hint (node selection)
   limits = runtime enforcement (kernel enforced)
2. CPU exceeded → throttled (slowed, not killed)
   RAM exceeded → OOMKilled (exit code 137)
   requests > node capacity → Pending (never scheduled)

3. startupProbe → protects BOOT phase (runs once)
   livenessProbe → protects RUNTIME health (restarts on fail)
   readinessProbe → protects TRAFFIC routing (no restart on fail)

4. Always use all 3 probes in production
   Use separate /healthz and /ready endpoints

5. Readiness failure = RESTARTS stays 0 (key interview answer!)
   Liveness failure = RESTARTS goes up
```

---

*Day 57 of 90 | #90DaysOfDevOps | Uttam Tripathi*

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57.png" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57.png" new file mode 100644 index 0000000000..561234c486 Binary files /dev/null and "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/day-57.png" differ diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/exceed-memory-pod.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/exceed-memory-pod.yml" new file mode 100644 index 0000000000..b1c0af381a --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/exceed-memory-pod.yml" @@ -0,0 +1,14 @@

kind: Pod
apiVersion: v1
metadata:
  name: oomkill-demo   # unique name, so it can run alongside the pending-pod demo
spec:
  containers:
  - name: app
    image: polinux/stress
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]
    resources:
      limits:
        memory: "100Mi"
  restartPolicy: Never   # keep the final OOMKilled status visible instead of restart-looping

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/nginx-readiness-probe.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/nginx-readiness-probe.yml" new file mode 100644 index 0000000000..4f4c94b9c5 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/nginx-readiness-probe.yml" @@ -0,0 +1,28 @@

kind: Pod
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx-readiness
spec:
  containers:
  - name: nginx
    image: nginx:1.25.5
    ports:
    - containerPort: 80
      name: http
    resources:
      requests:
        memory: "64Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "250m"
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
      successThreshold: 1

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pending-pod.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pending-pod.yml" new file mode 100644 index 0000000000..914148a884 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pending-pod.yml" @@ -0,0 +1,12 @@

kind: Pod
apiVersion: v1
metadata:
  name: pending-demo   # unique name, so it can run alongside the OOMKill demo
spec:
  containers:
  - name: app
    image: polinux/stress
    resources:
      requests:
        memory: "128Gi"   # deliberately larger than any node can satisfy
        cpu: "100"

diff --git "a/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pod.yml" "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pod.yml" new file mode 100644 index 0000000000..5c16f78f85 --- /dev/null +++ "b/2026/day-57/Day57\342\200\223ResourceRequests,Limits,andProbes/pod.yml" @@ -0,0 +1,15 @@

kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "250m"

diff --git a/2026/day-57/README.md b/2026/day-57/README.md new file mode 100644 index 0000000000..5337e1f1d4 --- /dev/null +++ b/2026/day-57/README.md @@ -0,0 +1,125 @@

# Day 57 – Resource Requests, Limits, and Probes
## Task
Your Pods are running, but Kubernetes has no idea how much CPU or memory they need — and no way to tell if they are actually healthy. Today you set resource requests and limits for smart scheduling, then add probes so Kubernetes can detect and recover from failures automatically.

---

## Expected Output
- A Pod with CPU and memory requests and limits
- OOMKilled observed when exceeding memory limits
- Liveness, readiness, and startup probes tested
- A markdown file: `day-57-resources-probes.md`

---

## Challenge Tasks

### Task 1: Resource Requests and Limits
1. Write a Pod manifest with `resources.requests` (cpu: 100m, memory: 128Mi) and `resources.limits` (cpu: 250m, memory: 256Mi)
2. Apply and inspect with `kubectl describe pod` — look for the Requests, Limits, and QoS Class sections
3. Since requests and limits differ, the QoS class is `Burstable`. If equal, it would be `Guaranteed`. If missing, `BestEffort`.

CPU is in millicores: `100m` = 0.1 CPU. Memory is in mebibytes: `128Mi`.

**Requests** = guaranteed minimum (scheduler uses this for placement). **Limits** = maximum allowed (kubelet enforces at runtime).

**Verify:** What QoS class does your Pod have?

---

### Task 2: OOMKilled — Exceeding Memory Limits
1. Write a Pod manifest using the `polinux/stress` image with a memory limit of `100Mi`
2. Set the stress command to allocate 200M of memory: `command: ["stress"] args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]`
3. Apply and watch — the container gets killed immediately

CPU is throttled when over limit. Memory is killed — no mercy.

Check `kubectl describe pod` for `Reason: OOMKilled` and `Exit Code: 137` (128 + SIGKILL).

**Verify:** What exit code does an OOMKilled container have?

---

### Task 3: Pending Pod — Requesting Too Much
1. Write a Pod manifest requesting `cpu: 100` and `memory: 128Gi`
2. Apply and check — STATUS stays `Pending` forever
3. Run `kubectl describe pod` and read the Events — the scheduler says exactly why: insufficient resources

**Verify:** What event message does the scheduler produce?

---

### Task 4: Liveness Probe
A liveness probe detects stuck containers. If it fails, Kubernetes restarts the container.

1. Write a Pod manifest with a busybox container that creates `/tmp/healthy` on startup, then deletes it after 30 seconds
2. Add a liveness probe using `exec` that runs `cat /tmp/healthy`, with `periodSeconds: 5` and `failureThreshold: 3`
3. After the file is deleted, 3 consecutive failures trigger a restart. Watch with `kubectl get pod <pod-name> -w`

**Verify:** How many times has the container restarted?

---

### Task 5: Readiness Probe
A readiness probe controls traffic. Failure removes the Pod from Service endpoints but does NOT restart it.

1. Write a Pod manifest with nginx and a `readinessProbe` using `httpGet` on path `/` port `80`
2. Expose it as a Service: `kubectl expose pod <pod-name> --port=80 --name=readiness-svc`
3. Check `kubectl get endpoints readiness-svc` — the Pod IP is listed
4. Break the probe: `kubectl exec <pod-name> -- rm /usr/share/nginx/html/index.html`
5. Wait 15 seconds — Pod shows `0/1` READY, endpoints are empty, but the container is NOT restarted

**Verify:** When readiness failed, was the container restarted?

---

### Task 6: Startup Probe
A startup probe gives slow-starting containers extra time. While it runs, liveness and readiness probes are disabled.

1.
Write a Pod manifest where the container takes 20 seconds to start (e.g., `sleep 20 && touch /tmp/started`) +2. Add a `startupProbe` checking for `/tmp/started` with `periodSeconds: 5` and `failureThreshold: 12` (60 second budget) +3. Add a `livenessProbe` that checks the same file — it only kicks in after startup succeeds + +**Verify:** What would happen if `failureThreshold` were 2 instead of 12? + +--- + +### Task 7: Clean Up +Delete all pods and services you created. + +--- + +## Hints +- CPU is compressible (throttled); memory is incompressible (OOMKilled) +- CPU: `1` = 1 core = `1000m`. Memory: `Mi` (mebibytes), `Gi` (gibibytes) +- QoS: Guaranteed (requests == limits), Burstable (requests < limits), BestEffort (none set) +- Probe types: `httpGet`, `exec`, `tcpSocket` +- Liveness failure = restart. Readiness failure = remove from endpoints. Startup failure = kill. +- `initialDelaySeconds`, `periodSeconds`, `failureThreshold` control probe timing +- Exit code 137 = OOMKilled (128 + SIGKILL) + +--- + +## Documentation +Create `day-57-resources-probes.md` with: +- Requests vs limits (scheduling vs enforcement) +- What happens when CPU or memory limits are exceeded +- Liveness vs readiness vs startup probes +- Screenshots of OOMKilled, Pending, and probe events + +--- + +## Submission +1. Add `day-57-resources-probes.md` to `2026/day-57/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Set resource requests and limits in Kubernetes today, watched a pod get OOMKilled, and added liveness, readiness, and startup probes for self-healing." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git "a/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/Deployment.yml" "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/Deployment.yml" new file mode 100644 index 0000000000..d05a89a197 --- /dev/null +++ "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/Deployment.yml" @@ -0,0 +1,25 @@ +kind: Deployment +apiVersion: apps/v1 +metadata: + name: php-apache + namespace: apache + labels: + app: apache +spec: + replicas: 1 + selector: + matchLabels: + app: apache + template: + metadata: + labels: + app: apache + spec: + containers: + - name: apache-server + image: registry.k8s.io/hpa-example + ports: + - containerPort: 80 + resources: + requests: + cpu: "200m" diff --git "a/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/hpa.yml" "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/hpa.yml" new file mode 100644 index 0000000000..1f9ad37e2c --- /dev/null +++ "b/2026/day-58/Day58\342\200\223Metrics-Server-and-Horizontal-Pod-Autoscaler/hpa.yml" @@ -0,0 +1,33 @@ +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v2 +metadata: + name: hpa + namespace: apache +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: php-apache + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 + behavior: + scaleUp: + stabilizationWindowSeconds: 0 + policies: + - type: Percent + value: 100 + periodSeconds: 15 + scaleDown: + stabilizationWindowSeconds: 300 + policies: + - type: Percent + value: 100 + periodSeconds: 5 + diff --git a/2026/day-58/README.md b/2026/day-58/README.md new file mode 100644 index 0000000000..60f02e560f --- /dev/null +++ b/2026/day-58/README.md @@ -0,0 +1,119 @@ +# Day 58 – Metrics 
Server and Horizontal Pod Autoscaler (HPA)

## Task
Yesterday you set resource requests and limits. Today you put that to work. Install the Metrics Server so Kubernetes can see actual resource usage, then set up a Horizontal Pod Autoscaler that scales your app up under load and back down when things calm down.

---

## Expected Output
- Metrics Server installed and `kubectl top` returning data
- An HPA that auto-scales pods under load
- A markdown file: `day-58-metrics-hpa.md`

---

## Challenge Tasks

### Task 1: Install the Metrics Server
1. Check if it is already running: `kubectl get pods -n kube-system | grep metrics-server`
2. If not, install it:
   - Minikube: `minikube addons enable metrics-server`
   - Kind/kubeadm: apply the official manifest from the metrics-server GitHub releases
3. On local clusters, you may need the `--kubelet-insecure-tls` flag (never in production)
4. Wait 60 seconds, then verify: `kubectl top nodes` and `kubectl top pods -A`

**Verify:** What is the current CPU and memory usage of your node?

---

### Task 2: Explore kubectl top
1. Run `kubectl top nodes`, `kubectl top pods -A`, `kubectl top pods -A --sort-by=cpu`
2. `kubectl top` shows real-time usage, not requests or limits — these are different things
3. Data comes from the Metrics Server, which polls kubelets every 15 seconds

**Verify:** Which pod is using the most CPU right now?

---

### Task 3: Create a Deployment with CPU Requests
1. Write a Deployment manifest using the `registry.k8s.io/hpa-example` image (a CPU-intensive PHP-Apache server)
2. Set `resources.requests.cpu: 200m` — HPA needs this to calculate utilization percentages
3. Expose it as a Service: `kubectl expose deployment php-apache --port=80`

Without CPU requests, HPA cannot work — this is the most common HPA setup mistake.

**Verify:** What is the current CPU usage of the Pod?

---

### Task 4: Create an HPA (Imperative)
1. Run: `kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10`
2. Check: `kubectl get hpa` and `kubectl describe hpa php-apache`
3. TARGETS may show `<unknown>` initially — wait 30 seconds for metrics to arrive

This scales up when average CPU exceeds 50% of requests, and down when it drops below.

**Verify:** What does the TARGETS column show?

---

### Task 5: Generate Load and Watch Autoscaling
1. Start a load generator: `kubectl run load-generator --image=busybox:1.36 --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"`
2. Watch HPA: `kubectl get hpa php-apache --watch`
3. Over 1-3 minutes, CPU climbs above 50%, replicas increase, CPU stabilizes
4. Stop the load: `kubectl delete pod load-generator`
5. Scale-down is slow (5-minute stabilization window) — you do not need to wait

**Verify:** How many replicas did HPA scale to under load?

---

### Task 6: Create an HPA from YAML (Declarative)
1. Delete the imperative HPA: `kubectl delete hpa php-apache`
2. Write an HPA manifest using `autoscaling/v2` API with CPU target at 50% utilization
3. Add a `behavior` section to control scale-up speed (no stabilization) and scale-down speed (300 second window)
4. Apply and verify with `kubectl describe hpa`

`autoscaling/v2` supports multiple metrics and fine-grained scaling behavior that the imperative command cannot configure.

**Verify:** What does the `behavior` section control?

---

### Task 7: Clean Up
Delete the HPA, Service, Deployment, and load-generator pod. Leave the Metrics Server installed.
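A possible cleanup sequence, using the names from the tasks above (`kubectl expose deployment php-apache` creates a Service that is also named `php-apache`):

```bash
# remove the autoscaling demo but keep the Metrics Server add-on
kubectl delete hpa php-apache
kubectl delete service php-apache
kubectl delete deployment php-apache
kubectl delete pod load-generator --ignore-not-found

# metrics stay available for the next day
kubectl top nodes
```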
---

## Hints
- HPA requires `resources.requests` — without them TARGETS shows `<unknown>`
- `kubectl top` = actual usage. `kubectl describe pod` = configured requests/limits
- HPA checks every 15 seconds. Scale-up is fast, scale-down has a 5-minute stabilization window
- `autoscaling/v1` = CPU only. `autoscaling/v2` = CPU + memory + custom metrics
- Formula: `desiredReplicas = ceil(currentReplicas * (currentUsage / targetUsage))`
- HPA works with Deployments, StatefulSets, and ReplicaSets

---

## Documentation
Create `day-58-metrics-hpa.md` with:
- What the Metrics Server is and why HPA needs it
- How HPA calculates desired replicas
- The difference between `autoscaling/v1` and `v2`
- Screenshots of `kubectl top`, HPA events, and pod scaling

---

## Submission
1. Add `day-58-metrics-hpa.md` to `2026/day-58/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Set up Kubernetes HPA today. Watched my app auto-scale from 1 to multiple replicas under load, then scale back down. This is how production handles variable traffic."

`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git a/2026/day-58/day-58(1).png b/2026/day-58/day-58(1).png new file mode 100644 index 0000000000..8652f654ee Binary files /dev/null and b/2026/day-58/day-58(1).png differ diff --git a/2026/day-58/day-58-metrics-hpa.md b/2026/day-58/day-58-metrics-hpa.md new file mode 100644 index 0000000000..7091788924 --- /dev/null +++ b/2026/day-58/day-58-metrics-hpa.md @@ -0,0 +1,321 @@

# Day 58 — Metrics Server and Horizontal Pod Autoscaler (HPA)

---

## 1. What is the Metrics Server and Why Does HPA Need It?

### What is Metrics Server?

Metrics Server is a **cluster-wide aggregator of resource usage data**. It collects CPU and memory usage from each node's `kubelet` and exposes them via the Kubernetes Metrics API (`metrics.k8s.io`).

It is **not installed by default** — you must deploy it manually.

```bash
# Install Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify it is running
kubectl get deployment metrics-server -n kube-system
```

### Why Does HPA Need It?

HPA (Horizontal Pod Autoscaler) makes scaling decisions based on **live resource usage**. Without Metrics Server, HPA has no data source to read from and cannot function.

The flow looks like this:

```
kubelet (on each node)
   ↓ collects container stats
Metrics Server
   ↓ aggregates and exposes via API
metrics.k8s.io API
   ↓ HPA reads from here
HPA Controller
   ↓ decides to scale up or down
Deployment (replicas adjusted)
```

**Without Metrics Server:**
- `kubectl top nodes` → error
- `kubectl top pods` → error
- HPA TARGETS column shows `<unknown>/50%`
- No autoscaling happens

**With Metrics Server:**
- Live CPU/memory data available
- HPA can calculate utilization
- Autoscaling works correctly

### Quick check commands

```bash
# Check if metrics are available
kubectl top nodes
kubectl top pods -n apache

# Raw metrics API
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
```

---

## 2.
## 2. How HPA Calculates Desired Replicas

HPA uses a simple formula to decide how many replicas are needed:

```
desiredReplicas = ceil( currentReplicas × (currentMetricValue / desiredMetricValue) )
```

### Example Calculation

```
currentReplicas   = 2
currentCPU usage  = 90%
desiredCPU target = 50%

desiredReplicas = ceil( 2 × (90 / 50) )
                = ceil( 2 × 1.8 )
                = ceil( 3.6 )
                = 4 pods
```

HPA rounds **up** (ceiling), never down — to ensure load is handled.

### Scale Up Example

```
Pods = 1, CPU = 474%, Target = 50%

desiredReplicas = ceil( 1 × (474 / 50) )
                = ceil( 9.48 )
                = 10 pods  ← hits maxReplicas cap
```

### Scale Down Example

```
Pods = 10, CPU = 0%, Target = 50%

desiredReplicas = ceil( 10 × (0 / 50) )
                = ceil( 0 )
                = 0 → clamped to minReplicas = 1 pod  ← but only after the stabilization window
```

### Key Behaviours

- Scale **up** is immediate (stabilizationWindowSeconds: 0)
- Scale **down** waits for the stabilization window (default 300 seconds) to avoid flapping
- HPA always respects `minReplicas` and `maxReplicas` boundaries
- CPU utilization % = (actual CPU used) ÷ (CPU request) × 100 — this is why CPU requests are mandatory

---

## 3. Difference Between `autoscaling/v1` and `autoscaling/v2`

### autoscaling/v1 — Old, Limited

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # CPU only, no other options
```

Limitations:
- CPU metrics only
- No memory scaling
- No custom metrics
- No behavior/cooldown control
- Deprecated — avoid using in new setups

### autoscaling/v2 — Current, Powerful

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory            # memory scaling — not in v1
      target:
        type: Utilization
        averageUtilization: 70
  behavior:                   # fine-grained control — not in v1
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
```

### Comparison Table

| Feature | autoscaling/v1 | autoscaling/v2 |
|---|---|---|
| CPU scaling | ✅ | ✅ |
| Memory scaling | ❌ | ✅ |
| Custom metrics | ❌ | ✅ |
| External metrics | ❌ | ✅ |
| behavior section | ❌ | ✅ |
| Scale up control | ❌ | ✅ |
| Scale down cooldown | ❌ | ✅ |
| Recommended | ❌ Deprecated | ✅ Use this |

**Always use `autoscaling/v2`** for all new HPA definitions.

---
## 4. Screenshots — kubectl top, HPA Events, Pod Scaling

### kubectl top (Metrics Server working)

![kubectl top output](day-58\(1\).png)
![kubectl top output](day-58.png)

```
$ kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   350m         8%     1100Mi          27%

$ kubectl top pods -n apache
NAME                          CPU(cores)   MEMORY(bytes)
php-apache-6d5b6b7c9f-xk2p4   200m         18Mi
```

### HPA Status — Idle (no load)

```
$ kubectl get hpa -n apache
NAME         REFERENCE               TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: 0%/50%   1         10        1          6m7s
```

### HPA Status — Under Load (load generator running)

```
$ kubectl get hpa php-apache -n apache --watch
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: 474%/50%   1         10        4          9m37s
php-apache   Deployment/php-apache   cpu: 320%/50%   1         10        7          10m
php-apache   Deployment/php-apache   cpu: 198%/50%   1         10        10         10m30s
php-apache   Deployment/php-apache   cpu: 53%/50%    1         10        10         11m
```

### Pod Scaling — Pods created automatically

```
$ kubectl get pods -n apache
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-6d5b6b7c9f-xk2p4   1/1     Running   0          15m   ← original
php-apache-6d5b6b7c9f-ab3c1   1/1     Running   0          2m    ← scaled up
php-apache-6d5b6b7c9f-de4f2   1/1     Running   0          2m
php-apache-6d5b6b7c9f-gh5j3   1/1     Running   0          1m
php-apache-6d5b6b7c9f-kl6m4   1/1     Running   0          1m
...
```

### HPA Status — Cooling Down (load stopped)

```
$ kubectl get hpa php-apache -n apache --watch
NAME         REFERENCE               TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: 53%/50%   1         10        10         11m
php-apache   Deployment/php-apache   cpu: 50%/50%   1         10        10         12m
php-apache   Deployment/php-apache   cpu: 18%/50%   1         10        10         12m
php-apache   Deployment/php-apache   cpu: 0%/50%    1         10        10         12m
    ↑ waiting 300s stabilization window
php-apache   Deployment/php-apache   cpu: 0%/50%    1         10        1          17m
    ↑ scaled back down after cooldown
```

### HPA Events

```
$ kubectl describe hpa php-apache -n apache

Events:
  Type    Reason             Age   Message
  ----    ------             ----  -------
  Normal  SuccessfulRescale  10m   New size: 4; reason: cpu resource utilization above target
  Normal  SuccessfulRescale  9m    New size: 10; reason: cpu resource utilization above target
  Normal  SuccessfulRescale  4m    New size: 1; reason: All metrics below target
```

---

## 5. Complete Setup — Full Command Reference
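Step 2 below applies a `deployment.yml` and `hpa.yml` that are not reproduced in this file. For reference, a minimal `deployment.yml` consistent with the examples above might look like this (the name, namespace, and limits are assumptions carried over from the `apache` examples; the HPA manifest is the `autoscaling/v2` one from section 3):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
  namespace: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.k8s.io/hpa-example   # CPU-intensive PHP-Apache test image
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m        # HPA computes utilization against this value
          limits:
            cpu: 500m
```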
```bash
# Step 1 — Create namespace
kubectl create namespace apache

# Step 2 — Apply Deployment + HPA
kubectl apply -f deployment.yml
kubectl apply -f hpa.yml

# Step 3 — Expose service
kubectl expose deployment php-apache --port=80 --name=php-apache -n apache

# Step 4 — Verify everything
kubectl get deployment -n apache
kubectl get svc -n apache
kubectl get hpa -n apache

# Step 5 — Generate load (in separate terminal)
kubectl run load-generator \
  --image=busybox:1.36 \
  --restart=Never \
  -n apache \
  -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"

# Step 6 — Watch HPA scale
kubectl get hpa -n apache -w
kubectl get pods -n apache -w

# Step 7 — Stop load and watch scale down
kubectl delete pod load-generator -n apache
```

---

## Key Takeaways

- Metrics Server is **mandatory** for HPA — install it first
- Always set `resources.requests.cpu` in your container spec — without it HPA shows `<unknown>`
- Use `autoscaling/v2` — v1 is deprecated and CPU-only
- Scale **up** is fast, scale **down** is slow by design (prevents flapping)
- The `behavior` section gives fine-grained control over scaling speed and cooldown
- HPA and Deployment replicas coexist — HPA takes control of the replica count when attached

diff --git a/2026/day-58/day-58.png b/2026/day-58/day-58.png new file mode 100644 index 0000000000..75209310e7 Binary files /dev/null and b/2026/day-58/day-58.png differ diff --git "a/2026/day-59/Day59\342\200\223Helm-\342\200\224Kubernetes-Package-Manager/custom-values.yml" "b/2026/day-59/Day59\342\200\223Helm-\342\200\224Kubernetes-Package-Manager/custom-values.yml" new file mode 100644 index 0000000000..26f0443a56 --- /dev/null +++ "b/2026/day-59/Day59\342\200\223Helm-\342\200\224Kubernetes-Package-Manager/custom-values.yml" @@ -0,0 +1,14 @@

replicaCount: 3

service:
  type: NodePort
  nodePorts:
    http: 30080

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "250m"
    memory: "256Mi"

diff --git "a/2026/day-59/Day59\342\200\223Helm-\342\200\224Kubernetes-Package-Manager/day-59-helm.md" "b/2026/day-59/Day59\342\200\223Helm-\342\200\224Kubernetes-Package-Manager/day-59-helm.md" new file mode 100644 index 0000000000..0a96a26175 --- /dev/null +++ "b/2026/day-59/Day59\342\200\223Helm-\342\200\224Kubernetes-Package-Manager/day-59-helm.md" @@ -0,0 +1,327 @@

# Day 59 — Helm: The Kubernetes Package Manager

---

## What is Helm?

Helm is the **package manager for Kubernetes** — just like `apt` for Ubuntu or `npm` for Node.js. Instead of writing and managing multiple Kubernetes YAML files manually, Helm bundles everything into a single reusable unit called a **chart**.

Without Helm, deploying an app to Kubernetes means manually writing and applying separate YAML files for Deployments, Services, ConfigMaps, Ingress, and more. Helm packages all of that together and lets you install, upgrade, and rollback with a single command.

---

## The Three Core Concepts

### 1. Chart
A Chart is a **package** — a collection of Kubernetes YAML templates bundled together. Think of it like a Docker image but for Kubernetes deployments. Charts can be shared and reused.
+ +``` +my-app/ +├── Chart.yaml ← chart metadata (name, version, description) +├── values.yaml ← default configuration values +├── charts/ ← dependent charts +└── templates/ ← Kubernetes YAML templates with placeholders + ├── deployment.yaml + ├── service.yaml + ├── ingress.yaml + └── _helpers.tpl ← reusable template functions +``` + +### 2. Repository +A Repository is a **collection of charts** hosted online — just like Docker Hub for images or npm registry for packages. + +```bash +# Bitnami's chart repository +helm repo add bitnami https://charts.bitnami.com/bitnami + +# Search for charts +helm search repo nginx +``` + +### 3. Release +A Release is a **running instance** of a chart in your cluster. You can install the same chart multiple times with different release names — each is an independent release. + +```bash +helm install my-nginx bitnami/nginx # release 1 → production +helm install staging-nginx bitnami/nginx # release 2 → staging +helm install test-nginx bitnami/nginx # release 3 → testing +``` + +--- + +## Install, Customize, Upgrade, and Rollback + +### Install + +```bash +# Basic install +helm install my-nginx bitnami/nginx + +# Install with custom values inline +helm install my-nginx bitnami/nginx \ + --set replicaCount=3 \ + --set service.type=NodePort + +# Install with a values file +helm install my-nginx bitnami/nginx -f custom-values.yaml + +# Install or upgrade in one command (best for CI/CD) +helm upgrade --install my-nginx bitnami/nginx -f custom-values.yaml +``` + +### Customize + +Before installing, check what values you can change: + +```bash +# View all default values +helm show values bitnami/nginx + +# Filter specific section +helm show values bitnami/nginx | grep -A 10 "^service:" +``` + +### Upgrade + +```bash +# Upgrade with new values +helm upgrade my-nginx bitnami/nginx \ + --set replicaCount=5 + +# Upgrade keeping existing values +helm upgrade my-nginx bitnami/nginx \ + --set replicaCount=5 \ + --reuse-values +``` + +### Rollback + +```bash +# Check release history +helm history my-nginx + +# Rollback to previous version +helm rollback my-nginx 1 + +# Rollback to specific revision +helm rollback my-nginx 2 +``` + +--- + +## Helm Chart Structure + +``` +my-app/ +├── Chart.yaml +├── values.yaml +├── charts/ +└── templates/ + ├── deployment.yaml + ├── service.yaml + ├── ingress.yaml + ├── hpa.yaml + ├── serviceaccount.yaml + ├── NOTES.txt + └── _helpers.tpl +``` + +### Chart.yaml — Chart Metadata + +```yaml +apiVersion: v2 +name: my-app +description: A Helm chart for Kubernetes +type: application +version: 0.1.0 # chart version +appVersion: "1.16.0" # actual app version +``` + +### values.yaml — Default Values + +```yaml +replicaCount: 1 + +image: + repository: nginx + pullPolicy: IfNotPresent + tag: "" + +service: + type: ClusterIP + port: 80 + +resources: {} +``` + +--- + +## Go Template Syntax + +Helm uses Go's templating engine to dynamically fill in Kubernetes YAML at install time. The `{{ }}` syntax marks placeholders that get replaced with real values. 
### Basic Value Injection

```yaml
replicas: {{ .Values.replicaCount }}        # → replicas: 3
- name: {{ .Chart.Name }}                   # → - name: my-app
namespace: {{ .Release.Namespace }}         # → namespace: default
```

### Three Template Data Sources

| Source | Example | Comes From |
|---|---|---|
| `.Values` | `{{ .Values.replicaCount }}` | values.yaml |
| `.Chart` | `{{ .Chart.Name }}` | Chart.yaml |
| `.Release` | `{{ .Release.Name }}` | helm install command |

### Common Template Functions

```yaml
# include — reusable helper functions from _helpers.tpl
name: {{ include "my-app.fullname" . }}

# nindent — control indentation
labels:
  {{- include "my-app.labels" . | nindent 4 }}

# default — fallback value if empty
image: "nginx:{{ .Values.image.tag | default .Chart.AppVersion }}"

# if/end — conditional blocks
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}

# with/end — conditional + context change
{{- with .Values.resources }}
resources:
  {{- toYaml . | nindent 12 }}
{{- end }}

# toYaml — convert object to YAML string
{{- toYaml .Values.resources | nindent 12 }}
```

### Whitespace Control

```yaml
{{-   → trim whitespace BEFORE this tag
-}}   → trim whitespace AFTER this tag
|     → pipe output into next function (just like Linux pipes!)
```

### Debug Templates Without Installing

```bash
# See final rendered YAML without touching the cluster
helm template my-release my-app/

# Validate against cluster
helm template my-release my-app/ | kubectl apply --dry-run=client -f -

# Lint the chart
helm lint my-app/
```

---

## custom-values.yaml

```yaml
# Number of pod replicas to run
replicaCount: 3

# Expose app via NodePort so it's accessible on the node's IP
service:
  type: NodePort
  nodePorts:
    http: 30080      # pin to a specific port (30000-32767 range)

# Resource limits — controls how much CPU/memory each pod can use
resources:

  # requests = guaranteed minimum resources reserved for the pod
  requests:
    cpu: "100m"      # 100 millicores = 0.1 of a CPU core
    memory: "128Mi"  # 128 megabytes guaranteed

  # limits = hard maximum — pod gets killed if it exceeds memory limit
  limits:
    cpu: "250m"      # max 0.25 of a CPU core
    memory: "256Mi"  # max 256 megabytes — OOMKilled if exceeded
```

### Apply It

```bash
# Fresh install
helm install my-nginx bitnami/nginx -f custom-values.yaml

# Upgrade existing release
helm upgrade --install my-nginx bitnami/nginx -f custom-values.yaml

# Combine file with inline override
helm upgrade --install my-nginx bitnami/nginx \
  -f custom-values.yaml \
  --set replicaCount=5   # --set always overrides -f
```

### Verify Resources Were Applied

```bash
kubectl describe pod -l app.kubernetes.io/instance=my-nginx | grep -A 6 "Limits:"
```

---

## Key Commands Cheatsheet

```bash
# Repo management
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo nginx

# Chart info
helm show values bitnami/nginx
helm show chart bitnami/nginx

# Install / Upgrade
helm install <release> <chart> -f values.yaml
helm upgrade --install <release> <chart> -f values.yaml

# Inspect
helm list
helm status my-nginx
helm history my-nginx

# Rollback
helm rollback my-nginx 1

# Remove
helm uninstall my-nginx

# Development
helm create my-app
helm lint my-app/
helm template my-release my-app/
helm package my-app/
```

---

## Mental Model

```
Chart   = the recipe
Values  = your ingredients
Release = the cooked dish running in your cluster

helm install  = cook the dish using the recipe + your ingredients
helm upgrade  = remake the dish with new ingredients
helm rollback = go back to how the dish was before
```

> Helm doesn't just deploy — it manages the full lifecycle of your app in Kubernetes. Install, upgrade, rollback, uninstall — all with one tool. 🚀

diff --git a/2026/day-59/README.md b/2026/day-59/README.md new file mode 100644 index 0000000000..0aac940c86 --- /dev/null +++ b/2026/day-59/README.md @@ -0,0 +1,129 @@

# Day 59 – Helm — Kubernetes Package Manager

## Task
Over the past eight days you have written Deployments, Services, ConfigMaps, Secrets, PVCs, and more — all as individual YAML files. For a real application you might have dozens of these. Helm is the package manager for Kubernetes, like apt for Ubuntu. Today you install charts, customize them, and create your own.

---

## Expected Output
- Helm installed and a chart deployed from Bitnami
- A release customized, upgraded, and rolled back
- A custom chart created and installed
- A markdown file: `day-59-helm.md`

---

## Challenge Tasks

### Task 1: Install Helm
1. Install Helm (brew, curl script, or chocolatey depending on your OS)
2. Verify with `helm version` and `helm env`

Three core concepts:
- **Chart** — a package of Kubernetes manifest templates
- **Release** — a specific installation of a chart in your cluster
- **Repository** — a collection of charts (like a package repo)

**Verify:** What version of Helm is installed?

---

### Task 2: Add a Repository and Search
1. Add the Bitnami repository: `helm repo add bitnami https://charts.bitnami.com/bitnami`
2. Update: `helm repo update`
3. Search: `helm search repo nginx` and `helm search repo bitnami`

**Verify:** How many charts does Bitnami have?

---

### Task 3: Install a Chart
1. Deploy nginx: `helm install my-nginx bitnami/nginx`
2. Check what was created: `kubectl get all`
3. Inspect the release: `helm list`, `helm status my-nginx`, `helm get manifest my-nginx`

One command replaced writing a Deployment, Service, and ConfigMap by hand.

**Verify:** How many Pods are running? What Service type was created?

---

### Task 4: Customize with Values
1. View defaults: `helm show values bitnami/nginx`
2. Install a custom release with `--set replicaCount=3 --set service.type=NodePort`
3. Create a `custom-values.yaml` file with replicaCount, service type, and resource limits
4. Install another release using `-f custom-values.yaml`
5. Check overrides: `helm get values <release>`

**Verify:** Does the values file release have the correct replicas and service type?

---

### Task 5: Upgrade and Rollback
1. Upgrade: `helm upgrade my-nginx bitnami/nginx --set replicaCount=5`
2. Check history: `helm history my-nginx`
3. Rollback: `helm rollback my-nginx 1`
4. Check history again — rollback creates a new revision (3), not overwriting revision 2

Same concept as Deployment rollouts from Day 52, but at the full stack level.

**Verify:** How many revisions after the rollback?

---

### Task 6: Create Your Own Chart
1. Scaffold: `helm create my-app`
2. Explore the directory: `Chart.yaml`, `values.yaml`, `templates/deployment.yaml`
3. Look at the Go template syntax in templates: `{{ .Values.replicaCount }}`, `{{ .Chart.Name }}`
4. Edit `values.yaml` — set replicaCount to 3 and image to nginx:1.25
5. Validate: `helm lint my-app`
6. Preview: `helm template my-release ./my-app`
7. Install: `helm install my-release ./my-app`
8. Upgrade: `helm upgrade my-release ./my-app --set replicaCount=5`

**Verify:** After installing, 3 replicas? After upgrading, 5?

---

### Task 7: Clean Up
1. Uninstall all releases: `helm uninstall <release>` for each
2. Remove chart directory and values file
3. Use `--keep-history` if you want to retain release history for auditing

**Verify:** Does `helm list` show zero releases?

---

## Hints
- `helm show values <chart>` — see what you can customize
- `--set key=value` for single overrides, `-f values.yaml` for files
- Nested values use dots: `--set service.type=NodePort`
- `helm get values <release>` shows overrides, `--all` for everything
- `helm template` renders without installing — great for debugging
- `helm lint` validates chart structure before installing
- Templates: `{{ .Values.key }}`, `{{ .Chart.Name }}`, `{{ .Release.Name }}`

---

## Documentation
Create `day-59-helm.md` with:
- What Helm is and the three core concepts
- How to install, customize, upgrade, and rollback
- The structure of a Helm chart and how Go templating works
- Your `custom-values.yaml` with explanations

---

## Submission
1. Add `day-59-helm.md` and `custom-values.yaml` to `2026/day-59/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Learned Helm today — deployed charts, customized with values, performed rollbacks, and created my own chart from scratch. One command replaces dozens of YAML files."

`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git a/2026/day-60/README.md b/2026/day-60/README.md new file mode 100644 index 0000000000..8d631d92c0 --- /dev/null +++ b/2026/day-60/README.md @@ -0,0 +1,128 @@

# Day 60 – Capstone: Deploy WordPress + MySQL on Kubernetes

## Task
Ten days of Kubernetes — clusters, Pods, Deployments, Services, ConfigMaps, Secrets, storage, StatefulSets, resource management, autoscaling, and Helm. Today you put it all together. Deploy a real WordPress + MySQL application using every major concept you have learned.

---

## Expected Output
- A complete WordPress + MySQL stack in a `capstone` namespace
- Self-healing and data persistence verified
- A markdown file: `day-60-capstone.md`
- Screenshot of the running WordPress site and `kubectl get all -n capstone`

---

## Challenge Tasks

### Task 1: Create the Namespace (Day 52)
1. Create a `capstone` namespace
2. Set it as your default: `kubectl config set-context --current --namespace=capstone`

---

### Task 2: Deploy MySQL (Days 54-56)
1. Create a Secret with `MYSQL_ROOT_PASSWORD`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD` using `stringData`
2. Create a Headless Service (`clusterIP: None`) for MySQL on port 3306
3. Create a StatefulSet for MySQL with:
   - Image: `mysql:8.0`
   - `envFrom` referencing the Secret
   - Resource requests (cpu: 250m, memory: 512Mi) and limits (cpu: 500m, memory: 1Gi)
   - A `volumeClaimTemplates` section requesting 1Gi of storage, mounted at `/var/lib/mysql`
4. Verify MySQL works: `kubectl exec -it mysql-0 -- mysql -u <user> -p -e "SHOW DATABASES;"`

**Verify:** Can you see the `wordpress` database?
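If you get stuck, one possible shape for the MySQL pieces is sketched below; try writing your own first. All names and passwords here are illustrative placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret          # placeholder name
  namespace: capstone
stringData:                   # stringData lets you write plain text; k8s encodes it
  MYSQL_ROOT_PASSWORD: rootpass123
  MYSQL_DATABASE: wordpress
  MYSQL_USER: wpuser
  MYSQL_PASSWORD: wppass123
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: capstone
spec:
  clusterIP: None             # headless — gives mysql-0 a stable DNS name
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: capstone
spec:
  serviceName: mysql          # must match the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        envFrom:
        - secretRef:
            name: mysql-secret
        resources:
          requests:
            cpu: 250m
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:       # one PVC per replica, survives pod deletion
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```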
---

### Task 3: Deploy WordPress (Days 52, 54, 57)
1. Create a ConfigMap with `WORDPRESS_DB_HOST` set to `mysql-0.mysql.capstone.svc.cluster.local:3306` and `WORDPRESS_DB_NAME`
2. Create a Deployment with 2 replicas using `wordpress:latest` that:
   - Uses `envFrom` for the ConfigMap
   - Uses `secretKeyRef` for `WORDPRESS_DB_USER` and `WORDPRESS_DB_PASSWORD` from the MySQL Secret
   - Has resource requests and limits
   - Has a liveness probe and readiness probe on `/wp-login.php` port 80
3. Wait until both pods show `1/1 Running`

**Verify:** Are both WordPress pods running and ready?

---

### Task 4: Expose WordPress (Day 53)
1. Create a NodePort Service on port 30080 targeting the WordPress pods
2. Access WordPress in your browser:
   - Minikube: `minikube service wordpress -n capstone`
   - Kind: `kubectl port-forward svc/wordpress 8080:80 -n capstone`
3. Complete the setup wizard and create a blog post

**Verify:** Can you see the WordPress setup page?

---

### Task 5: Test Self-Healing and Persistence
1. Delete a WordPress pod — watch the Deployment recreate it within seconds. Refresh the site.
2. Delete the MySQL pod: `kubectl delete pod mysql-0 -n capstone` — watch the StatefulSet recreate it
3. After MySQL recovers, refresh WordPress — your blog post should still be there

**Verify:** After deleting both pods, is your blog post still there?

---

### Task 6: Set Up HPA (Day 58)
1. Write an HPA manifest targeting the WordPress Deployment with CPU at 50%, min 2, max 10 replicas
2. Apply and check: `kubectl get hpa -n capstone`
3. Run `kubectl get all -n capstone` for the complete picture

**Verify:** Does the HPA show correct min/max and target?

---

### Task 7: (Bonus) Compare with Helm (Day 59)
1. Install WordPress using `helm install wp-helm bitnami/wordpress` in a separate namespace
2. Compare: how many resources did each approach create? Which gives more control?
3. Clean up the Helm deployment

---

### Task 8: Clean Up and Reflect
1. Take a final look: `kubectl get all -n capstone`
2. Count the concepts you used: Namespace, Secret, ConfigMap, PVC, StatefulSet, Headless Service, Deployment, NodePort Service, Resource Limits, Probes, HPA, Helm — twelve concepts in one deployment
3. Delete the namespace: `kubectl delete namespace capstone`
4. Reset default: `kubectl config set-context --current --namespace=default`

**Verify:** Did deleting the namespace remove everything?

---

## Hints
- If MySQL takes long to start, check: `kubectl logs mysql-0 -n capstone`
- `WORDPRESS_DB_HOST` must match the StatefulSet DNS pattern: `<pod-name>.<service-name>.<namespace>.svc.cluster.local`
- WordPress probes may fail initially — `initialDelaySeconds` gives it time to boot
- If PVC stays Pending, check `kubectl get storageclass`
- `nodePort` must be in range 30000-32767
- The Bitnami chart uses MariaDB instead of MySQL — compatible but not identical

---

## Documentation
Create `day-60-capstone.md` with:
- Architecture of your deployment (which resources connect to which)
- Results of self-healing and persistence tests
- A table mapping each concept to the day you learned it
- Reflection: what was hardest, what clicked, what you would add for production

---

## Submission
1. Add `day-60-capstone.md` to `2026/day-60/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Completed the Kubernetes capstone — deployed WordPress + MySQL using twelve K8s concepts: Namespaces, Deployments, StatefulSets, Services, ConfigMaps, Secrets, PVCs, resource limits, probes, and HPA. Ten days of learning, one real application."

`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git a/2026/day-61/README.md b/2026/day-61/README.md new file mode 100644 index 0000000000..152b4c0b33 --- /dev/null +++ b/2026/day-61/README.md @@ -0,0 +1,177 @@

# Day 61 -- Introduction to Terraform and Your First AWS Infrastructure

## Task
You have been deploying containers, writing CI/CD pipelines, and orchestrating workloads on Kubernetes. But who creates the servers, networks, and clusters underneath? Today you start your Infrastructure as Code journey with Terraform -- the tool that lets you define, provision, and manage cloud infrastructure by writing code.

By the end of today, you will have created real AWS resources using nothing but a `.tf` file and a terminal.

---

## Expected Output
- Terraform installed and working on your machine
- AWS CLI configured with valid credentials
- An S3 bucket and EC2 instance created and destroyed via Terraform
- A markdown file: `day-61-terraform-intro.md`

---

## Challenge Tasks

### Task 1: Understand Infrastructure as Code
Before touching the terminal, research and write short notes on:

1. What is Infrastructure as Code (IaC)? Why does it matter in DevOps?
2. What problems does IaC solve compared to manually creating resources in the AWS console?
3. How is Terraform different from AWS CloudFormation, Ansible, and Pulumi?
4. What does it mean that Terraform is "declarative" and "cloud-agnostic"?

Write this in your own words -- not copy-pasted definitions.

---

### Task 2: Install Terraform and Configure AWS
1. Install Terraform:
```bash
# macOS
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Linux (amd64)
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

# Windows
choco install terraform
```

2. Verify:
```bash
terraform -version
```

3. Install and configure the AWS CLI:
```bash
aws configure
# Enter your Access Key ID, Secret Access Key, default region (e.g., ap-south-1), output format (json)
```

4. Verify AWS access:
```bash
aws sts get-caller-identity
```

You should see your AWS account ID and ARN.

---

### Task 3: Your First Terraform Config -- Create an S3 Bucket
Create a project directory and write your first Terraform config:

```bash
mkdir terraform-basics && cd terraform-basics
```

Create a file called `main.tf` with the following (a sketch follows the list):
1. A `terraform` block with `required_providers` specifying the `aws` provider
2. A `provider "aws"` block with your region
3. A `resource "aws_s3_bucket"` that creates a bucket with a globally unique name
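A minimal sketch of what that `main.tf` could look like (the bucket name and region are placeholders; your bucket name must be globally unique):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"   # assumption -- use your own region
}

resource "aws_s3_bucket" "demo" {
  bucket = "terraweek-yourname-2026"   # placeholder -- must be globally unique
}
```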
Run the Terraform lifecycle:
```bash
terraform init    # Download the AWS provider
terraform plan    # Preview what will be created
terraform apply   # Create the bucket (type 'yes' to confirm)
```

Go to the AWS S3 console and verify your bucket exists.

**Document:** What did `terraform init` download? What does the `.terraform/` directory contain?

---

### Task 4: Add an EC2 Instance
In the same `main.tf`, add:
1. A `resource "aws_instance"` using AMI `ami-0f5ee92e2d63afc18` (Amazon Linux 2 in ap-south-1 -- use the correct AMI for your region)
2. Set instance type to `t2.micro`
3. Add a tag: `Name = "TerraWeek-Day1"`

Run:
```bash
terraform plan    # You should see 1 resource to add (bucket already exists)
terraform apply
```

Go to the AWS EC2 console and verify your instance is running with the correct name tag.

**Document:** How does Terraform know the S3 bucket already exists and only the EC2 instance needs to be created?

---

### Task 5: Understand the State File
Terraform tracks everything it creates in a state file. Time to inspect it.

1. Open `terraform.tfstate` in your editor -- read the JSON structure
2. Run these commands and document what each returns:
```bash
terraform show                          # Human-readable view of current state
terraform state list                    # List all resources Terraform manages
terraform state show aws_s3_bucket.<name>   # Detailed view of a specific resource
terraform state show aws_instance.<name>
```

3. Answer these questions in your notes:
   - What information does the state file store about each resource?
   - Why should you never manually edit the state file?
   - Why should the state file not be committed to Git?

---

### Task 6: Modify, Plan, and Destroy
1. Change the EC2 instance tag from `"TerraWeek-Day1"` to `"TerraWeek-Modified"` in your `main.tf`
2. Run `terraform plan` and read the output carefully:
   - What do the `~`, `+`, and `-` symbols mean?
   - Is this an in-place update or a destroy-and-recreate?
3. Apply the change
4. Verify the tag changed in the AWS console
5. Finally, destroy everything:
```bash
terraform destroy
```
6. Verify in the AWS console -- both the S3 bucket and EC2 instance should be gone

---

## Hints
- S3 bucket names must be globally unique -- use something like `terraweek-<yourname>-2026`
- AMI IDs are region-specific -- search "Amazon Linux 2 AMI" in your region's EC2 launch wizard
- `terraform fmt` auto-formats your `.tf` files -- run it before committing
- `terraform validate` checks for syntax errors without connecting to AWS
- The `.terraform/` directory contains downloaded provider plugins
- Add `*.tfstate`, `*.tfstate.backup`, and `.terraform/` to your `.gitignore`

---

## Documentation
Create `day-61-terraform-intro.md` with:
- IaC explanation in your own words (3-4 sentences)
- Screenshot of `terraform apply` creating your S3 bucket and EC2 instance
- Screenshot of the resources in the AWS console
- What each Terraform command does (init, plan, apply, destroy, show, state list)
- What the state file contains and why it matters

---

## Submission
1. Add `day-61-terraform-intro.md` to `2026/day-61/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Started the TerraWeek Challenge -- installed Terraform, created my first S3 bucket and EC2 instance using code, and destroyed it all with one command. Infrastructure as Code just clicked."

`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git a/2026/day-62/README.md b/2026/day-62/README.md new file mode 100644 index 0000000000..ec28c27fdc --- /dev/null +++ b/2026/day-62/README.md @@ -0,0 +1,152 @@

# Day 62 -- Providers, Resources and Dependencies

## Task
Yesterday you created standalone resources. But real infrastructure is connected -- a server lives inside a subnet, a subnet lives inside a VPC, a security group controls what traffic gets in. Today you build a complete networking stack on AWS and learn how Terraform figures out what to create first.
+ +Understanding dependencies is what separates a Terraform beginner from someone who can build production infrastructure. + +--- + +## Expected Output +- A VPC with subnet, internet gateway, route table, security group, and an EC2 instance -- all created via Terraform +- A dependency graph visualized with `terraform graph` +- A markdown file: `day-62-providers-resources.md` + +--- + +## Challenge Tasks + +### Task 1: Explore the AWS Provider +1. Create a new project directory: `terraform-aws-infra` +2. Write a `providers.tf` file: + - Define the `terraform` block with `required_providers` pinning the AWS provider to version `~> 5.0` + - Define the `provider "aws"` block with your region +3. Run `terraform init` and check the output -- what version was installed? +4. Read the provider lock file `.terraform.lock.hcl` -- what does it do? + +**Document:** What does `~> 5.0` mean? How is it different from `>= 5.0` and `= 5.0.0`? + +--- + +### Task 2: Build a VPC from Scratch +Create a `main.tf` and define these resources one by one: + +1. `aws_vpc` -- CIDR block `10.0.0.0/16`, tag it `"TerraWeek-VPC"` +2. `aws_subnet` -- CIDR block `10.0.1.0/24`, reference the VPC ID from step 1, enable public IP on launch, tag it `"TerraWeek-Public-Subnet"` +3. `aws_internet_gateway` -- attach it to the VPC +4. `aws_route_table` -- create it in the VPC, add a route for `0.0.0.0/0` pointing to the internet gateway +5. `aws_route_table_association` -- associate the route table with the subnet + +Run `terraform plan` -- you should see 5 resources to create. + +**Verify:** Apply and check the AWS VPC console. Can you see all five resources connected? + +--- + +### Task 3: Understand Implicit Dependencies +Look at your `main.tf` carefully: + +1. The subnet references `aws_vpc.main.id` -- this is an implicit dependency +2. The internet gateway references the VPC ID -- another implicit dependency +3. The route table association references both the route table and the subnet + +Answer these questions: +- How does Terraform know to create the VPC before the subnet? +- What would happen if you tried to create the subnet before the VPC existed? +- Find all implicit dependencies in your config and list them + +--- + +### Task 4: Add a Security Group and EC2 Instance +Add to your config: + +1. `aws_security_group` in the VPC: + - Ingress rule: allow SSH (port 22) from `0.0.0.0/0` + - Ingress rule: allow HTTP (port 80) from `0.0.0.0/0` + - Egress rule: allow all outbound traffic + - Tag: `"TerraWeek-SG"` + +2. `aws_instance` in the subnet: + - Use Amazon Linux 2 AMI for your region + - Instance type: `t2.micro` + - Associate the security group + - Set `associate_public_ip_address = true` + - Tag: `"TerraWeek-Server"` + +Apply and verify -- your EC2 instance should have a public IP and be reachable. + +--- + +### Task 5: Explicit Dependencies with depends_on +Sometimes Terraform cannot detect a dependency automatically. + +1. Add a second `aws_s3_bucket` resource for application logs +2. Add `depends_on = [aws_instance.main]` to the S3 bucket -- even though there is no direct reference, you want the bucket created only after the instance +3. Run `terraform plan` and observe the order + +Now visualize the entire dependency tree: +```bash +terraform graph | dot -Tpng > graph.png +``` +If you don't have `dot` (Graphviz) installed, use: +```bash +terraform graph +``` +and paste the output into an online Graphviz viewer. + +**Document:** When would you use `depends_on` in real projects? Give two examples. 
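As a starting point for the question above, a minimal sketch of an explicit dependency (resource names here are illustrative):

```hcl
resource "aws_s3_bucket" "app_logs" {
  bucket = "terraweek-app-logs-yourname"   # placeholder -- must be globally unique

  # No attribute of the instance is referenced anywhere in this block,
  # so Terraform sees no implicit dependency. depends_on forces the
  # bucket to be created only after the instance exists.
  depends_on = [aws_instance.main]
}
```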
+ +--- + +### Task 6: Lifecycle Rules and Destroy +1. Add a `lifecycle` block to your EC2 instance: +```hcl +lifecycle { + create_before_destroy = true +} +``` +2. Change the AMI ID to a different one and run `terraform plan` -- observe that Terraform plans to create the new instance before destroying the old one + +3. Destroy everything: +```bash +terraform destroy +``` +4. Watch the destroy order -- Terraform destroys in reverse dependency order. Verify in the AWS console that everything is cleaned up. + +**Document:** What are the three lifecycle arguments (`create_before_destroy`, `prevent_destroy`, `ignore_changes`) and when would you use each? + +--- + +## Hints +- `aws_vpc.main.id` syntax: `..` +- Use `terraform fmt` to keep your HCL clean +- CIDR `10.0.0.0/16` gives you 65,536 IPs, `10.0.1.0/24` gives you 256 +- If you cannot SSH into the instance, check: security group rules, public IP, route table, internet gateway +- `terraform graph` outputs DOT format -- paste it into webgraphviz.com if you don't have Graphviz +- Always destroy resources when done to avoid AWS charges + +--- + +## Documentation +Create `day-62-providers-resources.md` with: +- Your full `main.tf` with comments explaining each resource +- Screenshot of `terraform apply` output +- Screenshot of the VPC and its resources in the AWS console +- The dependency graph (image or text) +- Explanation of implicit vs explicit dependencies in your own words + +--- + +## Submission +1. Add `day-62-providers-resources.md` to `2026/day-62/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Built a complete AWS networking stack with Terraform today -- VPC, subnets, internet gateway, route tables, security groups, and an EC2 instance. All connected through dependency graphs. Terraform decides the order, you define the desired state." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-63/README.md b/2026/day-63/README.md new file mode 100644 index 0000000000..9749d624c7 --- /dev/null +++ b/2026/day-63/README.md @@ -0,0 +1,219 @@ +# Day 63 -- Variables, Outputs, Data Sources and Expressions + +## Task +Your Day 62 config works, but it is full of hardcoded values -- region, CIDR blocks, AMI IDs, instance types, tags. Change the region and everything breaks. Today you make your Terraform configs dynamic, reusable, and environment-aware. + +This is the difference between a config that works once and a config you can use across projects. + +--- + +## Expected Output +- A fully parameterized Terraform config with no hardcoded values +- Separate `.tfvars` files for different environments +- Outputs printed after every apply +- A markdown file: `day-63-variables-outputs.md` + +--- + +## Challenge Tasks + +### Task 1: Extract Variables +Take your Day 62 infrastructure config and refactor it: + +1. Create a `variables.tf` file with input variables for: + - `region` (string, default: your preferred region) + - `vpc_cidr` (string, default: `"10.0.0.0/16"`) + - `subnet_cidr` (string, default: `"10.0.1.0/24"`) + - `instance_type` (string, default: `"t2.micro"`) + - `project_name` (string, no default -- force the user to provide it) + - `environment` (string, default: `"dev"`) + - `allowed_ports` (list of numbers, default: `[22, 80, 443]`) + - `extra_tags` (map of strings, default: `{}`) + +2. Replace every hardcoded value in `main.tf` with `var.` references +3. 
---

### Task 2: Variable Files and Precedence
1. Create `terraform.tfvars`:
```hcl
project_name  = "terraweek"
environment   = "dev"
instance_type = "t2.micro"
```

2. Create `prod.tfvars`:
```hcl
project_name  = "terraweek"
environment   = "prod"
instance_type = "t3.small"
vpc_cidr      = "10.1.0.0/16"
subnet_cidr   = "10.1.1.0/24"
```

3. Apply with the default file:
```bash
terraform plan    # Uses terraform.tfvars automatically
```

4. Apply with the prod file:
```bash
terraform plan -var-file="prod.tfvars"   # Uses prod.tfvars
```

5. Override with CLI:
```bash
terraform plan -var="instance_type=t2.nano"   # CLI overrides everything
```

6. Set an environment variable:
```bash
export TF_VAR_environment="staging"
terraform plan    # env var overrides default but not tfvars
```

**Document:** Write the variable precedence order from lowest to highest priority.

---

### Task 3: Add Outputs
Create an `outputs.tf` file with outputs for:

1. `vpc_id` -- the VPC ID
2. `subnet_id` -- the public subnet ID
3. `instance_id` -- the EC2 instance ID
4. `instance_public_ip` -- the public IP of the EC2 instance
5. `instance_public_dns` -- the public DNS name
6. `security_group_id` -- the security group ID

Apply your config and verify the outputs are printed at the end:
```bash
terraform apply

# After apply, you can also run:
terraform output                      # Show all outputs
terraform output instance_public_ip   # Show a specific output
terraform output -json                # JSON format for scripting
```

**Verify:** Does `terraform output instance_public_ip` return the correct IP?

---

### Task 4: Use Data Sources
Stop hardcoding the AMI ID. Use a data source to fetch it dynamically.

1. Add a `data "aws_ami"` block that:
   - Filters for Amazon Linux 2 images
   - Filters for `hvm` virtualization and `gp2` root device
   - Uses `owners = ["amazon"]`
   - Sets `most_recent = true`

2. Replace the hardcoded AMI in your `aws_instance` with `data.aws_ami.amazon_linux.id`

3. Add a `data "aws_availability_zones"` block to fetch available AZs in your region

4. Use the first AZ in your subnet: `data.aws_availability_zones.available.names[0]`

Apply and verify -- your config now works in any region without changing the AMI.

**Document:** What is the difference between a `resource` and a `data` source?
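One possible shape for the AMI lookup (the filter value assumes the Amazon Linux 2 naming scheme):

```hcl
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]   # Amazon Linux 2, HVM, gp2 root device
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Then reference it in your instance: ami = data.aws_ami.amazon_linux.id
```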
---

### Task 5: Use Locals for Dynamic Values
1. Add a `locals` block:
```hcl
locals {
  name_prefix = "${var.project_name}-${var.environment}"
  common_tags = {
    Project     = var.project_name
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}
```

2. Replace all Name tags with `local.name_prefix`:
   - VPC: `"${local.name_prefix}-vpc"`
   - Subnet: `"${local.name_prefix}-subnet"`
   - Instance: `"${local.name_prefix}-server"`

3. Merge common tags with resource-specific tags:
```hcl
tags = merge(local.common_tags, {
  Name = "${local.name_prefix}-server"
})
```

Apply and check the tags in the AWS console -- every resource should have consistent tagging.

---

### Task 6: Built-in Functions and Conditional Expressions
Practice these in `terraform console`:
```bash
terraform console
```

1. **String functions:**
   - `upper("terraweek")` -> `"TERRAWEEK"`
   - `join("-", ["terra", "week", "2026"])` -> `"terra-week-2026"`
   - `format("arn:aws:s3:::%s", "my-bucket")`

2. **Collection functions:**
   - `length(["a", "b", "c"])` -> `3`
   - `lookup({dev = "t2.micro", prod = "t3.small"}, "dev")` -> `"t2.micro"`
   - `toset(["a", "b", "a"])` -> removes duplicates

3. **Networking function:**
   - `cidrsubnet("10.0.0.0/16", 8, 1)` -> `"10.0.1.0/24"`

4. **Conditional expression** -- add this to your config:
```hcl
instance_type = var.environment == "prod" ? "t3.small" : "t2.micro"
```

Apply with `environment = "prod"` and verify the instance type changes.

**Document:** Pick five functions you find most useful and explain what each does.

---

## Hints
- `terraform.tfvars` is loaded automatically. Any other `.tfvars` file needs `-var-file`
- Variable precedence (low to high): default -> `TF_VAR_*` env vars -> `terraform.tfvars` -> `*.auto.tfvars` -> `-var-file` / `-var` flags (the last one set on the command line wins)
- `terraform console` is an interactive REPL for testing expressions and functions
- Data sources are read-only -- they fetch information, they don't create resources
- `merge()` combines two maps -- great for tags
- `terraform output -json` is useful when piping output into other scripts

---

## Documentation
Create `day-63-variables-outputs.md` with:
- Your `variables.tf` with all variable types
- Both `.tfvars` files (dev and prod)
- Screenshot of outputs after `terraform apply`
- Explanation of variable precedence with examples
- Five built-in functions you found most useful
- The difference between `variable`, `local`, `output`, and `data`

---

## Submission
1. Add `day-63-variables-outputs.md` to `2026/day-63/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Made my Terraform configs fully dynamic today -- variables for every environment, data sources for AMI lookups, locals for consistent tagging, and conditional expressions for environment-specific sizing. Zero hardcoded values."

`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git a/2026/day-64/README.md b/2026/day-64/README.md new file mode 100644 index 0000000000..397b4c8b96 --- /dev/null +++ b/2026/day-64/README.md @@ -0,0 +1,214 @@

# Day 64 -- Terraform State Management and Remote Backends

## Task
The state file is the single most important thing in Terraform. It is the source of truth -- the map between your `.tf` files and what actually exists in the cloud. Lose it and Terraform forgets everything. Corrupt it and your next apply could destroy production.

Today you learn to manage state like a professional -- remote backends, locking, importing existing resources, and handling drift.

---

## Expected Output
- Terraform state migrated from local to S3 remote backend with DynamoDB locking
- An existing AWS resource imported into Terraform state
- State drift simulated and reconciled
- A markdown file: `day-64-state-management.md`

---

## Challenge Tasks

### Task 1: Inspect Your Current State
Use your Day 63 config (or create a small config with a VPC and EC2 instance). Apply it and then explore the state:

```bash
terraform show                            # Full state in human-readable format
terraform state list                      # All resources tracked by Terraform
terraform state show aws_instance.<name>  # Every attribute of the instance
terraform state show aws_vpc.<name>       # Every attribute of the VPC
```

Answer:
1. How many resources does Terraform track?
2. What attributes does the state store for an EC2 instance? (hint: way more than what you defined)
3. Open `terraform.tfstate` in an editor -- find the `serial` number. What does it represent?

---

### Task 2: Set Up S3 Remote Backend
Storing state locally is dangerous -- one deleted file and you lose everything. Time to move it to S3.

1. First, create the backend infrastructure (do this manually or in a separate Terraform config):
```bash
# Create S3 bucket for state storage
aws s3api create-bucket \
  --bucket terraweek-state-<yourname> \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1

# Enable versioning (so you can recover previous state)
aws s3api put-bucket-versioning \
  --bucket terraweek-state-<yourname> \
  --versioning-configuration Status=Enabled

# Create DynamoDB table for state locking
aws dynamodb create-table \
  --table-name terraweek-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region ap-south-1
```

2. Add the backend block to your Terraform config:
```hcl
terraform {
  backend "s3" {
    bucket         = "terraweek-state-<yourname>"
    key            = "dev/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraweek-state-lock"
    encrypt        = true
  }
}
```

3. Run:
```bash
terraform init
```
Terraform will ask: "Do you want to copy existing state to the new backend?" -- say yes.

4. Verify:
   - Check the S3 bucket -- you should see `dev/terraform.tfstate`
   - Your local `terraform.tfstate` should now be empty or gone
   - Run `terraform plan` -- it should show no changes (state migrated correctly)

---

### Task 3: Test State Locking
State locking prevents two people from running `terraform apply` at the same time and corrupting the state.

1. Open **two terminals** in the same project directory
2. In Terminal 1, run:
```bash
terraform apply
```
3. While Terminal 1 is waiting for confirmation, in Terminal 2 run:
```bash
terraform plan
```
4. Terminal 2 should show a **lock error** with a Lock ID

**Document:** What is the error message? Why is locking critical for team environments?

5. After the test, if you get stuck with a stale lock:
```bash
terraform force-unlock <LOCK_ID>
```

---

### Task 4: Import an Existing Resource
Not everything starts with Terraform. Sometimes resources already exist in AWS and you need to bring them under Terraform management.

1. Manually create an S3 bucket in the AWS console -- name it `terraweek-import-test-<yourname>`
2. Write a `resource "aws_s3_bucket"` block in your config for this bucket (just the bucket name, nothing else)
3. Import it:
```bash
terraform import aws_s3_bucket.imported terraweek-import-test-<yourname>
```
4. Run `terraform plan`:
   - If you see "No changes" -- the import was perfect
   - If you see changes -- your config does not match reality. Update your config to match, then plan again until you get "No changes"

5. Run `terraform state list` -- the imported bucket should now appear alongside your other resources

**Document:** What is the difference between `terraform import` and creating a resource from scratch?
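Worth knowing: Terraform 1.5 and later also offer a declarative alternative to the CLI command -- an `import` block you commit alongside your config (the bucket name is the same placeholder as above):

```hcl
import {
  to = aws_s3_bucket.imported
  id = "terraweek-import-test-<yourname>"
}

# terraform plan -generate-config-out=generated.tf can even draft the
# matching resource block for you instead of writing it by hand.
```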
---

### Task 5: State Surgery -- mv and rm
Sometimes you need to rename a resource or remove it from state without destroying it in AWS.

1. **Rename a resource in state:**
```bash
terraform state list    # Note the current resource names
terraform state mv aws_s3_bucket.imported aws_s3_bucket.logs_bucket
```
Update your `.tf` file to match the new name. Run `terraform plan` -- it should show no changes.

2. **Remove a resource from state (without destroying it):**
```bash
terraform state rm aws_s3_bucket.logs_bucket
```
Run `terraform plan` -- Terraform no longer knows about the bucket, but it still exists in AWS.

3. **Re-import it** to bring it back:
```bash
terraform import aws_s3_bucket.logs_bucket terraweek-import-test-<yourname>
```

**Document:** When would you use `state mv` in a real project? When would you use `state rm`?

---

### Task 6: Simulate and Fix State Drift
State drift happens when someone changes infrastructure outside of Terraform -- through the AWS console, CLI, or another tool.

1. Apply your full config so everything is in sync
2. Go to the **AWS console** and manually:
   - Change the Name tag of your EC2 instance to `"ManuallyChanged"`
   - Change the instance type if it's stopped (or add a new tag)
3. Run:
```bash
terraform plan
```
You should see a **diff** -- Terraform detects that reality no longer matches the desired state.

4. You have two choices:
   - **Option A:** Run `terraform apply` to force reality back to match your config (reconcile)
   - **Option B:** Update your `.tf` files to match the manual change (accept the drift)

5. Choose Option A -- apply and verify the tags are restored.

6. Run `terraform plan` again -- it should show "No changes." Drift resolved.

**Document:** How do teams prevent state drift in production? (hint: restrict console access, use CI/CD for all changes)

---

## Hints
- S3 bucket names must be globally unique
- DynamoDB table must have a `LockID` string key -- this is what Terraform uses for locking
- `terraform init -migrate-state` explicitly triggers state migration
- `terraform refresh` (or `terraform apply -refresh-only`) updates state to match real infrastructure without making changes
- State locking only works with backends that support it (S3+DynamoDB, Consul, Terraform Cloud)
- `terraform force-unlock` should only be used when you are sure no other operation is running
- Always version your S3 bucket so you can recover a previous state file if something goes wrong

---

## Documentation
Create `day-64-state-management.md` with:
- Diagram: local state vs remote state setup
- Screenshot of state file in S3 bucket
- Screenshot of the lock error from Task 3
- Steps you followed for `terraform import` and the result
- Explanation of state drift with your real example
- When to use: `state mv`, `state rm`, `import`, `force-unlock`, `refresh`

---

## Submission
1. Add `day-64-state-management.md` to `2026/day-64/`
2. Commit and push to your fork

---

## Learn in Public
Share on LinkedIn: "Mastered Terraform state today -- migrated to S3 remote backend with DynamoDB locking, imported existing AWS resources, performed state surgery, and simulated drift. State management is the foundation of reliable infrastructure as code."

`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham`

Happy Learning!
**TrainWithShubham**

diff --git a/2026/day-65/README.md b/2026/day-65/README.md new file mode 100644 index 0000000000..5c9748866e --- /dev/null +++ b/2026/day-65/README.md @@ -0,0 +1,250 @@

# Day 65 -- Terraform Modules: Build Reusable Infrastructure

## Task
You have been writing everything in one big `main.tf` file. That works for learning, but in real teams you manage dozens of environments with hundreds of resources. Copy-pasting configs across projects is a recipe for disaster.

Today you learn Terraform modules -- the way to package, reuse, and share infrastructure code. Think of modules as functions in programming. Write once, call many times.

---

## Expected Output
- A custom EC2 module you built from scratch
- A custom security group module wired into the EC2 module
- A VPC created using the official public registry module
- A markdown file: `day-65-modules.md`

---

## Challenge Tasks

### Task 1: Understand Module Structure
A Terraform module is just a directory with `.tf` files. Create this structure:

```
terraform-modules/
  main.tf          # Root module -- calls child modules
  variables.tf     # Root variables
  outputs.tf       # Root outputs
  providers.tf     # Provider config
  modules/
    ec2-instance/
      main.tf        # EC2 resource definition
      variables.tf   # Module inputs
      outputs.tf     # Module outputs
    security-group/
      main.tf        # Security group resource definition
      variables.tf   # Module inputs
      outputs.tf     # Module outputs
```

Create all the directories and empty files. This is the standard layout every Terraform project follows.

**Document:** What is the difference between a "root module" and a "child module"?

---

### Task 2: Build a Custom EC2 Module
Create `modules/ec2-instance/`:

1. **`variables.tf`** -- define inputs:
   - `ami_id` (string)
   - `instance_type` (string, default: `"t2.micro"`)
   - `subnet_id` (string)
   - `security_group_ids` (list of strings)
   - `instance_name` (string)
   - `tags` (map of strings, default: `{}`)

2. **`main.tf`** -- define the resource:
   - `aws_instance` using all the variables
   - Merge the Name tag with additional tags

3. **`outputs.tf`** -- expose:
   - `instance_id`
   - `public_ip`
   - `private_ip`

Do NOT apply yet -- just write the module.

---

### Task 3: Build a Custom Security Group Module
Create `modules/security-group/`:

1. **`variables.tf`** -- define inputs:
   - `vpc_id` (string)
   - `sg_name` (string)
   - `ingress_ports` (list of numbers, default: `[22, 80]`)
   - `tags` (map of strings, default: `{}`)

2. **`main.tf`** -- define the resource:
   - `aws_security_group` in the given VPC
   - Use `dynamic "ingress"` block to create rules from the `ingress_ports` list
   - Allow all egress

3. **`outputs.tf`** -- expose:
   - `sg_id`

This is your first time using a `dynamic` block -- it loops over a list to generate repeated nested blocks (see the sketch below).
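A minimal sketch of that `dynamic` block inside the module's `main.tf` (the variable names match the inputs defined above; the open 0.0.0.0/0 ingress is for this exercise only):

```hcl
resource "aws_security_group" "this" {
  name   = var.sg_name
  vpc_id = var.vpc_id

  # One ingress block is generated per entry in var.ingress_ports
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"            # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = var.tags
}
```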
---

### Task 4: Call Your Modules from Root
In the root `main.tf`, wire everything together:

1. Create a VPC and subnet directly (or reuse your Day 62 config)
2. Call the security group module:
```hcl
module "web_sg" {
  source        = "./modules/security-group"
  vpc_id        = aws_vpc.main.id
  sg_name       = "terraweek-web-sg"
  ingress_ports = [22, 80, 443]
  tags          = local.common_tags
}
```

3. Call the EC2 module -- deploy **two instances** with different names using the same module:
```hcl
module "web_server" {
  source             = "./modules/ec2-instance"
  ami_id             = data.aws_ami.amazon_linux.id
  instance_type      = "t2.micro"
  subnet_id          = aws_subnet.public.id
  security_group_ids = [module.web_sg.sg_id]
  instance_name      = "terraweek-web"
  tags               = local.common_tags
}

module "api_server" {
  source             = "./modules/ec2-instance"
  ami_id             = data.aws_ami.amazon_linux.id
  instance_type      = "t2.micro"
  subnet_id          = aws_subnet.public.id
  security_group_ids = [module.web_sg.sg_id]
  instance_name      = "terraweek-api"
  tags               = local.common_tags
}
```

4. Add root outputs that reference module outputs:
```hcl
output "web_server_ip" {
  value = module.web_server.public_ip
}

output "api_server_ip" {
  value = module.api_server.public_ip
}
```

5. Apply:
```bash
terraform init    # Downloads/links the local modules
terraform plan    # Should show all resources from both module calls
terraform apply
```

**Verify:** Two EC2 instances running, same security group, different names. Check the AWS console.

---

### Task 5: Use a Public Registry Module
Instead of building your own VPC from scratch, use the official module from the Terraform Registry.

1. Replace your hand-written VPC resources with:
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "terraweek-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["ap-south-1a", "ap-south-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.3.0/24", "10.0.4.0/24"]

  enable_nat_gateway   = false
  enable_dns_hostnames = true

  tags = local.common_tags
}
```

2. Update your EC2 and SG module calls to reference `module.vpc.vpc_id` and `module.vpc.public_subnets[0]`

3. Run:
```bash
terraform init    # Downloads the registry module
terraform plan
terraform apply
```

4. Compare: how many resources did the VPC module create vs your hand-written VPC from Day 62?

**Document:** Where does Terraform download registry modules to? Check `.terraform/modules/`.

---

### Task 6: Module Versioning and Best Practices
1. Pin your registry module version explicitly:
   - `version = "5.1.0"` -- exact version
   - `version = "~> 5.0"` -- any 5.x version
   - `version = ">= 5.0, < 6.0"` -- range

2. Run `terraform init -upgrade` to check for newer versions

3. Check the state to see how modules appear:
```bash
terraform state list
```
Notice the `module.vpc.`, `module.web_server.`, `module.web_sg.` prefixes.
Destroy everything: +```bash +terraform destroy +``` + +**Document:** Write down five module best practices: +- Always pin versions for registry modules +- Keep modules focused -- one concern per module +- Use variables for everything, hardcode nothing +- Always define outputs so callers can reference resources +- Add a README.md to every custom module + +--- + +## Hints +- `terraform init` must be re-run after adding a new module source +- Module outputs are accessed as `module..` +- `dynamic` blocks use `content {}` inside to define the repeated block +- Registry modules document all inputs and outputs on registry.terraform.io +- Local modules use `source = "./modules/"`, registry modules use `source = "//"` +- `terraform get` downloads modules without full init + +--- + +## Documentation +Create `day-65-modules.md` with: +- Your custom module structure (directory tree) +- The `variables.tf`, `main.tf`, and `outputs.tf` for your EC2 module +- Root `main.tf` showing how you call both custom and registry modules +- Screenshot of both EC2 instances running from the same module +- Comparison: hand-written VPC vs registry VPC module (resources created) +- Five module best practices in your own words + +--- + +## Submission +1. Add `day-65-modules.md` to `2026/day-65/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Built my first custom Terraform modules today -- EC2 and security group modules called multiple times with different configs. Then replaced 50 lines of VPC code with one registry module. Modules are the key to scalable infrastructure as code." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-66/README.md b/2026/day-66/README.md new file mode 100644 index 0000000000..9b68645cd9 --- /dev/null +++ b/2026/day-66/README.md @@ -0,0 +1,285 @@ +# Day 66 -- Provision an EKS Cluster with Terraform Modules + +## Task +You built Kubernetes clusters manually in the Kubernetes week. Today you provision one the DevOps way -- fully automated, repeatable, and destroyable with a single command. You will use Terraform registry modules to create an AWS EKS cluster with a managed node group, connect kubectl, and deploy a workload. + +This is what infrastructure teams do every day in production. + +--- + +## Expected Output +- A running EKS cluster on AWS provisioned entirely through Terraform +- kubectl connected to the cluster with nodes visible +- An Nginx deployment running on the cluster +- A markdown file: `day-66-eks-terraform.md` +- Everything destroyed cleanly after the exercise + +--- + +## Challenge Tasks + +### Task 1: Project Setup +Create a new project directory with proper file structure: + +``` +terraform-eks/ + providers.tf # Provider and backend config + vpc.tf # VPC module call + eks.tf # EKS module call + variables.tf # All input variables + outputs.tf # Cluster outputs + terraform.tfvars # Variable values +``` + +In `providers.tf`: +1. Pin the AWS provider to `~> 5.0` +2. Pin the Kubernetes provider (you will need it later) +3. 
Set your region + +In `variables.tf`, define: +- `region` (string) +- `cluster_name` (string, default: `"terraweek-eks"`) +- `cluster_version` (string, default: `"1.31"`) +- `node_instance_type` (string, default: `"t3.medium"`) +- `node_desired_count` (number, default: `2`) +- `vpc_cidr` (string, default: `"10.0.0.0/16"`) + +--- + +### Task 2: Create the VPC with Registry Module +EKS requires a VPC with both public and private subnets across multiple availability zones. + +In `vpc.tf`, use the `terraform-aws-modules/vpc/aws` module: +1. CIDR: `var.vpc_cidr` +2. At least 2 availability zones +3. 2 public subnets and 2 private subnets +4. Enable NAT gateway (single NAT to save cost): `enable_nat_gateway = true`, `single_nat_gateway = true` +5. Enable DNS hostnames: `enable_dns_hostnames = true` +6. Add the required EKS tags on subnets: +```hcl +public_subnet_tags = { + "kubernetes.io/role/elb" = 1 +} + +private_subnet_tags = { + "kubernetes.io/role/internal-elb" = 1 +} +``` + +Run `terraform init` and `terraform plan` to verify the VPC config before moving on. + +**Document:** Why does EKS need both public and private subnets? What do the subnet tags do? + +--- + +### Task 3: Create the EKS Cluster with Registry Module +In `eks.tf`, use the `terraform-aws-modules/eks/aws` module: + +```hcl +module "eks" { + source = "terraform-aws-modules/eks/aws" + version = "~> 20.0" + + cluster_name = var.cluster_name + cluster_version = var.cluster_version + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnets + + cluster_endpoint_public_access = true + + eks_managed_node_groups = { + terraweek_nodes = { + ami_type = "AL2_x86_64" + instance_types = [var.node_instance_type] + + min_size = 1 + max_size = 3 + desired_size = var.node_desired_count + } + } + + tags = { + Environment = "dev" + Project = "TerraWeek" + ManagedBy = "Terraform" + } +} +``` + +Run: +```bash +terraform init # Download EKS module and its dependencies +terraform plan # Review -- this will create 30+ resources +``` + +Review the plan carefully before applying. You should see: EKS cluster, IAM roles, node group, security groups, and more. + +--- + +### Task 4: Apply and Connect kubectl +1. Apply the config: +```bash +terraform apply +``` +This will take 10-15 minutes. EKS cluster creation is slow -- be patient. + +2. Add outputs in `outputs.tf`: +```hcl +output "cluster_name" { + value = module.eks.cluster_name +} + +output "cluster_endpoint" { + value = module.eks.cluster_endpoint +} + +output "cluster_region" { + value = var.region +} +``` + +3. Update your kubeconfig: +```bash +aws eks update-kubeconfig --name terraweek-eks --region +``` + +4. Verify: +```bash +kubectl get nodes +kubectl get pods -A +kubectl cluster-info +``` + +**Verify:** Do you see 2 nodes in `Ready` state? Can you see the kube-system pods running? + +--- + +### Task 5: Deploy a Workload on the Cluster +Your Terraform-provisioned cluster is live. Deploy something on it. + +1. Create a file `k8s/nginx-deployment.yaml`: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-terraweek + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx-service +spec: + type: LoadBalancer + selector: + app: nginx + ports: + - port: 80 + targetPort: 80 +``` + +2. 
Apply: +```bash +kubectl apply -f k8s/nginx-deployment.yaml +``` + +3. Wait for the LoadBalancer to get an external IP: +```bash +kubectl get svc nginx-service -w +``` + +4. Access the Nginx page via the LoadBalancer URL + +5. Verify the full picture: +```bash +kubectl get nodes +kubectl get deployments +kubectl get pods +kubectl get svc +``` + +**Verify:** Can you access the Nginx welcome page through the LoadBalancer URL? + +--- + +### Task 6: Destroy Everything +This is the most important step. EKS clusters cost money. Clean up completely. + +1. First, remove the Kubernetes resources (so the AWS LoadBalancer gets deleted): +```bash +kubectl delete -f k8s/nginx-deployment.yaml +``` + +2. Wait for the LoadBalancer to be fully removed (check EC2 > Load Balancers in AWS console) + +3. Destroy all Terraform resources: +```bash +terraform destroy +``` +This will take 10-15 minutes. + +4. Verify in the AWS console: + - EKS clusters: empty + - EC2 instances: no node group instances + - VPC: the terraweek VPC should be gone + - NAT Gateways: deleted + - Elastic IPs: released + +**Verify:** Is your AWS account completely clean? No leftover resources? + +--- + +## Hints +- EKS creation takes 10-15 minutes, destruction takes about the same -- plan your time +- Always delete Kubernetes LoadBalancer services before `terraform destroy`, otherwise the ELB will block VPC deletion +- If `terraform destroy` gets stuck, check for leftover ENIs or security groups in the VPC +- `t3.medium` is the minimum recommended instance type for EKS nodes +- The EKS module creates IAM roles automatically -- you don't need to create them manually +- If you see `Unauthorized` with kubectl, re-run the `aws eks update-kubeconfig` command +- Use `kubectl get events --sort-by=.metadata.creationTimestamp` to debug pod issues +- Cost warning: NAT Gateway charges ~$0.045/hour. Destroy when done. + +--- + +## Documentation +Create `day-66-eks-terraform.md` with: +- Your complete file structure and key config files +- Screenshot of `terraform apply` completing +- Screenshot of `kubectl get nodes` showing the managed node group +- Screenshot of Nginx running on the cluster +- How many resources Terraform created in total (check the apply output) +- The destroy process and verification +- Reflection: compare this to manually setting up a cluster with kind/minikube (Day 50) + +--- + +## Submission +1. Add `day-66-eks-terraform.md` to `2026/day-66/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Provisioned a full AWS EKS cluster with Terraform modules today -- VPC, subnets, NAT gateway, IAM roles, node groups, the works. 30+ resources created with one command, deployed Nginx on it, and destroyed everything cleanly. This is real-world infrastructure as code." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-67/README.md b/2026/day-67/README.md new file mode 100644 index 0000000000..589e586729 --- /dev/null +++ b/2026/day-67/README.md @@ -0,0 +1,326 @@ +# Day 67 -- TerraWeek Capstone: Multi-Environment Infrastructure with Workspaces and Modules + +## Task +Seven days of Terraform -- HCL, providers, resources, dependencies, variables, outputs, data sources, state management, remote backends, custom modules, registry modules, and a full EKS cluster. Today you put it all together in one production-grade project. + +Build a multi-environment AWS infrastructure using custom modules and Terraform workspaces. 
One codebase, three environments -- dev, staging, and prod. This is how infrastructure teams operate at scale. + +--- + +## Expected Output +- A complete Terraform project with custom modules and proper file structure +- Three separate environments (dev, staging, prod) deployed using workspaces +- Each environment with its own VPC, security group, and EC2 instance with different sizing +- A markdown file: `day-67-terraweek-capstone.md` +- Everything destroyed cleanly after verification + +--- + +## Challenge Tasks + +### Task 1: Learn Terraform Workspaces +Before building the project, understand workspaces: + +```bash +mkdir terraweek-capstone && cd terraweek-capstone +terraform init + +# See current workspace +terraform workspace show # default + +# Create new workspaces +terraform workspace new dev +terraform workspace new staging +terraform workspace new prod + +# List all workspaces +terraform workspace list + +# Switch between them +terraform workspace select dev +terraform workspace select staging +terraform workspace select prod +``` + +Answer: +1. What does `terraform.workspace` return inside a config? +2. Where does each workspace store its state file? +3. How is this different from using separate directories per environment? + +--- + +### Task 2: Set Up the Project Structure +Create this layout: + +``` +terraweek-capstone/ + main.tf # Root module -- calls child modules + variables.tf # Root variables + outputs.tf # Root outputs + providers.tf # AWS provider and backend + locals.tf # Local values using workspace + dev.tfvars # Dev environment values + staging.tfvars # Staging environment values + prod.tfvars # Prod environment values + .gitignore # Ignore state, .terraform, tfvars with secrets + modules/ + vpc/ + main.tf + variables.tf + outputs.tf + security-group/ + main.tf + variables.tf + outputs.tf + ec2-instance/ + main.tf + variables.tf + outputs.tf +``` + +Create the `.gitignore`: +``` +.terraform/ +*.tfstate +*.tfstate.backup +*.tfvars +.terraform.lock.hcl +``` + +**Document:** Why is this file structure considered best practice? + +--- + +### Task 3: Build the Custom Modules +Create three focused modules: + +**Module 1: `modules/vpc/`** +- Input: `cidr`, `public_subnet_cidr`, `environment`, `project_name` +- Resources: VPC, public subnet, internet gateway, route table, route table association +- Output: `vpc_id`, `subnet_id` +- All resources tagged with environment and project name + +**Module 2: `modules/security-group/`** +- Input: `vpc_id`, `ingress_ports`, `environment`, `project_name` +- Resources: Security group with dynamic ingress rules, allow all egress +- Output: `sg_id` + +**Module 3: `modules/ec2-instance/`** +- Input: `ami_id`, `instance_type`, `subnet_id`, `security_group_ids`, `environment`, `project_name` +- Resources: EC2 instance with tags +- Output: `instance_id`, `public_ip` + +Write and validate each module: +```bash +terraform validate +``` + +--- + +### Task 4: Wire It All Together with Workspace-Aware Config +In the root module, use `terraform.workspace` to drive environment-specific behavior. 
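Before the required files, here is a sketch of the idea (the map and local names here are illustrative, not part of the task): because `terraform.workspace` is just a string, you can key a map on it and let the active workspace pick the value.

```hcl
# Sketch only -- workspace-keyed sizing without tfvars files.
# Keys must match the workspace names you created in Task 1.
locals {
  type_per_workspace = {
    dev     = "t2.micro"
    staging = "t2.small"
    prod    = "t3.small"
  }

  # Fall back to t2.micro for any other workspace (e.g. default).
  picked_instance_type = lookup(local.type_per_workspace, terraform.workspace, "t2.micro")
}
```

The task itself uses per-environment tfvars files instead, which keeps the values visible at a glance -- both patterns are common in real projects.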
+ +**`locals.tf`:** +```hcl +locals { + environment = terraform.workspace + name_prefix = "${var.project_name}-${local.environment}" + + common_tags = { + Project = var.project_name + Environment = local.environment + ManagedBy = "Terraform" + Workspace = terraform.workspace + } +} +``` + +**`variables.tf`:** +```hcl +variable "project_name" { + type = string + default = "terraweek" +} + +variable "vpc_cidr" { + type = string +} + +variable "subnet_cidr" { + type = string +} + +variable "instance_type" { + type = string +} + +variable "ingress_ports" { + type = list(number) + default = [22, 80] +} +``` + +**`main.tf`** -- call all three modules, passing workspace-aware names and variables. + +**Environment-specific tfvars:** + +`dev.tfvars`: +```hcl +vpc_cidr = "10.0.0.0/16" +subnet_cidr = "10.0.1.0/24" +instance_type = "t2.micro" +ingress_ports = [22, 80] +``` + +`staging.tfvars`: +```hcl +vpc_cidr = "10.1.0.0/16" +subnet_cidr = "10.1.1.0/24" +instance_type = "t2.small" +ingress_ports = [22, 80, 443] +``` + +`prod.tfvars`: +```hcl +vpc_cidr = "10.2.0.0/16" +subnet_cidr = "10.2.1.0/24" +instance_type = "t3.small" +ingress_ports = [80, 443] +``` + +Notice: dev allows SSH, prod does not. Different CIDRs prevent overlap. Instance types scale up per environment. + +--- + +### Task 5: Deploy All Three Environments +Deploy each environment using its workspace and tfvars file: + +**Dev:** +```bash +terraform workspace select dev +terraform plan -var-file="dev.tfvars" +terraform apply -var-file="dev.tfvars" +``` + +**Staging:** +```bash +terraform workspace select staging +terraform plan -var-file="staging.tfvars" +terraform apply -var-file="staging.tfvars" +``` + +**Prod:** +```bash +terraform workspace select prod +terraform plan -var-file="prod.tfvars" +terraform apply -var-file="prod.tfvars" +``` + +After all three are deployed, verify: +```bash +# Check each workspace's resources +terraform workspace select dev && terraform output +terraform workspace select staging && terraform output +terraform workspace select prod && terraform output +``` + +Go to the AWS console and verify: +- Three separate VPCs with different CIDR ranges +- Three EC2 instances with different instance types +- Different Name tags per environment: `terraweek-dev-server`, `terraweek-staging-server`, `terraweek-prod-server` + +**Verify:** Are all three environments completely isolated from each other? + +--- + +### Task 6: Document Best Practices +Write down everything you have learned this week as a Terraform best practices guide: + +1. **File structure** -- separate files for providers, variables, outputs, main, locals +2. **State management** -- always use remote backend, enable locking, enable versioning +3. **Variables** -- never hardcode, use tfvars per environment, validate with `validation` blocks +4. **Modules** -- one concern per module, always define inputs/outputs, pin registry module versions +5. **Workspaces** -- use for environment isolation, reference `terraform.workspace` in configs +6. **Security** -- .gitignore for state and tfvars, encrypt state at rest, restrict backend access +7. **Commands** -- always run `plan` before `apply`, use `fmt` and `validate` before committing +8. **Tagging** -- tag every resource with project, environment, and managed-by +9. **Naming** -- consistent prefix pattern: `--` +10. 
**Cleanup** -- always `terraform destroy` non-production environments when not in use + +--- + +### Task 7: Destroy All Environments +Clean up all three environments in reverse order: + +```bash +terraform workspace select prod +terraform destroy -var-file="prod.tfvars" + +terraform workspace select staging +terraform destroy -var-file="staging.tfvars" + +terraform workspace select dev +terraform destroy -var-file="dev.tfvars" +``` + +Verify in the AWS console -- all VPCs, instances, security groups, and gateways should be gone. + +Delete the workspaces: +```bash +terraform workspace select default +terraform workspace delete dev +terraform workspace delete staging +terraform workspace delete prod +``` + +**Verify:** Is your AWS account completely clean? + +--- + +## Hints +- Each workspace has its own state file -- `terraform.tfstate.d//terraform.tfstate` +- `terraform.workspace` is a built-in variable available in any config +- You cannot delete a workspace you are currently on -- switch to `default` first +- Different VPC CIDRs per environment prevent accidental peering conflicts +- `terraform plan -var-file` does NOT auto-load `terraform.tfvars` when you specify `-var-file` +- If you forget which workspace you are on: `terraform workspace show` +- Workspaces work with remote backends too -- S3 key becomes `env://terraform.tfstate` + +--- + +## Documentation +Create `day-67-terraweek-capstone.md` with: +- Your complete project structure (directory tree) +- All three custom module configs +- Root `main.tf` showing workspace-aware module calls +- All three tfvars files with the differences highlighted +- Screenshot of all three environments running simultaneously in AWS +- Screenshot of `terraform output` from each workspace +- Your Terraform best practices guide (Task 6) +- A table mapping each TerraWeek day to the concepts learned: + +| Day | Concepts | +|-----|----------| +| 61 | IaC, HCL, init/plan/apply/destroy, state basics | +| 62 | Providers, resources, dependencies, lifecycle | +| 63 | Variables, outputs, data sources, locals, functions | +| 64 | Remote backend, locking, import, drift | +| 65 | Custom modules, registry modules, versioning | +| 66 | EKS with modules, real-world provisioning | +| 67 | Workspaces, multi-env, capstone project | + +--- + +## Submission +1. Add `day-67-terraweek-capstone.md` to `2026/day-67/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Completed the TerraWeek Challenge -- seven days from terraform init to a full multi-environment infrastructure project. Custom modules for VPC, security groups, and EC2. Three environments deployed with workspaces. One codebase, three isolated environments, zero console clicks." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-68/README.md b/2026/day-68/README.md new file mode 100644 index 0000000000..10cbc5c279 --- /dev/null +++ b/2026/day-68/README.md @@ -0,0 +1,242 @@ +# Day 68 -- Introduction to Ansible and Inventory Setup + +## Task +Terraform provisions infrastructure. But who installs packages, configures services, manages users, and keeps servers in the desired state after they exist? That is the job of a configuration management tool, and Ansible is the industry standard. + +Today you install Ansible, set up an inventory of servers, and run your first ad-hoc commands -- all without installing a single agent on the target machines. Ansible is agentless. SSH is all it needs. 
+ +--- + +## Expected Output +- Ansible installed on your control node +- 2-3 EC2 instances running as managed nodes +- A working inventory file with grouped hosts +- Successful ad-hoc commands run against remote servers +- A markdown file: `day-68-ansible-intro.md` + +--- + +## Challenge Tasks + +### Task 1: Understand Ansible +Research and write short notes on: + +1. What is configuration management? Why do we need it? +2. How is Ansible different from Chef, Puppet, and Salt? +3. What does "agentless" mean? How does Ansible connect to managed nodes? +4. Draw or describe the Ansible architecture: + - **Control Node** -- the machine where Ansible runs (your laptop or a jump server) + - **Managed Nodes** -- the servers Ansible configures (your EC2 instances) + - **Inventory** -- the list of managed nodes + - **Modules** -- units of work Ansible executes (install a package, copy a file, start a service) + - **Playbooks** -- YAML files that define what to do on which hosts + +--- + +### Task 2: Set Up Your Lab Environment +You need 2-3 EC2 instances to practice on. Choose one approach: + +**Option A: Use Terraform (recommended -- you just learned this)** +Use your TerraWeek skills to provision 3 EC2 instances with: +- Amazon Linux 2 or Ubuntu 22.04 +- `t2.micro` instance type +- A security group allowing SSH (port 22) +- A key pair for SSH access + +**Option B: Launch manually from AWS Console** +Create 3 instances with the same specs above. + +Label them mentally: +- **Instance 1:** web server +- **Instance 2:** app server +- **Instance 3:** db server + +Verify you can SSH into each one from your control node: +```bash +ssh -i ~/your-key.pem ec2-user@ +ssh -i ~/your-key.pem ec2-user@ +ssh -i ~/your-key.pem ec2-user@ +``` + +--- + +### Task 3: Install Ansible +Install Ansible on your **control node** (your laptop or one dedicated EC2 instance): + +```bash +# macOS +brew install ansible + +# Ubuntu/Debian +sudo apt update +sudo apt install ansible -y + +# Amazon Linux / RHEL +sudo yum install ansible -y +# or +pip3 install ansible + +# Verify +ansible --version +``` + +Confirm the output shows the Ansible version, config file path, and Python version. + +**Document:** On which machine did you install Ansible? Why is it only needed on the control node? + +--- + +### Task 4: Create Your Inventory File +The inventory tells Ansible which servers to manage. Create a project directory and your first inventory: + +```bash +mkdir ansible-practice && cd ansible-practice +``` + +Create a file called `inventory.ini`: +```ini +[web] +web-server ansible_host= + +[app] +app-server ansible_host= + +[db] +db-server ansible_host= + +[all:vars] +ansible_user=ec2-user +ansible_ssh_private_key_file=~/your-key.pem +``` + +Verify Ansible can reach all hosts: +```bash +ansible all -i inventory.ini -m ping +``` + +You should see green `SUCCESS` with `"ping": "pong"` for each host. + +**Troubleshoot:** If ping fails: +- Check the SSH key path and permissions (`chmod 400 your-key.pem`) +- Check the security group allows SSH from your IP +- Check the `ansible_user` matches your AMI (ec2-user for Amazon Linux, ubuntu for Ubuntu) + +--- + +### Task 5: Run Ad-Hoc Commands +Ad-hoc commands let you run quick one-off tasks without writing a playbook. + +1. **Check uptime on all servers:** +```bash +ansible all -i inventory.ini -m command -a "uptime" +``` + +2. **Check free memory on web servers only:** +```bash +ansible web -i inventory.ini -m command -a "free -h" +``` + +3. 
**Check disk space on all servers:** +```bash +ansible all -i inventory.ini -m command -a "df -h" +``` + +4. **Install a package on the web group:** +```bash +ansible web -i inventory.ini -m yum -a "name=git state=present" --become +``` +(Use `apt` instead of `yum` if running Ubuntu) + +5. **Copy a file to all servers:** +```bash +echo "Hello from Ansible" > hello.txt +ansible all -i inventory.ini -m copy -a "src=hello.txt dest=/tmp/hello.txt" +``` + +6. **Verify the file was copied:** +```bash +ansible all -i inventory.ini -m command -a "cat /tmp/hello.txt" +``` + +**Document:** What does `--become` do? When do you need it? + +--- + +### Task 6: Explore Inventory Groups and Patterns +1. **Create a group of groups** -- add this to your `inventory.ini`: +```ini +[application:children] +web +app + +[all_servers:children] +application +db +``` + +2. Run commands against different groups: +```bash +ansible application -i inventory.ini -m ping # web + app servers +ansible db -i inventory.ini -m ping # only db server +ansible all_servers -i inventory.ini -m ping # everything +``` + +3. **Use patterns:** +```bash +ansible 'web:app' -i inventory.ini -m ping # OR: web or app +ansible 'all:!db' -i inventory.ini -m ping # NOT: all except db +``` + +4. **Create an `ansible.cfg`** to avoid typing `-i inventory.ini` every time: +```ini +[defaults] +inventory = inventory.ini +host_key_checking = False +remote_user = ec2-user +private_key_file = ~/your-key.pem +``` + +Now you can simply run: +```bash +ansible all -m ping +``` + +**Verify:** Does `ansible all -m ping` work without specifying the inventory file? + +--- + +## Hints +- Ansible uses SSH by default -- no agent installation needed on managed nodes +- `ansible.cfg` is read from the current directory first, then `~/.ansible.cfg`, then `/etc/ansible/ansible.cfg` +- `-m` specifies the module, `-a` specifies the module arguments +- `--become` escalates to root (like `sudo`) -- needed for package installation and service management +- `command` module runs simple commands, `shell` module supports pipes and redirects +- Host key checking can cause issues on first connection -- `host_key_checking = False` in config helps during practice +- Ad-hoc commands are great for quick tasks, but playbooks are better for anything repeatable + +--- + +## Documentation +Create `day-68-ansible-intro.md` with: +- Ansible architecture in your own words +- How you set up your lab (Terraform or manual, with instance details) +- Your `inventory.ini` file (redact IPs if sharing publicly) +- Screenshot of `ansible all -m ping` with all green results +- Five ad-hoc commands you ran and their outputs +- Difference between `command` and `shell` modules + +--- + +## Submission +1. Add `day-68-ansible-intro.md` to `2026/day-68/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Started the Ansible journey today -- set up a control node, created an inventory with three EC2 instances, and ran ad-hoc commands to manage all servers from one terminal. No agents installed anywhere. Ansible just works over SSH." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-69/README.md b/2026/day-69/README.md new file mode 100644 index 0000000000..916700b64d --- /dev/null +++ b/2026/day-69/README.md @@ -0,0 +1,349 @@ +# Day 69 -- Ansible Playbooks and Modules + +## Task +Ad-hoc commands are useful for quick checks, but real automation lives in playbooks. 
A playbook is a YAML file that describes the desired state of your servers -- which packages to install, which services to run, which files to place where. You write it once, run it a hundred times, and get the same result every time. + +Today you write your first playbooks and learn the modules that you will use on every project. + +--- + +## Expected Output +- Multiple playbooks that install packages, manage services, and configure files +- A clear understanding of plays, tasks, modules, and handlers +- A markdown file: `day-69-playbooks.md` + +--- + +## Challenge Tasks + +### Task 1: Your First Playbook +Create `install-nginx.yml`: + +```yaml +--- +- name: Install and start Nginx on web servers + hosts: web + become: true + + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + + - name: Start and enable Nginx + service: + name: nginx + state: started + enabled: true + + - name: Create a custom index page + copy: + content: "

<h1>Deployed by Ansible - TerraWeek Server</h1>
" + dest: /usr/share/nginx/html/index.html +``` + +(Use `apt` instead of `yum` if your instances run Ubuntu) + +Run it: +```bash +ansible-playbook install-nginx.yml +``` + +Read the output carefully -- every task shows `changed`, `ok`, or `failed`. + +Now run it **again**. Notice that tasks show `ok` instead of `changed`. This is **idempotency** -- Ansible only makes changes when needed. + +**Verify:** Curl the web server's public IP. Do you see your custom page? + +--- + +### Task 2: Understand the Playbook Structure +Open your playbook and annotate each part in your notes: + +```yaml +--- # YAML document start +- name: Play name # PLAY -- targets a group of hosts + hosts: web # Which inventory group to run on + become: true # Run tasks as root (sudo) + + tasks: # List of TASKS in this play + - name: Task name # TASK -- one unit of work + module_name: # MODULE -- what Ansible does + key: value # Module arguments +``` + +Answer: +1. What is the difference between a play and a task? +2. Can you have multiple plays in one playbook? +3. What does `become: true` do at the play level vs the task level? +4. What happens if a task fails -- do remaining tasks still run? + +--- + +### Task 3: Learn the Essential Modules +Practice each of these modules by writing a playbook called `essential-modules.yml` with multiple tasks: + +1. **`yum`/`apt`** -- Install and remove packages: +```yaml +- name: Install multiple packages + yum: + name: + - git + - curl + - wget + - tree + state: present +``` + +2. **`service`** -- Manage services: +```yaml +- name: Ensure Nginx is running + service: + name: nginx + state: started + enabled: true +``` + +3. **`copy`** -- Copy files from control node to managed nodes: +```yaml +- name: Copy config file + copy: + src: files/app.conf + dest: /etc/app.conf + owner: root + group: root + mode: '0644' +``` + +4. **`file`** -- Create directories and manage permissions: +```yaml +- name: Create application directory + file: + path: /opt/myapp + state: directory + owner: ec2-user + mode: '0755' +``` + +5. **`command`** -- Run a command (no shell features): +```yaml +- name: Check disk space + command: df -h + register: disk_output + +- name: Print disk space + debug: + var: disk_output.stdout_lines +``` + +6. **`shell`** -- Run a command with shell features (pipes, redirects): +```yaml +- name: Count running processes + shell: ps aux | wc -l + register: process_count + +- name: Show process count + debug: + msg: "Total processes: {{ process_count.stdout }}" +``` + +7. **`lineinfile`** -- Add or modify a single line in a file: +```yaml +- name: Set timezone in environment + lineinfile: + path: /etc/environment + line: 'TZ=Asia/Kolkata' + create: true +``` + +Create a `files/` directory with a sample `app.conf` file for the copy task. Run the playbook against all servers. + +**Document:** What is the difference between `command` and `shell`? When should you use each? + +--- + +### Task 4: Handlers -- Restart Services Only When Needed +Handlers are tasks that run only when triggered by a `notify`. This avoids unnecessary service restarts. + +Create `nginx-config.yml`: +```yaml +--- +- name: Configure Nginx with a custom config + hosts: web + become: true + + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + + - name: Deploy Nginx config + copy: + src: files/nginx.conf + dest: /etc/nginx/nginx.conf + owner: root + mode: '0644' + notify: Restart Nginx + + - name: Deploy custom index page + copy: + content: "
<h1>Managed by Ansible</h1>
<p>Server: {{ inventory_hostname }}</p>
" + dest: /usr/share/nginx/html/index.html + + - name: Ensure Nginx is running + service: + name: nginx + state: started + enabled: true + + handlers: + - name: Restart Nginx + service: + name: nginx + state: restarted +``` + +Create `files/nginx.conf` with a basic Nginx config. + +Run the playbook: +- First run: handler triggers because the config file is new +- Second run: handler does NOT trigger because nothing changed + +**Verify:** Run it twice and compare the output. Does the handler run both times? + +--- + +### Task 5: Dry Run, Diff, and Verbosity +Before running playbooks on production, always preview changes first. + +1. **Dry run (check mode)** -- shows what would change without changing anything: +```bash +ansible-playbook install-nginx.yml --check +``` + +2. **Diff mode** -- shows the actual file differences: +```bash +ansible-playbook nginx-config.yml --check --diff +``` + +3. **Verbosity** -- increase output detail for debugging: +```bash +ansible-playbook install-nginx.yml -v # verbose +ansible-playbook install-nginx.yml -vv # more verbose +ansible-playbook install-nginx.yml -vvv # connection debugging +``` + +4. **Limit to specific hosts:** +```bash +ansible-playbook install-nginx.yml --limit web-server +``` + +5. **List what would be affected without running:** +```bash +ansible-playbook install-nginx.yml --list-hosts +ansible-playbook install-nginx.yml --list-tasks +``` + +**Document:** Why is `--check --diff` the most important flag combination for production use? + +--- + +### Task 6: Multiple Plays in One Playbook +Write `multi-play.yml` with separate plays for each server group: + +```yaml +--- +- name: Configure web servers + hosts: web + become: true + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + - name: Start Nginx + service: + name: nginx + state: started + enabled: true + +- name: Configure app servers + hosts: app + become: true + tasks: + - name: Install Node.js dependencies + yum: + name: + - gcc + - make + state: present + - name: Create app directory + file: + path: /opt/app + state: directory + mode: '0755' + +- name: Configure database servers + hosts: db + become: true + tasks: + - name: Install MySQL client + yum: + name: mysql + state: present + - name: Create data directory + file: + path: /var/lib/appdata + state: directory + mode: '0700' +``` + +Run it: +```bash +ansible-playbook multi-play.yml +``` + +Watch the output -- each play targets a different group, and tasks run only on the relevant hosts. + +**Verify:** Is Nginx only installed on web servers? Is MySQL only on db servers? 
+ +--- + +## Hints +- YAML indentation matters -- use 2 spaces, never tabs +- `state: present` means "install if not already installed", `state: absent` means "remove" +- `state: started` means "start if not running", `state: restarted` means "always restart" +- Handlers run once at the end of all tasks, even if notified multiple times +- `register` saves a task's output to a variable, `debug` prints it +- `{{ inventory_hostname }}` is a built-in variable that returns the current host's name +- `ansible-playbook --syntax-check playbook.yml` validates YAML syntax before running +- Always test with `--check --diff` before applying to production + +--- + +## Documentation +Create `day-69-playbooks.md` with: +- Your first playbook with annotations explaining each section +- All seven module examples with what each does +- Screenshot of the playbook run showing changed vs ok tasks +- Screenshot proving idempotency (second run shows all ok) +- How handlers work with a before/after comparison +- Difference between `--check`, `--diff`, and `-v` + +--- + +## Submission +1. Add `day-69-playbooks.md` to `2026/day-69/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Wrote my first Ansible playbooks today -- installed Nginx, managed services, copied files, and learned handlers. Ran the same playbook twice and it made zero changes the second time. Idempotency is beautiful." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-70/README.md b/2026/day-70/README.md new file mode 100644 index 0000000000..9e6d12869f --- /dev/null +++ b/2026/day-70/README.md @@ -0,0 +1,407 @@ +# Day 70 -- Variables, Facts, Conditionals and Loops + +## Task +Your playbooks work, but they are static -- same packages, same config, same behavior on every server. Real infrastructure is not like that. Web servers need Nginx, app servers need Node.js, production gets more memory than dev. Today you make your playbooks smart. + +Variables, facts, conditionals, and loops turn a rigid script into flexible automation that adapts to each host, each group, and each environment. + +--- + +## Expected Output +- Playbooks using variables from multiple sources +- Conditional tasks that run only on specific OS or groups +- Loops that install packages and create users dynamically +- A markdown file: `day-70-variables-loops.md` + +--- + +## Challenge Tasks + +### Task 1: Variables in Playbooks +Create `variables-demo.yml`: + +```yaml +--- +- name: Variable demo + hosts: all + become: true + + vars: + app_name: terraweek-app + app_port: 8080 + app_dir: "/opt/{{ app_name }}" + packages: + - git + - curl + - wget + + tasks: + - name: Print app details + debug: + msg: "Deploying {{ app_name }} on port {{ app_port }} to {{ app_dir }}" + + - name: Create application directory + file: + path: "{{ app_dir }}" + state: directory + mode: '0755' + + - name: Install required packages + yum: + name: "{{ packages }}" + state: present +``` + +Run it and verify the variables resolve correctly. + +Now, override a variable from the command line: +```bash +ansible-playbook variables-demo.yml -e "app_name=my-custom-app app_port=9090" +``` + +**Verify:** Does the CLI variable override the playbook variable? + +--- + +### Task 2: group_vars and host_vars +Variables should not live inside playbooks. Move them to dedicated files. 
+ +Create this structure: +``` +ansible-practice/ + inventory.ini + ansible.cfg + group_vars/ + all.yml + web.yml + db.yml + host_vars/ + web-server.yml + playbooks/ + site.yml +``` + +**`group_vars/all.yml`** -- applies to every host: +```yaml +--- +ntp_server: pool.ntp.org +app_env: development +common_packages: + - vim + - htop + - tree +``` + +**`group_vars/web.yml`** -- applies only to the web group: +```yaml +--- +http_port: 80 +max_connections: 1000 +web_packages: + - nginx +``` + +**`group_vars/db.yml`** -- applies only to the db group: +```yaml +--- +db_port: 3306 +db_packages: + - mysql-server +``` + +**`host_vars/web-server.yml`** -- applies only to this specific host: +```yaml +--- +max_connections: 2000 +custom_message: "This is the primary web server" +``` + +Write a playbook `site.yml` that uses these variables: +```yaml +--- +- name: Apply common config + hosts: all + become: true + tasks: + - name: Install common packages + yum: + name: "{{ common_packages }}" + state: present + - name: Show environment + debug: + msg: "Environment: {{ app_env }}" + +- name: Configure web servers + hosts: web + become: true + tasks: + - name: Show web config + debug: + msg: "HTTP port: {{ http_port }}, Max connections: {{ max_connections }}" + - name: Show host-specific message + debug: + msg: "{{ custom_message }}" +``` + +Run it and observe which variables apply to which hosts. + +**Document:** What is the variable precedence? (hint: host_vars > group_vars > playbook vars, and `-e` overrides everything) + +--- + +### Task 3: Ansible Facts -- Gathering System Information +Ansible automatically collects "facts" about each managed node -- OS, IP, memory, CPU, disks, and hundreds more. + +1. **See all facts for a host:** +```bash +ansible web-server -m setup +``` + +2. **Filter specific facts:** +```bash +ansible web-server -m setup -a "filter=ansible_os_family" +ansible web-server -m setup -a "filter=ansible_distribution*" +ansible web-server -m setup -a "filter=ansible_memtotal_mb" +ansible web-server -m setup -a "filter=ansible_default_ipv4" +``` + +3. **Use facts in a playbook** -- create `facts-demo.yml`: +```yaml +--- +- name: Facts demo + hosts: all + tasks: + - name: Show OS info + debug: + msg: > + Hostname: {{ ansible_hostname }}, + OS: {{ ansible_distribution }} {{ ansible_distribution_version }}, + RAM: {{ ansible_memtotal_mb }}MB, + IP: {{ ansible_default_ipv4.address }} + + - name: Show all network interfaces + debug: + var: ansible_interfaces +``` + +Run it and observe the facts printed for each host. + +**Document:** Name five facts you would use in real playbooks and why. + +--- + +### Task 4: Conditionals with when +Tasks should not always run on every host. Use `when` to control execution. 
+ +Create `conditional-demo.yml`: + +```yaml +--- +- name: Conditional tasks demo + hosts: all + become: true + + tasks: + - name: Install Nginx (only on web servers) + yum: + name: nginx + state: present + when: "'web' in group_names" + + - name: Install MySQL (only on db servers) + yum: + name: mysql-server + state: present + when: "'db' in group_names" + + - name: Show warning on low memory hosts + debug: + msg: "WARNING: This host has less than 1GB RAM" + when: ansible_memtotal_mb < 1024 + + - name: Run only on Amazon Linux + debug: + msg: "This is an Amazon Linux machine" + when: ansible_distribution == "Amazon" + + - name: Run only on Ubuntu + debug: + msg: "This is an Ubuntu machine" + when: ansible_distribution == "Ubuntu" + + - name: Run only in production + debug: + msg: "Production settings applied" + when: app_env == "production" + + - name: Multiple conditions (AND) + debug: + msg: "Web server with enough memory" + when: + - "'web' in group_names" + - ansible_memtotal_mb >= 512 + + - name: OR condition + debug: + msg: "Either web or app server" + when: "'web' in group_names or 'app' in group_names" +``` + +Run it and observe which tasks are skipped on which hosts. + +**Verify:** Are tasks correctly skipping on hosts that don't match the condition? + +--- + +### Task 5: Loops +Create `loops-demo.yml`: + +```yaml +--- +- name: Loops demo + hosts: all + become: true + + vars: + users: + - name: deploy + groups: wheel + - name: monitor + groups: wheel + - name: appuser + groups: users + + directories: + - /opt/app/logs + - /opt/app/config + - /opt/app/data + - /opt/app/tmp + + tasks: + - name: Create multiple users + user: + name: "{{ item.name }}" + groups: "{{ item.groups }}" + state: present + loop: "{{ users }}" + + - name: Create multiple directories + file: + path: "{{ item }}" + state: directory + mode: '0755' + loop: "{{ directories }}" + + - name: Install multiple packages + yum: + name: "{{ item }}" + state: present + loop: + - git + - curl + - unzip + - jq + + - name: Print each user created + debug: + msg: "Created user {{ item.name }} in group {{ item.groups }}" + loop: "{{ users }}" +``` + +Run it and observe the loop output -- each iteration is shown separately. + +**Document:** What is the difference between `loop` and the older `with_items`? 
(hint: `loop` is the modern recommended syntax) + +--- + +### Task 6: Register, Debug, and Combine Everything +Build a real-world playbook `server-report.yml` that combines variables, facts, conditionals, and register: + +```yaml +--- +- name: Server Health Report + hosts: all + + tasks: + - name: Check disk space + command: df -h / + register: disk_result + + - name: Check memory + command: free -m + register: memory_result + + - name: Check running services + shell: systemctl list-units --type=service --state=running | head -20 + register: services_result + + - name: Generate report + debug: + msg: + - "========== {{ inventory_hostname }} ==========" + - "OS: {{ ansible_distribution }} {{ ansible_distribution_version }}" + - "IP: {{ ansible_default_ipv4.address }}" + - "RAM: {{ ansible_memtotal_mb }}MB" + - "Disk: {{ disk_result.stdout_lines[1] }}" + - "Running services (first 20): {{ services_result.stdout_lines | length }}" + + - name: Flag if disk is critically low + debug: + msg: "ALERT: Check disk space on {{ inventory_hostname }}" + when: "'9[0-9]%' in disk_result.stdout or '100%' in disk_result.stdout" + + - name: Save report to file + copy: + content: | + Server: {{ inventory_hostname }} + OS: {{ ansible_distribution }} {{ ansible_distribution_version }} + IP: {{ ansible_default_ipv4.address }} + RAM: {{ ansible_memtotal_mb }}MB + Disk: {{ disk_result.stdout }} + Checked at: {{ ansible_date_time.iso8601 }} + dest: "/tmp/server-report-{{ inventory_hostname }}.txt" + become: true +``` + +Run it and verify the report file is created on each server. + +**Verify:** SSH into a server and read `/tmp/server-report-*.txt`. Does it contain accurate information? + +--- + +## Hints +- Variable precedence (simplified, low to high): role defaults -> group_vars/all -> group_vars/ -> host_vars/ -> playbook vars -> task vars -> extra vars (`-e`) +- `group_names` is a built-in variable containing the groups the current host belongs to +- `inventory_hostname` is the name of the host as defined in the inventory +- `when` conditions do not need `{{ }}` -- you reference variables directly: `when: app_env == "production"` +- `register` stores the entire result object including `stdout`, `stderr`, `rc` (return code), and `stdout_lines` +- `loop` replaces `with_items`, `with_dict`, `with_file` from older Ansible versions +- Use `ansible -m setup -a "filter="` to quickly find fact names +- `debug` with `var` shows the raw variable, `msg` shows a formatted string + +--- + +## Documentation +Create `day-70-variables-loops.md` with: +- Your `group_vars/` and `host_vars/` directory structure +- How variable precedence works with examples from your test +- Five useful Ansible facts and where you would use them +- Conditional playbook with screenshot showing skipped vs executed tasks +- Loop playbook with screenshot showing multiple iterations +- The server report output from Task 6 + +--- + +## Submission +1. Add `day-70-variables-loops.md` to `2026/day-70/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Made Ansible playbooks smart today -- variables from group_vars and host_vars, OS-based conditionals, loops for bulk operations, and facts-driven server reports. Same playbook, different behavior per host. This is how real configuration management works." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-71/README.md b/2026/day-71/README.md new file mode 100644 index 0000000000..4e9bcd08f1 --- /dev/null +++ b/2026/day-71/README.md @@ -0,0 +1,428 @@ +# Day 71 -- Roles, Galaxy, Templates and Vault + +## Task +Your playbooks are getting bigger. Tasks, variables, handlers, files -- all living in one YAML file that grows longer every day. In real projects, you manage dozens of servers with different roles -- web servers, databases, monitoring agents, load balancers. You need a way to organize, reuse, and share automation. + +Today you learn Ansible Roles (the standard way to structure automation), Jinja2 Templates (dynamic config files), Ansible Galaxy (the community marketplace), and Ansible Vault (secrets management). + +--- + +## Expected Output +- A custom Ansible role built from scratch +- A Jinja2 template rendering dynamic config files +- A role installed from Ansible Galaxy +- Secrets encrypted with Ansible Vault +- A markdown file: `day-71-roles-templates-vault.md` + +--- + +## Challenge Tasks + +### Task 1: Jinja2 Templates +Templates let you generate config files dynamically using variables and facts. + +1. Create `templates/nginx-vhost.conf.j2`: +```jinja2 +# Managed by Ansible -- do not edit manually +server { + listen {{ http_port | default(80) }}; + server_name {{ ansible_hostname }}; + + root /var/www/{{ app_name }}; + index index.html; + + location / { + try_files $uri $uri/ =404; + } + + access_log /var/log/nginx/{{ app_name }}_access.log; + error_log /var/log/nginx/{{ app_name }}_error.log; +} +``` + +2. Create a playbook `template-demo.yml`: +```yaml +--- +- name: Deploy Nginx with template + hosts: web + become: true + vars: + app_name: terraweek-app + http_port: 80 + + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + + - name: Create web root + file: + path: "/var/www/{{ app_name }}" + state: directory + mode: '0755' + + - name: Deploy vhost config from template + template: + src: templates/nginx-vhost.conf.j2 + dest: "/etc/nginx/conf.d/{{ app_name }}.conf" + owner: root + mode: '0644' + notify: Restart Nginx + + - name: Deploy index page + copy: + content: "
<h1>{{ app_name }}</h1>
<p>Host: {{ ansible_hostname }} | IP: {{ ansible_default_ipv4.address }}</p>
" + dest: "/var/www/{{ app_name }}/index.html" + + handlers: + - name: Restart Nginx + service: + name: nginx + state: restarted +``` + +Run it with `--diff` to see the rendered template: +```bash +ansible-playbook template-demo.yml --diff +``` + +**Verify:** SSH into the web server and read the generated config. Are the variables replaced with actual values? + +--- + +### Task 2: Understand the Role Structure +An Ansible role has a fixed directory structure. Each directory has a specific purpose: + +``` +roles/ + webserver/ + tasks/ + main.yml # The main task list + handlers/ + main.yml # Handlers (restart services, etc.) + templates/ + nginx.conf.j2 # Jinja2 templates + files/ + index.html # Static files to copy + vars/ + main.yml # Role variables (high priority) + defaults/ + main.yml # Default variables (low priority, easily overridden) + meta/ + main.yml # Role metadata and dependencies +``` + +Every directory contains a `main.yml` that Ansible loads automatically. You only create the directories you need. + +Generate a skeleton with: +```bash +ansible-galaxy init roles/webserver +``` + +Explore the generated directory. Read the README.md that Galaxy creates. + +**Document:** What is the difference between `vars/main.yml` and `defaults/main.yml`? + +--- + +### Task 3: Build a Custom Webserver Role +Build a complete `webserver` role from scratch: + +**`roles/webserver/defaults/main.yml`:** +```yaml +--- +http_port: 80 +app_name: myapp +max_connections: 512 +``` + +**`roles/webserver/tasks/main.yml`:** +```yaml +--- +- name: Install Nginx + yum: + name: nginx + state: present + +- name: Deploy Nginx config + template: + src: nginx.conf.j2 + dest: /etc/nginx/nginx.conf + owner: root + mode: '0644' + notify: Restart Nginx + +- name: Deploy vhost config + template: + src: vhost.conf.j2 + dest: "/etc/nginx/conf.d/{{ app_name }}.conf" + owner: root + mode: '0644' + notify: Restart Nginx + +- name: Create web root + file: + path: "/var/www/{{ app_name }}" + state: directory + mode: '0755' + +- name: Deploy index page + template: + src: index.html.j2 + dest: "/var/www/{{ app_name }}/index.html" + mode: '0644' + +- name: Start and enable Nginx + service: + name: nginx + state: started + enabled: true +``` + +**`roles/webserver/handlers/main.yml`:** +```yaml +--- +- name: Restart Nginx + service: + name: nginx + state: restarted +``` + +**`roles/webserver/templates/index.html.j2`:** +```html +
<h1>{{ app_name }}</h1>
<p>Server: {{ ansible_hostname }}</p>
<p>IP: {{ ansible_default_ipv4.address }}</p>
<p>Environment: {{ app_env | default('development') }}</p>
<p>Managed by Ansible</p>
+``` + +Create the `vhost.conf.j2` and `nginx.conf.j2` templates yourself based on what you learned in Task 1. + +Now call the role from a playbook `site.yml`: +```yaml +--- +- name: Configure web servers + hosts: web + become: true + roles: + - role: webserver + vars: + app_name: terraweek + http_port: 80 +``` + +Run it: +```bash +ansible-playbook site.yml +``` + +**Verify:** Curl the web server. Does the custom page load? + +--- + +### Task 4: Ansible Galaxy -- Use Community Roles +Ansible Galaxy is a marketplace of pre-built roles. + +1. **Search for roles:** +```bash +ansible-galaxy search nginx --platforms EL +ansible-galaxy search mysql +``` + +2. **Install a role from Galaxy:** +```bash +ansible-galaxy install geerlingguy.docker +``` + +3. **Check where it was installed:** +```bash +ansible-galaxy list +``` + +4. **Use the installed role** -- create `docker-setup.yml`: +```yaml +--- +- name: Install Docker using Galaxy role + hosts: app + become: true + roles: + - geerlingguy.docker +``` + +Run it -- Docker gets installed with a single role call. + +5. **Use a requirements file** for managing multiple roles. Create `requirements.yml`: +```yaml +--- +roles: + - name: geerlingguy.docker + version: "7.4.1" + - name: geerlingguy.ntp +``` + +Install all at once: +```bash +ansible-galaxy install -r requirements.yml +``` + +**Document:** Why use a `requirements.yml` instead of installing roles manually? + +--- + +### Task 5: Ansible Vault -- Encrypt Secrets +Never put passwords, API keys, or tokens in plain text. Ansible Vault encrypts sensitive data. + +1. **Create an encrypted file:** +```bash +ansible-vault create group_vars/db/vault.yml +``` +It will ask for a vault password, then open an editor. Add: +```yaml +vault_db_password: SuperSecretP@ssw0rd +vault_db_root_password: R00tP@ssw0rd123 +vault_api_key: sk-abc123xyz789 +``` +Save and exit. Open the file with `cat` -- it is fully encrypted. + +2. **Edit an encrypted file:** +```bash +ansible-vault edit group_vars/db/vault.yml +``` + +3. **View without editing:** +```bash +ansible-vault view group_vars/db/vault.yml +``` + +4. **Encrypt an existing file:** +```bash +ansible-vault encrypt group_vars/db/secrets.yml +``` + +5. **Use vault variables in a playbook** -- create `db-setup.yml`: +```yaml +--- +- name: Configure database + hosts: db + become: true + + tasks: + - name: Show DB password (never do this in production) + debug: + msg: "DB password is set: {{ vault_db_password | length > 0 }}" +``` + +Run with the vault password: +```bash +ansible-playbook db-setup.yml --ask-vault-pass +``` + +6. **Use a password file** (better for CI/CD): +```bash +echo "YourVaultPassword" > .vault_pass +chmod 600 .vault_pass +echo ".vault_pass" >> .gitignore + +ansible-playbook db-setup.yml --vault-password-file .vault_pass +``` + +Or set it in `ansible.cfg`: +```ini +[defaults] +vault_password_file = .vault_pass +``` + +**Document:** Why is `--vault-password-file` better than `--ask-vault-pass` for automated pipelines? 
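A related option, sketched here because it pairs well with pipelines (the value and variable name are the same examples used above): `ansible-vault encrypt_string` encrypts a single value that you can paste straight into an otherwise plain-text vars file, so only the secret itself is ciphertext:

```bash
# Prints a !vault-tagged block you can paste into any vars file
ansible-vault encrypt_string 'SuperSecretP@ssw0rd' --name 'vault_db_password'
```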
+ +--- + +### Task 6: Combine Roles, Templates, and Vault +Write a complete `site.yml` that uses everything you learned today: + +```yaml +--- +- name: Configure web servers + hosts: web + become: true + roles: + - role: webserver + vars: + app_name: terraweek + http_port: 80 + +- name: Configure app servers with Docker + hosts: app + become: true + roles: + - geerlingguy.docker + +- name: Configure database servers + hosts: db + become: true + tasks: + - name: Create DB config with secrets + template: + src: templates/db-config.j2 + dest: /etc/db-config.env + owner: root + mode: '0600' +``` + +Create `templates/db-config.j2`: +```jinja2 +# Database Configuration -- Managed by Ansible +DB_HOST={{ ansible_default_ipv4.address }} +DB_PORT={{ db_port | default(3306) }} +DB_PASSWORD={{ vault_db_password }} +DB_ROOT_PASSWORD={{ vault_db_root_password }} +``` + +Run: +```bash +ansible-playbook site.yml +``` + +**Verify:** SSH into the db server and check `/etc/db-config.env`. Are the secrets rendered correctly? Is the file permission `600`? + +--- + +## Hints +- Templates use `.j2` extension by convention (Jinja2) +- In templates, `{{ variable }}` renders a value, `{% if %}` is a conditional, `{% for %}` is a loop +- `| default(value)` is a Jinja2 filter that provides a fallback if the variable is undefined +- Role `defaults/` has the lowest priority -- callers can easily override these values +- Role `vars/` has high priority -- use it for values that should not be overridden +- `ansible-galaxy init` creates the full skeleton, but you can delete directories you don't use +- Vault-encrypted files are normal YAML after decryption -- Ansible handles it transparently +- Never commit `.vault_pass` to Git -- always add it to `.gitignore` +- Use `ansible-vault encrypt_string` to encrypt a single value inline instead of a whole file + +--- + +## Documentation +Create `day-71-roles-templates-vault.md` with: +- Your webserver role directory structure +- The Jinja2 templates you created and the rendered output +- Screenshot of the role running successfully +- How you installed and used a Galaxy role +- Vault workflow: create, edit, view, encrypt, decrypt +- Screenshot of the encrypted vault file contents +- When to use roles vs playbooks vs ad-hoc commands + +--- + +## Submission +1. Add `day-71-roles-templates-vault.md` to `2026/day-71/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Built my first Ansible role today -- organized tasks, templates, handlers, and defaults into a reusable structure. Used Galaxy to install community roles, Jinja2 for dynamic configs, and Vault to encrypt secrets. This is production-grade automation." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-72/README.md b/2026/day-72/README.md new file mode 100644 index 0000000000..59608f5cb5 --- /dev/null +++ b/2026/day-72/README.md @@ -0,0 +1,436 @@ +# Day 72 -- Ansible Project: Automate Docker and Nginx Deployment + +## Task +Five days of Ansible -- inventory, ad-hoc commands, playbooks, modules, handlers, variables, facts, conditionals, loops, roles, templates, Galaxy, and Vault. Today you put it all together and build what you would actually do on the job. + +Automate a complete deployment: install Docker, pull and run a containerized application, set up Nginx as a reverse proxy in front of it, and manage everything through Ansible roles. One command to go from a fresh server to a fully running, production-style setup. 
+ +--- + +## Expected Output +- A complete Ansible project with custom roles for Docker and Nginx +- Docker containers running on managed nodes, deployed entirely through Ansible +- Nginx configured as a reverse proxy to the container +- Vault-encrypted Docker Hub credentials +- A markdown file: `day-72-ansible-project.md` +- A running app accessible through Nginx on port 80 + +--- + +## Challenge Tasks + +### Task 1: Plan the Project Structure +Create the complete project layout: + +``` +ansible-docker-project/ + ansible.cfg + inventory.ini + site.yml # Master playbook + group_vars/ + all.yml # Common variables + web/ + vars.yml # Nginx variables + vault.yml # Encrypted Docker Hub credentials + roles/ + common/ # Shared setup for all servers + tasks/main.yml + docker/ # Docker installation and container management + tasks/main.yml + templates/ + docker-compose.yml.j2 + handlers/main.yml + defaults/main.yml + nginx/ # Nginx reverse proxy + tasks/main.yml + templates/ + nginx.conf.j2 + app-proxy.conf.j2 + handlers/main.yml + defaults/main.yml +``` + +Generate the role skeletons: +```bash +mkdir -p ansible-docker-project/roles +cd ansible-docker-project +ansible-galaxy init roles/common +ansible-galaxy init roles/docker +ansible-galaxy init roles/nginx +``` + +Set up your `ansible.cfg` and `inventory.ini` using what you built on Day 68. + +--- + +### Task 2: Build the Common Role +The `common` role runs on every server -- baseline packages and setup. + +**`roles/common/tasks/main.yml`:** +```yaml +--- +- name: Update package cache + yum: + update_cache: true + tags: common + +- name: Install common packages + yum: + name: "{{ common_packages }}" + state: present + tags: common + +- name: Set hostname + hostname: + name: "{{ inventory_hostname }}" + tags: common + +- name: Set timezone + timezone: + name: "{{ timezone }}" + tags: common + +- name: Create deploy user + user: + name: deploy + groups: wheel + shell: /bin/bash + state: present + tags: common +``` + +(Use `apt` instead of `yum` if your instances run Ubuntu) + +**`group_vars/all.yml`:** +```yaml +--- +timezone: Asia/Kolkata +project_name: devops-app +app_env: development +common_packages: + - vim + - curl + - wget + - git + - htop + - tree + - jq + - unzip +``` + +--- + +### Task 3: Build the Docker Role +This role installs Docker, starts the service, pulls images, and runs containers. + +**`roles/docker/defaults/main.yml`:** +```yaml +--- +docker_app_image: nginx +docker_app_tag: latest +docker_app_name: myapp +docker_app_port: 8080 +docker_container_port: 80 +``` + +**`roles/docker/tasks/main.yml`:** +Write tasks that: +1. Install Docker dependencies (`yum-utils`, `device-mapper-persistent-data`, `lvm2`) +2. Add the Docker CE repository +3. Install Docker CE +4. Start and enable the Docker service +5. Add the `deploy` user to the `docker` group +6. Install Docker Compose (via pip or direct download) +7. Log in to Docker Hub using vault-encrypted credentials: +```yaml +- name: Log in to Docker Hub + community.docker.docker_login: + username: "{{ vault_docker_username }}" + password: "{{ vault_docker_password }}" + become_user: deploy + when: vault_docker_username is defined +``` +8. Pull the application image: +```yaml +- name: Pull application image + community.docker.docker_image: + name: "{{ docker_app_image }}" + tag: "{{ docker_app_tag }}" + source: pull +``` +9. 
Run the container: +```yaml +- name: Run application container + community.docker.docker_container: + name: "{{ docker_app_name }}" + image: "{{ docker_app_image }}:{{ docker_app_tag }}" + state: started + restart_policy: always + ports: + - "{{ docker_app_port }}:{{ docker_container_port }}" +``` +10. Verify the container is running: +```yaml +- name: Wait for container to be healthy + uri: + url: "http://localhost:{{ docker_app_port }}" + status_code: 200 + retries: 5 + delay: 3 + register: health_check + until: health_check.status == 200 +``` + +Tag all tasks with `docker`. + +**`roles/docker/handlers/main.yml`:** +```yaml +--- +- name: Restart Docker + service: + name: docker + state: restarted +``` + +**Install the required Ansible collection** (needed for `community.docker` modules): +```bash +ansible-galaxy collection install community.docker +``` + +--- + +### Task 4: Build the Nginx Role +This role installs Nginx and configures it as a reverse proxy to the Docker container. + +**`roles/nginx/defaults/main.yml`:** +```yaml +--- +nginx_http_port: 80 +nginx_upstream_port: 8080 +nginx_server_name: "_" +``` + +**`roles/nginx/tasks/main.yml`:** +Write tasks that: +1. Install Nginx +2. Remove the default Nginx site config +3. Deploy the main Nginx config from a template +4. Deploy the reverse proxy config from a template +5. Test Nginx config before reloading: +```yaml +- name: Test Nginx configuration + command: nginx -t + changed_when: false +``` +6. Start and enable Nginx +7. Use a handler to reload Nginx when any config changes + +Tag all tasks with `nginx`. + +**`roles/nginx/templates/app-proxy.conf.j2`:** +```nginx +# Reverse Proxy to Docker Container -- Managed by Ansible +upstream docker_app { + server 127.0.0.1:{{ nginx_upstream_port }}; +} + +server { + listen {{ nginx_http_port }}; + server_name {{ nginx_server_name }}; + + location / { + proxy_pass http://docker_app; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + } + + location /health { + access_log off; + return 200 'OK'; + add_header Content-Type text/plain; + } + +{% if app_env == 'production' %} + access_log /var/log/nginx/{{ project_name }}_access.log; + error_log /var/log/nginx/{{ project_name }}_error.log; +{% else %} + access_log /var/log/nginx/{{ project_name }}_access.log; + error_log /var/log/nginx/{{ project_name }}_error.log debug; +{% endif %} +} +``` + +**`roles/nginx/handlers/main.yml`:** +```yaml +--- +- name: Reload Nginx + service: + name: nginx + state: reloaded + +- name: Restart Nginx + service: + name: nginx + state: restarted +``` + +--- + +### Task 5: Encrypt Docker Hub Credentials with Vault +1. Create the vault file: +```bash +ansible-vault create group_vars/web/vault.yml +``` +Add: +```yaml +vault_docker_username: your-dockerhub-username +vault_docker_password: your-dockerhub-token +``` + +2. Create a vault password file for convenience: +```bash +echo "YourVaultPassword" > .vault_pass +chmod 600 .vault_pass +echo ".vault_pass" >> .gitignore +``` + +3. 
Reference it in `ansible.cfg`: +```ini +[defaults] +inventory = inventory.ini +host_key_checking = False +vault_password_file = .vault_pass +``` + +--- + +### Task 6: Write the Master Playbook and Deploy +**`site.yml`:** +```yaml +--- +- name: Apply common configuration + hosts: all + become: true + roles: + - common + tags: common + +- name: Install Docker and run containers + hosts: web + become: true + roles: + - docker + tags: docker + +- name: Configure Nginx reverse proxy + hosts: web + become: true + roles: + - nginx + tags: nginx +``` + +Deploy the full stack: +```bash +# Dry run first -- always +ansible-playbook site.yml --check --diff + +# Full deploy +ansible-playbook site.yml +``` + +Use tags for selective execution: +```bash +# Only set up Docker and containers +ansible-playbook site.yml --tags docker + +# Only update Nginx config +ansible-playbook site.yml --tags nginx + +# Skip common setup +ansible-playbook site.yml --skip-tags common +``` + +**Verify:** +1. Curl the server on port 8080 -- does the Docker container respond directly? +2. Curl the server on port 80 -- does Nginx reverse proxy the request to the container? +3. Check `docker ps` on the server -- is the container running with the correct port mapping? + +--- + +### Task 7: Bonus -- Deploy a Different App and Re-Run +Change the Docker image to something else. Update `group_vars/all.yml` or pass extra vars: + +```bash +ansible-playbook site.yml --tags docker \ + -e "docker_app_image=httpd docker_app_tag=latest docker_app_name=apache-app" +``` + +The old container should be replaced with the new one. Nginx still proxies traffic -- no config change needed. + +Now run the full playbook one more time: +```bash +ansible-playbook site.yml +``` + +The output should show mostly `ok` with zero or minimal `changed`. This proves your entire setup is **idempotent**. + +**Reflect and document:** +1. How many total tasks ran? +2. Map each Ansible concept to the day you learned it: + +| Day | Concept Used | +|-----|-------------| +| 68 | Inventory, ad-hoc commands, SSH setup | +| 69 | Playbooks, modules, handlers | +| 70 | Variables, facts, conditionals, loops | +| 71 | Roles, templates, Galaxy, Vault | +| 72 | Everything combined in one project | + +3. What would you add for production? (SSL with certbot, monitoring, log rotation, multi-container Compose) +4. Clean up your EC2 instances when done. If you used Terraform: `terraform destroy`. If manual: terminate from the console. 
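+
+If you would rather tear the app down with Ansible before terminating the instances, ad-hoc calls work too. A minimal sketch, assuming the default `myapp` container name from the docker role and the `community.docker` collection installed earlier:
+```bash
+# Remove the app container on all web hosts, then stop Nginx
+ansible web -b -m community.docker.docker_container -a "name=myapp state=absent"
+ansible web -b -m service -a "name=nginx state=stopped"
+```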
+ +--- + +## Hints +- Install `community.docker` collection before running: `ansible-galaxy collection install community.docker` +- If `community.docker` modules are not available, you can use `command` or `shell` with `docker run` as a fallback +- Nginx and the Docker container run on the same server -- Nginx listens on port 80, container on port 8080 +- `nginx -t` tests the config without reloading -- always run this before a reload +- `restart_policy: always` ensures the container restarts after a server reboot +- Tags let you update just Docker containers or just Nginx config independently +- `--check --diff` is your best friend before any deployment +- If the container port conflicts with another service, change `docker_app_port` in defaults +- The `uri` module is a clean way to health-check without installing curl on the managed node + +--- + +## Documentation +Create `day-72-ansible-project.md` with: +- Your complete project directory structure +- Key files: `site.yml`, each role's `tasks/main.yml`, the Nginx reverse proxy template +- Screenshot of `ansible-playbook site.yml` running end-to-end +- Screenshot proving idempotency (second run with all ok) +- Screenshot of `docker ps` on the server showing the running container +- Screenshot of curling port 80 through Nginx +- How you used tags for selective deployment +- How Vault protected Docker Hub credentials +- Architecture: Ansible -> Server [Nginx:80 -> Docker Container:8080] + +--- + +## Submission +1. Add `day-72-ansible-project.md` to `2026/day-72/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Completed the Ansible block -- automated a full Docker + Nginx deployment with custom roles. Docker installed, container running, Nginx reverse-proxying, secrets encrypted with Vault. One command sets up the entire server. Five days from zero to production-grade automation." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 3376813da5..764c626f00 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,125 +1,18 @@ # Contributing Guidelines -Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional -documentation, we greatly value feedback and contributions from our community. +Thank you for contributing to #90DaysOfDevOps. -Please read through this document before submitting any issues or pull requests to ensure we have all the necessary -information to effectively respond to your bug report or contribution. +## Quick Start +- Fork the repository and create a branch. +- Complete the task in the correct `2026/day-XX` folder. +- Commit with a clear message (example: `day-14: git fundamentals notes`). -## Reporting Bugs, Features, and Enhancements -We welcome you to use the GitHub issue tracker to report bugs or suggest features and enhancements. +## Pull Request Checklist +- Only update the day(s) you worked on. +- Ensure filenames match the Expected Output exactly. +- Keep content concise and practical. +- No emojis in README files. -When filing an issue, please check existing open, or recently closed, issues to make sure someone else hasn't already -reported the issue. - -Please try to include as much information as you can. Details like these are incredibly useful: - -* A reproducible test case or series of steps. -* Any modifications you've made relevant to the bug. -* Anything unusual about your environment or deployment. 
- -## Contributing via Pull Requests - -Contributions via pull requests are appreciated. Before sending us a pull request, please ensure that: - -1. You [open a discussion](https://github.com/MichaelCade/90DaysOfDevOps/discussions) to discuss any significant work with the maintainer(s). -2. You open an issue and link your pull request to the issue for context. -3. You are working against the latest source on the `main` branch. -4. You check existing open, and recently merged, pull requests to make sure someone else hasn't already addressed the problem. - -To send us a pull request, please: - -1. Fork the repository. -2. Modify the source; please focus on the **specific** change you are contributing. -3. Ensure local tests pass. -4. Updated the documentation, if required. -4. Commit to your fork [using a clear commit messages](http://chris.beams.io/posts/git-commit/). We ask you to please use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/). -5. Send us a pull request, answering any default questions in the pull request. -6. Pay attention to any automated failures reported in the pull request, and stay involved in the conversation. - -GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and -[creating a pull request](https://help.github.com/articles/creating-a-pull-request/). - -### Contributor Flow - -This is a rough outline of what a contributor's workflow looks like: - -- Create a topic branch from where you want to base your work. -- Make commits of logical units. -- Make sure your commit messages are [in the proper format](http://chris.beams.io/posts/git-commit/). -- Push your changes to a topic branch in your fork of the repository. -- Submit a pull request. - -Example: - -``` shell -git remote add upstream https://github.com/vmware-samples/packer-examples-for-vsphere.git -git checkout -b my-new-feature main -git commit -s -a -git push origin my-new-feature -``` - -### Staying In Sync With Upstream - -When your branch gets out of sync with the 90DaysOfDevOps/main branch, use the following to update: - -``` shell -git checkout my-new-feature -git fetch -a -git pull --rebase upstream main -git push --force-with-lease origin my-new-feature -``` - -### Updating Pull Requests - -If your pull request fails to pass or needs changes based on code review, you'll most likely want to squash these changes into -existing commits. - -If your pull request contains a single commit or your changes are related to the most recent commit, you can simply amend the commit. - -``` shell -git add . -git commit --amend -git push --force-with-lease origin my-new-feature -``` - -If you need to squash changes into an earlier commit, you can use: - -``` shell -git add . -git commit --fixup -git rebase -i --autosquash main -git push --force-with-lease origin my-new-feature -``` - -Be sure to add a comment to the pull request indicating your new changes are ready to review, as GitHub does not generate a notification when you `git push`. - -### Formatting Commit Messages - -We follow the conventions on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/). - -Be sure to include any related GitHub issue references in the commit message. - -See [GFM syntax](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown) for referencing issues and commits. - -## Reporting Bugs and Creating Issues - -When opening a new issue, try to roughly follow the commit message format conventions above. 
- -## Finding Contributions to Work On - -Looking at the existing issues is a great way to find something to contribute on. If you have an idea you'd like to discuss, [open a discussion](https://github.com/MichaelCade/90DaysOfDevOps/discussions). - -## License - -Shield: [![CC BY-NC-SA 4.0][cc-by-nc-sa-shield]][cc-by-nc-sa] - -This work is licensed under a -[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. - -[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] - -[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ -[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png -[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg +## Code of Conduct +Be respectful, supportive, and helpful to others in the community. diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000..58006f3ebf --- /dev/null +++ b/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2026 TrainWithShubham + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/README.md b/README.md index f7591c0dcd..8b13789179 100644 --- a/README.md +++ b/README.md @@ -1,34 +1 @@ -# #90DaysOfDevOps Challenge -## Learn, Upskill, Grow with the Community - -This repository is a Challenge for the DevOps Community to get stronger in DevOps. -This challenge starts on the 1st January 2023 and in the next 90 Days we promise ourselves to become better at DevOps. - -The reason for making this Public is so that others can learn from the community and help each other grow. - -## Steps: -- Fork[https://github.com/LondheShubham153/90DaysOfDevOps/fork] the Repo. -- Learn Everyday and add your learnings in the day wise folders. -- Chek out what others are Learning and help/learn from them. -- Showcase your learnings on LinkedIn - - -These are our community Links. 
- -- Telegram Channel: https://t.me/trainwithshubham -- Discord Channel: https://discord.gg/hs3Pmc5F -- WhatsApp Group: https://chat.whatsapp.com/FvRlAAZVxUhCUSZ0Y1s7KY -- YouTube Channel: https://www.youtube.com/@TrainWithShubham -- Website: https://www.trainwithshubham.com/ -- LinkedIn: https://www.linkedin.com/in/shubhamlondhe1996/ - -## Events - -YouTube Live Announcement: - -YouTube Playlist for DevOps: -https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u - -DevOps Course: -https://bit.ly/devops-batch-2 diff --git a/TOC.md b/TOC.md new file mode 100644 index 0000000000..604c12e834 --- /dev/null +++ b/TOC.md @@ -0,0 +1,136 @@ +## Table of Contents + +Below is the index of the incredible DevOps journey that awaits you: + +... + +### 🌟 [Day 1-7 : Introduction to DevOps and Linux Basics](./2023/day01/) + +- Description: Kickstart your 90-day journey with the foundational principles of DevOps. Dive deep into the Linux ecosystem, exploring commands, shell scripting, and file permissions. +- Topics Covered: + - [Understanding and defining DevOps](./2023/day01/README.md) + - [Getting hands-on with basic to advanced Linux commands](./2023/day02/README.md) + - [Grasping the concepts of Linux Shell Scripting](./2023/day04/README.md) + - [Exploring advanced shell scripting techniques with practical tasks.](./2023/day05/README.md) + - [Deep dive into file permissions and Access Control Lists (ACLs)](./2023/day06/README.md) + - [Insights into package managers in Linux and understanding systemctl and systemd](./2023/day07/README.md) + +### 🚀 [Day 8-12: Mastering Git & GitHub: From Basics to Advanced Techniques](./2023/day08/) + +- Description: Embark on a comprehensive journey through Git and GitHub, from grasping the fundamental concepts to exploring advanced techniques that are essential for DevOps. +- Topics Covered: + - [Introduction and understanding of Git and GitHub.](./2023/day08/README.md) + - [Grasping the concept and advantages of Version Control Systems, with a focus on Centralized vs. Distributed.](./2023/day08/README.md) + - [Diving deep into the significance, distinctions, and practicalities of Git and GitHub, including setting up repositories and understanding branch differences.](./2023/day09/README.md) + - [Exploring advanced Git concepts such as branching, revert, reset, rebase, merge, stash, cherry-pick, and conflict resolution.](./2023/day10/README.md) + - [Concluding with celebrations, crafting a Git cheatsheet, and fostering a spirit of continuous learning.](./2023/day12/README.md) + +### 💼 [Day 13-15: Delving into Python Essentials for DevOps](./2023/day13/) + +- Description: Dive into the world of Python, as this programming language plays a pivotal role in a DevOps engineer's toolkit. Cover the basics, explore diverse data types, understand essential data structures, and leverage Python libraries for DevOps tasks. +- Topics Covered: + - [Introduction to Python: its definition, creator, and the extensive libraries and frameworks it offers.](./2023/day13/README.md) + - [Understanding Python's data types and structures.](./2023/day14/README.md) + - [Utilizing Python libraries for DevOps tasks while emphasizing hands-on work with data structures and file formats.](./2023/day15/README.md) + +### 🐳 [Day 16-21: Deep Dive into Docker for DevOps Engineers](./2023/day16/) + +- Description: This module immerses DevOps Engineers into the extensive world of Docker. 
It equips you with the hands-on skills necessary to build, manage, and optimize Docker containers, create Docker projects, understand related concepts, and share your knowledge with the community.
+- Topics Covered:
+  - [The essence of Docker and its revolutionary packaging into standardized units known as containers.](./2023/day16/README.md)
+  - [A special project day focused on Dockerfiles – understanding their significance and constructing one for a simple web application.](./2023/day17/README.md)
+  - [Expanding knowledge on Docker Compose, its configuration language YAML, and the magic they bring to multi-container applications.](./2023/day18/README.md)
+  - [Docker’s storage solutions with Docker Volume, understanding its independence and how it can benefit container data management.](./2023/day19/README.md)
+  - [Important interview questions.](./2023/day21/README.md)
+
+### 🛠️ [Day 22-29: Diving into Jenkins: Basics to Advanced](./2023/day22/)
+
+- Description: Delve into Jenkins's world, navigating from its foundational concepts to advanced functionalities integral for DevOps. This will empower you to master CI/CD pipelines, understand the anatomy of Jenkins projects, and optimize Jenkins in the DevOps lifecycle.
+- Topics Covered:
+  - [Introduction to Jenkins and its significance in the DevOps realm.](./2023/day22/README.md)
+  - [Detailed exploration of Jenkins Freestyle Projects.](./2023/day23/README.md)
+  - [Crafting an end-to-end Jenkins CI/CD project for a Node JS application.](./2023/day24/README.md)
+  - [Jenkins Declarative Pipelines, understanding the distinction between declarative and scripted pipelines.](./2023/day26/README.md)
+  - [Leveraging Docker with Jenkins to enhance CI/CD workflows.](./2023/day27/README.md)
+  - [Jenkins Agents and the orchestration between the master and agent for optimized task execution.](./2023/day28/README.md)
+  - [Important Jenkins interview questions.](./2023/day29/README.md)
+
+### ☸️ [Day 30-37: Kubernetes Mastery: From Overview to Advanced Implementation](./2023/day30/)
+
+- Description: Dive deep into Kubernetes, the leading container management platform, spanning from its foundations and architecture all the way to advanced configurations, services, and best practices. Equip yourself not only with hands-on skills but also with critical insights and understanding.
+- Topics Covered:
+  - [Historical background of Kubernetes, its inspiration from Google's Borg, and its significant role in DevOps.](./2023/day30/README.md)
+  - [Initial setup with launching a Kubernetes Cluster, getting hands-on with minikube, and deploying Nginx.](./2023/day31/README.md)
+  - [Advanced cluster operations, including deployments with features like auto-healing and auto-scaling.](./2023/day32/README.md)
+  - [Working with core Kubernetes concepts like Namespaces, Services, ConfigMaps, Secrets, and Persistent Volumes.](./2023/day33/README.md)
+  - [Mastering ConfigMaps and Secrets in Kubernetes.](./2023/day35/README.md)
+  - [Important interview questions related to Kubernetes.](./2023/day37/README.md)
+
+### ☁️ [Day 38-53: AWS's vast ecosystem and its dominance in the cloud industry](./2023/day38/)
+
+- Description: Dive into Amazon Web Services, starting with the fundamentals and progressing to more complex concepts and tools. Over the course of these days, learn the intricacies of AWS, set up essential services, and work hands-on with CI/CD pipeline concepts.
+- Topics Covered:
+  - [Introduction to AWS and its fundamental components.](./2023/day38/README.md)
+  - [Understanding IAM (Identity and Access Management).](./2023/day39/README.md)
+  - [Hands-on with AWS EC2 (Elastic Compute Cloud), including automation and setting up Application Load Balancers.](./2023/day40/README.md)
+  - [Working with AWS-CLI and S3 programmatic access.](./2023/day42/README.md)
+  - [Grasping the RDS (Relational Database Service) and deploying a WordPress website.](./2023/day44/README.md)
+  - [Monitoring and alerting with AWS CloudWatch and SNS.](./2023/day46/README.md)
+  - [Delving into ECS (Elastic Container Service) and preparing for AWS-based interviews.](./2023/day48/README.md)
+  - [Embarking on a 4-day intensive journey to set up a CI/CD pipeline on AWS, incorporating tools such as CodeCommit, CodeBuild, CodeDeploy, CodePipeline, and S3.](./2023/day50/README.md)
+
+### 🛠️ [Day 54-59: Journey Through Ansible: Configuration Management & Automation](./2023/day54/)
+
+- Description: Venture into the realm of Infrastructure as Code (IaC) and Configuration Management with a detailed focus on Ansible. From basic setups to complex playbooks and hands-on projects, master the nuances of Ansible through step-by-step tasks and comprehensive modules.
+- Topics Covered:
+  - [Introduction to Infrastructure as Code and its significance.](./2023/day54/README.md)
+  - [Diving deep into Configuration Management and the power of Ansible.](./2023/day55/README.md)
+  - [A closer look at Ansible: from installation on AWS EC2 to understanding the hosts file and setting up additional EC2 instances.](./2023/day55/README.md)
+  - [Ad-hoc commands in Ansible: quick commands versus playbooks, their utility, and hands-on tasks involving pinging servers and checking uptime.](./2023/day56/README.md)
+  - [Enhancing understanding through video explanations to make Ansible more engaging and relatable.](./2023/day57/README.md)
+  - [Exploring Ansible Playbooks: their importance, use cases, and deep dives into configurations, deployment, roles, and variables.](./2023/day58/README.md)
+  - [A practical project to solidify understanding: deploying a web app using Ansible, including EC2 setup, Ansible installation, inventory file access, Nginx installation, and deploying a sample webpage.](./2023/day59/README.md)
+
+### ⚙️ [Day 60-71: Dive into Terraform: From Basics to Modules](./2023/day60/)
+
+- Description: Delve deep into Terraform, the renowned infrastructure-as-code tool. Spanning a 12-day learning journey, explore its fundamental concepts, automation potential, advanced configurations, and best practices for AWS deployment.
+- Topics Covered:
+  - [Introduction to Terraform and its pivotal role in automating EC2 instances.](./2023/day60/README.md)
+  - [Getting familiar with basic and essential Terraform commands.](./2023/day61/README.md)
+  - [The integration between Terraform and Docker, encompassing Blocks, Resources, and providers.](./2023/day62/README.md)
+  - [Understanding the significance of Terraform variables, and how they interplay in Terraform configurations.](./2023/day63/README.md)
+  - [Deep-dive into the realms of Terraform with AWS, emphasizing resource creation and management.](./2023/day64/README.md)
+  - [Expanding horizons with hands-on Terraform projects, crafting AWS infrastructure using Infrastructure-as-Code techniques.](./2023/day66/README.md)
+  - [AWS S3 Bucket creation, management, and the underlying intricacies.](./2023/day67/README.md)
+  - [Embracing scalability with Terraform - comprehending the art of scaling infrastructure.](./2023/day68/README.md)
+  - [Unraveling the world of Meta-Arguments and their application in Terraform.](./2023/day69/README.md)
+  - [Introduction to the modular world of Terraform - the core, the applications, and the benefits.](./2023/day70/README.md)
+  - [Preparing and acing Terraform interview questions.](./2023/day71/README.md)
+
+### [Day 72-78: 📊 Grafana Mastery: Monitoring, Dashboarding, and Alerting](./2023/day72/)
+
+- Description: Explore Grafana, one of the most versatile open-source platforms for observability. From understanding its essence to setting it up and integrating it with platforms like Docker and cloud services, this comprehensive guide offers a mix of theory and hands-on tasks.
+- Topics Covered:
+  - [Introducing Grafana and exploring its features, benefits, monitoring capabilities, database compatibility, metrics, visualizations, and distinction from Prometheus.](./2023/day72/README.md)
+  - [Setting up Grafana on a local environment within AWS EC2.](./2023/day73/README.md)
+  - [Connecting AWS EC2 instances with Grafana for efficient monitoring.](./2023/day74/README.md)
+  - [Implementing Docker, creating containers, and sharing real-time logs with Grafana.](./2023/day75/README.md)
+  - [Constructing a Grafana dashboard for an organized visualization of metrics.](./2023/day76/README.md)
+  - [Establishing alert systems with Grafana for prompt notifications on system irregularities.](./2023/day77/README.md)
+  - [Exploring Grafana Cloud, setting up alerts for EC2 instances, and managing AWS billing alerts.](./2023/day78/README.md)
+
+### [Day 79+🔥: Comprehensive Dive into DevOps Projects & Prometheus Mastery](./2023/day79/)
+
+- Description: Embark on an extensive journey exploring the vast capabilities of Prometheus, combined with hands-on DevOps projects that span a variety of tools, platforms, and methodologies. Learn how to monitor, automate, deploy, and manage applications effectively using modern DevOps techniques.
+- Topics Covered: + - [In-depth understanding of Prometheus: its architecture, features, components, database, and data retention.](./2023/day79/README.md) + - Projects to automate and streamline processes: + - [Building, testing, and deploying with Jenkins and GitHub.](./2023/day80/README.md) + - [Deploying using Jenkins' declarative syntax.](./2023/day81/README.md) + - [Hosting static websites on AWS S3.](./2023/day82/README.md) + - [Application deployment with Docker Swarm.](./2023/day83/README.md) + - [Deploying a Netflix clone using Kubernetes.](./2023/day84/README.md) + - [Utilizing AWS ECS Fargate and ECR with a Node JS app.](./2023/day85/README.md) + - [Deployment on AWS platforms using GitHub Actions.](./2023/day86/README.md) + - [Setting up and deploying a Django Todo app on AWS EC2 with a Kubeadm Kubernetes cluster.](./2023/day88/README.md) + - [Mounting AWS S3 Bucket on Amazon EC2 using S3FS.](./2023/day89/README.md) diff --git a/scripts/generate_days.py b/scripts/generate_days.py new file mode 100644 index 0000000000..a186169445 --- /dev/null +++ b/scripts/generate_days.py @@ -0,0 +1,16 @@ +from pathlib import Path + +root = Path(__file__).resolve().parents[1] +base = root / "2026" +base.mkdir(parents=True, exist_ok=True) + +total_days = 90 + +# Generate folders and ensure README.md exists without overwriting. +for i in range(total_days): + day_num = i + 1 + day_dir = base / f"day-{day_num:02d}" + day_dir.mkdir(parents=True, exist_ok=True) + readme = day_dir / "README.md" + if not readme.exists(): + readme.write_text("") diff --git a/scripts/generate_days.sh b/scripts/generate_days.sh new file mode 100755 index 0000000000..c83c5f9b8d --- /dev/null +++ b/scripts/generate_days.sh @@ -0,0 +1,7 @@ +#!/usr/bin/env bash +set -euo pipefail + +# Regenerate 2026 day folders if needed. +# Usage: ./scripts/generate_days.sh + +python3 scripts/generate_days.py