diff --git a/.github/workflows/hello.yaml b/.github/workflows/hello.yaml
new file mode 100644
index 0000000000..390f32b3e6
--- /dev/null
+++ b/.github/workflows/hello.yaml
@@ -0,0 +1,11 @@
+name: akash
+
+on:
+  push:
+    branches: [main]
+jobs:
+  greet:
+    runs-on: ubuntu-latest
+    steps:
+      - name: say hi to everyone
+        run: echo "Hello dosto this is akash"
diff --git a/.gitignore b/.gitignore
index 7596df90c5..269d0f1e86 100644
--- a/.gitignore
+++ b/.gitignore
@@ -39,3 +39,4 @@ crash.log
 .cache/
 .pytest_cache/
 coverage/
+CLAUDE.md
diff --git a/2026/day-01/learning-plan.md b/2026/day-01/learning-plan.md
new file mode 100644
index 0000000000..fb5e6a5ac4
--- /dev/null
+++ b/2026/day-01/learning-plan.md
@@ -0,0 +1,2 @@
+In simple terms, DevOps means the complete end-to-end process of designing and deploying applications. It consists of two words: dev + ops.
+Dev means development, which covers planning, code, and features; in operations (ops) the code is deployed on servers, where continuous monitoring happens.
\ No newline at end of file
diff --git a/2026/day-02/linux-architecture-notes.md b/2026/day-02/linux-architecture-notes.md
new file mode 100644
index 0000000000..06c11f463f
--- /dev/null
+++ b/2026/day-02/linux-architecture-notes.md
@@ -0,0 +1,15 @@
+* process states
+1 running - the process is currently executing
+2 sleep - the process is waiting for an event or resource before it can run
+3 stopped - the process is paused by the user or another process
+4 zombie - the process has terminated but is still in the process table, waiting for its exit status to be collected
+5 dead - the process is completely terminated and no longer present in the process table
+* commands that I will use daily
+cd - navigate between directories
+mkdir - create a new directory
+vim - text editor
+touch - create a new file
+man - show the manual page for a command
+pwd - print the current working directory
+systemctl - manage services and check their status
+
diff --git a/2026/day-03/linux-commands-cheatsheet.md b/2026/day-03/linux-commands-cheatsheet.md
new file mode 100644
index 0000000000..a1aeaa3628
--- /dev/null
+++ b/2026/day-03/linux-commands-cheatsheet.md
@@ -0,0 +1,30 @@
+* Basic commands
+cd - navigate between directories
+mv - move or rename a file
+cp - copy a file
+pwd - print the current directory
+ls - list all files
+ls -a - list hidden files too
+ls -l - list files with their permissions
+mkdir - make a directory
+touch - create a file
+whoami - display the current user name
+cat - display the contents of a file
+* adding a user or group
+useradd - add a user to the environment
+useradd -m - add a user and create their home directory
+groupadd - add a new group
+usermod - modify a user account (groups, shell, etc.)
+chown - change ownership
+chgrp - change group ownership
+ssh-keygen - generate new SSH keys
+su - switch users
+* file permissions
+chmod - change permissions
+* to know about the system
+systemctl - manage services on Linux (start, stop, restart, reload)
+
+* network commands
+ifconfig (or ip addr) - check all IP addresses
+ping - send ICMP requests
+dig - DNS lookups (similar to nslookup)
\ No newline at end of file
diff --git a/2026/day-04/linux-practice.md b/2026/day-04/linux-practice.md
new file mode 100644
index 0000000000..d6712b9ff8
--- /dev/null
+++ b/2026/day-04/linux-practice.md
@@ -0,0 +1,15 @@
+Process checks
+ps - snapshot of the current processes
+top - provides a dynamic view of the running system
+pgrep - looks through the currently running processes and lists the matching process IDs
+ pgrep [options] pattern
+
+Service checks
+systemctl - controls systemd and manages services
+systemctl list-units - units are managed resources; this command helps check the active services, sockets, and mounts
+
+Log checks
+journalctl -u ssh - print the logs of the
specific service +tail -n - 50 this shows the last 50 entries + +Mini troubleshooting steps diff --git a/2026/day-05/linux-troubleshooting-runbook.md b/2026/day-05/linux-troubleshooting-runbook.md new file mode 100644 index 0000000000..c929a8e720 --- /dev/null +++ b/2026/day-05/linux-troubleshooting-runbook.md @@ -0,0 +1,77 @@ + + +## Target Service +ssh (OpenSSH Server) + +## Environment +- Kernel: Linux 6.14.0-1018-aws (x86_64) +- OS: Ubuntu 24.04.3 LTS (Noble Numbat) +- System uptime low, clean boot state + +## Filesystem Sanity Check +Commands: +- mkdir /tmp/runbook-demo +- cp /etc/hosts /tmp/runbook-demo/hosts-copy + +Observations: +- Temporary directory created successfully +- File copy succeeded with normal permissions +- Filesystem is writable and healthy + +## CPU & Memory +Commands: +- top- this provide the list fo processes +- free -h - this display the storage in human readable format + +Observations: +- CPU is 99% idle, load average near zero +- No high CPU processes observed +- Memory usage is low with ~510MB available +- No swap usage or memory pressure + +## Disk & IO +Commands: +- df -h- This display file system usage in 1000 powers +- du -sh /var/log + +Observations: +- Root filesystem only 36% utilized +- /var/log size is ~35MB +- No disk space or IO concerns + +## Network +Commands: +- ss -tulpn +- ping -c 3 localhost + +Observations: +- sshd listening on port 22 (IPv4 and IPv6) +- Localhost connectivity is healthy +- No packet loss or latency issues + +--- + +## Logs Reviewed +Commands: +- journalctl -u ssh -n 50 +- tail -n 50 /var/log/auth.log + +Observations: +- SSH service starts cleanly after reboot +- Successful key-based logins observed +- No authentication errors or service crashes +- Log entries appear normal and expected + +--- + +## Quick Findings +- System resources are healthy +- SSH service is stable and responsive +- No indicators of CPU, memory, disk, or network issues +- Logs show normal operational behavior + +--- + +## If This Worsens (Next Steps) +1. Restart ssh service gracefully using systemctl and monitor logs +2. 
Investigate failed login attempts and review firewall or security group rules \ No newline at end of file diff --git a/2026/day-06/file-io-practice.md b/2026/day-06/file-io-practice.md new file mode 100644 index 0000000000..5e4fec9f88 --- /dev/null +++ b/2026/day-06/file-io-practice.md @@ -0,0 +1,39 @@ +root@Asus:/mnt/c/users/# cd documents +root@Asus:/mnt/c/users/documents# touch name.txt +root@Asus:/mnt/c/users/documents# cat "hello my name is " > name.txt +cat: 'hello my name is ': No such file or directory +root@Asus:/mnt/c/users/documents# man cat +root@Asus:/mnt/c/users/documents# man touch +root@Asus:/mnt/c/users/documents# mv name.txt notes.txt +root@Asus:/mnt/c/users/documents# echo "hello my name si :" > notes.txt +root@Asus:/mnt/c/users/documents# echo "hi everyone" >> notes.txt +root@Asus:/mnt/c/users/documents# echo "I am student of batch 10 " >> notes.txt +root@Asus:/mnt/c/users/documents# cat notes.txt +hello my name si : +hi everyone +I am student of batch 10 +root@Asus:/mnt/c/users/documents# head notes.txt +hello my name si : +hi everyone +I am student of batch 10 +root@Asus:/mnt/c/users/documents# man head +root@Asus:/mnt/c/users/documents# head -n 2 notes.txt +hello my name si : +hi everyone +root@Asus:/mnt/c/users/documents# tail -n 2 notes.txt +hi everyone +I am student of batch 10 +root@Asus:/mnt/c/users/documents# tee "hello" >notes.txt +hello +my name is +i am student of batch 10 +^C +root@Asus:/mnt/c/users/documents# cat notes.txt +hello +my name is +i am student of batch 10 +root@Asus:/mnt/c/users/documents# tee hello +hello +hello +i am +i am \ No newline at end of file diff --git a/2026/day-07/README.md b/2026/day-07/README.md index 613b241332..379331d87c 100644 --- a/2026/day-07/README.md +++ b/2026/day-07/README.md @@ -116,7 +116,7 @@ Write at least 4 commands in order. - Then check: What do the logs say? - Finally check: Is it enabled to start on boot? 
-**Commands to explore:** `systemctl status myapp`, ` myapp`, `journalctl -u myapp -n 50`systemctl is-enabled
+**Commands to explore:** `systemctl status myapp`, `systemctl is-enabled myapp`, `journalctl -u myapp -n 50`
 
 **Resource:** Review Day 04 (Process and Services practice)
diff --git a/2026/day-07/day-07-linux-fs-and-scenarios.md b/2026/day-07/day-07-linux-fs-and-scenarios.md
new file mode 100644
index 0000000000..55426bfa15
--- /dev/null
+++ b/2026/day-07/day-07-linux-fs-and-scenarios.md
@@ -0,0 +1,51 @@
+### Part 1: Linux File System Hierarchy
+- `/` (root) - the top of the filesystem hierarchy; every other directory lives under it
+- `/home` - contains the home directories and configuration files of regular users
+- `/root` - the root user's home directory, a subdirectory of `/`
+- `/etc` - contains editable configuration files
+- `/var/log` - contains the system log files
+- `/tmp` - for short-term temporary files; it is cleared when the system reboots
+
+### Part 2: Scenario-Based Practice
+**Scenario 1: Service Not Starting**
+Step 1: systemctl status
+Why: this will display the service status
+
+Step 2: journalctl
+Why: this will display recent logs
+
+Step 3: systemctl start my-app
+Why: this will start the application again
+
+Step 4: systemctl is-enabled
+Why: to check if the service is enabled on boot
+
+**Scenario 2: High CPU Usage**
+Step 1: top
+Why: this will display the top processes currently executing
+
+Step 2: htop
+Why: htop has an interactive display where I can also scroll
+
+Step 3: ps aux --sort=-%cpu | head -10
+Why: this will sort the processes by CPU and print the first 10
+
+**Scenario 3: Finding Service Logs**
+Step 1: journalctl -u docker.service
+Why: this will display the logs of docker
+
+Step 2: journalctl -u docker.service -n 50
+Why: this will show the last 50 lines
+
+Step 3: journalctl -u docker.service -f
+Why: this will show the docker logs in real time
+
+**Scenario 4: File Permissions Issue**
+Step 1: ls -l
+Why: first check the permissions of the file
+
+Step 2: chmod u+x file_name.sh
+Why: then give execute permission to the user
+
+Step 3: ls -l
+Why: rwxr--r-- means the owner now has permission to execute
diff --git a/2026/day-08/day-08-cloud-deployment.md b/2026/day-08/day-08-cloud-deployment.md
new file mode 100644
index 0000000000..a6bf643bdb
--- /dev/null
+++ b/2026/day-08/day-08-cloud-deployment.md
@@ -0,0 +1,37 @@
+## Commands Used
+step 1: connect to the instance using ssh
+command : ssh -i "keyname" ubuntu@"public_dns"
+
+step 2: update Ubuntu
+command : sudo apt update
+
+step 3: install nginx
+command : sudo apt install nginx
+
+step 4: confirm that it is running
+command : systemctl status nginx
+
+step 5: check server logs
+command : journalctl -u nginx
+
+step 6: check the nginx logs
+
+command : ls /var/log/nginx
+this gives me two files
+access.log
+error.log
+
+step 7: copy the nginx logs and save them into a new file in the home directory
+cp access.log ~/nginx-log.txt
+
+step 8: download the file to my local machine using scp
+scp -i "keyname" ubuntu@"instanceip":"file_path" .
+scp - secure copy; the file gets downloaded from the remote server to the local machine, and the trailing .
is used for current folder + +## Challenges Faced +i got challanges during cp as i got confused between home directory then i see the linux hirarchy and then i faced challenges in scp as i am running this on instance as then i search about this command and run on my windows termianl + +## What I Learned +i get used to ssh and got easy in connecting instance to my server +i learn about the scp command +i learn about creating inbound rules to check service on web \ No newline at end of file diff --git a/2026/day-08/nginx-logs.txt b/2026/day-08/nginx-logs.txt new file mode 100644 index 0000000000..1af14631a5 --- /dev/null +++ b/2026/day-08/nginx-logs.txt @@ -0,0 +1,4 @@ +115.70.62.21 - - [10/Feb/2026:10:32:37 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +115.70.62.21 - - [10/Feb/2026:10:32:37 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://16.26.213.32/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36" +146.190.26.148 - - [10/Feb/2026:10:33:08 +0000] "GET / HTTP/1.1" 200 409 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" +146.190.26.148 - - [10/Feb/2026:10:33:09 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://16.26.213.32/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36" diff --git a/2026/day-08/nginx-webpage.png.png b/2026/day-08/nginx-webpage.png.png new file mode 100644 index 0000000000..4f42a70ace Binary files /dev/null and b/2026/day-08/nginx-webpage.png.png differ diff --git a/2026/day-08/ssh-connection.png.png b/2026/day-08/ssh-connection.png.png new file mode 100644 index 0000000000..c6432cf16e Binary files /dev/null and b/2026/day-08/ssh-connection.png.png differ diff --git a/2026/day-09/README.md b/2026/day-09/README.md index 67aea03d24..db303760e3 100644 --- a/2026/day-09/README.md +++ b/2026/day-09/README.md @@ -121,7 +121,7 @@ Create `day-09-user-management.md`: **User can't access directory?** - Check group: `groups username` - Check permissions: `ls -ld /path` - +cd --- ## Submission diff --git a/2026/day-09/day-09-user-management.md b/2026/day-09/day-09-user-management.md new file mode 100644 index 0000000000..2afbe99e5c --- /dev/null +++ b/2026/day-09/day-09-user-management.md @@ -0,0 +1,30 @@ +# Day 09 Challenge +## Users & Groups Created +- Users: tokyo, berlin, professor, nairobi +- Groups: developers, admins, project-team + +## Group Assignments +tokyo:x:1001: +berlin:x:1002: +professor:x:1003: +developers:x:1004:tokyo,berlin +admins:x:1005:berlin,professor +nairobai:x:1006: +project-team:x:1007:nairobai,tokyo + +## Directories Created +drwxrwsr-x 2 root developers 4096 Feb 11 10:43 dev-project +drwxrwxr-x 2 root project-team 4096 Feb 11 10:51 team-workspace + +## Commands Used +useradd -m : for creating users in home also +addgroup : for creating group +usermod -aG : for adding user to group +mkdir - for creating directory +chown :group_name directory : for changing ownership of group only +chmod 775 directory + +## What I Learned +i learned about creating groups and users +i learned about giving permissions and adding users to the groups +i learned about changing ownerships and groups diff --git a/2026/day-10/day-10-file-permissions.md b/2026/day-10/day-10-file-permissions.md new file mode 100644 index 0000000000..24f660fa4a --- /dev/null +++ b/2026/day-10/day-10-file-permissions.md 
@@ -0,0 +1,30 @@ +# Day 10 Challenge + +## Files Created +devops.txt +notes.txt +script.sh + +## Permission Changes +before +-rw-rw-r-- 1 ubuntu ubuntu 0 Feb 12 10:15 devops.txt +-rw-rw-r-- 1 ubuntu ubuntu 60 Feb 12 10:16 notes.txt +-rw-rw-r-- 1 ubuntu ubuntu 21 Feb 12 10:16 sript.sh + +after +-r--r--r-- 1 ubuntu ubuntu 0 Feb 12 10:15 devops.txt +-rw-r----- 1 ubuntu ubuntu 60 Feb 12 10:16 notes.txt +-rwxrw-r-- 1 ubuntu ubuntu 21 Feb 12 10:16 script.sh + +## Commands Used +touch +cat +chmod +head +tail +vim + +## What I Learned +i learned about creating file +editing file +permissions of files diff --git a/2026/day-10/files.png b/2026/day-10/files.png new file mode 100644 index 0000000000..62e6352eac Binary files /dev/null and b/2026/day-10/files.png differ diff --git a/2026/day-14/day-14-networking.md b/2026/day-14/day-14-networking.md new file mode 100644 index 0000000000..9e8adefa1a --- /dev/null +++ b/2026/day-14/day-14-networking.md @@ -0,0 +1,28 @@ +### OSI Model vs TCP/IP Stack +The OSI (Open Systems Interconnection) model is a conceptual framework that standardizes the functions of a telecommunication or computing system into seven distinct layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. Each layer serves a specific purpose and interacts with the layers directly above and below it. +The TCP/IP (Transmission Control Protocol/Internet Protocol) stack, on the other hand, is a more practical and widely used model that consists of four layers: Link, Internet, Transport, and Application. The TCP/IP stack is designed to be simpler and more efficient for real-world networking. +- **Link Layer**: Corresponds to the OSI's Physical and Data Link layers. It handles the physical transmission of data over a network and manages the hardware addressing (MAC addresses)and DNS resolution for local network communication. +- **Internet Layer**: Corresponds to the OSI's Network layer. It is responsible for logical addressing (IP addresses) and routing of data packets across networks. +- **Transport Layer**: Corresponds to the OSI's Transport layer. It manages end-to-end communication, error checking, and flow control. This is where TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) operate. +- **Application Layer**: Corresponds to the OSI's Session, Presentation, and Application layers. It provides protocols for specific applications, such as HTTP/HTTPS for web traffic, DNS for domain name resolution, and FTP for file transfers. + +### Hands-on Checklist +- **Identity:** `hostname -I` (or `ip addr show`) — shows th e IP address(es) assigned to the host. +- **Reachability:** `ping ` — tests the reachability of a target host and measures the round-trip time for messages sent from the originating host to a destination computer. +- **Path:** `traceroute ` (or `tracepath`) — displays the route and measures transit delays of packets across an IP network. +- **Ports:** `ss -tulpn` (or `netstat -tulpn`) — lists all listening ports and the associated services. +- **Name resolution:** `dig ` or `nslookup ` — queries DNS servers to resolve domain names to IP addresses. +- **HTTP check:** `curl -I ` — retrieves the HTTP headers from the specified URL, showing the HTTP status code and other metadata. +- **Connections snapshot:** `netstat -an | head` — provides a snapshot of current network connections, showing the state of each connection (e.g., ESTABLISHED, LISTENING). 
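The checklist above can be stitched into one small first-pass script. A minimal sketch, assuming a placeholder host and port and that `dig`, `curl`, and `ss` are available on the box (the script name, defaults, and layout are not from the original notes):

#!/bin/bash
# quick-net-check.sh - first-pass triage using the checklist commands above.
# Usage: ./quick-net-check.sh HOST PORT   (both optional; the defaults below are placeholders)
set -u

host="${1:-example.com}"
port="${2:-80}"

echo "== Identity (my IPs) =="
hostname -I

echo "== Name resolution =="
dig +short "$host"

echo "== Reachability =="
ping -c 3 "$host"

echo "== Listening ports on this machine =="
ss -tulpn

echo "== HTTP check =="
curl -I --max-time 10 "http://${host}:${port}" || echo "HTTP check failed"

Each section maps to one bullet of the checklist, so the output can be read top to bottom in the same order as the checklist itself.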
+ +### Mini Task: Port Probe & Interpret +i have tested on port no 80 its succesfull +(test.png) +## Reflection (add to your markdown) +- Which command gives you the fastest signal when something is broken? +curl -I as it will shows me the HTTP status code and headers. +- What layer (OSI/TCP-IP) would you inspect next if DNS fails? If HTTP 500 shows up? +applica1tion layer for both cases, as DNS is part of the application layer in the TCP/IP stack, and HTTP 500 is an error code that indicates a server-side issue, which also falls under the application layer. +- Two follow-up checks you’d run in a real incident. +dns failure, I would check the DNS server configuration and logs to identify any issues. +Ports `ss -tulpn` to check if the DNS service is running and listening on the correct port (usually port 53). \ No newline at end of file diff --git a/2026/day-14/ports.png b/2026/day-14/ports.png new file mode 100644 index 0000000000..fd5913d08a Binary files /dev/null and b/2026/day-14/ports.png differ diff --git a/2026/day-14/test.png b/2026/day-14/test.png new file mode 100644 index 0000000000..e57048a455 Binary files /dev/null and b/2026/day-14/test.png differ diff --git a/2026/day-15/day-15-networking-concepts.md b/2026/day-15/day-15-networking-concepts.md new file mode 100644 index 0000000000..314a87f4e3 --- /dev/null +++ b/2026/day-15/day-15-networking-concepts.md @@ -0,0 +1,151 @@ +### Task 1: DNS – How Names Become IPs + +1. Explain in 3–4 lines: what happens when you type `google.com` in a browser? + +DNS recursor will start checking if it knows the IP. Then Root name server converts human text to readable IP. +TLD nameserver check for a specific IP. It hosts the last portion of IP like .com +then authoritative nameserver acts as specific rack of IPs finally browser uses this ip to connect to googles server. + +2. What are these record types? Write one line each: + - `A`, `AAAA`, `CNAME`, `MX`, `NS` + + - `A` - it contains the IPv4 address like for www-> 172.0.0.0 + - `AAAA` - it contains the record in IPv6 form like for www->2640:4444: + - `CNAME` (host to host) - this maps one hostname to another hostname (host-to-host). This reduces administrative overhead because multiple aliases (like ftp, mail, www) can point to the same canonical name. Updating the canonical hostname automatically updates all the aliases. + - `MX` - it is a mail exchange record that direct mail to the correct mail server for a domain + - `NS` - this stores the history of records + +3. Run: `dig google.com` — identify the A record and TTL from the output. + +google.com. 227 IN A 192.178.187.102 +google.com. 227 IN A 192.178.187.138 +google.com. 227 IN A 192.178.187.101 +google.com. 227 IN A 192.178.187.100 +google.com. 227 IN A 192.178.187.113 +google.com. 227 IN A 192.178.187.139 +TTl -227 + +### Task 2: IP Addressing + +1. What is an IPv4 address? How is it structured? (e.g., `192.168.1.10`) + +IP- it is a 32 -bit numerical label assigned to devices on a network , used to identify the devices and enable communication over the internet. +192.168.1.10. this is divided into 4 octets with values form (0-255) + +2. Difference between **public** and **private** IPs — give one example of each + +Public IP: +An IP address that can be accessed over the Internet by anyone. +Assigned by ISP and is unique across the Internet. +Example: 203.0.113.5 + +Private IP: +An IP address used within a local network (LAN) and cannot be accessed directly from the Internet. +Helps devices communicate inside a home or office network. 
+Examples: 172.16.0.1\ + +3. What are the private IP ranges? + +These IPs are reserved for use within private networks and cannot be routed on the public Internet: + +Range Notes +10.0.0.0 – 10.255.255.255 Large private network +172.16.0.0 – 172.31.255.255 Medium private network +192.168.0.0 – 192.168.255.255 Small private network, common in home LANs + +4. Run: `ip addr show` — identify which of your IPs are private + +ubuntu@ip-172-31-27-220:~$ ip addr show +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host noprefixroute + valid_lft forever preferred_lft forever +2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000 + link/ether 06:8a:5f:f8:3f:31 brd ff:ff:ff:ff:ff:ff + inet 172.31.27.220/20 metric 100 brd 172.31.31.255 scope global dynamic ens5 + valid_lft 3408sec preferred_lft 3408sec + inet6 fe80::48a:5fff:fef8:3f31/64 scope link + valid_lft forever preferred_lft forever + +inet 127.0.0.1/8 +inet 172.31.27.220/20 + +### Task 3: CIDR & Subnetting + +1. What does `/24` mean in `192.168.1.0/24`? + +The first 24 bits of the IP address are the network portion. +The remaining 8 bits are for hosts, which determines how many devices can be connected. +usable hosts = 2^(hostbits) -2 + +2. How many usable hosts in a `/24`? A `/16`? A `/28`? + +/24 - 254 +/16 - 65,534 +/28 - 14 + +3. Explain in your own words: why do we subnet? + +Subnetting divides a large network into smaller, manageable networks. + +It helps organize devices, reduce broadcast traffic, and improve security. + +4. Quick exercise — fill in: + +| CIDR | Subnet Mask | Total IPs | Usable Hosts | +|------|--------------- |-----------|--------------| +| /24 | 255.255.255.0 | 256 | 254 | +| /16 | 255.255.0.0 | 65536 | 65534 | +| /28 | 255.255.255.240 | 16 | 14 | + + +### Task 4: Ports – The Doors to Services + +1. What is a port? Why do we need them? + +A port is like a door where each application uses its own door to send and recieve data they helps which app should get the data. +we need them as we can run differents app or service on same ip address. + +2. Document these common ports: + +| Port | Service | +|------|---------| +| 22 | SSH | +| 80 | HTTP | +| 443 | HTTPS | +| 53 |DNS | +| 3306 | MYSQL | +| 6379 | REDIS | +| 27017| MONGODB | +ubuntu@ip-172-31-27-220:~$ ss -tulpn +Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process +udp UNCONN 0 0 127.0.0.54:53 0.0.0.0:* +udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* +udp UNCONN 0 0 172.31.27.220%ens5:68 0.0.0.0:* +udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* +udp UNCONN 0 0 [::1]:323 [::]:* +tcp LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* +tcp LISTEN 0 4096 127.0.0.54:53 0.0.0.0:* +tcp LISTEN 0 511 0.0.0.0:80 0.0.0.0:* +tcp LISTEN 0 4096 0.0.0.0:22 0.0.0.0:* +tcp LISTEN 0 511 [::]:80 [::]:* +tcp LISTEN 0 4096 [::]:22 [::]:* +ubuntu@ip-172-31-27-220:~$ + +### Task 5: Putting It Together + +Answer in 2–3 lines each: +- You run `curl http://myapp.com:8080` — what networking concepts from today are involved? + +curl is a client tool that sends a request to a server. DNS translates myapp.com to an IP, and port 8080 tells the system which application/service to connect to. The request uses TCP/IP to reach the server. + +- Your app can8't reach a database at `10.0.1.50:3306` — what would you check first? + +First, check if the IP is reachable (network rules). 
Then check port 3306 accessibility and database permissions. Knowing it’s private or public is important for network access. + +### what i learned +i learned about DNS +i learned how to check ports what are ports +i learned about what us curl ,dig ,IP addresses,CIDR diff --git a/2026/day-16/day-16-shell-scripting.md b/2026/day-16/day-16-shell-scripting.md new file mode 100644 index 0000000000..1f5d1f1443 --- /dev/null +++ b/2026/day-16/day-16-shell-scripting.md @@ -0,0 +1,157 @@ +## Challenge Tasks + +### Task 1: Your First Script +### hello.sh i create variables inside this +#!/bin/bash + +echo "Hello, Devops" + +NAME="AKASH" +ROLE="DEVOPS ENGINEER" +echo "with '' " +echo 'Hello, I am $NAME and I am a $ROLE' + +### output +ubuntu@ip-172-31-27-220:~$ vim hello.sh +"hello.sh" [New] 4L, 35B written +ubuntu@ip-172-31-27-220:~$ chmod +x hello.sh +ubuntu@ip-172-31-27-220:~$ ./hello.sh +Hello, Devops +ubuntu@ip-172-31-27-220:~$ + + +**Document:** What happens if you remove the shebang line? +it still runs because this has .sh as when we execute the file it throws format error bash detect this error ENOEXEC and runs the file itself + +### Task 2: Variables + +### hello.sh i create variables inside this +#!/bin/bash + +echo "Hello, Devops" + +NAME="AKASH" +ROLE="DEVOPS ENGINEER" +echo "with '' " +echo 'Hello, I am $NAME and I am a $ROLE' + +### output +## with double quotes +ubuntu@ip-172-31-27-220:~$ ./hello.sh +Hello, Devops +Hello, I am AKASH and I am a DEVOPS ENGINEER + + + ## with singe quote +ubuntu@ip-172-31-27-220:~$ ./hello.sh +Hello, Devops +with '' +Hello, I am $NAME and I am a $ROLE +ubuntu@ip-172-31-27-220:~$ + +The difference in both quotes is in single it read as line and $ variable and do not print its value + +### Task 3: User Input with read +### greet.sh +#!/bin/bash +read -p "Enter your name :" Name +read -p "what is your favourote tool:" tool +echo "Hello $Name , your favourite tool is $tool" + +### output +ubuntu@ip-172-31-27-220:~$ vim greet.sh +ubuntu@ip-172-31-27-220:~$ ./greet.sh +Enter your name :akash +what is your favourote tool:docker +Hello akash , your favourite tool is docker +ubuntu@ip-172-31-27-220:~$ + +### Task 4: If-Else Conditions + +1. Create `check_number.sh` that: +### check_numeber.sh + +#!/bin/bash +read -p "Enter a number: " Num + +if [ "$Num" -gt 0 ]; then + echo "Positive" +elif [ "$Num" -lt 0 ]; then + echo "Negative" +else + echo "Zero" +fi + +### output +ubuntu@ip-172-31-27-220:~$ vim check_number.sh +ubuntu@ip-172-31-27-220:~$ 10L, 162B written +ubuntu@ip-172-31-27-220:~$ ./check_number.sh +Enter a number: 23 +Positive +ubuntu@ip-172-31-27-220:~$ ./check_number.sh +Enter a number: 0 +Zero +ubuntu@ip-172-31-27-220:~$ ./check_number.sh +Enter a number: -9 +Negative +ubuntu@ip-172-31-27-220:~$ + +2. Create `file_check.sh` that: +### file_check.sh +#!/bin/bash +read -p "enter filename : " file +if [ -f $file ]; then + echo "file exists" +else + echo "file not present" +fi + +### output +ubuntu@ip-172-31-27-220:~$ ls +0 bank-heist devops-file.txt greet.sh hello.sh prac project-config.yaml +app-logs check_number.sh file_check.sh heist-project nginx-logs.txt prace team-notes.txt +ubuntu@ip-172-31-27-220:~$ ./file_check.sh +enter filename : devops-file.txt +file exists +ubuntu@ip-172-31-27-220:~$ ./file_check.sh +enter filename : deo +file not present +ubuntu@ip-172-31-27-220:~$ + +### Task 5: Combine It All +### server_check.sh + +Name="nginx" +read -p "Do you want to check the status? 
(y/n)" Ans +if [ $Ans = 'y' ]; then + echo "systemctl status $Name" + STATUS=$(systemctl is-active nginx) + + if [ "$STATUS" = "active" ]; then + echo "Nginx is running" + fi +else + echo "Skipped" + +fi + +### output +ubuntu@ip-172-31-27-220:~$ ./server_check.sh +Do you want to check the status? (y/n)y +systemctl status nginx +Nginx is running +ubuntu@ip-172-31-27-220:~$ +ubuntu@ip-172-31-27-220:~$ vim server_check.sh +ubuntu@ip-172-31-27-220:~$ ./server_check.sh +Do you want to check the status? (y/n)y +systemctl status nginx +Nginx is running +ubuntu@ip-172-31-27-220:~$ ./server_check.sh +Do you want to check the status? (y/n)n +Skipped +ubuntu@ip-172-31-27-220:~$ + + +i larned about shell commands +i learned about if else conditions +i learned about checking files and server by writing scripts \ No newline at end of file diff --git a/2026/day-17/day-17-scripting.md b/2026/day-17/day-17-scripting.md new file mode 100644 index 0000000000..bac17dc5c6 --- /dev/null +++ b/2026/day-17/day-17-scripting.md @@ -0,0 +1,169 @@ +### Task 1: For Loop +1 `for_loop.sh`file + +#!/bin/bash + +for i in apple banana orange pinapple kiwi +do + echo "$i" +done + +`output` + +ubuntu@ip-172-31-27-220:~$ ./for_loop.sh +apple +banana +orange +pinapple +kiwi + +2 `count.sh` +#!/bin/bash + +for i in {1..10}; +do + echo "$i" +done + +~ +~ +~ +`output` +ubuntu@ip-172-31-27-220:~$ ./count.sh +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +ubuntu@ip-172-31-27-220:~$ + +### Task 2: While Loop + +#!/bin/bash + +read -p "number" num + +while [ $num -ge 0 ] +do + echo "$num" + let num-- +done +echo "Done!" + +`output` +ubuntu@ip-172-31-27-220:~$ ./countdown.sh +number10 +10 +9 +8 +7 +6 +5 +4 +3 +2 +1 +0 +Done! + +### Task 3: Command-Line Arguments +#!/bin/bash + +if [ $# -eq 0 ]; +then + echo "usage: ./greet.sh " + exit 1 +fi + +echo "Hello, $1 !" +`output` +ubuntu@ip-172-31-27-220:~$ ./greet.sh akash +Hello, akash ! +ubuntu@ip-172-31-27-220:~$ ./greet.sh +usage: ./greet.sh +ubuntu@ip-172-31-27-220:~$ + +2. Create `args_demo.sh` that: +#!/bin/bash + +echo "$#" + +echo "$@" + +echo "$0" + +`output` +ubuntu@ip-172-31-27-220:~$ ./args_demo.sh akash tws +2 +akash tws +./args_demo.sh +ubuntu@ip-172-31-27-220:~$ ./args_demo.sh akash +1 +akash +./args_demo.sh +ubuntu@ip-172-31-27-220:~$ + +### Task 4: Install Packages via Script +#!/bin/bash + +sudo apt update &>/dev/null + +for i in nginx curl wget +do + sudo apt install $i -y &>/dev/null +done + +echo "Checking installation status..." + +for i in nginx curl wget +do + if dpkg -s $i &>/dev/null + then + echo "$i is installed" + else + echo "$i is NOT installed" + fi +done + +`output` + +ubuntu@ip-172-31-27-220:~$ ./install_packages.sh +Checking installation status... 
+nginx is installed +curl is installed +wget is installed +ubuntu@ip-172-31-27-220:~$ + +### Task 5: Error Handling +#!/bin/bash +set -e + +mkdir /tmp/devops &>/dev/null || echo "Directory already exists" +cd /tmp/devops +touch new_file.txt + +echo "production work" + +`output` + +ubuntu@ip-172-31-27-220:~$ vim safe-script.sh +ubuntu@ip-172-31-27-220:~$ ./safe-script.sh +mkdir: cannot create directory ‘/tmp/devops’: File exists +Directory already exists +production work +ubuntu@ip-172-31-27-220:~$ vim safe-script.sh +ubuntu@ip-172-31-27-220:~$ +ubuntu@ip-172-31-27-220:~$ ./safe-script.sh +Directory already exists +production work +ubuntu@ip-172-31-27-220:~$ ./safe-script.sh +production work +ubuntu@ip-172-31-27-220:~$ cd /tmp/devops +ubuntu@ip-172-31-27-220:/tmp/devops$ ls +new_file.txt +ubuntu@ip-172-31-27-220:/tmp/devops$ \ No newline at end of file diff --git a/2026/day-18/day-18-scripting.md b/2026/day-18/day-18-scripting.md new file mode 100644 index 0000000000..ece1439274 --- /dev/null +++ b/2026/day-18/day-18-scripting.md @@ -0,0 +1,212 @@ +### Task 1: Basic Functions +#!/bin/bash + +greet () { + echo "Hello $1 !i " +} +add () { + echo "sum is :" $(($1 + $2)) +} +greet "akash" +add 2 5 + +`output` +ubuntu@ip-172-31-27-220:~$ ./functions.sh +Hello akash !i +sum is : 7 + +### Task 2: Functions with Return Values +#!/bin/bash + + +check_disk (){ + df -h | awk 'NR==2 {print $4}' +} + + +check_memory (){ + free -h | awk 'NR==2 {print $4}' +} +check_disk +check_memory + + +~ +`output` +ubuntu@ip-172-31-27-220:~$ ./disk_check.sh +4.3G +219Mi +ubuntu@ip-172-31-27-220:~$ + +### Task 3: Strict Mode — `set -euo pipefail` +#!/bin/bash +set -euo pipefail + +echo "Testing set -u" +# echo "$num" + +echo "Testing set -e" + +# mkdir he +# mkdir he + +echo "Testing pipefail" +#false | true +true | true +echo "Script completed successfully" + +`output` +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh +./strict_demo.sh: line 5: num: unbound variable +ubuntu@ip-172-31-27-220:~$ vim strict_demo.sh +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh +Testing set -u +./strict_demo.sh: line 5: num: unbound variable +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh +Testing set -u +./strict_demo.sh: line 5: num: unbound variable +ubuntu@ip-172-31-27-220:~$ vim strict_demo.sh + 14L, 195B written +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh +Testing set -u +Testing set -e +mkdir: cannot create directory ‘he’: File exists +ubuntu@ip-172-31-27-220:~$ vim strict_demo.sh +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh +Testing set -u +Testing set -e +Testing pipefail +ubuntu@ip-172-31-27-220:~$ vim strict_demo.sh +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh +Testing set -u +Testing set -e +Testing pipefail +Script completed successfully +ubuntu@ip-172-31-27-220:~$ ./strict_demo.sh + +**Document:** +- `set -e` → exits when got the error +- `set -u` → produces error when anything is undefined +- `set -o pipefail` → produces error when | fails like `true | false ` means 0 | 1 ->0 so pipe fails in this case + +## Task 4: Local Variables +#!/bin/bash +set -u +#a=10 + +check_localvalue () { + local a=20 + echo "$a" + +} +check_localvalue +echo "$a" + +`output` +ubuntu@ip-172-31-27-220:~$ ./local_demo.sh +10 +ubuntu@ip-172-31-27-220:~$ vim local_demo.sh +ubuntu@ip-172-31-27-220:~$ ./local_demo.sh +20 +10 +ubuntu@ip-172-31-27-220:~$ vim local_demo.sh +ubuntu@ip-172-31-27-220:~$ ./local_demo.sh +20 + +ubuntu@ip-172-31-27-220:~$ vim local_demo.sh +ubuntu@ip-172-31-27-220:~$ ./local_demo.sh +20 +./local_demo.sh: line 11: a: unbound variable 
+ubuntu@ip-172-31-27-220:~$ + +### Task 5: Build a Script — System Info Reporter +#!/bin/bash +set -euo pipefail + +# Function to print hostname and OS info +host_name() { + echo "Hostname: $(hostname)" + echo "OS Info:" + grep -E '^(NAME|VERSION|VERSION_CODENAME|ID)' /etc/os-release + echo +} + +# Function to print uptime +up_time() { + echo "Uptime:" + uptime -p + echo +} + +# Function to print top 5 disk usage +disk_usage() { + echo "Top 5 Disk Usage:" + df -h | sort -k5 -hr | head -6 + echo +} + +# Function to print memory usage +memory_usage() { + echo "Memory Usage:" + free -h | awk 'NR==2 {print "Used: "$3", Free: "$4}' + echo +} + +# Function to print top 5 CPU-consuming processes +top_processes() { + echo "Top 5 CPU-Consuming Processes:" + ps -eo pid,ppid,cmd,%cpu --sort=-%cpu | head -6 + echo +} + +# Main function +main() { + host_name + up_time + disk_usage + memory_usage + top_processes +} + +# Call main +main + +`output` +ubuntu@ip-172-31-27-220:~$ ./system_info.sh +Hostname: ip-172-31-27-220 +OS Info: +NAME="Ubuntu" +VERSION_ID="24.04" +VERSION="24.04.3 LTS (Noble Numbat)" +VERSION_CODENAME=noble +ID=ubuntu +ID_LIKE=debian + +Uptime: +up 1 day, 19 minutes + +Top 5 Disk Usage: +/dev/root 6.8G 2.5G 4.3G 37% / +/dev/nvme0n1p16 881M 156M 663M 20% /boot +/dev/nvme0n1p15 105M 6.2M 99M 6% /boot/efi +efivarfs 128K 4.1K 119K 4% /sys/firmware/efi/efivars +tmpfs 183M 916K 182M 1% /run +tmpfs 92M 12K 92M 1% /run/user/1000 + +Memory Usage: +Used: 403Mi, Free: 100Mi + +Top 5 CPU-Consuming Processes: + PID PPID CMD %CPU + 24481 24367 sshd: ubuntu@pts/0 0.1 + 25396 2 [kworker/u8:3-events_unboun 0.0 + 24363 2 [kworker/0:1-events] 0.0 + 24354 2 [kworker/1:0-events] 0.0 + 189 1 /sbin/multipathd -d -s 0.0 + +ubuntu@ip-172-31-27-220:~$ + +***What you learned (3 key points)*** +i learned about the set -euo +i leanred about the fucntions +i learned about the collecting commands in one function \ No newline at end of file diff --git a/2026/day-19/day-19-project.md b/2026/day-19/day-19-project.md new file mode 100644 index 0000000000..8af012a672 --- /dev/null +++ b/2026/day-19/day-19-project.md @@ -0,0 +1,150 @@ +## Challenge Tasks + +### Task 1: Log Rotation Script +if [ $# -ne 1 ];then + display_usage +fi +if [ ! -d "${source_dir}" ];then + echo "directory does not exists" + exit 1 +fi +COUNT=$(find "$1" -type f -name "*.log" -mtime +7 | wc -l ) +if [ "$COUNT" -gt 0 ]; then + find "$1" -type f -name "*.log" -mtime +7 -exec gzip {} \; +fi + +COUNT2=$(find "$1" -type f -name "*.gz" -mtime +30 | wc -l ) +if [ "$COUNT2" -gt 0 ];then + find "$1" -type f -name "*.gz" -mtime +30 -exec rm {} \; +fi + +echo "Log Rotation Summary:" +echo "Compressed files: $COUNT" +echo "Deleted files: $COUNT2" + +`output` + +ubuntu@ip-172-31-27-220:~$ cat /var/log/myapp +cat: /var/log/myapp: Is a directory +ubuntu@ip-172-31-27-220:~$ ls -l /var/log/myapp +total 0 +-rw-r--r-- 1 root root 0 Feb 15 11:12 test.log +ubuntu@ip-172-31-27-220:~$ sudo ./log_rotate.sh /var/log/myapp +Log Rotation Summary: +Compressed files: 1 +Deleted files: 0 +ubuntu@ip-172-31-27-220:~$ ls -l /var/log/myapp +total 4 +-rw-r--r-- 1 root root 29 Feb 15 11:12 test.log.gz +ubuntu@ip-172-31-27-220:~$ +### Task 2: Server Backup Script +#!/bin/bash +set -e + +# Arguments +source_dir="$1" +backup_dir="$2" +timestamp=$(date '+%Y-%m-%d-%H-%M-%S') + +# Check if source directory exists +if [ ! -d "$source_dir" ]; then + echo "Error: Source directory '$source_dir' does not exist." 
+ exit 1 +fi + +# Create backup directory if it doesn't exist +mkdir -p "$backup_dir" + +# Archive name and path +archive_name="backup +archive_path="${backup_dir}/${archive_name}" + +# Create the backup +tar -czf "$archive_path" -C "$source_dir" + +# Verify archive creation +if [ -f "$archive_path" ]; then + archive_size=$(du -h "$archive_path" | cut -f1) + echo "Backup created successfully: $archive_name" + echo "Size: $archive_size" +else + echo "Error: Backup failed!" + exit 1 +fi + +# Delete backups older than 14 days +find "$backup_dir" -name "backup-*.tar.gz" -type f -mtime +14 -exec rm -f {} \; + +echo "Old backups deleted (older than 14 days)." + +~ +`output` +ubuntu@ip-172-31-27-220:~$ ./backup.sh /home/ubuntu/data /home/ubuntu/backups +Backup created successfully: backup-2026-02-26-12-05-47.tar.gz +Size: 4.0K +Old backups deleted (older than 14 days). +ubuntu@ip-172-31-27-220:~$ ls -lh /home/ubuntu/backups +total 12K +-rw-rw-r-- 1 ubuntu ubuntu 111 Feb 25 12:30 backup-2026-02-25-12-30-35.tar.gz +-rw-rw-r-- 1 ubuntu ubuntu 215 Feb 25 12:31 backup-2026-02-25-12-31-13.tar.gz +-rw-rw-r-- 1 ubuntu ubuntu 215 Feb 26 12:05 backup-2026-02-26-12-05-47.tar.gz +ubuntu@ip-172-31-27-220:~$ + +### Task 4: Combine — Scheduled Maintenance Script +#!/bin/bash + +LOG_FILE="/var/log/maintenance.log" +LOG_DIR="/home/ubuntu/logs" +SOURCE_DIR="/home/ubuntu/data" +BACKUP_DIR="/home/ubuntu/backups" + +log() { + echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE" +} + +log "===== Maintenance Started =====" + +# Run Log Rotation (pass log directory) +if /usr/bin/bash /home/ubuntu/log_rotate.sh "$LOG_DIR" >> "$LOG_FILE" 2>&1; then + log "Log rotation completed successfully." +else + log "Log rotation failed!" +fi + +# Run Backup +if /usr/bin/bash /home/ubuntu/backup.sh "$SOURCE_DIR" "$BACKUP_DIR" >> "$LOG_FILE" 2>&1; then + log "Backup completed successfully." +else + log "Backup failed!" +fi + +log "===== Maintenance Finished =====" +log "" +~ +~ +`output` +ubuntu@ip-172-31-27-220:~$ sudo ./maintenance.sh +sudo cat /var/log/maintenance.log +[2026-02-26 12:15:02] ===== Maintenance Started ===== + +[2026-02-26 12:15:02] Log rotation failed! +/home/ubuntu/backup.sh: line 38: syntax error near unexpected token `(' +[2026-02-26 12:15:02] Backup failed! +[2026-02-26 12:15:02] ===== Maintenance Finished ===== +[2026-02-26 12:15:02] +[2026-02-26 12:17:36] ===== Maintenance Started ===== + +[2026-02-26 12:17:36] Log rotation failed! +Backup created successfully: backup-2026-02-26-12-17-36.tar.gz +Size: 4.0K +Old backups deleted (older than 14 days). +[2026-02-26 12:17:36] Backup completed successfully. +[2026-02-26 12:17:36] ===== Maintenance Finished ===== +[2026-02-26 12:17:36] +[2026-02-26 12:20:07] ===== Maintenance Started ===== +Usage: /home/ubuntu/log_rotate.sh +[2026-02-26 12:20:07] Log rotation failed! +Backup created successfully: backup-2026-02-26-12-20-07.tar.gz +Size: 4.0K +Old backups deleted (older than 14 days). +[2026-02-26 12:20:07] Backup completed successfully. \ No newline at end of file diff --git a/2026/day-20/day-20-solution.md b/2026/day-20/day-20-solution.md new file mode 100644 index 0000000000..7229a576e7 --- /dev/null +++ b/2026/day-20/day-20-solution.md @@ -0,0 +1,1465 @@ +`script` +#!/bin/bash + +# Check arguments +if [ $# -ne 1 ]; then + echo "Usage: $0 " + exit 1 +fi + +log_file="$1" + +# Check if file exists +if [ ! -f "$log_file" ]; then + echo "Error: File does not exist." 
+ exit 1 +fi + +# Get date +current_date=$(date +%Y-%m-%d) + +# Report file name +report_file="log_report_${current_date}.txt" + +# Generate report +{ +echo "===== Log Summary Report =====" +echo "Date of Analysis: $current_date" +echo "Log File: $log_file" +echo "Total Lines Processed: $(wc -l < "$log_file")" +echo "Total Error Count: $(grep -c '\[error\]' "$log_file")" +echo +echo "--- Top 5 Error Messages ---" +grep '\[error\]' "$log_file" | cut -d']' -f3 | sort | uniq -c | sort -rn | head -5 +echo "--- Critical Events (with line numbers) ---" +} > "$report_file:" + +echo "Report generated: $report_file" + +`output` +ubuntu@ip-172-31-27-220:~$ cat log_report_2026-03-01.txt: +===== Log Summary Report ===== +Date of Analysis: 2026-03-01 +Log File: /home/ubuntu/day-20.log +Total Lines Processed: 2000 +Total Error Count: 595 + +--- Top 5 Error Messages --- + 369 mod_jk child workerEnv in error state 6 + 101 mod_jk child workerEnv in error state 7 + 44 mod_jk child workerEnv in error state 8 + 20 mod_jk child workerEnv in error state 9 + 12 mod_jk child init 1 -2 + +--- Critical Events (with line numbers) --- +1:[Sun Dec 04 04:47:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +3:[Sun Dec 04 04:51:08 2005] [notice] jk2_init() Found child 6725 in scoreboard slot 10 +4:[Sun Dec 04 04:51:09 2005] [notice] jk2_init() Found child 6726 in scoreboard slot 8 +5:[Sun Dec 04 04:51:09 2005] [notice] jk2_init() Found child 6728 in scoreboard slot 6 +6:[Sun Dec 04 04:51:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +7:[Sun Dec 04 04:51:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +8:[Sun Dec 04 04:51:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +12:[Sun Dec 04 04:51:37 2005] [notice] jk2_init() Found child 6736 in scoreboard slot 10 +13:[Sun Dec 04 04:51:38 2005] [notice] jk2_init() Found child 6733 in scoreboard slot 7 +14:[Sun Dec 04 04:51:38 2005] [notice] jk2_init() Found child 6734 in scoreboard slot 9 +15:[Sun Dec 04 04:51:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +16:[Sun Dec 04 04:51:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +18:[Sun Dec 04 04:52:04 2005] [notice] jk2_init() Found child 6738 in scoreboard slot 6 +19:[Sun Dec 04 04:52:04 2005] [notice] jk2_init() Found child 6741 in scoreboard slot 9 +20:[Sun Dec 04 04:52:05 2005] [notice] jk2_init() Found child 6740 in scoreboard slot 7 +21:[Sun Dec 04 04:52:05 2005] [notice] jk2_init() Found child 6737 in scoreboard slot 8 +22:[Sun Dec 04 04:52:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +23:[Sun Dec 04 04:52:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +24:[Sun Dec 04 04:52:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +28:[Sun Dec 04 04:52:36 2005] [notice] jk2_init() Found child 6748 in scoreboard slot 6 +29:[Sun Dec 04 04:52:36 2005] [notice] jk2_init() Found child 6744 in scoreboard slot 10 +30:[Sun Dec 04 04:52:36 2005] [notice] jk2_init() Found child 6745 in scoreboard slot 8 +31:[Sun Dec 04 04:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +32:[Sun Dec 04 04:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +35:[Sun Dec 04 04:53:05 2005] [notice] jk2_init() Found child 6750 in scoreboard slot 7 +36:[Sun Dec 04 04:53:05 2005] [notice] jk2_init() Found child 6751 in scoreboard slot 9 +37:[Sun Dec 04 04:53:05 2005] [notice] 
jk2_init() Found child 6752 in scoreboard slot 10 +38:[Sun Dec 04 04:53:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +39:[Sun Dec 04 04:53:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +42:[Sun Dec 04 04:53:29 2005] [notice] jk2_init() Found child 6754 in scoreboard slot 8 +43:[Sun Dec 04 04:53:29 2005] [notice] jk2_init() Found child 6755 in scoreboard slot 6 +44:[Sun Dec 04 04:53:40 2005] [notice] jk2_init() Found child 6756 in scoreboard slot 7 +45:[Sun Dec 04 04:53:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +47:[Sun Dec 04 04:54:15 2005] [notice] jk2_init() Found child 6763 in scoreboard slot 10 +48:[Sun Dec 04 04:54:15 2005] [notice] jk2_init() Found child 6766 in scoreboard slot 6 +49:[Sun Dec 04 04:54:15 2005] [notice] jk2_init() Found child 6767 in scoreboard slot 7 +50:[Sun Dec 04 04:54:15 2005] [notice] jk2_init() Found child 6765 in scoreboard slot 8 +51:[Sun Dec 04 04:54:18 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +52:[Sun Dec 04 04:54:18 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +53:[Sun Dec 04 04:54:18 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +54:[Sun Dec 04 04:54:18 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +59:[Sun Dec 04 04:54:20 2005] [notice] jk2_init() Found child 6768 in scoreboard slot 9 +60:[Sun Dec 04 04:54:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +62:[Sun Dec 04 04:56:52 2005] [notice] jk2_init() Found child 8527 in scoreboard slot 10 +63:[Sun Dec 04 04:56:52 2005] [notice] jk2_init() Found child 8533 in scoreboard slot 8 +64:[Sun Dec 04 04:56:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +65:[Sun Dec 04 04:56:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +68:[Sun Dec 04 04:57:20 2005] [notice] jk2_init() Found child 8536 in scoreboard slot 6 +69:[Sun Dec 04 04:57:20 2005] [notice] jk2_init() Found child 8539 in scoreboard slot 7 +70:[Sun Dec 04 04:57:24 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +71:[Sun Dec 04 04:57:24 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +74:[Sun Dec 04 04:57:49 2005] [notice] jk2_init() Found child 8541 in scoreboard slot 9 +75:[Sun Dec 04 04:58:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +77:[Sun Dec 04 04:58:45 2005] [notice] jk2_init() Found child 8547 in scoreboard slot 10 +78:[Sun Dec 04 04:58:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +80:[Sun Dec 04 04:59:28 2005] [notice] jk2_init() Found child 8554 in scoreboard slot 6 +81:[Sun Dec 04 04:59:27 2005] [notice] jk2_init() Found child 8553 in scoreboard slot 8 +82:[Sun Dec 04 04:59:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +83:[Sun Dec 04 04:59:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +86:[Sun Dec 04 05:00:03 2005] [notice] jk2_init() Found child 8560 in scoreboard slot 7 +87:[Sun Dec 04 05:00:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +89:[Sun Dec 04 05:00:13 2005] [notice] jk2_init() Found child 8565 in scoreboard slot 9 +90:[Sun Dec 04 05:00:13 2005] [notice] jk2_init() Found child 8573 in scoreboard slot 10 +91:[Sun Dec 04 05:00:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +93:[Sun Dec 04 05:00:15 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +95:[Sun Dec 04 05:01:20 2005] [notice] jk2_init() Found child 8584 in scoreboard slot 7 +96:[Sun Dec 04 05:01:20 2005] [notice] jk2_init() Found child 8587 in scoreboard slot 9 +97:[Sun Dec 04 05:02:14 2005] [notice] jk2_init() Found child 8603 in scoreboard slot 10 +98:[Sun Dec 04 05:02:14 2005] [notice] jk2_init() Found child 8605 in scoreboard slot 8 +99:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8764 in scoreboard slot 10 +100:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8765 in scoreboard slot 11 +101:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8763 in scoreboard slot 9 +102:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8744 in scoreboard slot 8 +103:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8743 in scoreboard slot 7 +104:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8738 in scoreboard slot 6 +105:[Sun Dec 04 05:04:03 2005] [notice] jk2_init() Found child 8766 in scoreboard slot 12 +106:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +108:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +110:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +112:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +114:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +116:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +118:[Sun Dec 04 05:04:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +120:[Sun Dec 04 05:11:51 2005] [notice] jk2_init() Found child 25792 in scoreboard slot 6 +121:[Sun Dec 04 05:12:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +123:[Sun Dec 04 05:12:26 2005] [notice] jk2_init() Found child 25798 in scoreboard slot 7 +124:[Sun Dec 04 05:12:26 2005] [notice] jk2_init() Found child 25803 in scoreboard slot 8 +125:[Sun Dec 04 05:12:28 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +127:[Sun Dec 04 05:12:28 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +129:[Sun Dec 04 05:12:30 2005] [notice] jk2_init() Found child 25805 in scoreboard slot 9 +130:[Sun Dec 04 05:12:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +133:[Sun Dec 04 05:15:13 2005] [notice] jk2_init() Found child 1000 in scoreboard slot 10 +134:[Sun Dec 04 05:15:16 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +136:[Sun Dec 04 06:01:00 2005] [notice] jk2_init() Found child 32347 in scoreboard slot 6 +137:[Sun Dec 04 06:01:00 2005] [notice] jk2_init() Found child 32348 in scoreboard slot 7 +138:[Sun Dec 04 06:01:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +139:[Sun Dec 04 06:01:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +141:[Sun Dec 04 06:01:42 2005] [notice] jk2_init() Found child 32352 in scoreboard slot 9 +142:[Sun Dec 04 06:01:42 2005] [notice] jk2_init() Found child 32353 in scoreboard slot 10 +143:[Sun Dec 04 06:01:42 2005] [notice] jk2_init() Found child 32354 in scoreboard slot 6 +144:[Sun Dec 04 06:02:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +146:[Sun Dec 04 06:02:05 2005] [notice] jk2_init() Found child 32359 in scoreboard slot 9 +147:[Sun Dec 04 06:02:05 2005] [notice] jk2_init() Found child 32360 in 
scoreboard slot 11 +148:[Sun Dec 04 06:02:05 2005] [notice] jk2_init() Found child 32358 in scoreboard slot 8 +149:[Sun Dec 04 06:02:05 2005] [notice] jk2_init() Found child 32355 in scoreboard slot 7 +150:[Sun Dec 04 06:02:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +152:[Sun Dec 04 06:02:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +154:[Sun Dec 04 06:02:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +156:[Sun Dec 04 06:02:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +158:[Sun Dec 04 06:06:00 2005] [notice] jk2_init() Found child 32388 in scoreboard slot 8 +159:[Sun Dec 04 06:06:00 2005] [notice] jk2_init() Found child 32387 in scoreboard slot 7 +160:[Sun Dec 04 06:06:00 2005] [notice] jk2_init() Found child 32386 in scoreboard slot 6 +161:[Sun Dec 04 06:06:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +162:[Sun Dec 04 06:06:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +165:[Sun Dec 04 06:06:20 2005] [notice] jk2_init() Found child 32389 in scoreboard slot 9 +166:[Sun Dec 04 06:06:24 2005] [notice] jk2_init() Found child 32391 in scoreboard slot 10 +167:[Sun Dec 04 06:06:24 2005] [notice] jk2_init() Found child 32390 in scoreboard slot 8 +168:[Sun Dec 04 06:06:24 2005] [notice] jk2_init() Found child 32392 in scoreboard slot 6 +169:[Sun Dec 04 06:06:26 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +171:[Sun Dec 04 06:06:26 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +173:[Sun Dec 04 06:06:26 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +175:[Sun Dec 04 06:06:26 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +177:[Sun Dec 04 06:11:11 2005] [notice] jk2_init() Found child 32410 in scoreboard slot 7 +178:[Sun Dec 04 06:11:11 2005] [notice] jk2_init() Found child 32411 in scoreboard slot 9 +179:[Sun Dec 04 06:12:31 2005] [notice] jk2_init() Found child 32423 in scoreboard slot 9 +180:[Sun Dec 04 06:12:31 2005] [notice] jk2_init() Found child 32422 in scoreboard slot 8 +181:[Sun Dec 04 06:12:31 2005] [notice] jk2_init() Found child 32419 in scoreboard slot 6 +182:[Sun Dec 04 06:12:31 2005] [notice] jk2_init() Found child 32421 in scoreboard slot 11 +183:[Sun Dec 04 06:12:31 2005] [notice] jk2_init() Found child 32420 in scoreboard slot 7 +184:[Sun Dec 04 06:12:31 2005] [notice] jk2_init() Found child 32424 in scoreboard slot 10 +185:[Sun Dec 04 06:12:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +186:[Sun Dec 04 06:12:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +187:[Sun Dec 04 06:12:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +188:[Sun Dec 04 06:12:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +189:[Sun Dec 04 06:12:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +195:[Sun Dec 04 06:12:59 2005] [notice] jk2_init() Found child 32425 in scoreboard slot 6 +196:[Sun Dec 04 06:13:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +198:[Sun Dec 04 06:16:10 2005] [notice] jk2_init() Found child 32432 in scoreboard slot 7 +199:[Sun Dec 04 06:16:10 2005] [notice] jk2_init() Found child 32434 in scoreboard slot 9 +200:[Sun Dec 04 06:16:10 2005] [notice] jk2_init() Found child 32433 in scoreboard slot 8 +201:[Sun Dec 04 06:16:21 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +202:[Sun Dec 04 06:16:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +205:[Sun Dec 04 06:16:21 2005] [notice] jk2_init() Found child 32435 in scoreboard slot 10 +206:[Sun Dec 04 06:16:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +208:[Sun Dec 04 06:16:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +210:[Sun Dec 04 06:16:51 2005] [notice] jk2_init() Found child 32436 in scoreboard slot 6 +211:[Sun Dec 04 06:16:51 2005] [notice] jk2_init() Found child 32437 in scoreboard slot 7 +212:[Sun Dec 04 06:17:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +213:[Sun Dec 04 06:17:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +215:[Sun Dec 04 06:17:06 2005] [notice] jk2_init() Found child 32438 in scoreboard slot 8 +216:[Sun Dec 04 06:17:18 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +218:[Sun Dec 04 06:17:23 2005] [notice] jk2_init() Found child 32440 in scoreboard slot 10 +219:[Sun Dec 04 06:17:23 2005] [notice] jk2_init() Found child 32439 in scoreboard slot 9 +220:[Sun Dec 04 06:17:23 2005] [notice] jk2_init() Found child 32441 in scoreboard slot 6 +221:[Sun Dec 04 06:17:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +222:[Sun Dec 04 06:17:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +225:[Sun Dec 04 06:17:55 2005] [notice] jk2_init() Found child 32442 in scoreboard slot 7 +226:[Sun Dec 04 06:17:55 2005] [notice] jk2_init() Found child 32443 in scoreboard slot 8 +227:[Sun Dec 04 06:17:55 2005] [notice] jk2_init() Found child 32444 in scoreboard slot 9 +228:[Sun Dec 04 06:18:08 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +229:[Sun Dec 04 06:18:08 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +232:[Sun Dec 04 06:18:12 2005] [notice] jk2_init() Found child 32445 in scoreboard slot 10 +233:[Sun Dec 04 06:18:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +235:[Sun Dec 04 06:18:41 2005] [notice] jk2_init() Found child 32447 in scoreboard slot 7 +236:[Sun Dec 04 06:18:39 2005] [notice] jk2_init() Found child 32446 in scoreboard slot 6 +237:[Sun Dec 04 06:18:40 2005] [notice] jk2_init() Found child 32448 in scoreboard slot 8 +238:[Sun Dec 04 06:18:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +239:[Sun Dec 04 06:18:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +242:[Sun Dec 04 06:19:05 2005] [notice] jk2_init() Found child 32449 in scoreboard slot 9 +243:[Sun Dec 04 06:19:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +244:[Sun Dec 04 06:19:19 2005] [notice] jk2_init() Found child 32450 in scoreboard slot 10 +246:[Sun Dec 04 06:19:19 2005] [notice] jk2_init() Found child 32452 in scoreboard slot 7 +247:[Sun Dec 04 06:19:19 2005] [notice] jk2_init() Found child 32451 in scoreboard slot 6 +248:[Sun Dec 04 06:19:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +249:[Sun Dec 04 06:19:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +252:[Sun Dec 04 06:19:56 2005] [notice] jk2_init() Found child 32454 in scoreboard slot 7 +253:[Sun Dec 04 06:19:56 2005] [notice] jk2_init() Found child 32453 in scoreboard slot 8 +254:[Sun Dec 04 06:19:56 2005] [notice] jk2_init() Found child 32455 in scoreboard slot 9 +255:[Sun Dec 04 06:20:30 2005] [notice] jk2_init() 
Found child 32467 in scoreboard slot 9 +256:[Sun Dec 04 06:20:30 2005] [notice] jk2_init() Found child 32464 in scoreboard slot 8 +257:[Sun Dec 04 06:20:30 2005] [notice] jk2_init() Found child 32465 in scoreboard slot 7 +258:[Sun Dec 04 06:20:30 2005] [notice] jk2_init() Found child 32466 in scoreboard slot 11 +259:[Sun Dec 04 06:20:30 2005] [notice] jk2_init() Found child 32457 in scoreboard slot 6 +260:[Sun Dec 04 06:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +261:[Sun Dec 04 06:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +262:[Sun Dec 04 06:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +266:[Sun Dec 04 06:22:18 2005] [notice] jk2_init() Found child 32475 in scoreboard slot 8 +267:[Sun Dec 04 06:22:48 2005] [notice] jk2_init() Found child 32478 in scoreboard slot 11 +268:[Sun Dec 04 06:22:48 2005] [notice] jk2_init() Found child 32477 in scoreboard slot 10 +269:[Sun Dec 04 06:22:48 2005] [notice] jk2_init() Found child 32479 in scoreboard slot 6 +270:[Sun Dec 04 06:22:48 2005] [notice] jk2_init() Found child 32480 in scoreboard slot 8 +271:[Sun Dec 04 06:22:48 2005] [notice] jk2_init() Found child 32476 in scoreboard slot 7 +272:[Sun Dec 04 06:22:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +273:[Sun Dec 04 06:22:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +274:[Sun Dec 04 06:22:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +275:[Sun Dec 04 06:22:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +276:[Sun Dec 04 06:22:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +282:[Sun Dec 04 06:23:12 2005] [notice] jk2_init() Found child 32483 in scoreboard slot 7 +283:[Sun Dec 04 06:23:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +285:[Sun Dec 04 06:30:41 2005] [notice] jk2_init() Found child 32507 in scoreboard slot 9 +286:[Sun Dec 04 06:30:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +288:[Sun Dec 04 06:36:07 2005] [notice] jk2_init() Found child 32529 in scoreboard slot 6 +289:[Sun Dec 04 06:36:07 2005] [notice] jk2_init() Found child 32528 in scoreboard slot 10 +290:[Sun Dec 04 06:36:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +292:[Sun Dec 04 06:36:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +294:[Sun Dec 04 06:40:54 2005] [notice] jk2_init() Found child 32548 in scoreboard slot 9 +295:[Sun Dec 04 06:40:54 2005] [notice] jk2_init() Found child 32546 in scoreboard slot 8 +296:[Sun Dec 04 06:40:55 2005] [notice] jk2_init() Found child 32547 in scoreboard slot 7 +297:[Sun Dec 04 06:41:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +298:[Sun Dec 04 06:41:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +299:[Sun Dec 04 06:41:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +303:[Sun Dec 04 06:41:29 2005] [notice] jk2_init() Found child 32549 in scoreboard slot 10 +304:[Sun Dec 04 06:41:29 2005] [notice] jk2_init() Found child 32550 in scoreboard slot 6 +305:[Sun Dec 04 06:41:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +306:[Sun Dec 04 06:41:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +309:[Sun Dec 04 06:42:11 2005] [notice] jk2_init() Found child 32551 in scoreboard slot 8 +310:[Sun Dec 04 06:42:11 2005] [notice] 
jk2_init() Found child 32552 in scoreboard slot 7 +311:[Sun Dec 04 06:42:25 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +312:[Sun Dec 04 06:42:23 2005] [notice] jk2_init() Found child 32554 in scoreboard slot 10 +313:[Sun Dec 04 06:42:25 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +314:[Sun Dec 04 06:42:23 2005] [notice] jk2_init() Found child 32553 in scoreboard slot 9 +317:[Sun Dec 04 06:42:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +318:[Sun Dec 04 06:42:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +321:[Sun Dec 04 06:43:20 2005] [notice] jk2_init() Found child 32556 in scoreboard slot 8 +322:[Sun Dec 04 06:43:20 2005] [notice] jk2_init() Found child 32555 in scoreboard slot 6 +323:[Sun Dec 04 06:43:20 2005] [notice] jk2_init() Found child 32557 in scoreboard slot 7 +324:[Sun Dec 04 06:43:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +325:[Sun Dec 04 06:43:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +326:[Sun Dec 04 06:43:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +329:[Sun Dec 04 06:43:56 2005] [notice] jk2_init() Found child 32558 in scoreboard slot 9 +330:[Sun Dec 04 06:44:18 2005] [notice] jk2_init() Found child 32560 in scoreboard slot 6 +331:[Sun Dec 04 06:44:18 2005] [notice] jk2_init() Found child 32561 in scoreboard slot 8 +332:[Sun Dec 04 06:44:39 2005] [notice] jk2_init() Found child 32563 in scoreboard slot 9 +333:[Sun Dec 04 06:44:39 2005] [notice] jk2_init() Found child 32564 in scoreboard slot 10 +334:[Sun Dec 04 06:44:39 2005] [notice] jk2_init() Found child 32565 in scoreboard slot 11 +335:[Sun Dec 04 06:45:32 2005] [notice] jk2_init() Found child 32575 in scoreboard slot 6 +336:[Sun Dec 04 06:45:32 2005] [notice] jk2_init() Found child 32576 in scoreboard slot 7 +337:[Sun Dec 04 06:45:32 2005] [notice] jk2_init() Found child 32569 in scoreboard slot 9 +338:[Sun Dec 04 06:45:32 2005] [notice] jk2_init() Found child 32572 in scoreboard slot 10 +339:[Sun Dec 04 06:45:32 2005] [notice] jk2_init() Found child 32577 in scoreboard slot 11 +340:[Sun Dec 04 06:45:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +341:[Sun Dec 04 06:45:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +343:[Sun Dec 04 06:46:13 2005] [notice] jk2_init() Found child 32578 in scoreboard slot 8 +344:[Sun Dec 04 06:46:13 2005] [notice] jk2_init() Found child 32580 in scoreboard slot 6 +345:[Sun Dec 04 06:46:12 2005] [notice] jk2_init() Found child 32581 in scoreboard slot 7 +346:[Sun Dec 04 06:46:13 2005] [notice] jk2_init() Found child 32579 in scoreboard slot 9 +347:[Sun Dec 04 06:46:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +348:[Sun Dec 04 06:46:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +351:[Sun Dec 04 06:46:32 2005] [notice] jk2_init() Found child 32582 in scoreboard slot 10 +352:[Sun Dec 04 06:46:32 2005] [notice] jk2_init() Found child 32584 in scoreboard slot 9 +353:[Sun Dec 04 06:46:32 2005] [notice] jk2_init() Found child 32583 in scoreboard slot 8 +354:[Sun Dec 04 06:46:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +356:[Sun Dec 04 06:46:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +358:[Sun Dec 04 06:46:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +360:[Sun Dec 04 06:47:19 2005] [notice] 
jk2_init() Found child 32585 in scoreboard slot 6 +361:[Sun Dec 04 06:47:30 2005] [notice] jk2_init() Found child 32587 in scoreboard slot 10 +362:[Sun Dec 04 06:47:30 2005] [notice] jk2_init() Found child 32586 in scoreboard slot 7 +363:[Sun Dec 04 06:47:34 2005] [notice] jk2_init() Found child 32588 in scoreboard slot 8 +364:[Sun Dec 04 06:47:38 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +365:[Sun Dec 04 06:47:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +366:[Sun Dec 04 06:47:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +368:[Sun Dec 04 06:48:09 2005] [notice] jk2_init() Found child 32592 in scoreboard slot 10 +369:[Sun Dec 04 06:48:09 2005] [notice] jk2_init() Found child 32591 in scoreboard slot 7 +370:[Sun Dec 04 06:48:22 2005] [notice] jk2_init() Found child 32594 in scoreboard slot 6 +371:[Sun Dec 04 06:48:22 2005] [notice] jk2_init() Found child 32593 in scoreboard slot 8 +372:[Sun Dec 04 06:48:48 2005] [notice] jk2_init() Found child 32597 in scoreboard slot 10 +373:[Sun Dec 04 06:49:06 2005] [notice] jk2_init() Found child 32600 in scoreboard slot 9 +374:[Sun Dec 04 06:49:06 2005] [notice] jk2_init() Found child 32601 in scoreboard slot 7 +375:[Sun Dec 04 06:49:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +377:[Sun Dec 04 06:49:40 2005] [notice] jk2_init() Found child 32605 in scoreboard slot 9 +378:[Sun Dec 04 06:49:40 2005] [notice] jk2_init() Found child 32604 in scoreboard slot 6 +379:[Sun Dec 04 06:51:13 2005] [notice] jk2_init() Found child 32622 in scoreboard slot 7 +380:[Sun Dec 04 06:51:14 2005] [notice] jk2_init() Found child 32623 in scoreboard slot 11 +381:[Sun Dec 04 06:51:13 2005] [notice] jk2_init() Found child 32624 in scoreboard slot 8 +382:[Sun Dec 04 06:51:13 2005] [notice] jk2_init() Found child 32621 in scoreboard slot 9 +383:[Sun Dec 04 06:51:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +385:[Sun Dec 04 06:51:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +387:[Sun Dec 04 06:51:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +389:[Sun Dec 04 06:51:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +391:[Sun Dec 04 06:51:25 2005] [notice] jk2_init() Found child 32626 in scoreboard slot 6 +392:[Sun Dec 04 06:51:26 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +394:[Sun Dec 04 06:52:07 2005] [notice] jk2_init() Found child 32627 in scoreboard slot 9 +395:[Sun Dec 04 06:52:08 2005] [notice] jk2_init() Found child 32628 in scoreboard slot 7 +396:[Sun Dec 04 06:52:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +397:[Sun Dec 04 06:52:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +400:[Sun Dec 04 06:52:27 2005] [notice] jk2_init() Found child 32630 in scoreboard slot 8 +401:[Sun Dec 04 06:52:27 2005] [notice] jk2_init() Found child 32629 in scoreboard slot 10 +402:[Sun Dec 04 06:52:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +403:[Sun Dec 04 06:52:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +406:[Sun Dec 04 06:53:04 2005] [notice] jk2_init() Found child 32633 in scoreboard slot 9 +407:[Sun Dec 04 06:53:04 2005] [notice] jk2_init() Found child 32634 in scoreboard slot 11 +408:[Sun Dec 04 06:53:04 2005] [notice] jk2_init() Found child 32632 in scoreboard slot 7 +409:[Sun Dec 04 06:53:23 2005] [notice] 
workerEnv.init() ok /etc/httpd/conf/workers2.properties +411:[Sun Dec 04 06:53:38 2005] [notice] jk2_init() Found child 32636 in scoreboard slot 6 +412:[Sun Dec 04 06:53:37 2005] [notice] jk2_init() Found child 32637 in scoreboard slot 7 +413:[Sun Dec 04 06:53:37 2005] [notice] jk2_init() Found child 32638 in scoreboard slot 9 +414:[Sun Dec 04 06:54:04 2005] [notice] jk2_init() Found child 32640 in scoreboard slot 8 +415:[Sun Dec 04 06:54:04 2005] [notice] jk2_init() Found child 32641 in scoreboard slot 6 +416:[Sun Dec 04 06:54:04 2005] [notice] jk2_init() Found child 32642 in scoreboard slot 7 +417:[Sun Dec 04 06:54:35 2005] [notice] jk2_init() Found child 32646 in scoreboard slot 6 +418:[Sun Dec 04 06:55:00 2005] [notice] jk2_init() Found child 32648 in scoreboard slot 9 +419:[Sun Dec 04 06:55:00 2005] [notice] jk2_init() Found child 32652 in scoreboard slot 7 +420:[Sun Dec 04 06:55:00 2005] [notice] jk2_init() Found child 32649 in scoreboard slot 10 +421:[Sun Dec 04 06:55:00 2005] [notice] jk2_init() Found child 32651 in scoreboard slot 6 +422:[Sun Dec 04 06:55:00 2005] [notice] jk2_init() Found child 32650 in scoreboard slot 8 +423:[Sun Dec 04 06:55:19 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +424:[Sun Dec 04 06:55:19 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +425:[Sun Dec 04 06:55:19 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +429:[Sun Dec 04 06:55:55 2005] [notice] jk2_init() Found child 32660 in scoreboard slot 6 +430:[Sun Dec 04 06:55:54 2005] [notice] jk2_init() Found child 32658 in scoreboard slot 10 +431:[Sun Dec 04 06:55:54 2005] [notice] jk2_init() Found child 32659 in scoreboard slot 8 +432:[Sun Dec 04 06:55:54 2005] [notice] jk2_init() Found child 32657 in scoreboard slot 9 +433:[Sun Dec 04 06:56:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +435:[Sun Dec 04 06:56:37 2005] [notice] jk2_init() Found child 32663 in scoreboard slot 10 +436:[Sun Dec 04 06:56:37 2005] [notice] jk2_init() Found child 32664 in scoreboard slot 8 +437:[Sun Dec 04 06:57:19 2005] [notice] jk2_init() Found child 32670 in scoreboard slot 6 +438:[Sun Dec 04 06:57:19 2005] [notice] jk2_init() Found child 32667 in scoreboard slot 9 +439:[Sun Dec 04 06:57:19 2005] [notice] jk2_init() Found child 32668 in scoreboard slot 10 +440:[Sun Dec 04 06:57:19 2005] [notice] jk2_init() Found child 32669 in scoreboard slot 8 +441:[Sun Dec 04 06:57:19 2005] [notice] jk2_init() Found child 32671 in scoreboard slot 7 +442:[Sun Dec 04 06:57:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +443:[Sun Dec 04 06:57:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +446:[Sun Dec 04 06:57:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +448:[Sun Dec 04 06:57:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +450:[Sun Dec 04 06:57:24 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +452:[Sun Dec 04 06:58:12 2005] [notice] jk2_init() Found child 32674 in scoreboard slot 8 +453:[Sun Dec 04 06:58:13 2005] [notice] jk2_init() Found child 32672 in scoreboard slot 9 +454:[Sun Dec 04 06:58:13 2005] [notice] jk2_init() Found child 32673 in scoreboard slot 10 +455:[Sun Dec 04 06:58:27 2005] [notice] jk2_init() Found child 32675 in scoreboard slot 6 +456:[Sun Dec 04 06:58:28 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +457:[Sun Dec 04 06:58:28 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +460:[Sun Dec 04 06:58:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +462:[Sun Dec 04 06:58:54 2005] [notice] jk2_init() Found child 32677 in scoreboard slot 7 +463:[Sun Dec 04 06:58:54 2005] [notice] jk2_init() Found child 32676 in scoreboard slot 9 +464:[Sun Dec 04 06:58:54 2005] [notice] jk2_init() Found child 32678 in scoreboard slot 10 +465:[Sun Dec 04 06:59:28 2005] [notice] jk2_init() Found child 32679 in scoreboard slot 8 +466:[Sun Dec 04 06:59:28 2005] [notice] jk2_init() Found child 32680 in scoreboard slot 6 +467:[Sun Dec 04 06:59:34 2005] [notice] jk2_init() Found child 32681 in scoreboard slot 9 +468:[Sun Dec 04 06:59:34 2005] [notice] jk2_init() Found child 32682 in scoreboard slot 7 +469:[Sun Dec 04 06:59:38 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +471:[Sun Dec 04 06:59:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +472:[Sun Dec 04 06:59:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +475:[Sun Dec 04 06:59:59 2005] [notice] jk2_init() Found child 32683 in scoreboard slot 10 +476:[Sun Dec 04 07:00:06 2005] [notice] jk2_init() Found child 32685 in scoreboard slot 6 +477:[Sun Dec 04 07:00:32 2005] [notice] jk2_init() Found child 32688 in scoreboard slot 11 +478:[Sun Dec 04 07:00:32 2005] [notice] jk2_init() Found child 32695 in scoreboard slot 8 +479:[Sun Dec 04 07:00:32 2005] [notice] jk2_init() Found child 32696 in scoreboard slot 6 +480:[Sun Dec 04 07:01:25 2005] [notice] jk2_init() Found child 32701 in scoreboard slot 10 +481:[Sun Dec 04 07:01:26 2005] [notice] jk2_init() Found child 32702 in scoreboard slot 11 +482:[Sun Dec 04 07:01:55 2005] [notice] jk2_init() Found child 32711 in scoreboard slot 10 +483:[Sun Dec 04 07:01:55 2005] [notice] jk2_init() Found child 32708 in scoreboard slot 7 +484:[Sun Dec 04 07:01:55 2005] [notice] jk2_init() Found child 32710 in scoreboard slot 9 +485:[Sun Dec 04 07:01:55 2005] [notice] jk2_init() Found child 32709 in scoreboard slot 8 +486:[Sun Dec 04 07:01:57 2005] [notice] jk2_init() Found child 32712 in scoreboard slot 6 +487:[Sun Dec 04 07:02:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +488:[Sun Dec 04 07:02:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +489:[Sun Dec 04 07:02:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +490:[Sun Dec 04 07:02:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +491:[Sun Dec 04 07:02:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +497:[Sun Dec 04 07:02:52 2005] [notice] jk2_init() Found child 32713 in scoreboard slot 7 +498:[Sun Dec 04 07:03:23 2005] [notice] jk2_init() Found child 32717 in scoreboard slot 10 +499:[Sun Dec 04 07:03:48 2005] [notice] jk2_init() Found child 32720 in scoreboard slot 8 +500:[Sun Dec 04 07:04:27 2005] [notice] jk2_init() Found child 32726 in scoreboard slot 8 +501:[Sun Dec 04 07:04:55 2005] [notice] jk2_init() Found child 32730 in scoreboard slot 7 +502:[Sun Dec 04 07:04:55 2005] [notice] jk2_init() Found child 32729 in scoreboard slot 6 +503:[Sun Dec 04 07:04:55 2005] [notice] jk2_init() Found child 32731 in scoreboard slot 8 +504:[Sun Dec 04 07:05:44 2005] [notice] jk2_init() Found child 32739 in scoreboard slot 7 +505:[Sun Dec 04 07:05:44 2005] [notice] jk2_init() Found child 32740 in scoreboard slot 8 +506:[Sun Dec 04 07:06:11 2005] [notice] jk2_init() Found child 32742 in scoreboard slot 
10 +507:[Sun Dec 04 07:07:23 2005] [notice] jk2_init() Found child 32758 in scoreboard slot 7 +508:[Sun Dec 04 07:07:23 2005] [notice] jk2_init() Found child 32755 in scoreboard slot 8 +509:[Sun Dec 04 07:07:23 2005] [notice] jk2_init() Found child 32754 in scoreboard slot 11 +510:[Sun Dec 04 07:07:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +511:[Sun Dec 04 07:07:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +512:[Sun Dec 04 07:07:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +516:[Sun Dec 04 07:08:02 2005] [notice] jk2_init() Found child 32761 in scoreboard slot 6 +517:[Sun Dec 04 07:08:02 2005] [notice] jk2_init() Found child 32762 in scoreboard slot 9 +518:[Sun Dec 04 07:08:02 2005] [notice] jk2_init() Found child 32763 in scoreboard slot 10 +519:[Sun Dec 04 07:08:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +521:[Sun Dec 04 07:08:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +523:[Sun Dec 04 07:08:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +525:[Sun Dec 04 07:10:54 2005] [notice] jk2_init() Found child 308 in scoreboard slot 8 +526:[Sun Dec 04 07:11:04 2005] [notice] jk2_init() Found child 310 in scoreboard slot 6 +527:[Sun Dec 04 07:11:04 2005] [notice] jk2_init() Found child 309 in scoreboard slot 7 +528:[Sun Dec 04 07:11:05 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +530:[Sun Dec 04 07:11:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +531:[Sun Dec 04 07:11:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +534:[Sun Dec 04 07:11:49 2005] [notice] jk2_init() Found child 311 in scoreboard slot 9 +535:[Sun Dec 04 07:12:05 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +537:[Sun Dec 04 07:12:22 2005] [notice] jk2_init() Found child 312 in scoreboard slot 10 +538:[Sun Dec 04 07:12:22 2005] [notice] jk2_init() Found child 313 in scoreboard slot 8 +539:[Sun Dec 04 07:12:40 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +540:[Sun Dec 04 07:12:40 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +543:[Sun Dec 04 07:13:09 2005] [notice] jk2_init() Found child 314 in scoreboard slot 7 +544:[Sun Dec 04 07:13:09 2005] [notice] jk2_init() Found child 315 in scoreboard slot 6 +545:[Sun Dec 04 07:13:10 2005] [notice] jk2_init() Found child 316 in scoreboard slot 9 +546:[Sun Dec 04 07:13:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +547:[Sun Dec 04 07:13:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +548:[Sun Dec 04 07:13:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +551:[Sun Dec 04 07:14:07 2005] [notice] jk2_init() Found child 319 in scoreboard slot 7 +552:[Sun Dec 04 07:14:07 2005] [notice] jk2_init() Found child 317 in scoreboard slot 10 +553:[Sun Dec 04 07:14:08 2005] [notice] jk2_init() Found child 318 in scoreboard slot 8 +554:[Sun Dec 04 07:14:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +556:[Sun Dec 04 07:14:47 2005] [notice] jk2_init() Found child 321 in scoreboard slot 9 +557:[Sun Dec 04 07:15:09 2005] [notice] jk2_init() Found child 324 in scoreboard slot 11 +558:[Sun Dec 04 07:15:09 2005] [notice] jk2_init() Found child 323 in scoreboard slot 8 +559:[Sun Dec 04 07:17:56 2005] [notice] jk2_init() Found child 350 in scoreboard slot 9 +560:[Sun Dec 04 
07:17:56 2005] [notice] jk2_init() Found child 353 in scoreboard slot 12 +561:[Sun Dec 04 07:17:56 2005] [notice] jk2_init() Found child 352 in scoreboard slot 11 +562:[Sun Dec 04 07:17:56 2005] [notice] jk2_init() Found child 349 in scoreboard slot 8 +563:[Sun Dec 04 07:17:56 2005] [notice] jk2_init() Found child 348 in scoreboard slot 7 +564:[Sun Dec 04 07:17:56 2005] [notice] jk2_init() Found child 347 in scoreboard slot 6 +565:[Sun Dec 04 07:17:56 2005] [notice] jk2_init() Found child 351 in scoreboard slot 10 +566:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +568:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +570:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +572:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +574:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +576:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +578:[Sun Dec 04 07:18:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +592:[Sun Dec 04 16:24:03 2005] [notice] jk2_init() Found child 1219 in scoreboard slot 6 +594:[Sun Dec 04 16:24:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +596:[Sun Dec 04 16:31:07 2005] [notice] jk2_init() Found child 1248 in scoreboard slot 7 +597:[Sun Dec 04 16:32:37 2005] [notice] jk2_init() Found child 1253 in scoreboard slot 9 +598:[Sun Dec 04 16:32:56 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +600:[Sun Dec 04 16:32:58 2005] [notice] jk2_init() Found child 1254 in scoreboard slot 7 +601:[Sun Dec 04 16:32:58 2005] [notice] jk2_init() Found child 1256 in scoreboard slot 6 +602:[Sun Dec 04 16:32:58 2005] [notice] jk2_init() Found child 1255 in scoreboard slot 8 +603:[Sun Dec 04 16:32:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +605:[Sun Dec 04 16:32:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +607:[Sun Dec 04 16:32:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +609:[Sun Dec 04 16:35:49 2005] [notice] jk2_init() Found child 1262 in scoreboard slot 9 +610:[Sun Dec 04 16:35:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +612:[Sun Dec 04 16:41:15 2005] [notice] jk2_init() Found child 1275 in scoreboard slot 6 +613:[Sun Dec 04 16:41:16 2005] [notice] jk2_init() Found child 1276 in scoreboard slot 9 +614:[Sun Dec 04 16:41:22 2005] [notice] jk2_init() Found child 1277 in scoreboard slot 7 +615:[Sun Dec 04 16:41:22 2005] [notice] jk2_init() Found child 1278 in scoreboard slot 8 +616:[Sun Dec 04 16:41:22 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +617:[Sun Dec 04 16:41:22 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +618:[Sun Dec 04 16:41:22 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +619:[Sun Dec 04 16:41:22 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +624:[Sun Dec 04 16:45:52 2005] [notice] jk2_init() Found child 1283 in scoreboard slot 6 +625:[Sun Dec 04 16:45:52 2005] [notice] jk2_init() Found child 1284 in scoreboard slot 9 +626:[Sun Dec 04 16:46:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +628:[Sun Dec 04 16:46:45 2005] [notice] jk2_init() Found child 1288 in scoreboard slot 9 +629:[Sun Dec 04 16:47:11 
2005] [notice] jk2_init() Found child 1291 in scoreboard slot 6 +630:[Sun Dec 04 16:47:59 2005] [notice] jk2_init() Found child 1296 in scoreboard slot 6 +631:[Sun Dec 04 16:47:59 2005] [notice] jk2_init() Found child 1300 in scoreboard slot 10 +632:[Sun Dec 04 16:47:59 2005] [notice] jk2_init() Found child 1298 in scoreboard slot 8 +633:[Sun Dec 04 16:47:59 2005] [notice] jk2_init() Found child 1297 in scoreboard slot 7 +634:[Sun Dec 04 16:47:59 2005] [notice] jk2_init() Found child 1299 in scoreboard slot 9 +635:[Sun Dec 04 16:48:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +637:[Sun Dec 04 16:48:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +639:[Sun Dec 04 16:48:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +641:[Sun Dec 04 16:48:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +643:[Sun Dec 04 16:48:01 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +645:[Sun Dec 04 16:50:53 2005] [notice] jk2_init() Found child 1308 in scoreboard slot 6 +646:[Sun Dec 04 16:50:53 2005] [notice] jk2_init() Found child 1309 in scoreboard slot 7 +647:[Sun Dec 04 16:51:26 2005] [notice] jk2_init() Found child 1313 in scoreboard slot 6 +648:[Sun Dec 04 16:51:26 2005] [notice] jk2_init() Found child 1312 in scoreboard slot 10 +649:[Sun Dec 04 16:52:34 2005] [notice] jk2_init() Found child 1320 in scoreboard slot 8 +650:[Sun Dec 04 16:52:45 2005] [notice] jk2_init() Found child 1321 in scoreboard slot 9 +651:[Sun Dec 04 16:52:45 2005] [notice] jk2_init() Found child 1322 in scoreboard slot 10 +652:[Sun Dec 04 16:52:45 2005] [notice] jk2_init() Found child 1323 in scoreboard slot 6 +653:[Sun Dec 04 16:52:46 2005] [notice] jk2_init() Found child 1324 in scoreboard slot 7 +654:[Sun Dec 04 16:52:46 2005] [notice] jk2_init() Found child 1325 in scoreboard slot 8 +655:[Sun Dec 04 16:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +656:[Sun Dec 04 16:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +657:[Sun Dec 04 16:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +658:[Sun Dec 04 16:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +659:[Sun Dec 04 16:52:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +665:[Sun Dec 04 16:55:54 2005] [notice] jk2_init() Found child 1331 in scoreboard slot 10 +666:[Sun Dec 04 16:56:25 2005] [notice] jk2_init() Found child 1338 in scoreboard slot 7 +667:[Sun Dec 04 16:56:25 2005] [notice] jk2_init() Found child 1334 in scoreboard slot 8 +668:[Sun Dec 04 16:56:25 2005] [notice] jk2_init() Found child 1336 in scoreboard slot 10 +669:[Sun Dec 04 16:56:25 2005] [notice] jk2_init() Found child 1337 in scoreboard slot 6 +670:[Sun Dec 04 16:56:25 2005] [notice] jk2_init() Found child 1335 in scoreboard slot 9 +671:[Sun Dec 04 16:56:27 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +673:[Sun Dec 04 16:56:27 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +675:[Sun Dec 04 16:56:27 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +677:[Sun Dec 04 16:56:27 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +679:[Sun Dec 04 16:56:27 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +681:[Sun Dec 04 17:01:43 2005] [notice] jk2_init() Found child 1358 in scoreboard slot 8 +682:[Sun Dec 04 17:01:43 2005] [notice] 
jk2_init() Found child 1356 in scoreboard slot 6 +683:[Sun Dec 04 17:01:43 2005] [notice] jk2_init() Found child 1354 in scoreboard slot 9 +684:[Sun Dec 04 17:01:43 2005] [notice] jk2_init() Found child 1357 in scoreboard slot 7 +685:[Sun Dec 04 17:01:43 2005] [notice] jk2_init() Found child 1355 in scoreboard slot 10 +686:[Sun Dec 04 17:01:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +688:[Sun Dec 04 17:01:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +690:[Sun Dec 04 17:01:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +692:[Sun Dec 04 17:01:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +694:[Sun Dec 04 17:01:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +696:[Sun Dec 04 17:05:45 2005] [notice] jk2_init() Found child 1375 in scoreboard slot 9 +697:[Sun Dec 04 17:05:45 2005] [notice] jk2_init() Found child 1376 in scoreboard slot 10 +698:[Sun Dec 04 17:05:45 2005] [notice] jk2_init() Found child 1377 in scoreboard slot 6 +699:[Sun Dec 04 17:05:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +700:[Sun Dec 04 17:05:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +701:[Sun Dec 04 17:05:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +705:[Sun Dec 04 17:11:23 2005] [notice] jk2_init() Found child 1387 in scoreboard slot 7 +706:[Sun Dec 04 17:11:37 2005] [notice] jk2_init() Found child 1390 in scoreboard slot 10 +707:[Sun Dec 04 17:11:37 2005] [notice] jk2_init() Found child 1388 in scoreboard slot 8 +708:[Sun Dec 04 17:11:37 2005] [notice] jk2_init() Found child 1389 in scoreboard slot 9 +709:[Sun Dec 04 17:12:42 2005] [notice] jk2_init() Found child 1393 in scoreboard slot 8 +710:[Sun Dec 04 17:12:50 2005] [notice] jk2_init() Found child 1395 in scoreboard slot 10 +711:[Sun Dec 04 17:12:50 2005] [notice] jk2_init() Found child 1396 in scoreboard slot 6 +712:[Sun Dec 04 17:12:50 2005] [notice] jk2_init() Found child 1394 in scoreboard slot 9 +713:[Sun Dec 04 17:12:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +714:[Sun Dec 04 17:12:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +715:[Sun Dec 04 17:12:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +716:[Sun Dec 04 17:12:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +721:[Sun Dec 04 17:12:56 2005] [notice] jk2_init() Found child 1397 in scoreboard slot 7 +722:[Sun Dec 04 17:12:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +724:[Sun Dec 04 17:17:07 2005] [notice] jk2_init() Found child 1414 in scoreboard slot 7 +725:[Sun Dec 04 17:17:07 2005] [notice] jk2_init() Found child 1412 in scoreboard slot 10 +726:[Sun Dec 04 17:17:07 2005] [notice] jk2_init() Found child 1413 in scoreboard slot 6 +727:[Sun Dec 04 17:20:38 2005] [notice] jk2_init() Found child 1448 in scoreboard slot 6 +728:[Sun Dec 04 17:20:38 2005] [notice] jk2_init() Found child 1439 in scoreboard slot 7 +729:[Sun Dec 04 17:20:38 2005] [notice] jk2_init() Found child 1441 in scoreboard slot 9 +730:[Sun Dec 04 17:20:38 2005] [notice] jk2_init() Found child 1450 in scoreboard slot 11 +731:[Sun Dec 04 17:20:39 2005] [notice] jk2_init() Found child 1449 in scoreboard slot 10 +732:[Sun Dec 04 17:20:39 2005] [notice] jk2_init() Found child 1440 in scoreboard slot 8 +733:[Sun Dec 04 17:20:44 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +734:[Sun Dec 04 17:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +735:[Sun Dec 04 17:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +736:[Sun Dec 04 17:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +737:[Sun Dec 04 17:20:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +743:[Sun Dec 04 17:21:01 2005] [notice] jk2_init() Found child 1452 in scoreboard slot 7 +744:[Sun Dec 04 17:21:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +746:[Sun Dec 04 17:26:04 2005] [notice] jk2_init() Found child 1461 in scoreboard slot 8 +747:[Sun Dec 04 17:26:39 2005] [notice] jk2_init() Found child 1462 in scoreboard slot 6 +748:[Sun Dec 04 17:27:13 2005] [notice] jk2_init() Found child 1466 in scoreboard slot 8 +749:[Sun Dec 04 17:28:00 2005] [notice] jk2_init() Found child 1470 in scoreboard slot 7 +750:[Sun Dec 04 17:28:42 2005] [notice] jk2_init() Found child 1477 in scoreboard slot 6 +751:[Sun Dec 04 17:28:41 2005] [notice] jk2_init() Found child 1476 in scoreboard slot 8 +752:[Sun Dec 04 17:31:00 2005] [notice] jk2_init() Found child 1501 in scoreboard slot 7 +753:[Sun Dec 04 17:31:00 2005] [notice] jk2_init() Found child 1502 in scoreboard slot 6 +754:[Sun Dec 04 17:31:00 2005] [notice] jk2_init() Found child 1498 in scoreboard slot 8 +755:[Sun Dec 04 17:31:00 2005] [notice] jk2_init() Found child 1499 in scoreboard slot 11 +756:[Sun Dec 04 17:31:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +757:[Sun Dec 04 17:31:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +758:[Sun Dec 04 17:31:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +759:[Sun Dec 04 17:31:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +764:[Sun Dec 04 17:31:43 2005] [notice] jk2_init() Found child 1503 in scoreboard slot 9 +765:[Sun Dec 04 17:31:43 2005] [notice] jk2_init() Found child 1504 in scoreboard slot 8 +766:[Sun Dec 04 17:31:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +767:[Sun Dec 04 17:31:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +770:[Sun Dec 04 17:34:52 2005] [notice] jk2_init() Found child 1507 in scoreboard slot 10 +771:[Sun Dec 04 17:34:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +774:[Sun Dec 04 17:36:14 2005] [notice] jk2_init() Found child 1512 in scoreboard slot 7 +775:[Sun Dec 04 17:36:14 2005] [notice] jk2_init() Found child 1513 in scoreboard slot 6 +776:[Sun Dec 04 17:37:08 2005] [notice] jk2_init() Found child 1517 in scoreboard slot 7 +777:[Sun Dec 04 17:37:08 2005] [notice] jk2_init() Found child 1518 in scoreboard slot 6 +778:[Sun Dec 04 17:37:47 2005] [notice] jk2_init() Found child 1520 in scoreboard slot 8 +779:[Sun Dec 04 17:37:47 2005] [notice] jk2_init() Found child 1521 in scoreboard slot 10 +780:[Sun Dec 04 17:39:00 2005] [notice] jk2_init() Found child 1529 in scoreboard slot 9 +781:[Sun Dec 04 17:39:01 2005] [notice] jk2_init() Found child 1530 in scoreboard slot 8 +782:[Sun Dec 04 17:39:00 2005] [notice] jk2_init() Found child 1528 in scoreboard slot 7 +783:[Sun Dec 04 17:39:00 2005] [notice] jk2_init() Found child 1527 in scoreboard slot 6 +784:[Sun Dec 04 17:43:08 2005] [notice] jk2_init() Found child 1565 in scoreboard slot 9 +786:[Sun Dec 04 17:43:08 2005] [notice] jk2_init() Found child 1561 in scoreboard slot 6 +787:[Sun 
Dec 04 17:43:08 2005] [notice] jk2_init() Found child 1563 in scoreboard slot 8 +788:[Sun Dec 04 17:43:08 2005] [notice] jk2_init() Found child 1562 in scoreboard slot 7 +790:[Sun Dec 04 17:43:08 2005] [notice] jk2_init() Found child 1568 in scoreboard slot 13 +791:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +793:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +795:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +797:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +799:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +801:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +803:[Sun Dec 04 17:43:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +807:[Sun Dec 04 19:25:51 2005] [notice] jk2_init() Found child 1763 in scoreboard slot 6 +808:[Sun Dec 04 19:25:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +810:[Sun Dec 04 19:32:20 2005] [notice] jk2_init() Found child 1786 in scoreboard slot 8 +811:[Sun Dec 04 19:32:20 2005] [notice] jk2_init() Found child 1787 in scoreboard slot 9 +812:[Sun Dec 04 19:32:32 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +814:[Sun Dec 04 19:32:34 2005] [notice] jk2_init() Found child 1788 in scoreboard slot 6 +815:[Sun Dec 04 19:32:34 2005] [notice] jk2_init() Found child 1790 in scoreboard slot 8 +816:[Sun Dec 04 19:32:34 2005] [notice] jk2_init() Found child 1789 in scoreboard slot 7 +817:[Sun Dec 04 19:32:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +819:[Sun Dec 04 19:32:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +821:[Sun Dec 04 19:32:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +823:[Sun Dec 04 19:35:58 2005] [notice] jk2_init() Found child 1797 in scoreboard slot 9 +824:[Sun Dec 04 19:35:58 2005] [notice] jk2_init() Found child 1798 in scoreboard slot 6 +825:[Sun Dec 04 19:35:58 2005] [notice] jk2_init() Found child 1799 in scoreboard slot 7 +826:[Sun Dec 04 19:35:58 2005] [notice] jk2_init() Found child 1800 in scoreboard slot 10 +827:[Sun Dec 04 19:35:58 2005] [notice] jk2_init() Found child 1801 in scoreboard slot 12 +829:[Sun Dec 04 19:36:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +831:[Sun Dec 04 19:36:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +833:[Sun Dec 04 19:36:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +835:[Sun Dec 04 19:36:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +837:[Sun Dec 04 19:36:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +839:[Sun Dec 04 19:41:20 2005] [notice] jk2_init() Found child 1816 in scoreboard slot 9 +840:[Sun Dec 04 19:41:20 2005] [notice] jk2_init() Found child 1814 in scoreboard slot 7 +841:[Sun Dec 04 19:41:20 2005] [notice] jk2_init() Found child 1813 in scoreboard slot 6 +842:[Sun Dec 04 19:41:20 2005] [notice] jk2_init() Found child 1815 in scoreboard slot 8 +843:[Sun Dec 04 19:41:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +845:[Sun Dec 04 19:41:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +847:[Sun Dec 04 19:41:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties 
+849:[Sun Dec 04 19:41:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +851:[Sun Dec 04 19:46:04 2005] [notice] jk2_init() Found child 1821 in scoreboard slot 6 +852:[Sun Dec 04 19:46:04 2005] [notice] jk2_init() Found child 1822 in scoreboard slot 7 +853:[Sun Dec 04 19:46:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +854:[Sun Dec 04 19:46:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +857:[Sun Dec 04 19:46:16 2005] [notice] jk2_init() Found child 1823 in scoreboard slot 8 +858:[Sun Dec 04 19:46:19 2005] [notice] jk2_init() Found child 1824 in scoreboard slot 9 +859:[Sun Dec 04 19:46:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +861:[Sun Dec 04 19:46:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +863:[Sun Dec 04 19:50:39 2005] [notice] jk2_init() Found child 1833 in scoreboard slot 7 +864:[Sun Dec 04 19:50:39 2005] [notice] jk2_init() Found child 1832 in scoreboard slot 6 +865:[Sun Dec 04 19:50:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +866:[Sun Dec 04 19:50:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +868:[Sun Dec 04 19:50:57 2005] [notice] jk2_init() Found child 1834 in scoreboard slot 8 +870:[Sun Dec 04 19:51:16 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +872:[Sun Dec 04 19:51:43 2005] [notice] jk2_init() Found child 1835 in scoreboard slot 9 +873:[Sun Dec 04 19:51:52 2005] [notice] jk2_init() Found child 1836 in scoreboard slot 6 +874:[Sun Dec 04 19:51:52 2005] [notice] jk2_init() Found child 1837 in scoreboard slot 7 +875:[Sun Dec 04 19:51:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +877:[Sun Dec 04 19:51:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +879:[Sun Dec 04 19:51:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +881:[Sun Dec 04 19:56:51 2005] [notice] jk2_init() Found child 1851 in scoreboard slot 6 +882:[Sun Dec 04 19:56:51 2005] [notice] jk2_init() Found child 1852 in scoreboard slot 9 +883:[Sun Dec 04 19:56:51 2005] [notice] jk2_init() Found child 1853 in scoreboard slot 7 +884:[Sun Dec 04 19:56:51 2005] [notice] jk2_init() Found child 1850 in scoreboard slot 8 +885:[Sun Dec 04 19:56:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +887:[Sun Dec 04 19:56:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +889:[Sun Dec 04 19:56:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +891:[Sun Dec 04 19:56:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +893:[Sun Dec 04 20:01:00 2005] [notice] jk2_init() Found child 1861 in scoreboard slot 8 +894:[Sun Dec 04 20:01:00 2005] [notice] jk2_init() Found child 1862 in scoreboard slot 6 +895:[Sun Dec 04 20:01:30 2005] [notice] jk2_init() Found child 1867 in scoreboard slot 8 +896:[Sun Dec 04 20:01:30 2005] [notice] jk2_init() Found child 1864 in scoreboard slot 7 +897:[Sun Dec 04 20:01:30 2005] [notice] jk2_init() Found child 1868 in scoreboard slot 6 +898:[Sun Dec 04 20:01:30 2005] [notice] jk2_init() Found child 1863 in scoreboard slot 9 +899:[Sun Dec 04 20:01:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +900:[Sun Dec 04 20:01:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +901:[Sun Dec 04 20:01:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties 
+902:[Sun Dec 04 20:01:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +907:[Sun Dec 04 20:05:55 2005] [notice] jk2_init() Found child 1887 in scoreboard slot 8 +908:[Sun Dec 04 20:05:55 2005] [notice] jk2_init() Found child 1885 in scoreboard slot 9 +909:[Sun Dec 04 20:05:55 2005] [notice] jk2_init() Found child 1888 in scoreboard slot 6 +910:[Sun Dec 04 20:05:55 2005] [notice] jk2_init() Found child 1886 in scoreboard slot 7 +911:[Sun Dec 04 20:05:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +913:[Sun Dec 04 20:05:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +915:[Sun Dec 04 20:05:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +917:[Sun Dec 04 20:05:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +919:[Sun Dec 04 20:11:09 2005] [notice] jk2_init() Found child 1899 in scoreboard slot 7 +920:[Sun Dec 04 20:11:09 2005] [notice] jk2_init() Found child 1900 in scoreboard slot 8 +921:[Sun Dec 04 20:11:09 2005] [notice] jk2_init() Found child 1901 in scoreboard slot 6 +922:[Sun Dec 04 20:11:09 2005] [notice] jk2_init() Found child 1898 in scoreboard slot 9 +923:[Sun Dec 04 20:11:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +925:[Sun Dec 04 20:11:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +927:[Sun Dec 04 20:11:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +929:[Sun Dec 04 20:11:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +931:[Sun Dec 04 20:16:10 2005] [notice] jk2_init() Found child 1912 in scoreboard slot 9 +932:[Sun Dec 04 20:16:10 2005] [notice] jk2_init() Found child 1915 in scoreboard slot 6 +933:[Sun Dec 04 20:16:10 2005] [notice] jk2_init() Found child 1913 in scoreboard slot 7 +934:[Sun Dec 04 20:16:10 2005] [notice] jk2_init() Found child 1914 in scoreboard slot 8 +935:[Sun Dec 04 20:16:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +936:[Sun Dec 04 20:16:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +937:[Sun Dec 04 20:16:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +938:[Sun Dec 04 20:16:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +943:[Sun Dec 04 20:20:57 2005] [notice] jk2_init() Found child 1931 in scoreboard slot 7 +944:[Sun Dec 04 20:21:09 2005] [notice] jk2_init() Found child 1932 in scoreboard slot 8 +945:[Sun Dec 04 20:21:08 2005] [notice] jk2_init() Found child 1933 in scoreboard slot 6 +946:[Sun Dec 04 20:21:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +947:[Sun Dec 04 20:21:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +948:[Sun Dec 04 20:21:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +949:[Sun Dec 04 20:21:37 2005] [notice] jk2_init() Found child 1934 in scoreboard slot 9 +951:[Sun Dec 04 20:22:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +952:[Sun Dec 04 20:22:12 2005] [notice] jk2_init() Found child 1936 in scoreboard slot 8 +953:[Sun Dec 04 20:22:12 2005] [notice] jk2_init() Found child 1935 in scoreboard slot 7 +954:[Sun Dec 04 20:22:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +956:[Sun Dec 04 20:22:57 2005] [notice] jk2_init() Found child 1937 in scoreboard slot 6 +957:[Sun Dec 04 20:23:12 2005] [notice] jk2_init() Found child 1938 in scoreboard slot 9 
+958:[Sun Dec 04 20:24:45 2005] [notice] jk2_init() Found child 1950 in scoreboard slot 9 +959:[Sun Dec 04 20:24:45 2005] [notice] jk2_init() Found child 1951 in scoreboard slot 7 +960:[Sun Dec 04 20:24:45 2005] [notice] jk2_init() Found child 1949 in scoreboard slot 6 +961:[Sun Dec 04 20:24:45 2005] [notice] jk2_init() Found child 1948 in scoreboard slot 8 +962:[Sun Dec 04 20:24:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +964:[Sun Dec 04 20:24:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +966:[Sun Dec 04 20:24:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +968:[Sun Dec 04 20:24:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +970:[Sun Dec 04 20:26:10 2005] [notice] jk2_init() Found child 1957 in scoreboard slot 8 +971:[Sun Dec 04 20:26:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +973:[Sun Dec 04 20:26:58 2005] [notice] jk2_init() Found child 1959 in scoreboard slot 9 +974:[Sun Dec 04 20:26:58 2005] [notice] jk2_init() Found child 1958 in scoreboard slot 6 +975:[Sun Dec 04 20:27:43 2005] [notice] jk2_init() Found child 1961 in scoreboard slot 8 +976:[Sun Dec 04 20:28:00 2005] [notice] jk2_init() Found child 1962 in scoreboard slot 6 +977:[Sun Dec 04 20:28:00 2005] [notice] jk2_init() Found child 1963 in scoreboard slot 9 +978:[Sun Dec 04 20:28:26 2005] [notice] jk2_init() Found child 1964 in scoreboard slot 7 +979:[Sun Dec 04 20:28:39 2005] [notice] jk2_init() Found child 1966 in scoreboard slot 6 +980:[Sun Dec 04 20:28:39 2005] [notice] jk2_init() Found child 1967 in scoreboard slot 9 +981:[Sun Dec 04 20:28:39 2005] [notice] jk2_init() Found child 1965 in scoreboard slot 8 +982:[Sun Dec 04 20:29:34 2005] [notice] jk2_init() Found child 1970 in scoreboard slot 6 +983:[Sun Dec 04 20:30:59 2005] [notice] jk2_init() Found child 1984 in scoreboard slot 10 +984:[Sun Dec 04 20:31:35 2005] [notice] jk2_init() Found child 1990 in scoreboard slot 9 +985:[Sun Dec 04 20:32:37 2005] [notice] jk2_init() Found child 1999 in scoreboard slot 6 +986:[Sun Dec 04 20:32:37 2005] [notice] jk2_init() Found child 2000 in scoreboard slot 7 +987:[Sun Dec 04 20:32:37 2005] [notice] jk2_init() Found child 1998 in scoreboard slot 9 +988:[Sun Dec 04 20:32:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +989:[Sun Dec 04 20:32:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +990:[Sun Dec 04 20:32:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +994:[Sun Dec 04 20:33:35 2005] [notice] jk2_init() Found child 2002 in scoreboard slot 8 +995:[Sun Dec 04 20:33:35 2005] [notice] jk2_init() Found child 2001 in scoreboard slot 9 +996:[Sun Dec 04 20:33:47 2005] [notice] jk2_init() Found child 2005 in scoreboard slot 7 +997:[Sun Dec 04 20:33:47 2005] [notice] jk2_init() Found child 2004 in scoreboard slot 6 +998:[Sun Dec 04 20:34:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1000:[Sun Dec 04 20:34:20 2005] [notice] jk2_init() Found child 2007 in scoreboard slot 8 +1001:[Sun Dec 04 20:34:20 2005] [notice] jk2_init() Found child 2006 in scoreboard slot 9 +1002:[Sun Dec 04 20:34:21 2005] [notice] jk2_init() Found child 2008 in scoreboard slot 6 +1003:[Sun Dec 04 20:34:25 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1005:[Sun Dec 04 20:34:25 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1007:[Sun Dec 04 20:34:25 2005] [notice] 
workerEnv.init() ok /etc/httpd/conf/workers2.properties +1009:[Sun Dec 04 20:37:29 2005] [notice] jk2_init() Found child 2028 in scoreboard slot 9 +1010:[Sun Dec 04 20:37:29 2005] [notice] jk2_init() Found child 2027 in scoreboard slot 7 +1011:[Sun Dec 04 20:37:29 2005] [notice] jk2_init() Found child 2029 in scoreboard slot 8 +1012:[Sun Dec 04 20:37:46 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1013:[Sun Dec 04 20:37:46 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1016:[Sun Dec 04 20:38:10 2005] [notice] jk2_init() Found child 2030 in scoreboard slot 6 +1017:[Sun Dec 04 20:38:10 2005] [notice] jk2_init() Found child 2031 in scoreboard slot 7 +1018:[Sun Dec 04 20:38:11 2005] [notice] jk2_init() Found child 2032 in scoreboard slot 9 +1019:[Sun Dec 04 20:38:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1020:[Sun Dec 04 20:38:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1021:[Sun Dec 04 20:38:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1025:[Sun Dec 04 20:41:12 2005] [notice] jk2_init() Found child 2042 in scoreboard slot 8 +1026:[Sun Dec 04 20:41:47 2005] [notice] jk2_init() Found child 2045 in scoreboard slot 9 +1027:[Sun Dec 04 20:42:42 2005] [notice] jk2_init() Found child 2051 in scoreboard slot 8 +1028:[Sun Dec 04 20:44:29 2005] [notice] jk2_init() Found child 2059 in scoreboard slot 7 +1029:[Sun Dec 04 20:44:29 2005] [notice] jk2_init() Found child 2060 in scoreboard slot 9 +1030:[Sun Dec 04 20:44:30 2005] [notice] jk2_init() Found child 2061 in scoreboard slot 8 +1031:[Sun Dec 04 20:47:16 2005] [notice] jk2_init() Found child 2081 in scoreboard slot 6 +1033:[Sun Dec 04 20:47:16 2005] [notice] jk2_init() Found child 2083 in scoreboard slot 8 +1034:[Sun Dec 04 20:47:16 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1036:[Sun Dec 04 20:47:16 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1038:[Sun Dec 04 20:47:16 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1041:[Sun Dec 04 20:47:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1044:[Sun Dec 04 20:47:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1047:[Sun Dec 04 20:47:17 2005] [notice] jk2_init() Found child 2084 in scoreboard slot 9 +1048:[Sun Dec 04 20:47:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1050:[Sun Dec 04 20:47:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1054:[Mon Dec 05 03:21:00 2005] [notice] jk2_init() Found child 2760 in scoreboard slot 6 +1055:[Mon Dec 05 03:21:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1057:[Mon Dec 05 03:23:21 2005] [notice] jk2_init() Found child 2763 in scoreboard slot 7 +1058:[Mon Dec 05 03:23:24 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1061:[Mon Dec 05 03:25:44 2005] [notice] jk2_init() Found child 2773 in scoreboard slot 6 +1062:[Mon Dec 05 03:25:46 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1064:[Mon Dec 05 03:36:51 2005] [notice] jk2_init() Found child 2813 in scoreboard slot 7 +1065:[Mon Dec 05 03:36:51 2005] [notice] jk2_init() Found child 2815 in scoreboard slot 8 +1066:[Mon Dec 05 03:36:51 2005] [notice] jk2_init() Found child 2812 in scoreboard slot 6 +1067:[Mon Dec 05 03:36:51 2005] [notice] jk2_init() Found child 2811 in scoreboard slot 9 +1068:[Mon Dec 05 03:36:57 
2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1069:[Mon Dec 05 03:36:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1070:[Mon Dec 05 03:36:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1071:[Mon Dec 05 03:36:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1076:[Mon Dec 05 03:40:46 2005] [notice] jk2_init() Found child 2823 in scoreboard slot 9 +1077:[Mon Dec 05 03:40:55 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1079:[Mon Dec 05 03:44:50 2005] [notice] jk2_init() Found child 2824 in scoreboard slot 10 +1081:[Mon Dec 05 03:44:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1083:[Mon Dec 05 03:46:38 2005] [notice] jk2_init() Found child 2838 in scoreboard slot 10 +1084:[Mon Dec 05 03:46:38 2005] [notice] jk2_init() Found child 2836 in scoreboard slot 9 +1085:[Mon Dec 05 03:46:38 2005] [notice] jk2_init() Found child 2837 in scoreboard slot 6 +1086:[Mon Dec 05 03:46:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1087:[Mon Dec 05 03:47:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1088:[Mon Dec 05 03:47:19 2005] [notice] jk2_init() Found child 2840 in scoreboard slot 8 +1089:[Mon Dec 05 03:47:19 2005] [notice] jk2_init() Found child 2841 in scoreboard slot 6 +1090:[Mon Dec 05 03:47:19 2005] [notice] jk2_init() Found child 2842 in scoreboard slot 9 +1091:[Mon Dec 05 03:47:53 2005] [notice] jk2_init() Found child 2846 in scoreboard slot 9 +1092:[Mon Dec 05 03:47:53 2005] [notice] jk2_init() Found child 2843 in scoreboard slot 7 +1093:[Mon Dec 05 03:47:53 2005] [notice] jk2_init() Found child 2844 in scoreboard slot 8 +1094:[Mon Dec 05 03:47:53 2005] [notice] jk2_init() Found child 2845 in scoreboard slot 6 +1095:[Mon Dec 05 03:47:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1097:[Mon Dec 05 03:47:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1099:[Mon Dec 05 03:47:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1101:[Mon Dec 05 03:47:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1103:[Mon Dec 05 03:50:49 2005] [notice] jk2_init() Found child 2857 in scoreboard slot 9 +1104:[Mon Dec 05 03:50:50 2005] [notice] jk2_init() Found child 2854 in scoreboard slot 7 +1105:[Mon Dec 05 03:50:49 2005] [notice] jk2_init() Found child 2855 in scoreboard slot 8 +1106:[Mon Dec 05 03:50:49 2005] [notice] jk2_init() Found child 2856 in scoreboard slot 6 +1107:[Mon Dec 05 03:50:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1108:[Mon Dec 05 03:50:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1109:[Mon Dec 05 03:50:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1110:[Mon Dec 05 03:50:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1115:[Mon Dec 05 03:56:12 2005] [notice] jk2_init() Found child 2866 in scoreboard slot 7 +1116:[Mon Dec 05 03:56:12 2005] [notice] jk2_init() Found child 2867 in scoreboard slot 8 +1117:[Mon Dec 05 03:56:12 2005] [notice] jk2_init() Found child 2865 in scoreboard slot 9 +1118:[Mon Dec 05 03:56:12 2005] [notice] jk2_init() Found child 2864 in scoreboard slot 6 +1119:[Mon Dec 05 03:56:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1121:[Mon Dec 05 03:56:15 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +1123:[Mon Dec 05 03:56:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1125:[Mon Dec 05 03:56:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1127:[Mon Dec 05 04:00:55 2005] [notice] jk2_init() Found child 2877 in scoreboard slot 10 +1128:[Mon Dec 05 04:01:18 2005] [notice] jk2_init() Found child 2883 in scoreboard slot 9 +1129:[Mon Dec 05 04:01:18 2005] [notice] jk2_init() Found child 2878 in scoreboard slot 7 +1130:[Mon Dec 05 04:01:18 2005] [notice] jk2_init() Found child 2880 in scoreboard slot 8 +1131:[Mon Dec 05 04:01:18 2005] [notice] jk2_init() Found child 2879 in scoreboard slot 6 +1132:[Mon Dec 05 04:01:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1134:[Mon Dec 05 04:01:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1136:[Mon Dec 05 04:01:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1138:[Mon Dec 05 04:01:23 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1140:[Mon Dec 05 04:06:19 2005] [notice] jk2_init() Found child 3667 in scoreboard slot 7 +1141:[Mon Dec 05 04:06:19 2005] [notice] jk2_init() Found child 3669 in scoreboard slot 6 +1142:[Mon Dec 05 04:06:27 2005] [notice] jk2_init() Found child 3670 in scoreboard slot 8 +1143:[Mon Dec 05 04:06:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1144:[Mon Dec 05 04:06:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1147:[Mon Dec 05 04:07:23 2005] [notice] jk2_init() Found child 3672 in scoreboard slot 7 +1148:[Mon Dec 05 04:07:37 2005] [notice] jk2_init() Found child 3673 in scoreboard slot 6 +1149:[Mon Dec 05 04:07:48 2005] [notice] jk2_init() Found child 3675 in scoreboard slot 9 +1150:[Mon Dec 05 04:07:48 2005] [notice] jk2_init() Found child 3674 in scoreboard slot 8 +1151:[Mon Dec 05 04:08:37 2005] [notice] jk2_init() Found child 3678 in scoreboard slot 8 +1152:[Mon Dec 05 04:08:37 2005] [notice] jk2_init() Found child 3681 in scoreboard slot 6 +1153:[Mon Dec 05 04:08:37 2005] [notice] jk2_init() Found child 3679 in scoreboard slot 9 +1154:[Mon Dec 05 04:08:37 2005] [notice] jk2_init() Found child 3680 in scoreboard slot 7 +1155:[Mon Dec 05 04:08:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1156:[Mon Dec 05 04:09:32 2005] [notice] jk2_init() Found child 3685 in scoreboard slot 6 +1157:[Mon Dec 05 04:10:47 2005] [notice] jk2_init() Found child 3698 in scoreboard slot 9 +1158:[Mon Dec 05 04:10:47 2005] [notice] jk2_init() Found child 3690 in scoreboard slot 6 +1159:[Mon Dec 05 04:10:47 2005] [notice] jk2_init() Found child 3691 in scoreboard slot 8 +1160:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3744 in scoreboard slot 6 +1161:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3747 in scoreboard slot 8 +1162:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3754 in scoreboard slot 12 +1163:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3755 in scoreboard slot 13 +1164:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3753 in scoreboard slot 10 +1165:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3752 in scoreboard slot 9 +1166:[Mon Dec 05 04:13:54 2005] [notice] jk2_init() Found child 3746 in scoreboard slot 7 +1167:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1168:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +1170:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1172:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1174:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1176:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1178:[Mon Dec 05 04:14:00 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1181:[Mon Dec 05 05:06:42 2005] [notice] jk2_init() Found child 4596 in scoreboard slot 8 +1182:[Mon Dec 05 05:06:42 2005] [notice] jk2_init() Found child 4595 in scoreboard slot 7 +1183:[Mon Dec 05 05:06:42 2005] [notice] jk2_init() Found child 4594 in scoreboard slot 6 +1184:[Mon Dec 05 05:06:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1185:[Mon Dec 05 05:06:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1186:[Mon Dec 05 05:06:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1190:[Mon Dec 05 05:11:04 2005] [notice] jk2_init() Found child 4609 in scoreboard slot 7 +1191:[Mon Dec 05 05:11:04 2005] [notice] jk2_init() Found child 4608 in scoreboard slot 6 +1192:[Mon Dec 05 05:11:34 2005] [notice] jk2_init() Found child 4611 in scoreboard slot 9 +1193:[Mon Dec 05 05:11:54 2005] [notice] jk2_init() Found child 4613 in scoreboard slot 7 +1194:[Mon Dec 05 05:11:54 2005] [notice] jk2_init() Found child 4612 in scoreboard slot 6 +1195:[Mon Dec 05 05:12:32 2005] [notice] jk2_init() Found child 4615 in scoreboard slot 9 +1196:[Mon Dec 05 05:12:56 2005] [notice] jk2_init() Found child 4616 in scoreboard slot 6 +1197:[Mon Dec 05 05:12:56 2005] [notice] jk2_init() Found child 4617 in scoreboard slot 7 +1198:[Mon Dec 05 05:12:56 2005] [notice] jk2_init() Found child 4618 in scoreboard slot 8 +1199:[Mon Dec 05 05:15:29 2005] [notice] jk2_init() Found child 4634 in scoreboard slot 6 +1200:[Mon Dec 05 05:15:29 2005] [notice] jk2_init() Found child 4637 in scoreboard slot 7 +1201:[Mon Dec 05 05:15:29 2005] [notice] jk2_init() Found child 4631 in scoreboard slot 9 +1202:[Mon Dec 05 05:15:29 2005] [notice] jk2_init() Found child 4630 in scoreboard slot 8 +1203:[Mon Dec 05 05:15:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1205:[Mon Dec 05 05:15:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1207:[Mon Dec 05 05:15:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1209:[Mon Dec 05 05:15:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1211:[Mon Dec 05 06:35:27 2005] [notice] jk2_init() Found child 4820 in scoreboard slot 8 +1212:[Mon Dec 05 06:35:27 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1214:[Mon Dec 05 06:36:58 2005] [notice] jk2_init() Found child 4821 in scoreboard slot 10 +1215:[Mon Dec 05 06:36:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1218:[Mon Dec 05 07:16:00 2005] [notice] jk2_init() Found child 4893 in scoreboard slot 7 +1219:[Mon Dec 05 07:16:00 2005] [notice] jk2_init() Found child 4892 in scoreboard slot 6 +1220:[Mon Dec 05 07:16:03 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1222:[Mon Dec 05 07:16:03 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1224:[Mon Dec 05 07:21:03 2005] [notice] jk2_init() Found child 4907 in scoreboard slot 6 +1225:[Mon Dec 05 07:21:02 2005] 
[notice] jk2_init() Found child 4906 in scoreboard slot 9 +1226:[Mon Dec 05 07:21:02 2005] [notice] jk2_init() Found child 4905 in scoreboard slot 8 +1227:[Mon Dec 05 07:21:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1228:[Mon Dec 05 07:21:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1231:[Mon Dec 05 07:21:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1233:[Mon Dec 05 07:25:55 2005] [notice] jk2_init() Found child 4916 in scoreboard slot 8 +1234:[Mon Dec 05 07:25:55 2005] [notice] jk2_init() Found child 4917 in scoreboard slot 9 +1235:[Mon Dec 05 07:25:55 2005] [notice] jk2_init() Found child 4915 in scoreboard slot 7 +1236:[Mon Dec 05 07:25:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1237:[Mon Dec 05 07:25:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1238:[Mon Dec 05 07:25:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1242:[Mon Dec 05 07:31:22 2005] [notice] jk2_init() Found child 4932 in scoreboard slot 6 +1243:[Mon Dec 05 07:32:03 2005] [notice] jk2_init() Found child 4938 in scoreboard slot 8 +1244:[Mon Dec 05 07:32:03 2005] [notice] jk2_init() Found child 4935 in scoreboard slot 9 +1245:[Mon Dec 05 07:32:03 2005] [notice] jk2_init() Found child 4936 in scoreboard slot 6 +1246:[Mon Dec 05 07:32:03 2005] [notice] jk2_init() Found child 4937 in scoreboard slot 7 +1247:[Mon Dec 05 07:32:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1249:[Mon Dec 05 07:32:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1251:[Mon Dec 05 07:32:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1253:[Mon Dec 05 07:32:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1255:[Mon Dec 05 07:36:19 2005] [notice] jk2_init() Found child 4950 in scoreboard slot 7 +1256:[Mon Dec 05 07:37:47 2005] [notice] jk2_init() Found child 4961 in scoreboard slot 6 +1257:[Mon Dec 05 07:37:48 2005] [notice] jk2_init() Found child 4962 in scoreboard slot 7 +1258:[Mon Dec 05 07:37:48 2005] [notice] jk2_init() Found child 4960 in scoreboard slot 9 +1259:[Mon Dec 05 07:37:48 2005] [notice] jk2_init() Found child 4959 in scoreboard slot 8 +1260:[Mon Dec 05 07:37:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1261:[Mon Dec 05 07:37:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1262:[Mon Dec 05 07:37:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1263:[Mon Dec 05 07:37:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1268:[Mon Dec 05 07:41:07 2005] [notice] jk2_init() Found child 4974 in scoreboard slot 9 +1269:[Mon Dec 05 07:41:35 2005] [notice] jk2_init() Found child 4975 in scoreboard slot 6 +1270:[Mon Dec 05 07:41:50 2005] [notice] jk2_init() Found child 4977 in scoreboard slot 8 +1271:[Mon Dec 05 07:41:50 2005] [notice] jk2_init() Found child 4976 in scoreboard slot 7 +1272:[Mon Dec 05 07:43:07 2005] [notice] jk2_init() Found child 4984 in scoreboard slot 7 +1273:[Mon Dec 05 07:43:08 2005] [notice] jk2_init() Found child 4985 in scoreboard slot 10 +1274:[Mon Dec 05 07:43:07 2005] [notice] jk2_init() Found child 4983 in scoreboard slot 6 +1275:[Mon Dec 05 07:43:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1277:[Mon Dec 05 07:43:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1279:[Mon Dec 
05 07:43:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1281:[Mon Dec 05 07:43:19 2005] [notice] jk2_init() Found child 4986 in scoreboard slot 8 +1282:[Mon Dec 05 07:43:19 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1284:[Mon Dec 05 07:46:01 2005] [notice] jk2_init() Found child 4991 in scoreboard slot 6 +1285:[Mon Dec 05 07:46:01 2005] [notice] jk2_init() Found child 4992 in scoreboard slot 7 +1286:[Mon Dec 05 07:46:46 2005] [notice] jk2_init() Found child 4996 in scoreboard slot 7 +1287:[Mon Dec 05 07:46:46 2005] [notice] jk2_init() Found child 4995 in scoreboard slot 6 +1288:[Mon Dec 05 07:47:13 2005] [notice] jk2_init() Found child 4998 in scoreboard slot 8 +1289:[Mon Dec 05 07:47:13 2005] [notice] jk2_init() Found child 4999 in scoreboard slot 6 +1290:[Mon Dec 05 07:47:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1291:[Mon Dec 05 07:47:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1292:[Mon Dec 05 07:47:21 2005] [notice] jk2_init() Found child 5000 in scoreboard slot 7 +1294:[Mon Dec 05 07:47:21 2005] [notice] jk2_init() Found child 5001 in scoreboard slot 9 +1296:[Mon Dec 05 07:47:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1297:[Mon Dec 05 07:47:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1300:[Mon Dec 05 07:48:04 2005] [notice] jk2_init() Found child 5002 in scoreboard slot 8 +1301:[Mon Dec 05 07:48:04 2005] [notice] jk2_init() Found child 5003 in scoreboard slot 6 +1302:[Mon Dec 05 07:48:46 2005] [notice] jk2_init() Found child 5005 in scoreboard slot 9 +1303:[Mon Dec 05 07:48:46 2005] [notice] jk2_init() Found child 5006 in scoreboard slot 8 +1304:[Mon Dec 05 07:48:55 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1305:[Mon Dec 05 07:48:55 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1308:[Mon Dec 05 07:48:56 2005] [notice] jk2_init() Found child 5007 in scoreboard slot 6 +1309:[Mon Dec 05 07:48:56 2005] [notice] jk2_init() Found child 5008 in scoreboard slot 7 +1310:[Mon Dec 05 07:48:56 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1312:[Mon Dec 05 07:48:56 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1314:[Mon Dec 05 07:50:54 2005] [notice] jk2_init() Found child 5017 in scoreboard slot 8 +1315:[Mon Dec 05 07:50:54 2005] [notice] jk2_init() Found child 5016 in scoreboard slot 9 +1316:[Mon Dec 05 07:51:22 2005] [notice] jk2_init() Found child 5018 in scoreboard slot 6 +1317:[Mon Dec 05 07:51:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1319:[Mon Dec 05 07:51:39 2005] [notice] jk2_init() Found child 5020 in scoreboard slot 9 +1320:[Mon Dec 05 07:51:39 2005] [notice] jk2_init() Found child 5019 in scoreboard slot 7 +1321:[Mon Dec 05 07:51:56 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1322:[Mon Dec 05 07:51:56 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1325:[Mon Dec 05 07:52:29 2005] [notice] jk2_init() Found child 5021 in scoreboard slot 8 +1326:[Mon Dec 05 07:52:29 2005] [notice] jk2_init() Found child 5022 in scoreboard slot 6 +1327:[Mon Dec 05 07:52:56 2005] [notice] jk2_init() Found child 5024 in scoreboard slot 9 +1328:[Mon Dec 05 07:52:56 2005] [notice] jk2_init() Found child 5023 in scoreboard slot 7 +1329:[Mon Dec 05 07:52:55 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1331:[Mon 
Dec 05 07:53:24 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1332:[Mon Dec 05 07:54:01 2005] [notice] jk2_init() Found child 5029 in scoreboard slot 8 +1333:[Mon Dec 05 07:54:02 2005] [notice] jk2_init() Found child 5030 in scoreboard slot 6 +1334:[Mon Dec 05 07:54:48 2005] [notice] jk2_init() Found child 5033 in scoreboard slot 8 +1335:[Mon Dec 05 07:54:48 2005] [notice] jk2_init() Found child 5032 in scoreboard slot 9 +1336:[Mon Dec 05 07:55:00 2005] [notice] jk2_init() Found child 5035 in scoreboard slot 7 +1337:[Mon Dec 05 07:55:00 2005] [notice] jk2_init() Found child 5034 in scoreboard slot 6 +1338:[Mon Dec 05 07:55:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1340:[Mon Dec 05 07:55:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1342:[Mon Dec 05 07:55:07 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1344:[Mon Dec 05 07:55:13 2005] [notice] jk2_init() Found child 5036 in scoreboard slot 9 +1345:[Mon Dec 05 07:57:01 2005] [notice] jk2_init() Found child 5050 in scoreboard slot 8 +1346:[Mon Dec 05 07:57:01 2005] [notice] jk2_init() Found child 5049 in scoreboard slot 7 +1347:[Mon Dec 05 07:57:01 2005] [notice] jk2_init() Found child 5048 in scoreboard slot 6 +1348:[Mon Dec 05 07:57:02 2005] [notice] jk2_init() Found child 5051 in scoreboard slot 9 +1351:[Mon Dec 05 07:57:02 2005] [notice] jk2_init() Found child 5052 in scoreboard slot 10 +1352:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1354:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1356:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1358:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1360:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1362:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1364:[Mon Dec 05 07:57:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1367:[Mon Dec 05 09:36:13 2005] [notice] jk2_init() Found child 5271 in scoreboard slot 7 +1368:[Mon Dec 05 09:36:13 2005] [notice] jk2_init() Found child 5270 in scoreboard slot 6 +1369:[Mon Dec 05 09:36:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1371:[Mon Dec 05 09:36:14 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1373:[Mon Dec 05 09:55:21 2005] [notice] jk2_init() Found child 5295 in scoreboard slot 8 +1374:[Mon Dec 05 09:55:21 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1376:[Mon Dec 05 10:10:32 2005] [notice] jk2_init() Found child 5330 in scoreboard slot 9 +1377:[Mon Dec 05 10:10:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1379:[Mon Dec 05 10:16:20 2005] [notice] jk2_init() Found child 5344 in scoreboard slot 7 +1380:[Mon Dec 05 10:16:52 2005] [notice] jk2_init() Found child 5347 in scoreboard slot 6 +1381:[Mon Dec 05 10:16:53 2005] [notice] jk2_init() Found child 5348 in scoreboard slot 7 +1382:[Mon Dec 05 10:17:45 2005] [notice] jk2_init() Found child 5350 in scoreboard slot 9 +1383:[Mon Dec 05 10:17:45 2005] [notice] jk2_init() Found child 5349 in scoreboard slot 8 +1384:[Mon Dec 05 10:17:49 2005] [notice] jk2_init() Found child 5352 in scoreboard slot 7 +1385:[Mon Dec 05 10:17:50 2005] [notice] jk2_init() Found child 5351 in scoreboard slot 6 
+1386:[Mon Dec 05 10:17:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1387:[Mon Dec 05 10:17:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1388:[Mon Dec 05 10:17:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1389:[Mon Dec 05 10:17:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1394:[Mon Dec 05 10:21:05 2005] [notice] jk2_init() Found child 5366 in scoreboard slot 9 +1395:[Mon Dec 05 10:21:05 2005] [notice] jk2_init() Found child 5365 in scoreboard slot 8 +1396:[Mon Dec 05 10:21:05 2005] [notice] jk2_init() Found child 5367 in scoreboard slot 6 +1397:[Mon Dec 05 10:21:07 2005] [notice] jk2_init() Found child 5368 in scoreboard slot 7 +1398:[Mon Dec 05 10:21:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1399:[Mon Dec 05 10:21:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1400:[Mon Dec 05 10:21:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1401:[Mon Dec 05 10:21:13 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1406:[Mon Dec 05 10:26:26 2005] [notice] jk2_init() Found child 5384 in scoreboard slot 7 +1407:[Mon Dec 05 10:26:26 2005] [notice] jk2_init() Found child 5385 in scoreboard slot 8 +1408:[Mon Dec 05 10:26:25 2005] [notice] jk2_init() Found child 5386 in scoreboard slot 9 +1409:[Mon Dec 05 10:26:25 2005] [notice] jk2_init() Found child 5387 in scoreboard slot 6 +1410:[Mon Dec 05 10:26:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1412:[Mon Dec 05 10:26:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1414:[Mon Dec 05 10:26:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1416:[Mon Dec 05 10:26:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1418:[Mon Dec 05 10:26:36 2005] [notice] jk2_init() Found child 5388 in scoreboard slot 10 +1419:[Mon Dec 05 10:26:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1423:[Mon Dec 05 10:31:40 2005] [notice] jk2_init() Found child 5404 in scoreboard slot 8 +1424:[Mon Dec 05 10:31:40 2005] [notice] jk2_init() Found child 5405 in scoreboard slot 9 +1425:[Mon Dec 05 10:33:41 2005] [notice] jk2_init() Found child 5418 in scoreboard slot 6 +1426:[Mon Dec 05 10:33:41 2005] [notice] jk2_init() Found child 5419 in scoreboard slot 7 +1427:[Mon Dec 05 10:33:41 2005] [notice] jk2_init() Found child 5417 in scoreboard slot 9 +1428:[Mon Dec 05 10:33:41 2005] [notice] jk2_init() Found child 5416 in scoreboard slot 8 +1429:[Mon Dec 05 10:33:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1431:[Mon Dec 05 10:33:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1433:[Mon Dec 05 10:33:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1435:[Mon Dec 05 10:33:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1437:[Mon Dec 05 10:36:10 2005] [notice] jk2_init() Found child 5426 in scoreboard slot 6 +1438:[Mon Dec 05 10:36:10 2005] [notice] jk2_init() Found child 5425 in scoreboard slot 9 +1439:[Mon Dec 05 10:36:58 2005] [notice] jk2_init() Found child 5428 in scoreboard slot 8 +1440:[Mon Dec 05 10:37:27 2005] [notice] jk2_init() Found child 5429 in scoreboard slot 9 +1441:[Mon Dec 05 10:37:27 2005] [notice] jk2_init() Found child 5430 in scoreboard slot 6 +1442:[Mon Dec 05 10:38:00 2005] [notice] jk2_init() Found child 
5434 in scoreboard slot 6 +1443:[Mon Dec 05 10:38:00 2005] [notice] jk2_init() Found child 5433 in scoreboard slot 9 +1444:[Mon Dec 05 10:38:00 2005] [notice] jk2_init() Found child 5435 in scoreboard slot 7 +1445:[Mon Dec 05 10:38:00 2005] [notice] jk2_init() Found child 5432 in scoreboard slot 8 +1446:[Mon Dec 05 10:38:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1448:[Mon Dec 05 10:38:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1450:[Mon Dec 05 10:38:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1452:[Mon Dec 05 10:38:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1454:[Mon Dec 05 10:41:14 2005] [notice] jk2_init() Found child 5470 in scoreboard slot 9 +1455:[Mon Dec 05 10:41:14 2005] [notice] jk2_init() Found child 5469 in scoreboard slot 8 +1456:[Mon Dec 05 10:42:23 2005] [notice] jk2_init() Found child 5474 in scoreboard slot 9 +1457:[Mon Dec 05 10:42:23 2005] [notice] jk2_init() Found child 5475 in scoreboard slot 6 +1458:[Mon Dec 05 10:43:19 2005] [notice] jk2_init() Found child 5482 in scoreboard slot 9 +1459:[Mon Dec 05 10:43:19 2005] [notice] jk2_init() Found child 5480 in scoreboard slot 7 +1460:[Mon Dec 05 10:43:19 2005] [notice] jk2_init() Found child 5479 in scoreboard slot 6 +1461:[Mon Dec 05 10:43:19 2005] [notice] jk2_init() Found child 5481 in scoreboard slot 8 +1462:[Mon Dec 05 10:43:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1463:[Mon Dec 05 10:43:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1465:[Mon Dec 05 10:43:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1467:[Mon Dec 05 10:43:48 2005] [notice] jk2_init() Found child 5484 in scoreboard slot 7 +1468:[Mon Dec 05 10:43:48 2005] [notice] jk2_init() Found child 5483 in scoreboard slot 6 +1469:[Mon Dec 05 10:43:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1471:[Mon Dec 05 10:43:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1473:[Mon Dec 05 10:46:55 2005] [notice] jk2_init() Found child 5497 in scoreboard slot 7 +1474:[Mon Dec 05 10:46:55 2005] [notice] jk2_init() Found child 5495 in scoreboard slot 9 +1475:[Mon Dec 05 10:46:55 2005] [notice] jk2_init() Found child 5494 in scoreboard slot 8 +1476:[Mon Dec 05 10:46:55 2005] [notice] jk2_init() Found child 5496 in scoreboard slot 6 +1477:[Mon Dec 05 10:47:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1478:[Mon Dec 05 10:47:12 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1481:[Mon Dec 05 10:47:32 2005] [notice] jk2_init() Found child 5499 in scoreboard slot 9 +1482:[Mon Dec 05 10:47:33 2005] [notice] jk2_init() Found child 5498 in scoreboard slot 8 +1483:[Mon Dec 05 10:47:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1484:[Mon Dec 05 10:47:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1487:[Mon Dec 05 10:47:47 2005] [notice] jk2_init() Found child 5500 in scoreboard slot 6 +1488:[Mon Dec 05 10:47:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1490:[Mon Dec 05 10:48:43 2005] [notice] jk2_init() Found child 5503 in scoreboard slot 10 +1491:[Mon Dec 05 10:48:46 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1494:[Mon Dec 05 10:51:12 2005] [notice] jk2_init() Found child 5515 in scoreboard slot 7 +1495:[Mon Dec 05 10:51:12 2005] [notice] jk2_init() 
Found child 5516 in scoreboard slot 8 +1496:[Mon Dec 05 10:51:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1497:[Mon Dec 05 10:51:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1500:[Mon Dec 05 10:51:59 2005] [notice] jk2_init() Found child 5517 in scoreboard slot 6 +1501:[Mon Dec 05 10:52:00 2005] [notice] jk2_init() Found child 5518 in scoreboard slot 9 +1502:[Mon Dec 05 10:52:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1503:[Mon Dec 05 10:52:15 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1505:[Mon Dec 05 10:53:42 2005] [notice] jk2_init() Found child 5527 in scoreboard slot 7 +1506:[Mon Dec 05 10:53:42 2005] [notice] jk2_init() Found child 5526 in scoreboard slot 9 +1507:[Mon Dec 05 10:55:47 2005] [notice] jk2_init() Found child 5538 in scoreboard slot 9 +1508:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5565 in scoreboard slot 9 +1509:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5563 in scoreboard slot 7 +1510:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5562 in scoreboard slot 6 +1511:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5564 in scoreboard slot 8 +1512:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5567 in scoreboard slot 12 +1513:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5568 in scoreboard slot 13 +1514:[Mon Dec 05 10:59:25 2005] [notice] jk2_init() Found child 5566 in scoreboard slot 10 +1515:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1516:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1517:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1518:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1519:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1520:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1521:[Mon Dec 05 10:59:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1529:[Mon Dec 05 11:02:05 2005] [notice] jk2_init() Found child 5579 in scoreboard slot 6 +1530:[Mon Dec 05 11:04:16 2005] [notice] jk2_init() Found child 5592 in scoreboard slot 8 +1531:[Mon Dec 05 11:04:16 2005] [notice] jk2_init() Found child 5593 in scoreboard slot 9 +1532:[Mon Dec 05 11:06:50 2005] [notice] jk2_init() Found child 5616 in scoreboard slot 6 +1533:[Mon Dec 05 11:06:51 2005] [notice] jk2_init() Found child 5617 in scoreboard slot 7 +1534:[Mon Dec 05 11:06:51 2005] [notice] jk2_init() Found child 5618 in scoreboard slot 8 +1535:[Mon Dec 05 11:06:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1537:[Mon Dec 05 11:06:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1539:[Mon Dec 05 11:06:51 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1542:[Mon Dec 05 11:06:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1545:[Mon Dec 05 11:06:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1548:[Mon Dec 05 11:06:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1551:[Mon Dec 05 11:06:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1553:[Mon Dec 05 12:35:57 2005] [notice] jk2_init() Found child 5785 in scoreboard slot 6 +1554:[Mon Dec 05 
12:35:57 2005] [notice] jk2_init() Found child 5786 in scoreboard slot 7 +1555:[Mon Dec 05 12:36:36 2005] [notice] jk2_init() Found child 5790 in scoreboard slot 7 +1556:[Mon Dec 05 12:36:36 2005] [notice] jk2_init() Found child 5788 in scoreboard slot 9 +1557:[Mon Dec 05 12:36:36 2005] [notice] jk2_init() Found child 5789 in scoreboard slot 6 +1558:[Mon Dec 05 12:36:36 2005] [notice] jk2_init() Found child 5787 in scoreboard slot 8 +1559:[Mon Dec 05 12:36:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1561:[Mon Dec 05 12:36:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1563:[Mon Dec 05 12:36:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1565:[Mon Dec 05 12:36:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1567:[Mon Dec 05 12:40:37 2005] [notice] jk2_init() Found child 5798 in scoreboard slot 8 +1568:[Mon Dec 05 12:40:38 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1570:[Mon Dec 05 12:50:42 2005] [notice] jk2_init() Found child 5811 in scoreboard slot 6 +1571:[Mon Dec 05 12:50:42 2005] [notice] jk2_init() Found child 5810 in scoreboard slot 9 +1572:[Mon Dec 05 12:50:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1573:[Mon Dec 05 12:50:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1576:[Mon Dec 05 12:55:48 2005] [notice] jk2_init() Found child 5817 in scoreboard slot 8 +1577:[Mon Dec 05 12:55:48 2005] [notice] jk2_init() Found child 5816 in scoreboard slot 7 +1578:[Mon Dec 05 12:55:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1580:[Mon Dec 05 12:55:49 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1582:[Mon Dec 05 13:00:33 2005] [notice] jk2_init() Found child 5825 in scoreboard slot 9 +1583:[Mon Dec 05 13:00:33 2005] [notice] jk2_init() Found child 5826 in scoreboard slot 6 +1584:[Mon Dec 05 13:00:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1586:[Mon Dec 05 13:00:34 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1588:[Mon Dec 05 13:05:24 2005] [notice] jk2_init() Found child 5845 in scoreboard slot 7 +1589:[Mon Dec 05 13:05:24 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1591:[Mon Dec 05 13:10:55 2005] [notice] jk2_init() Found child 5856 in scoreboard slot 8 +1592:[Mon Dec 05 13:10:59 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1594:[Mon Dec 05 13:16:27 2005] [notice] jk2_init() Found child 5877 in scoreboard slot 9 +1595:[Mon Dec 05 13:16:27 2005] [notice] jk2_init() Found child 5876 in scoreboard slot 8 +1596:[Mon Dec 05 13:16:27 2005] [notice] jk2_init() Found child 5878 in scoreboard slot 6 +1597:[Mon Dec 05 13:16:27 2005] [notice] jk2_init() Found child 5875 in scoreboard slot 7 +1598:[Mon Dec 05 13:16:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1600:[Mon Dec 05 13:16:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1602:[Mon Dec 05 13:16:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1604:[Mon Dec 05 13:16:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1606:[Mon Dec 05 13:21:35 2005] [notice] jk2_init() Found child 5893 in scoreboard slot 9 +1607:[Mon Dec 05 13:21:34 2005] [notice] jk2_init() Found child 5892 in scoreboard slot 8 +1608:[Mon Dec 05 13:22:45 2005] [notice] jk2_init() Found child 5901 in scoreboard 
slot 9 +1609:[Mon Dec 05 13:22:45 2005] [notice] jk2_init() Found child 5899 in scoreboard slot 7 +1610:[Mon Dec 05 13:22:45 2005] [notice] jk2_init() Found child 5900 in scoreboard slot 8 +1611:[Mon Dec 05 13:22:45 2005] [notice] jk2_init() Found child 5898 in scoreboard slot 6 +1612:[Mon Dec 05 13:22:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1614:[Mon Dec 05 13:22:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1616:[Mon Dec 05 13:22:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1618:[Mon Dec 05 13:22:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1620:[Mon Dec 05 13:26:03 2005] [notice] jk2_init() Found child 5912 in scoreboard slot 7 +1621:[Mon Dec 05 13:26:37 2005] [notice] jk2_init() Found child 5914 in scoreboard slot 9 +1622:[Mon Dec 05 13:26:37 2005] [notice] jk2_init() Found child 5915 in scoreboard slot 6 +1623:[Mon Dec 05 13:27:15 2005] [notice] jk2_init() Found child 5917 in scoreboard slot 8 +1624:[Mon Dec 05 13:27:14 2005] [notice] jk2_init() Found child 5916 in scoreboard slot 7 +1625:[Mon Dec 05 13:27:15 2005] [notice] jk2_init() Found child 5919 in scoreboard slot 6 +1626:[Mon Dec 05 13:27:15 2005] [notice] jk2_init() Found child 5918 in scoreboard slot 9 +1627:[Mon Dec 05 13:28:14 2005] [notice] jk2_init() Found child 5925 in scoreboard slot 8 +1628:[Mon Dec 05 13:28:14 2005] [notice] jk2_init() Found child 5923 in scoreboard slot 6 +1629:[Mon Dec 05 13:28:14 2005] [notice] jk2_init() Found child 5924 in scoreboard slot 7 +1630:[Mon Dec 05 13:28:14 2005] [notice] jk2_init() Found child 5922 in scoreboard slot 9 +1631:[Mon Dec 05 13:28:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1633:[Mon Dec 05 13:28:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1635:[Mon Dec 05 13:28:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1637:[Mon Dec 05 13:28:17 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1639:[Mon Dec 05 13:31:19 2005] [notice] jk2_init() Found child 5935 in scoreboard slot 9 +1640:[Mon Dec 05 13:31:19 2005] [notice] jk2_init() Found child 5936 in scoreboard slot 6 +1641:[Mon Dec 05 13:31:53 2005] [notice] jk2_init() Found child 5938 in scoreboard slot 8 +1642:[Mon Dec 05 13:31:53 2005] [notice] jk2_init() Found child 5937 in scoreboard slot 7 +1643:[Mon Dec 05 13:32:01 2005] [notice] jk2_init() Found child 5940 in scoreboard slot 6 +1644:[Mon Dec 05 13:32:01 2005] [notice] jk2_init() Found child 5939 in scoreboard slot 9 +1645:[Mon Dec 05 13:32:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1646:[Mon Dec 05 13:32:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1649:[Mon Dec 05 13:32:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1650:[Mon Dec 05 13:32:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1653:[Mon Dec 05 13:32:28 2005] [notice] jk2_init() Found child 5942 in scoreboard slot 8 +1654:[Mon Dec 05 13:32:28 2005] [notice] jk2_init() Found child 5941 in scoreboard slot 7 +1655:[Mon Dec 05 13:32:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1656:[Mon Dec 05 13:32:30 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1659:[Mon Dec 05 13:36:27 2005] [notice] jk2_init() Found child 5954 in scoreboard slot 7 +1660:[Mon Dec 05 13:36:27 2005] [notice] jk2_init() Found child 5953 in 
scoreboard slot 6 +1661:[Mon Dec 05 13:36:58 2005] [notice] jk2_init() Found child 5956 in scoreboard slot 9 +1662:[Mon Dec 05 13:36:58 2005] [notice] jk2_init() Found child 5957 in scoreboard slot 6 +1663:[Mon Dec 05 13:36:58 2005] [notice] jk2_init() Found child 5955 in scoreboard slot 8 +1664:[Mon Dec 05 13:37:47 2005] [notice] jk2_init() Found child 5961 in scoreboard slot 6 +1665:[Mon Dec 05 13:37:47 2005] [notice] jk2_init() Found child 5960 in scoreboard slot 9 +1666:[Mon Dec 05 13:38:52 2005] [notice] jk2_init() Found child 5968 in scoreboard slot 9 +1667:[Mon Dec 05 13:38:53 2005] [notice] jk2_init() Found child 5965 in scoreboard slot 6 +1668:[Mon Dec 05 13:38:52 2005] [notice] jk2_init() Found child 5967 in scoreboard slot 8 +1669:[Mon Dec 05 13:38:53 2005] [notice] jk2_init() Found child 5969 in scoreboard slot 10 +1670:[Mon Dec 05 13:38:52 2005] [notice] jk2_init() Found child 5966 in scoreboard slot 7 +1671:[Mon Dec 05 13:39:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1672:[Mon Dec 05 13:39:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1675:[Mon Dec 05 13:39:36 2005] [notice] jk2_init() Found child 5970 in scoreboard slot 6 +1676:[Mon Dec 05 13:39:36 2005] [notice] jk2_init() Found child 5971 in scoreboard slot 7 +1677:[Mon Dec 05 13:39:41 2005] [notice] jk2_init() Found child 5972 in scoreboard slot 8 +1678:[Mon Dec 05 13:39:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1679:[Mon Dec 05 13:39:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1680:[Mon Dec 05 13:39:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1684:[Mon Dec 05 13:41:11 2005] [notice] jk2_init() Found child 5981 in scoreboard slot 9 +1685:[Mon Dec 05 13:41:12 2005] [notice] jk2_init() Found child 5982 in scoreboard slot 6 +1686:[Mon Dec 05 13:41:58 2005] [notice] jk2_init() Found child 5984 in scoreboard slot 8 +1687:[Mon Dec 05 13:41:58 2005] [notice] jk2_init() Found child 5985 in scoreboard slot 9 +1688:[Mon Dec 05 13:43:27 2005] [notice] jk2_init() Found child 5992 in scoreboard slot 8 +1689:[Mon Dec 05 13:43:27 2005] [notice] jk2_init() Found child 5993 in scoreboard slot 9 +1690:[Mon Dec 05 13:43:27 2005] [notice] jk2_init() Found child 5990 in scoreboard slot 6 +1691:[Mon Dec 05 13:43:27 2005] [notice] jk2_init() Found child 5991 in scoreboard slot 7 +1692:[Mon Dec 05 13:43:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1693:[Mon Dec 05 13:43:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1694:[Mon Dec 05 13:43:43 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1698:[Mon Dec 05 13:44:18 2005] [notice] jk2_init() Found child 5995 in scoreboard slot 7 +1699:[Mon Dec 05 13:44:18 2005] [notice] jk2_init() Found child 5996 in scoreboard slot 8 +1700:[Mon Dec 05 13:44:32 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1701:[Mon Dec 05 13:44:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1704:[Mon Dec 05 13:44:53 2005] [notice] jk2_init() Found child 5997 in scoreboard slot 9 +1705:[Mon Dec 05 13:45:01 2005] [notice] jk2_init() Found child 5998 in scoreboard slot 6 +1706:[Mon Dec 05 13:45:01 2005] [notice] jk2_init() Found child 5999 in scoreboard slot 7 +1707:[Mon Dec 05 13:45:08 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1708:[Mon Dec 05 13:45:08 2005] [notice] workerEnv.init() ok 
/etc/httpd/conf/workers2.properties +1709:[Mon Dec 05 13:45:08 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1713:[Mon Dec 05 13:46:20 2005] [notice] jk2_init() Found child 6007 in scoreboard slot 7 +1714:[Mon Dec 05 13:46:20 2005] [notice] jk2_init() Found child 6006 in scoreboard slot 6 +1715:[Mon Dec 05 13:46:20 2005] [notice] jk2_init() Found child 6005 in scoreboard slot 9 +1716:[Mon Dec 05 13:46:50 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1717:[Mon Dec 05 13:47:06 2005] [notice] jk2_init() Found child 6008 in scoreboard slot 8 +1718:[Mon Dec 05 13:47:06 2005] [notice] jk2_init() Found child 6009 in scoreboard slot 9 +1719:[Mon Dec 05 13:47:09 2005] [notice] jk2_init() Found child 6011 in scoreboard slot 7 +1720:[Mon Dec 05 13:47:09 2005] [notice] jk2_init() Found child 6010 in scoreboard slot 6 +1721:[Mon Dec 05 13:47:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1722:[Mon Dec 05 13:47:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1723:[Mon Dec 05 13:47:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1724:[Mon Dec 05 13:47:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1729:[Mon Dec 05 13:51:17 2005] [notice] jk2_init() Found child 6028 in scoreboard slot 9 +1730:[Mon Dec 05 13:52:19 2005] [notice] jk2_init() Found child 6036 in scoreboard slot 9 +1731:[Mon Dec 05 13:52:19 2005] [notice] jk2_init() Found child 6033 in scoreboard slot 6 +1732:[Mon Dec 05 13:52:19 2005] [notice] jk2_init() Found child 6035 in scoreboard slot 8 +1733:[Mon Dec 05 13:52:19 2005] [notice] jk2_init() Found child 6034 in scoreboard slot 7 +1734:[Mon Dec 05 13:52:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1735:[Mon Dec 05 13:52:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1738:[Mon Dec 05 13:53:00 2005] [notice] jk2_init() Found child 6038 in scoreboard slot 7 +1739:[Mon Dec 05 13:53:00 2005] [notice] jk2_init() Found child 6037 in scoreboard slot 6 +1740:[Mon Dec 05 13:53:00 2005] [notice] jk2_init() Found child 6039 in scoreboard slot 10 +1741:[Mon Dec 05 13:53:31 2005] [notice] jk2_init() Found child 6043 in scoreboard slot 9 +1742:[Mon Dec 05 13:53:31 2005] [notice] jk2_init() Found child 6042 in scoreboard slot 7 +1743:[Mon Dec 05 13:53:31 2005] [notice] jk2_init() Found child 6041 in scoreboard slot 6 +1744:[Mon Dec 05 13:53:34 2005] [notice] jk2_init() Found child 6044 in scoreboard slot 8 +1745:[Mon Dec 05 13:53:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1746:[Mon Dec 05 13:53:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1747:[Mon Dec 05 13:53:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1748:[Mon Dec 05 13:53:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1753:[Mon Dec 05 13:56:21 2005] [notice] jk2_init() Found child 6052 in scoreboard slot 6 +1754:[Mon Dec 05 13:56:38 2005] [notice] jk2_init() Found child 6053 in scoreboard slot 7 +1755:[Mon Dec 05 13:57:07 2005] [notice] jk2_init() Found child 6054 in scoreboard slot 9 +1756:[Mon Dec 05 13:57:07 2005] [notice] jk2_init() Found child 6055 in scoreboard slot 8 +1757:[Mon Dec 05 13:58:31 2005] [notice] jk2_init() Found child 6063 in scoreboard slot 8 +1758:[Mon Dec 05 13:58:31 2005] [notice] jk2_init() Found child 6062 in scoreboard slot 9 +1759:[Mon Dec 05 13:59:43 2005] [notice] jk2_init() Found child 
6069 in scoreboard slot 7 +1760:[Mon Dec 05 13:59:43 2005] [notice] jk2_init() Found child 6070 in scoreboard slot 9 +1761:[Mon Dec 05 13:59:43 2005] [notice] jk2_init() Found child 6071 in scoreboard slot 8 +1762:[Mon Dec 05 14:01:47 2005] [notice] jk2_init() Found child 6100 in scoreboard slot 7 +1763:[Mon Dec 05 14:01:47 2005] [notice] jk2_init() Found child 6101 in scoreboard slot 8 +1764:[Mon Dec 05 14:01:47 2005] [notice] jk2_init() Found child 6099 in scoreboard slot 6 +1765:[Mon Dec 05 14:01:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1767:[Mon Dec 05 14:01:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1769:[Mon Dec 05 14:01:48 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1771:[Mon Dec 05 14:11:40 2005] [notice] jk2_init() Found child 6115 in scoreboard slot 10 +1773:[Mon Dec 05 14:11:45 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1775:[Mon Dec 05 15:31:06 2005] [notice] jk2_init() Found child 6259 in scoreboard slot 6 +1776:[Mon Dec 05 15:31:06 2005] [notice] jk2_init() Found child 6260 in scoreboard slot 7 +1777:[Mon Dec 05 15:31:09 2005] [notice] jk2_init() Found child 6261 in scoreboard slot 8 +1778:[Mon Dec 05 15:31:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1780:[Mon Dec 05 15:31:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1782:[Mon Dec 05 15:31:10 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1784:[Mon Dec 05 15:40:59 2005] [notice] jk2_init() Found child 6277 in scoreboard slot 7 +1785:[Mon Dec 05 15:40:59 2005] [notice] jk2_init() Found child 6276 in scoreboard slot 6 +1786:[Mon Dec 05 15:41:32 2005] [notice] jk2_init() Found child 6280 in scoreboard slot 7 +1787:[Mon Dec 05 15:41:32 2005] [notice] jk2_init() Found child 6278 in scoreboard slot 8 +1788:[Mon Dec 05 15:41:32 2005] [notice] jk2_init() Found child 6279 in scoreboard slot 6 +1789:[Mon Dec 05 15:41:32 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1791:[Mon Dec 05 15:41:32 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1793:[Mon Dec 05 15:41:32 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1795:[Mon Dec 05 15:45:42 2005] [notice] jk2_init() Found child 6285 in scoreboard slot 8 +1796:[Mon Dec 05 15:45:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1798:[Mon Dec 05 15:50:53 2005] [notice] jk2_init() Found child 6293 in scoreboard slot 6 +1799:[Mon Dec 05 15:50:53 2005] [notice] jk2_init() Found child 6294 in scoreboard slot 7 +1800:[Mon Dec 05 15:51:18 2005] [notice] jk2_init() Found child 6297 in scoreboard slot 7 +1801:[Mon Dec 05 15:51:18 2005] [notice] jk2_init() Found child 6295 in scoreboard slot 8 +1802:[Mon Dec 05 15:51:18 2005] [notice] jk2_init() Found child 6296 in scoreboard slot 6 +1803:[Mon Dec 05 15:51:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1805:[Mon Dec 05 15:51:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1807:[Mon Dec 05 15:51:20 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1809:[Mon Dec 05 15:55:31 2005] [notice] jk2_init() Found child 6302 in scoreboard slot 8 +1810:[Mon Dec 05 15:55:32 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1812:[Mon Dec 05 16:01:17 2005] [notice] jk2_init() Found child 6310 in scoreboard slot 6 +1813:[Mon Dec 05 16:02:00 2005] [notice] jk2_init() 
Found child 6315 in scoreboard slot 6 +1814:[Mon Dec 05 16:02:00 2005] [notice] jk2_init() Found child 6316 in scoreboard slot 7 +1815:[Mon Dec 05 16:02:00 2005] [notice] jk2_init() Found child 6314 in scoreboard slot 8 +1816:[Mon Dec 05 16:02:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1817:[Mon Dec 05 16:02:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1818:[Mon Dec 05 16:02:02 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1822:[Mon Dec 05 16:06:07 2005] [notice] jk2_init() Found child 6333 in scoreboard slot 8 +1823:[Mon Dec 05 16:06:21 2005] [notice] jk2_init() Found child 6335 in scoreboard slot 7 +1824:[Mon Dec 05 16:07:08 2005] [notice] jk2_init() Found child 6339 in scoreboard slot 8 +1825:[Mon Dec 05 16:07:08 2005] [notice] jk2_init() Found child 6340 in scoreboard slot 6 +1826:[Mon Dec 05 16:07:08 2005] [notice] jk2_init() Found child 6338 in scoreboard slot 7 +1827:[Mon Dec 05 16:07:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1829:[Mon Dec 05 16:07:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1831:[Mon Dec 05 16:07:09 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1833:[Mon Dec 05 16:10:43 2005] [notice] jk2_init() Found child 6351 in scoreboard slot 8 +1834:[Mon Dec 05 16:10:43 2005] [notice] jk2_init() Found child 6350 in scoreboard slot 7 +1835:[Mon Dec 05 16:10:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1837:[Mon Dec 05 16:10:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1839:[Mon Dec 05 16:16:34 2005] [notice] jk2_init() Found child 6368 in scoreboard slot 8 +1840:[Mon Dec 05 16:16:34 2005] [notice] jk2_init() Found child 6367 in scoreboard slot 7 +1841:[Mon Dec 05 16:16:34 2005] [notice] jk2_init() Found child 6366 in scoreboard slot 6 +1842:[Mon Dec 05 16:16:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1844:[Mon Dec 05 16:16:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1846:[Mon Dec 05 16:16:36 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1848:[Mon Dec 05 16:21:25 2005] [notice] jk2_init() Found child 6387 in scoreboard slot 7 +1849:[Mon Dec 05 16:21:25 2005] [notice] jk2_init() Found child 6386 in scoreboard slot 6 +1850:[Mon Dec 05 16:21:25 2005] [notice] jk2_init() Found child 6385 in scoreboard slot 8 +1851:[Mon Dec 05 16:21:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1853:[Mon Dec 05 16:21:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1855:[Mon Dec 05 16:21:29 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1857:[Mon Dec 05 16:26:00 2005] [notice] jk2_init() Found child 6400 in scoreboard slot 7 +1858:[Mon Dec 05 16:26:00 2005] [notice] jk2_init() Found child 6399 in scoreboard slot 6 +1859:[Mon Dec 05 16:26:00 2005] [notice] jk2_init() Found child 6398 in scoreboard slot 8 +1860:[Mon Dec 05 16:26:05 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1862:[Mon Dec 05 16:26:05 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1864:[Mon Dec 05 16:26:05 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1866:[Mon Dec 05 16:31:48 2005] [notice] jk2_init() Found child 6420 in scoreboard slot 6 +1867:[Mon Dec 05 16:31:49 2005] [notice] jk2_init() Found child 6421 in scoreboard slot 7 +1868:[Mon Dec 05 16:31:49 
2005] [notice] jk2_init() Found child 6422 in scoreboard slot 8 +1869:[Mon Dec 05 16:31:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1870:[Mon Dec 05 16:31:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1871:[Mon Dec 05 16:31:52 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1875:[Mon Dec 05 16:36:06 2005] [notice] jk2_init() Found child 6434 in scoreboard slot 7 +1876:[Mon Dec 05 16:36:06 2005] [notice] jk2_init() Found child 6433 in scoreboard slot 6 +1877:[Mon Dec 05 16:36:42 2005] [notice] jk2_init() Found child 6435 in scoreboard slot 8 +1878:[Mon Dec 05 16:37:03 2005] [notice] jk2_init() Found child 6437 in scoreboard slot 7 +1879:[Mon Dec 05 16:38:17 2005] [notice] jk2_init() Found child 6443 in scoreboard slot 7 +1880:[Mon Dec 05 16:38:17 2005] [notice] jk2_init() Found child 6442 in scoreboard slot 6 +1881:[Mon Dec 05 16:39:59 2005] [notice] jk2_init() Found child 6453 in scoreboard slot 10 +1882:[Mon Dec 05 16:39:59 2005] [notice] jk2_init() Found child 6451 in scoreboard slot 7 +1883:[Mon Dec 05 16:39:59 2005] [notice] jk2_init() Found child 6452 in scoreboard slot 8 +1884:[Mon Dec 05 16:40:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1885:[Mon Dec 05 16:40:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1886:[Mon Dec 05 16:40:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1891:[Mon Dec 05 17:31:37 2005] [notice] jk2_init() Found child 6561 in scoreboard slot 10 +1893:[Mon Dec 05 17:31:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1895:[Mon Dec 05 17:35:57 2005] [notice] jk2_init() Found child 6569 in scoreboard slot 8 +1896:[Mon Dec 05 17:35:57 2005] [notice] jk2_init() Found child 6568 in scoreboard slot 7 +1897:[Mon Dec 05 17:35:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1899:[Mon Dec 05 17:35:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1901:[Mon Dec 05 17:40:38 2005] [notice] jk2_init() Found child 6577 in scoreboard slot 7 +1902:[Mon Dec 05 17:40:38 2005] [notice] jk2_init() Found child 6578 in scoreboard slot 8 +1903:[Mon Dec 05 17:40:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1905:[Mon Dec 05 17:40:39 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1907:[Mon Dec 05 17:46:02 2005] [notice] jk2_init() Found child 6585 in scoreboard slot 7 +1908:[Mon Dec 05 17:46:02 2005] [notice] jk2_init() Found child 6586 in scoreboard slot 8 +1909:[Mon Dec 05 17:46:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1910:[Mon Dec 05 17:46:06 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1913:[Mon Dec 05 17:50:40 2005] [notice] jk2_init() Found child 6595 in scoreboard slot 8 +1914:[Mon Dec 05 17:50:40 2005] [notice] jk2_init() Found child 6594 in scoreboard slot 7 +1915:[Mon Dec 05 17:50:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1917:[Mon Dec 05 17:50:41 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1919:[Mon Dec 05 17:55:35 2005] [notice] jk2_init() Found child 6601 in scoreboard slot 8 +1920:[Mon Dec 05 17:55:35 2005] [notice] jk2_init() Found child 6600 in scoreboard slot 7 +1921:[Mon Dec 05 17:55:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1922:[Mon Dec 05 17:55:35 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties 
+1925:[Mon Dec 05 18:00:24 2005] [notice] jk2_init() Found child 6609 in scoreboard slot 7 +1926:[Mon Dec 05 18:00:26 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1928:[Mon Dec 05 18:10:56 2005] [notice] jk2_init() Found child 6639 in scoreboard slot 7 +1929:[Mon Dec 05 18:10:56 2005] [notice] jk2_init() Found child 6638 in scoreboard slot 8 +1930:[Mon Dec 05 18:10:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1932:[Mon Dec 05 18:10:58 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1934:[Mon Dec 05 18:15:45 2005] [notice] jk2_init() Found child 6652 in scoreboard slot 7 +1935:[Mon Dec 05 18:15:45 2005] [notice] jk2_init() Found child 6651 in scoreboard slot 8 +1936:[Mon Dec 05 18:15:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1937:[Mon Dec 05 18:15:47 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1940:[Mon Dec 05 18:20:51 2005] [notice] jk2_init() Found child 6670 in scoreboard slot 7 +1941:[Mon Dec 05 18:20:51 2005] [notice] jk2_init() Found child 6669 in scoreboard slot 8 +1942:[Mon Dec 05 18:20:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1944:[Mon Dec 05 18:20:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1946:[Mon Dec 05 18:26:06 2005] [notice] jk2_init() Found child 6684 in scoreboard slot 7 +1947:[Mon Dec 05 18:27:29 2005] [notice] jk2_init() Found child 6688 in scoreboard slot 8 +1948:[Mon Dec 05 18:27:33 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1950:[Mon Dec 05 18:27:37 2005] [notice] jk2_init() Found child 6689 in scoreboard slot 7 +1951:[Mon Dec 05 18:27:37 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1953:[Mon Dec 05 18:35:51 2005] [notice] jk2_init() Found child 6707 in scoreboard slot 8 +1954:[Mon Dec 05 18:35:51 2005] [notice] jk2_init() Found child 6708 in scoreboard slot 7 +1955:[Mon Dec 05 18:35:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1957:[Mon Dec 05 18:35:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1959:[Mon Dec 05 18:40:54 2005] [notice] jk2_init() Found child 6719 in scoreboard slot 7 +1960:[Mon Dec 05 18:40:54 2005] [notice] jk2_init() Found child 6718 in scoreboard slot 8 +1961:[Mon Dec 05 18:40:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1963:[Mon Dec 05 18:40:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1965:[Mon Dec 05 18:45:51 2005] [notice] jk2_init() Found child 6725 in scoreboard slot 7 +1966:[Mon Dec 05 18:45:51 2005] [notice] jk2_init() Found child 6724 in scoreboard slot 8 +1967:[Mon Dec 05 18:45:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1969:[Mon Dec 05 18:45:53 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1971:[Mon Dec 05 18:50:30 2005] [notice] jk2_init() Found child 6733 in scoreboard slot 8 +1972:[Mon Dec 05 18:50:31 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1974:[Mon Dec 05 18:56:03 2005] [notice] jk2_init() Found child 6740 in scoreboard slot 7 +1975:[Mon Dec 05 18:56:03 2005] [notice] jk2_init() Found child 6741 in scoreboard slot 8 +1976:[Mon Dec 05 18:56:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1978:[Mon Dec 05 18:56:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1980:[Mon Dec 05 19:00:43 2005] [notice] jk2_init() Found 
child 6750 in scoreboard slot 8 +1981:[Mon Dec 05 19:00:43 2005] [notice] jk2_init() Found child 6749 in scoreboard slot 7 +1982:[Mon Dec 05 19:00:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1983:[Mon Dec 05 19:00:44 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1986:[Mon Dec 05 19:00:54 2005] [notice] jk2_init() Found child 6751 in scoreboard slot 10 +1987:[Mon Dec 05 19:00:54 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1990:[Mon Dec 05 19:11:00 2005] [notice] jk2_init() Found child 6780 in scoreboard slot 7 +1991:[Mon Dec 05 19:11:04 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1993:[Mon Dec 05 19:14:08 2005] [notice] jk2_init() Found child 6784 in scoreboard slot 8 +1995:[Mon Dec 05 19:14:11 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +1997:[Mon Dec 05 19:15:55 2005] [notice] jk2_init() Found child 6791 in scoreboard slot 8 +1998:[Mon Dec 05 19:15:55 2005] [notice] jk2_init() Found child 6790 in scoreboard slot 7 +1999:[Mon Dec 05 19:15:57 2005] [notice] workerEnv.init() ok /etc/httpd/conf/workers2.properties +ubuntu@ip-172-31-27-220:~$ + +***what i learned*** +i learned using grep command +sort and finding uniq using grep +finding specific words in script diff --git a/2026/day-21/shell_scripting_cheatsheet.md b/2026/day-21/shell_scripting_cheatsheet.md new file mode 100644 index 0000000000..ed5c31e79e --- /dev/null +++ b/2026/day-21/shell_scripting_cheatsheet.md @@ -0,0 +1,114 @@ +### Tasks + +### Task 1: Basics +* **Shebang (`#!/bin/bash`)**: The first line that tells the kernel which shell to use to execute the script. +* **Execution**: + * `chmod +x script.sh` - Grant execute permission. + * `./script.sh` - Run the script. + * `bash script.sh` - Run script through bash explicitly. +* **Comments**: Use `#` for single-line or inline notes. +* **Variables**: + * `VAR="Value"` (No spaces around `=`). + * `"$VAR"` (Double quotes: expands variables). + * `'$VAR'` (Single quotes: literal string, no expansion). +* **User Input**: `read -p "Enter username: " USERNAME` +* **Arguments**: + * `$0`: Script name | `$1-$9`: Arguments | `$#`: Number of args | `$@`: All args | `$?`: Exit status of last command. + +--- + +### Task 2: Operators and Conditionals + +### Comparison Operators +* **Strings**: `==` (equal), `!=` (not equal), `-z` (empty), `-n` (not empty). +* **Integers**: `-eq` (==), `-ne` (!=), `-lt` (<), `-gt` (>), `-le` (<=), `-ge` (>=). +* **Files**: `-f` (file exists), `-d` (directory exists), `-x` (executable), `-s` (not empty). + +### Logic & Syntax +```bash +# If/Else Logic +if [[ "$STATUS" == "active" ]] && [[ "$USER" == "root" ]]; then + echo "Access Granted" +elif [[ "$STATUS" == "pending" ]]; then + echo "Wait..." +else + echo "Denied" +fi + +# Case Statements +case "$ACTION" in + start) systemctl start nginx ;; + stop) systemctl stop nginx ;; + *) echo "Usage: start|stop" ;; +esac +``` + +### Task 3: Loops +```bash +# List-based For Loop +for item in upload backup logs; do + mkdir -p "/mnt/$item" +done + +# C-style For Loop +for ((i=1; i<=5; i++)); do + echo "Iteration $i" +done + +# While Loop (Reading a file line by line) +while read -r line; do + echo "Processing: $line" +done < data.txt +``` + +### Task 4: Functions +```bash + +check_status() { + local service=$1 # Use 'local' to prevent global scope issues + systemctl is-active --quiet "$service" + if [ $? 
-eq 0 ]; then + echo "$service is running" + else + return 1 + fi +} + +# Call the function +check_status "docker" +``` +### Task 5: Text Processing Commands +**grep**: grep -ri "error" /var/log (Search recursively, case-insensitive). + +**awk**: awk -F':' '{print $1}' /etc/passwd (Print 1st column using : as delimiter). + +**sed**: sed -i 's/search/replace/g' config.yml (In-place string replacement). + +**cut**: cut -d',' -f2 file.csv (Extract 2nd field of a CSV). + +**sort/uniq**: sort file.txt | uniq -c (Sort lines and count unique occurrences). + +**tr**: cat file | tr 'a-z' 'A-Z' (Convert to uppercase). + +**wc**: wc -l file.txt (Line count). + +**tail -f**: tail -f /var/log/syslog (Live stream log updates). + +### Task 6: Useful Patterns and One-Liners +Delete files older than 30 days: find /logs -mtime +30 -type f -delete + +Check if a port is open: netstat -tuln | grep :8080 + +Count lines in all .log files: cat *.log | wc -l + +Find the largest 5 files: du -ah . | sort -rh | head -n 5 + +Monitor disk usage alert: df -h | awk '$5+0 > 80 {print "Warning: " $1 " is at " $5}' + +set -e # Exit script immediately if a command fails +set -u # Exit if an uninitialized variable is used +set -o pipefail # Catch errors in piped commands +set -x # Print commands for debugging (Trace mode) + +# Cleanup trap +trap "echo 'Cleaning up...'; rm -f /tmp/temp_*" EXIT \ No newline at end of file diff --git a/2026/day-22/day-22-notes.md b/2026/day-22/day-22-notes.md new file mode 100644 index 0000000000..6259083140 --- /dev/null +++ b/2026/day-22/day-22-notes.md @@ -0,0 +1,165 @@ +### Task 1: Install and Configure Git + +ubuntu@ip-172-31-27-220:~$ git --version +git version 2.43.0 +mkdir ubuntu@ip-172-31-27-220:~$ git config --global user.name "akash" +ubuntu@ip-172-31-27-220:~$ git config --global user.email "akashjaura@gmail.com" +ubuntu@ip-172-31-27-220:~$ git config --global --list +user.name=akash +user.email=akashjaura@gmail.com +ubuntu@ip-172-31-27-220:~$ git config --list +user.name=akash +user.email=akashjaura@gmail.com +ubuntu@ip-172-31-27-220:~$ + +### Task 2: Create Your Git Project +1. Create a new folder called `devops-git-practice` +ubuntu@ip-172-31-27-220:~$ cd devops-git-practice/ + +2. Initialize it as a Git repository +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git init +hint: Using 'master' as the name for the initial branch. This default branch name +hint: is subject to change. To configure the initial branch name to use in all +hint: of your new repositories, which will suppress this warning, call: +hint: +hint: git config --global init.defaultBranch +hint: +hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and +hint: 'development'. The just-created branch can be renamed via this command: +hint: +hint: git branch -m +Initialized empty Git repository in /home/ubuntu/devops-git-practice/.git/ + +3. 
Explore the hidden `.git/` directory — look at what's inside +ubuntu@ip-172-31-27-220:~/devops-git-practice$ ls .git +HEAD branches config description hooks info objects refs +ubuntu@ip-172-31-27-220:~/devops-git-practice$ cd .git +ubuntu@ip-172-31-27-220:~/devops-git-practice/.git$ ls -l +total 32 +-rw-rw-r-- 1 ubuntu ubuntu 23 Mar 3 11:09 HEAD +drwxrwxr-x 2 ubuntu ubuntu 4096 Mar 3 11:09 branches +-rw-rw-r-- 1 ubuntu ubuntu 92 Mar 3 11:09 config +-rw-rw-r-- 1 ubuntu ubuntu 73 Mar 3 11:09 description +drwxrwxr-x 2 ubuntu ubuntu 4096 Mar 3 11:09 hooks +drwxrwxr-x 2 ubuntu ubuntu 4096 Mar 3 11:09 info +drwxrwxr-x 4 ubuntu ubuntu 4096 Mar 3 11:09 objects +drwxrwxr-x 4 ubuntu ubuntu 4096 Mar 3 11:09 refs +ubuntu@ip-172-31-27-220:~/devops-git-practice/.git$ + +### Task 4: Stage and Commit +ubuntu@ip-172-31-27-220:~/devops-git-practice$ vim git-commands.md +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git status +On branch master +Untracked files: + (use "git add ..." to include in what will be committed) + git-commands.md + +nothing added to commit but untracked files present (use "git add" to track) +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git add git-commands.md +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git status +On branch master +Changes to be committed: + (use "git restore --staged ..." to unstage) + new file: git-commands.md + +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git commit -m "git commands cheatsheet added" +[master ac91840] git commands cheatsheet added + 1 file changed, 51 insertions(+) + create mode 100644 git-commands.md + + ### Task 5: Make More Changes and Build History +ubuntu@ip-172-31-27-220:~/devops-git-practice$ vim git-commands.md +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git status +On branch master +Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git restore ..." to discard changes in working directory) + modified: git-commands.md + +no changes added to commit (use "git add" and/or "git commit -a") +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git commit -m "git commands log added" +On branch master +Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git restore ..." to discard changes in working directory) + modified: git-commands.md + +no changes added to commit (use "git add" and/or "git commit -a") +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git add git-commands.md +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git commit -m "git commands log added" +[master 5a79e22] git commands log added + 1 file changed, 1 insertion(+) +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git status +On branch master +nothing to commit, working tree clean +ubuntu@ip-172-31-27-220:~/devops-git-practice$ vim git-commands.md +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git status +On branch master +Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git restore ..." to discard changes in working directory) + modified: git-commands.md + +no changes added to commit (use "git add" and/or "git commit -a") +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git add git-commands.md +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git status +On branch master +Changes to be committed: + (use "git restore --staged ..." 
to unstage) + modified: git-commands.md + +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git commit -m "git commands log description added" +[master 8151662] git commands log description added + 1 file changed, 2 insertions(+) + +`full history in a compact format` +ubuntu@ip-172-31-27-220:~/devops-git-practice$ git log +commit 81516623a0d1019de1074454c349b7475de09eed (HEAD -> master) +Author: akash +Date: Tue Mar 3 11:22:05 2026 +0000 + + git commands log description added + +commit 5a79e227f5b01b7d37927dbdf904c0c4292ff221 +Author: akash +Date: Tue Mar 3 11:21:04 2026 +0000 + + git commands log added + +commit ac918408da268fb0988128bde7e7c093274da0ac +Author: akash +Date: Tue Mar 3 11:19:40 2026 +0000 + + git commands cheatsheet added + +commit b43f1b62d1ac85199fd9b77006e9255fbc6f4873 +Author: akash +Date: Tue Mar 3 11:11:43 2026 +0000 + + file added +ubuntu@ip-172-31-27-220:~/devops-git-practice$ + + +### Task 6: Understand the Git Workflow +Answer these questions in your own words (add them to a `day-22-notes.md` file): +1. What is the difference between `git add` and `git commit`? +git add: Used when a new file is created or updated to tell Git to track those changes. + +git commit: Used to save the staged changes permanently. Once committed, the file's current state is saved in history and cannot be lost. + +2. What does the **staging area** do? Why doesn't Git just commit directly? +The staging area lets us use git status to see exactly which files are modified before saving them. Git doesn't commit directly because we might have unwanted files that shouldn't be included. Staging ensures only the correct changes impact the system. + +3. What information does `git log` show you? +git log shows the previous history of all commits. It includes the unique commit ID, the author's name, the email of who committed it, and the date/message. + +4. What is the `.git/` folder and what happens if you delete it? +The .git/ folder is created when you initialize a project. It contains all the "brain" info like HEAD and branches. If you delete this folder, all version history is removed, and the folder becomes a normal directory again. + +5. What is the difference between a **working directory**, **staging area**, and **repository**? +Working Directory: The actual folder where you are currently making changes to files. + +Staging Area: The middle zone where you prepare and check your changes using git add. + +Repository: The final storage (the .git folder) where all your confirmed history and commits live. +and \ No newline at end of file diff --git a/2026/day-22/git-logs.png b/2026/day-22/git-logs.png new file mode 100644 index 0000000000..df2bde6a08 Binary files /dev/null and b/2026/day-22/git-logs.png differ diff --git a/2026/day-25/day-25-notes.md b/2026/day-25/day-25-notes.md new file mode 100644 index 0000000000..952f758908 --- /dev/null +++ b/2026/day-25/day-25-notes.md @@ -0,0 +1,85 @@ +## Challenge Tasks +- What is the difference between `--soft`, `--mixed`, and `--hard`? + - Which one is destructive and why? + + --hard is destructive because it overwrites the working Directory. If uncommitted code changes in files, git reset --hard will wipe them out instantly. + + - When would you use each one? + + --soft -> when we realized that we forgot to add a file to the last commit or want to join two commits together into one. + --mixed -> when we want to change the commit message and our code is safe, but have to git add it again. 
+ --hard -> when we want to discard the commit entirely and also throw away the changes it made to the files in the working directory.
+
+ - Should you ever use `git reset` on commits that are already pushed?
+
+ No. If a reset is combined with a force push, it deletes history that colleagues might have already pulled. This creates "diverged histories" and can lead to hours of manual cleanup.
+
+### Task 2: Git Revert — Hands-On
+
+ - How is `git revert` different from `git reset`?
+
+ git reset is a history-rewriting tool. It physically moves the HEAD pointer backward as if the commit never happened, effectively deleting or moving commits from the history log.
+
+ git revert is a history-appending tool. It doesn't move the pointer backward; instead, it moves the pointer forward by adding a new commit that performs the inverse operation of an older commit.
+ - Why is revert considered **safer** than reset for shared branches?
+
+ git revert is safer because it does not alter existing history.
+
+ If we have already pushed our code to a shared repository (like GitHub/GitLab), git reset will break the history for everyone else on our team (because their local copy won't match the new, "shorter" history). git revert is perfectly safe to push, as it simply adds a new change that everyone else can pull without conflicts.
+ - When would you use revert vs reset?
+
+ Use git reset when:
+
+ - You are working only on your local machine and haven't pushed your work to a remote server.
+
+ - You made a mistake in your very recent local commits and want to "clean up" your personal workspace before sharing it.
+
+ Use git revert when:
+
+ - You have already pushed your code to a remote repository.
+
+- You want to undo a specific change but need to maintain a clear audit trail of who reverted what and when.
+
+- You are working on a team and need to ensure your "undo" action doesn't disrupt the history of your teammates.
+
+### Task 3: Git Reset vs. Revert Comparison
+
+| Feature | `git reset` | `git revert` |
+| :--- | :--- | :--- |
+| **What it does** | Moves the branch pointer backward to a previous commit. | Creates a **new** commit that performs the inverse (opposite) of an existing commit. |
+| **Removes commit from history?** | **Yes.** It "erases" the commits from the history log. | **No.** It keeps the original commits and adds a new one to the log. |
+| **Safe for shared/pushed branches?** | **No.** It rewrites history, which causes conflicts for team members. | **Yes.** Since it adds a new commit, it is safe to push. |
+| **When to use** | Use for **private, local changes** that haven't been shared/pushed yet. | Use for **public, pushed changes** where we need to undo something safely. |
+
+---
+
+### Task 4: Branching Strategies
+# Branching Strategies: Documentation
+
+## 1. Trunk-Based Development
+* **How it works:** Developers merge small, frequent updates directly into a single "trunk" (or `main`) branch, often multiple times a day. Long-lived feature branches are avoided.
+* **Flow:**
+ `[Main/Trunk] <-- (commit/merge) <-- [Developer A]`
+ `[Main/Trunk] <-- (commit/merge) <-- [Developer B]`
+* **Used:** High-performing DevOps teams practicing CI/CD.
+* **Pros:** Minimal merge conflicts; high visibility; forces automated testing; enables very fast release cycles.
+* **Cons:** Requires high developer discipline; relies heavily on automated test suites; can be daunting for beginners.
+
+## 2. GitHub Flow
+* **How it works:** A simple, branch-based workflow.
Create a descriptive branch from `main`, push it, open a Pull Request (PR) for review, and merge it back into `main` once approved. +* **Flow:** + `Main <--- (create branch) --- Feature Branch --- (PR & Merge) ---> Main` +* **Used:** Web applications and SaaS products with only one version in production. +* **Pros:** Simple to learn; keeps `main` deployable; encourages code reviews through PRs. +* **Cons:** Struggles with multiple production versions; if `main` breaks, it blocks deployments. + +## 3. GitFlow +* **How it works:** A strict, structured strategy using multiple long-lived branches: `main` (production), `develop` (integration), `feature/`, `release/`, and `hotfix/`. +* **Flow:** + `Main <--- (Release) <--- Develop <--- Feature Branch` +* **Used:** Complex projects needing multi-version support or strict release gates. +* **Pros:** Highly organized; excellent for complex release schedules; clear separation of concerns. +* **Cons:** High complexity; slower release cycles; potential for "merge hell"; less suitable for CI/CD. + +--- + diff --git a/2026/day-42/README.md b/2026/day-42/README.md new file mode 100644 index 0000000000..fe2cf8db23 --- /dev/null +++ b/2026/day-42/README.md @@ -0,0 +1,120 @@ +# Day 42 – Runners: GitHub-Hosted & Self-Hosted + +## Task +Every job needs a machine to run on. Today you understand **runners** — GitHub's hosted ones and how to set up your own self-hosted runner on a real server. + +--- + +## Expected Output +- A self-hosted runner registered to your GitHub repo +- A workflow that runs a job on your self-hosted runner +- A markdown file: `day-42-runners.md` + +--- + +## Challenge Tasks + +### Task 1: GitHub-Hosted Runners +1. Create a workflow with 3 jobs, each on a different OS: + - `ubuntu-latest` + - `windows-latest` + - `macos-latest` +2. In each job, print: + - The OS name + - The runner's hostname + - The current user running the job +3. Watch all 3 run in parallel + +Write in your notes: What is a GitHub-hosted runner? Who manages it? + +--- + +### Task 2: Explore What's Pre-installed +1. On the `ubuntu-latest` runner, run a step that prints: + - Docker version + - Python version + - Node version + - Git version +2. Look up the GitHub docs for the full list of pre-installed software on `ubuntu-latest` + +Write in your notes: Why does it matter that runners come with tools pre-installed? + +--- + +### Task 3: Set Up a Self-Hosted Runner +1. Go to your GitHub repo → Settings → Actions → Runners → **New self-hosted runner** +2. Choose Linux as the OS +3. Follow the instructions to download and configure the runner on: + - Your local machine, OR + - A cloud VM (EC2, Utho, or any VPS) +4. Start the runner — verify it shows as **Idle** in GitHub + +**Verify:** Your runner appears in the Runners list with a green dot. + +--- + +### Task 4: Use Your Self-Hosted Runner +1. Create `.github/workflows/self-hosted.yml` +2. Set `runs-on: self-hosted` +3. Add steps that: + - Print the hostname of the machine (it should be YOUR machine/VM) + - Print the working directory + - Create a file and verify it exists on your machine after the run +4. Trigger it and watch it run on your own hardware + +**Verify:** Check your machine — is the file there? + +--- + +### Task 5: Labels +1. Add a **label** to your self-hosted runner (e.g., `my-linux-runner`) +2. Update your workflow to use `runs-on: [self-hosted, my-linux-runner]` +3. Trigger it — does it still pick up the job? + +Write in your notes: Why are labels useful when you have multiple self-hosted runners? 
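As a starting point for your notes, here is a hedged sketch of how labels route jobs once more than one self-hosted runner exists. The `my-linux-runner` label comes from this task; the second runner and its `gpu` label are hypothetical.

```yaml
name: label-routing-demo
on: workflow_dispatch   # trigger manually while experimenting

jobs:
  general-build:
    # Only a runner carrying ALL of these labels will pick this job up
    runs-on: [self-hosted, linux, my-linux-runner]
    steps:
      - run: echo "Running on the general-purpose Linux runner"

  model-training:
    # Routed to a different machine purely by its (hypothetical) label
    runs-on: [self-hosted, gpu]
    steps:
      - run: echo "Running on the GPU-labelled runner"
```

Labels are what let one workflow fan out across a fleet of machines without hardcoding hostnames anywhere.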
+ +--- + +### Task 6: GitHub-Hosted vs Self-Hosted +Fill this in your notes: + +| | GitHub-Hosted | Self-Hosted | +|---|---|---| +| Who manages it? | ? | ? | +| Cost | ? | ? | +| Pre-installed tools | ? | ? | +| Good for | ? | ? | +| Security concern | ? | ? | + +--- + +## Hints +- Runner setup script is generated by GitHub — just copy and run it +- Self-hosted runner runs as a background service: `./run.sh` +- To run as a service (persistent): `sudo ./svc.sh install && sudo ./svc.sh start` +- `runs-on: self-hosted` targets any self-hosted runner +- `runs-on: [self-hosted, linux, my-label]` targets specific ones + +--- + +## Documentation +Create `day-42-runners.md` with: +- Screenshot of your self-hosted runner showing as Idle in GitHub +- Screenshot of a job running on your self-hosted runner +- The comparison table from Task 6 + +--- + +## Submission +1. Add `day-42-runners.md` to `2026/day-42/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your self-hosted runner screenshot on LinkedIn — running CI on your own machine is a cool flex. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-43/README.md b/2026/day-43/README.md new file mode 100644 index 0000000000..eaff40d08b --- /dev/null +++ b/2026/day-43/README.md @@ -0,0 +1,95 @@ +# Day 43 – Jobs, Steps, Env Vars & Conditionals + +## Task +Today you learn how to **control the flow** of your pipeline — multi-job workflows, passing data between jobs, environment variables, and running steps only when certain conditions are met. + +--- + +## Expected Output +- New workflow files in your `github-actions-practice` repo +- A markdown file: `day-43-jobs-steps.md` + +--- + +## Challenge Tasks + +### Task 1: Multi-Job Workflow +Create `.github/workflows/multi-job.yml` with 3 jobs: +- `build` — prints "Building the app" +- `test` — prints "Running tests" +- `deploy` — prints "Deploying" + +Make `test` run only **after** `build` succeeds. +Make `deploy` run only **after** `test` succeeds. + +**Verify:** Check the workflow graph in the Actions tab — does it show the dependency chain? + +--- + +### Task 2: Environment Variables +In a new workflow, use environment variables at 3 levels: +1. **Workflow level** — `APP_NAME: myapp` +2. **Job level** — `ENVIRONMENT: staging` +3. **Step level** — `VERSION: 1.0.0` + +Print all three in a single step and verify each is accessible. + +Then use a **GitHub context variable** — print the commit SHA and the actor (who triggered the run). + +--- + +### Task 3: Job Outputs +1. Create a job that **sets an output** — e.g., today's date as a string +2. Create a second job that **reads that output** and prints it +3. Pass the value using `outputs:` and `needs..outputs.` + +Write in your notes: Why would you pass outputs between jobs? + +--- + +### Task 4: Conditionals +In a workflow, add: +1. A step that only runs when the branch is `main` +2. A step that only runs when the previous step **failed** +3. A job that only runs on **push** events, not on pull requests +4. A step with `continue-on-error: true` — what does this do? + +--- + +### Task 5: Putting It Together +Create `.github/workflows/smart-pipeline.yml` that: +1. Triggers on push to any branch +2. Has a `lint` job and a `test` job running in parallel +3. 
Has a `summary` job that runs after both, prints whether it's a `main` branch push or a feature branch push, and prints the commit message + +--- + +## Hints +- Job dependency: `needs: [job-name]` +- Set output: `echo "date=$(date)" >> $GITHUB_OUTPUT` +- Read output: `${{ needs.job-name.outputs.date }}` +- Conditionals: `if: github.ref == 'refs/heads/main'` +- Commit message: `${{ github.event.commits[0].message }}` + +--- + +## Documentation +Create `day-43-jobs-steps.md` with: +- Key workflow snippets +- What `needs:` and `outputs:` do in your own words + +--- + +## Submission +1. Add `day-43-jobs-steps.md` to `2026/day-43/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share the dependency chain diagram from your multi-job workflow on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-44/README.md b/2026/day-44/README.md new file mode 100644 index 0000000000..980e396626 --- /dev/null +++ b/2026/day-44/README.md @@ -0,0 +1,100 @@ +# Day 44 – Secrets, Artifacts & Running Real Tests in CI + +## Task +Today your pipeline starts doing **real work** — storing sensitive values securely, saving build outputs, and running actual tests from your previous days. + +--- + +## Expected Output +- New workflow files in your `github-actions-practice` repo +- A markdown file: `day-44-secrets-artifacts.md` +- A passing test run in CI + +--- + +## Challenge Tasks + +### Task 1: GitHub Secrets +1. Go to your repo → Settings → Secrets and Variables → Actions +2. Create a secret called `MY_SECRET_MESSAGE` +3. Create a workflow that reads it and prints: `The secret is set: true` (never print the actual value) +4. Try to print `${{ secrets.MY_SECRET_MESSAGE }}` directly — what does GitHub show? + +Write in your notes: Why should you never print secrets in CI logs? + +--- + +### Task 2: Use Secrets as Environment Variables +1. Pass a secret to a step as an environment variable +2. Use it in a shell command without ever hardcoding it +3. Add `DOCKER_USERNAME` and `DOCKER_TOKEN` as secrets (you'll need these on Day 45) + +--- + +### Task 3: Upload Artifacts +1. Create a step that generates a file — e.g., a test report or a log file +2. Use `actions/upload-artifact` to save it +3. After the workflow runs, download the artifact from the Actions tab + +**Verify:** Can you see and download it from GitHub? + +--- + +### Task 4: Download Artifacts Between Jobs +1. Job 1: generate a file and upload it as an artifact +2. Job 2: download the artifact from Job 1 and use it (print its contents) + +Write in your notes: When would you use artifacts in a real pipeline? + +--- + +### Task 5: Run Real Tests in CI +Take any script from your earlier days (Python or Shell) and run it in CI: +1. Add your script to the `github-actions-practice` repo +2. Write a workflow that: + - Checks out the code + - Installs any dependencies needed + - Runs the script + - Fails the pipeline if the script exits with a non-zero code +3. Intentionally break the script — verify the pipeline goes red +4. Fix it — verify it goes green again + +--- + +### Task 6: Caching +1. Add `actions/cache` to a workflow that installs dependencies +2. Run it twice — observe the time difference +3. Write in your notes: What is being cached and where is it stored? 
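One possible shape for the caching step in Task 6, assuming a Python project with a `requirements.txt` (swap the path and key for npm or any other package manager):

```yaml
# Minimal caching sketch — the path and key below are illustrative
- name: Cache pip downloads
  uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-

- name: Install dependencies
  run: pip install -r requirements.txt
```

The first run populates the cache; the second run restores it, which is where the time difference you measure in Task 6 comes from.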
+ +--- + +## Hints +- Secrets: `${{ secrets.SECRET_NAME }}` +- Upload artifact: `uses: actions/upload-artifact@v4` +- Download artifact: `uses: actions/download-artifact@v4` +- Cache: `uses: actions/cache@v4` +- GitHub masks secret values in logs automatically + +--- + +## Documentation +Create `day-44-secrets-artifacts.md` with: +- Screenshots of artifact download +- Screenshot of your passing test run +- What you learned about secrets management + +--- + +## Submission +1. Add `day-44-secrets-artifacts.md` to `2026/day-44/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your first real test run passing in CI on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-45/.github/workflows/docker-publish.yml b/2026/day-45/.github/workflows/docker-publish.yml new file mode 100644 index 0000000000..e880cbdb30 --- /dev/null +++ b/2026/day-45/.github/workflows/docker-publish.yml @@ -0,0 +1,26 @@ +name: dockerizing application +on: + push: + branches: + - main +jobs: + docker: + runs-on: ubuntu-latest + steps: + - name: GITHUB CHECKOUT + uses: actions/checkout@v4 + + - name: Login to docker hub + uses: docker/login-action@v3 + with: + username: ${{ vars.DOCKERHUB_USERNAME }} + password: ${{ secrets.DOCKER_TOKEN }} + + - name: Set up docker buildx + uses: docker/setup-buildx-action@v3 + + - name: Build and push + uses: docker/build-push-action@v6 + with: + push: true + tags: akashjaura16/your-app-name:latest \ No newline at end of file diff --git a/2026/day-45/README.md b/2026/day-45/README.md new file mode 100644 index 0000000000..11de33cd4b --- /dev/null +++ b/2026/day-45/README.md @@ -0,0 +1,99 @@ +# Day 45 – Docker Build & Push in GitHub Actions + +## Task +Today you build a **complete CI/CD pipeline** — code pushed to GitHub automatically builds a Docker image and ships it to Docker Hub. No manual steps. + +This is exactly what happens in real production pipelines. + +--- + +## Expected Output +- A complete workflow: `.github/workflows/docker-publish.yml` +- Your Docker image live on Docker Hub +- A status badge in your repo README +- A markdown file: `day-45-docker-cicd.md` + +--- + +## Challenge Tasks + +### Task 1: Prepare +1. Use the app you Dockerized on Day 36 (or any simple Dockerfile) +2. Add the Dockerfile to your `github-actions-practice` repo (or create a minimal one) +3. Make sure `DOCKER_USERNAME` and `DOCKER_TOKEN` secrets are set from Day 44 + +--- + +### Task 2: Build the Docker Image in CI +Create `.github/workflows/docker-publish.yml` that: +1. Triggers on push to `main` +2. Checks out the code +3. Builds the Docker image and tags it + +**Verify:** Check the build step logs — does the image build successfully? + +--- + +### Task 3: Push to Docker Hub +Add steps to: +1. Log in to Docker Hub using your secrets +2. Tag the image as `username/repo:latest` and also `username/repo:sha-` +3. Push both tags + +**Verify:** Go to Docker Hub — is your image there with both tags? + +--- + +### Task 4: Only Push on Main +Add a condition so the push step only runs on the `main` branch — not on feature branches or PRs. + +Test it: push to a feature branch and verify the image is built but NOT pushed. + +--- + +### Task 5: Add a Status Badge +1. Get the badge URL for your `docker-publish` workflow from the Actions tab +2. Add it to your `README.md` +3. Push — the badge should show green + +--- + +### Task 6: Pull and Run It +1. 
On your local machine (or a cloud server), pull the image you just pushed +2. Run it +3. Confirm it works + +Write in your notes: What is the full journey from `git push` to a running container? + +--- + +## Hints +- Docker login: `uses: docker/login-action@v3` +- Build and push: `uses: docker/build-push-action@v5` +- Short SHA: `${{ github.sha }}` (use `cut` or `slice` to get first 7 chars) +- Badge URL format: `https://github.com///actions/workflows/.yml/badge.svg` + +--- + +## Documentation +Create `day-45-docker-cicd.md` with: +- Your complete workflow YAML +- Docker Hub link to your image +- Screenshot of the pipeline run +- The full journey described in Task 6 + +--- + +## Submission +1. Add `day-45-docker-cicd.md` to `2026/day-45/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your Docker Hub image link and the green badge on LinkedIn. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-45/Task-Manager b/2026/day-45/Task-Manager new file mode 160000 index 0000000000..765dc36598 --- /dev/null +++ b/2026/day-45/Task-Manager @@ -0,0 +1 @@ +Subproject commit 765dc36598031da1f5f5ba42205acc90d7de2a0d diff --git a/2026/day-45/day-45-docker-cicd.md b/2026/day-45/day-45-docker-cicd.md new file mode 100644 index 0000000000..67374c4509 --- /dev/null +++ b/2026/day-45/day-45-docker-cicd.md @@ -0,0 +1,141 @@ +# Day 45 — Dockerizing App with GitHub Actions + + +> Built a CI/CD pipeline that auto-builds the Flask Task Manager Docker image and pushes it to Docker Hub on every push to `main`. + +--- + +## What I Built + +A GitHub Actions workflow connecting Day 36 (Dockerizing Flask app) with real CI/CD automation. Every `git push` to `main` now automatically builds and ships the image to Docker Hub — no manual steps. + +--- + +## Workflow File — `.github/workflows/docker_publish.yml` + +```yaml +name: dockerizing application + +on: + push: + branches: + - main + +jobs: + docker: + runs-on: ubuntu-latest + steps: + - name: GITHUB CHECKOUT + uses: actions/checkout@v4 + + - name: Login to Docker Hub + uses: docker/login-action@v3 + with: + username: ${{ vars.DOCKERHUB_USERNAME }} + password: ${{ secrets.DOCKER_TOKEN }} + + - name: Set up Docker Buildx + uses: docker/setup-buildx-action@v3 + + - name: Build and push + uses: docker/build-push-action@v6 + with: + context: 2026/day-45/Task-Manager + push: true + tags: akashjaura16/task-manager:latest +``` + +--- + +## Secrets & Variables Setup + +| Name | Type | Value | +|------|------|-------| +| `DOCKERHUB_USERNAME` | Variable | `akashjaura16` | +| `DOCKER_TOKEN` | Secret | Docker Hub Personal Access Token (Read & Write) | + +> Generate token at: hub.docker.com → Account Settings → Personal Access Tokens → Read & Write + +--- + +## Docker Hub Image + +```bash +docker pull akashjaura16/task-manager:latest +docker run -p 5000:5000 akashjaura16/task-manager:latest +``` + +→ https://hub.docker.com/r/akashjaura16/task-manager + +--- + +## Bugs I Hit & Fixed + +### Bug 1 — Wrong action names (extra "s" typo) + +**Error:** Workflow failed immediately — actions not found + +| Wrong (what I used) | Correct (fix) | +|---------------------|---------------| +| `docker/login-actions@v4` | `docker/login-action@v3` | +| `docker/setup-buildx-actions@v4` | `docker/setup-buildx-action@v3` | +| `docker/build-push-actions@v7` | `docker/build-push-action@v6` | + +**Lesson:** The official Docker actions have no "s" — `login-action`, not `login-actions`. 
+ +--- + +### Bug 2 — Docker Hub token with read-only permissions + +**Error:** +``` +Error response from daemon: unauthorized: incorrect username or password +``` + +The username was correct (`akashjaura16`) but the token was set to **Read-only** — Docker Hub rejected the push. + +**Fix:** +1. hub.docker.com → Account Settings → Personal Access Tokens +2. Delete old token +3. Generate new token → set **Read & Write** +4. Update `DOCKER_TOKEN` secret in GitHub repo settings +5. Trigger re-run: +```powershell +git commit --allow-empty -m "fix: docker token read write" +git push +``` + +--- + +## Key Concepts Learned + +- GitHub Actions can build and push Docker images automatically on every commit +- Docker Hub Personal Access Tokens must have **Read & Write** permissions for push to work +- Action names are case-sensitive and version-specific — always double check the exact name +- `vars.*` is used for non-sensitive config (username), `secrets.*` for sensitive data (token) +- `context:` in `build-push-action` points to the folder containing your Dockerfile + +--- + +## Tasks Completed + +- [x] GitHub Actions workflow triggers on push to `main` +- [x] Workflow checks out repo code +- [x] Logs into Docker Hub using secrets +- [x] Builds Docker image from Flask Task Manager app +- [x] Pushes image as `akashjaura16/task-manager:latest` +- [x] Debugged and fixed wrong action names +- [x] Debugged and fixed read-only token issue +- [x] Image live and pullable on Docker Hub + +--- + +## Resources + +- [docker/login-action](https://github.com/docker/login-action) +- [docker/build-push-action](https://github.com/docker/build-push-action) +- [Docker Hub Access Tokens](https://docs.docker.com/security/for-developers/access-tokens/) + +--- + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` `#GitHubActions` `#Docker` diff --git a/2026/day-46/README.md b/2026/day-46/README.md new file mode 100644 index 0000000000..27f643e1df --- /dev/null +++ b/2026/day-46/README.md @@ -0,0 +1,132 @@ +# Day 46 – Reusable Workflows & Composite Actions + +## Task +You've been writing workflows from scratch every time. In the real world, teams **don't repeat themselves** — they create reusable workflows that any repo can call like a function. Today you learn `workflow_call` and composite actions. + +--- + +## Expected Output +- A reusable workflow and a caller workflow in your `github-actions-practice` repo +- A custom composite action +- A markdown file: `day-46-reusable-workflows.md` + +--- + +## Challenge Tasks + +### Task 1: Understand `workflow_call` +Before writing any code, research and answer in your notes: +1. What is a **reusable workflow**? +2. What is the `workflow_call` trigger? +3. How is calling a reusable workflow different from using a regular action (`uses:`)? +4. Where must a reusable workflow file live? + +--- + +### Task 2: Create Your First Reusable Workflow +Create `.github/workflows/reusable-build.yml`: +1. Set the trigger to `workflow_call` +2. Add an `inputs:` section with: + - `app_name` (string, required) + - `environment` (string, required, default: `staging`) +3. Add a `secrets:` section with: + - `docker_token` (required) +4. Create a job that: + - Checks out the code + - Prints `Building for ` + - Prints `Docker token is set: true` (never print the actual secret) + +**Verify:** This file alone won't run — it needs a caller. That's next. + +--- + +### Task 3: Create a Caller Workflow +Create `.github/workflows/call-build.yml`: +1. Trigger on push to `main` +2. 
Add a job that uses your reusable workflow: + ```yaml + jobs: + build: + uses: ./.github/workflows/reusable-build.yml + with: + app_name: "my-web-app" + environment: "production" + secrets: + docker_token: ${{ secrets.DOCKER_TOKEN }} + ``` +3. Push to `main` and watch it run + +**Verify:** In the Actions tab, do you see the caller triggering the reusable workflow? Click into the job — can you see the inputs printed? + +--- + +### Task 4: Add Outputs to the Reusable Workflow +Extend `reusable-build.yml`: +1. Add an `outputs:` section that exposes a `build_version` value +2. Inside the job, generate a version string (e.g., `v1.0-`) and set it as output +3. In your caller workflow, add a second job that: + - Depends on the build job (`needs:`) + - Reads and prints the `build_version` output + +**Verify:** Does the second job print the version from the reusable workflow? + +--- + +### Task 5: Create a Composite Action +Create a **custom composite action** in your repo at `.github/actions/setup-and-greet/action.yml`: +1. Define inputs: `name` and `language` (default: `en`) +2. Add steps that: + - Print a greeting in the specified language + - Print the current date and runner OS + - Set an output called `greeted` with value `true` +3. Use the composite action in a new workflow with `uses: ./.github/actions/setup-and-greet` + +**Verify:** Does your custom action run and print the greeting? + +--- + +### Task 6: Reusable Workflow vs Composite Action +Fill this in your notes: + +| | Reusable Workflow | Composite Action | +|---|---|---| +| Triggered by | `workflow_call` | `uses:` in a step | +| Can contain jobs? | ? | ? | +| Can contain multiple steps? | ? | ? | +| Lives where? | ? | ? | +| Can accept secrets directly? | ? | ? | +| Best for | ? | ? | + +--- + +## Hints +- Reusable workflows must be in `.github/workflows/` directory +- Caller syntax: `uses: ./.github/workflows/file.yml` (same repo) or `uses: org/repo/.github/workflows/file.yml@main` (cross-repo) +- Composite action: `action.yml` with `runs: using: "composite"` +- Reusable workflow outputs: `on: workflow_call: outputs: name: value: ${{ jobs.job-id.outputs.name }}` +- A reusable workflow can be called by at most 20 unique caller workflows in a single run + +--- + +## Documentation +Create `day-46-reusable-workflows.md` with: +- Your reusable workflow and caller workflow YAML +- Your composite action YAML +- The comparison table from Task 6 +- Screenshot of the caller workflow triggering the reusable one + +--- + +## Submission +1. Add `day-46-reusable-workflows.md` to `2026/day-46/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share how you built your first reusable workflow on LinkedIn — this is a real production skill. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-47/README.md b/2026/day-47/README.md new file mode 100644 index 0000000000..6279290536 --- /dev/null +++ b/2026/day-47/README.md @@ -0,0 +1,151 @@ +# Day 47 – Advanced Triggers: PR Events, Cron Schedules & Event-Driven Pipelines + +## Task +You've used `push` and basic `pull_request` triggers. But GitHub Actions supports **dozens of event types** — today you go deep into PR lifecycle events, scheduled cron jobs, and chaining workflows together. 
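For orientation, the sketch below previews the trigger styles today's tasks use; every value in it is only an example that you will replace as you work through the tasks.

```yaml
on:
  pull_request:
    types: [opened, synchronize, reopened, closed]   # PR lifecycle events (Task 1)
  schedule:
    - cron: '30 2 * * 1'                             # every Monday at 02:30 UTC (Task 3)
  workflow_run:
    workflows: ["Run Tests"]                         # chain after another workflow (Task 5)
    types: [completed]
  workflow_dispatch:                                 # manual trigger for testing
```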
+ +--- + +## Expected Output +- Multiple workflow files demonstrating advanced triggers +- A markdown file: `day-47-advanced-triggers.md` +- At least one scheduled workflow running on your repo + +--- + +## Challenge Tasks + +### Task 1: Pull Request Event Types +Create `.github/workflows/pr-lifecycle.yml` that triggers on `pull_request` with **specific activity types**: +1. Trigger on: `opened`, `synchronize`, `reopened`, `closed` +2. Add steps that: + - Print which event type fired: `${{ github.event.action }}` + - Print the PR title: `${{ github.event.pull_request.title }}` + - Print the PR author: `${{ github.event.pull_request.user.login }}` + - Print the source branch and target branch +3. Add a conditional step that only runs when the PR is **merged** (closed + merged = true) + +Test it: create a PR, push an update to it, then merge it. Watch the workflow fire each time with a different event type. + +--- + +### Task 2: PR Validation Workflow +Create `.github/workflows/pr-checks.yml` — a real-world PR gate: +1. Trigger on `pull_request` to `main` +2. Add a job `file-size-check` that: + - Checks out the code + - Fails if any file in the PR is larger than 1 MB +3. Add a job `branch-name-check` that: + - Reads the branch name from `${{ github.head_ref }}` + - Fails if it doesn't follow the pattern `feature/*`, `fix/*`, or `docs/*` +4. Add a job `pr-body-check` that: + - Reads the PR body: `${{ github.event.pull_request.body }}` + - Warns (but doesn't fail) if the PR description is empty + +**Verify:** Open a PR from a badly named branch — does the check fail? + +--- + +### Task 3: Scheduled Workflows (Cron Deep Dive) +Create `.github/workflows/scheduled-tasks.yml`: +1. Add a `schedule` trigger with cron: `'30 2 * * 1'` (every Monday at 2:30 AM UTC) +2. Add **another** cron entry: `'0 */6 * * *'` (every 6 hours) +3. In the job, print which schedule triggered using `${{ github.event.schedule }}` +4. Add a step that acts as a **health check** — curl a URL and check the response code + +Write in your notes: +- The cron expression for: every weekday at 9 AM IST +- The cron expression for: first day of every month at midnight +- Why GitHub says scheduled workflows may be delayed or skipped on inactive repos + +**Important:** Also add `workflow_dispatch` so you can test it manually without waiting for the schedule. + +--- + +### Task 4: Path & Branch Filters +Create `.github/workflows/smart-triggers.yml`: +1. Trigger on push but **only** when files in `src/` or `app/` change: + ```yaml + on: + push: + paths: + - 'src/**' + - 'app/**' + ``` +2. Add `paths-ignore` in a second workflow that skips runs when only docs change: + ```yaml + paths-ignore: + - '*.md' + - 'docs/**' + ``` +3. Add branch filters to only trigger on `main` and `release/*` branches +4. Test it: push a change to a `.md` file — does the workflow skip? + +Write in your notes: When would you use `paths` vs `paths-ignore`? + +--- + +### Task 5: `workflow_run` — Chain Workflows Together +Create two workflows: +1. `.github/workflows/tests.yml` — runs tests on every push +2. `.github/workflows/deploy-after-tests.yml` — triggers **only after** `tests.yml` completes successfully: + ```yaml + on: + workflow_run: + workflows: ["Run Tests"] + types: [completed] + ``` +3. 
In the deploy workflow, add a conditional: + - Only proceed if the triggering workflow **succeeded** (`${{ github.event.workflow_run.conclusion == 'success' }}`) + - Print a warning and exit if it failed + +**Verify:** Push a commit — does the test workflow run first, then trigger the deploy workflow? + +--- + +### Task 6: `repository_dispatch` — External Event Triggers +1. Create `.github/workflows/external-trigger.yml` with trigger `repository_dispatch` +2. Set it to respond to event type: `deploy-request` +3. Print the client payload: `${{ github.event.client_payload.environment }}` +4. Trigger it using `curl` or `gh`: + ```bash + gh api repos///dispatches \ + -f event_type=deploy-request \ + -f client_payload='{"environment":"production"}' + ``` + +Write in your notes: When would an external system (like a Slack bot or monitoring tool) trigger a pipeline? + +--- + +## Hints +- PR merge check: `if: github.event.pull_request.merged == true` +- Cron syntax: `minute hour day-of-month month day-of-week` +- Scheduled workflows only run on the **default branch** +- `workflow_run` gives you access to the triggering workflow's conclusion and artifacts +- `repository_dispatch` requires a personal access token with `repo` scope +- Path filters use glob patterns — `**` matches nested directories + +--- + +## Documentation +Create `day-47-advanced-triggers.md` with: +- Your workflow YAML files +- The cron expressions from Task 3 +- Screenshot of the PR checks running on a pull request +- Explanation of `workflow_run` vs `workflow_call` in your own words + +--- + +## Submission +1. Add `day-47-advanced-triggers.md` to `2026/day-47/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your PR validation workflow on LinkedIn — automated PR gates are a real DevOps flex. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-48/README.md b/2026/day-48/README.md new file mode 100644 index 0000000000..d3eae5a38b --- /dev/null +++ b/2026/day-48/README.md @@ -0,0 +1,161 @@ +# Day 48 – GitHub Actions Project: End-to-End CI/CD Pipeline + +## Task +You've learned workflows, triggers, secrets, Docker builds, reusable workflows, and advanced events. Today you **put it all together** in one project — a complete, production-style CI/CD pipeline that builds, tests, and deploys using everything you've learned from Day 40 to Day 47. + +This is your GitHub Actions capstone. + +--- + +## Expected Output +- A GitHub repo with a working app, Dockerfile, and complete CI/CD pipeline +- At least 3 workflow files working together +- A markdown file: `day-48-actions-project.md` +- Screenshot of your full pipeline in action + +--- + +## Challenge Tasks + +### Task 1: Set Up the Project Repo +1. Create a new repo called `github-actions-capstone` (or use your existing `github-actions-practice`) +2. Add a simple app — pick any one: + - A Python Flask/FastAPI app with one endpoint + - A Node.js Express app with one endpoint + - Your Dockerized app from Day 36 +3. Add a `Dockerfile` and a basic test (even a script that curls the health endpoint counts) +4. Add a `README.md` with a project description + +--- + +### Task 2: Reusable Workflow — Build & Test +Create `.github/workflows/reusable-build-test.yml`: +1. Trigger: `workflow_call` +2. Inputs: `python_version` (or `node_version`), `run_tests` (boolean, default: true) +3. 
Steps: + - Check out code + - Set up the language runtime + - Install dependencies + - Run tests (only if `run_tests` is true) + - Set output: `test_result` with value `passed` or `failed` + +This workflow does NOT deploy — it only builds and tests. + +--- + +### Task 3: Reusable Workflow — Docker Build & Push +Create `.github/workflows/reusable-docker.yml`: +1. Trigger: `workflow_call` +2. Inputs: `image_name` (string), `tag` (string) +3. Secrets: `docker_username`, `docker_token` +4. Steps: + - Check out code + - Log in to Docker Hub + - Build and push the image with the given tag + - Set output: `image_url` with the full image path + +--- + +### Task 4: PR Pipeline +Create `.github/workflows/pr-pipeline.yml`: +1. Trigger: `pull_request` to `main` (types: `opened`, `synchronize`) +2. Call the reusable build-test workflow: + - Run tests: `true` +3. Add a standalone job `pr-comment` that: + - Runs after the build-test job + - Prints a summary: "PR checks passed for branch: ``" +4. Do **NOT** build or push Docker images on PRs + +**Verify:** Open a PR — does it run tests only (no Docker push)? + +--- + +### Task 5: Main Branch Pipeline +Create `.github/workflows/main-pipeline.yml`: +1. Trigger: `push` to `main` +2. Job 1: Call the reusable build-test workflow +3. Job 2 (depends on Job 1): Call the reusable Docker workflow + - Tag: `latest` and `sha-` +4. Job 3 (depends on Job 2): `deploy` job that: + - Prints "Deploying image: `` to production" + - Uses `environment: production` (set this up in repo Settings → Environments) + - Requires manual approval if you've set up environment protection rules + +**Verify:** Merge a PR to `main` — does it run tests → build Docker → deploy in sequence? + +--- + +### Task 6: Scheduled Health Check +Create `.github/workflows/health-check.yml`: +1. Trigger: `schedule` with cron `'0 */12 * * *'` (every 12 hours) + `workflow_dispatch` for manual testing +2. Steps: + - Pull your latest Docker image + - Run the container in detached mode + - Wait 5 seconds, then curl the health endpoint + - Print pass/fail based on the response + - Stop and remove the container +3. Add a step that creates a summary using `$GITHUB_STEP_SUMMARY`: + ```bash + echo "## Health Check Report" >> $GITHUB_STEP_SUMMARY + echo "- Image: myapp:latest" >> $GITHUB_STEP_SUMMARY + echo "- Status: PASSED" >> $GITHUB_STEP_SUMMARY + echo "- Time: $(date)" >> $GITHUB_STEP_SUMMARY + ``` + +--- + +### Task 7: Add Badges & Documentation +1. Add status badges for all your workflows to the repo `README.md` +2. Add a **pipeline architecture diagram** in your notes — draw (or describe) the flow: + ``` + PR opened → build & test → PR checks pass + Merge to main → build & test → Docker build & push → deploy + Every 12 hours → health check + ``` +3. Fill in your notes: What would you add next? (Slack notifications? Multi-environment? Rollback?) + +--- + +## Brownie Points: Add Security to Your Pipeline +Want to go above and beyond? Add a **DevSecOps** step to your main pipeline: +1. Add `aquasecurity/trivy-action` after the Docker build step to scan your image for vulnerabilities +2. Fail the pipeline if any **CRITICAL** severity CVE is found +3. Upload the scan report as an artifact + +This is a preview of what you'll do in depth on **Day 49**. If you get this working today, you're already thinking like a DevSecOps engineer. 
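If you want a starting point for Task 2, here is one hedged sketch of the reusable build-test workflow. It assumes a Python app tested with `pytest`; adjust the runtime setup and test command to your own project.

```yaml
name: Reusable Build & Test

on:
  workflow_call:
    inputs:
      python_version:
        type: string
        default: "3.12"
      run_tests:
        type: boolean
        default: true
    outputs:
      test_result:
        description: "passed or failed"
        value: ${{ jobs.build-test.outputs.test_result }}

jobs:
  build-test:
    runs-on: ubuntu-latest
    outputs:
      test_result: ${{ steps.tests.outputs.result }}
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: ${{ inputs.python_version }}

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run tests
        id: tests
        if: ${{ inputs.run_tests }}
        run: |
          if pytest; then
            echo "result=passed" >> "$GITHUB_OUTPUT"
          else
            echo "result=failed" >> "$GITHUB_OUTPUT"
            exit 1   # keep the job red so the caller's later jobs can react
          fi
```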
+ +--- + +## Hints +- Environment protection: Repo Settings → Environments → Add `production` → enable "Required reviewers" +- `$GITHUB_STEP_SUMMARY` renders markdown in the Actions run summary page +- Short SHA for tags: `$(echo ${{ github.sha }} | cut -c1-7)` +- Reusable workflow outputs: accessed via `${{ needs..outputs. }}` +- Use `actions/github-script` if you want to post PR comments programmatically + +--- + +## Documentation +Create `day-48-actions-project.md` with: +- Your pipeline architecture (the flow diagram from Task 7) +- All workflow YAML files +- Screenshot of a PR running the test-only pipeline +- Screenshot of a main branch push running the full pipeline +- Docker Hub link to your pushed image +- What you'd improve next + +--- + +## Submission +1. Add `day-48-actions-project.md` to `2026/day-48/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your complete pipeline architecture on LinkedIn — you just built production-grade CI/CD from scratch using only GitHub Actions. That's serious DevOps skill. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-49/README.md b/2026/day-49/README.md new file mode 100644 index 0000000000..39fabbdeb9 --- /dev/null +++ b/2026/day-49/README.md @@ -0,0 +1,219 @@ +# Day 49 – DevSecOps: Add Security to Your CI/CD Pipeline + +## Task +You can build and deploy automatically. But what if your Docker image has a known vulnerability? What if someone accidentally commits a password? Today you learn **DevSecOps** — adding simple, automated security checks to your pipeline so problems are caught **before** they reach production. + +Don't worry — this isn't a security course. You're just adding a few smart steps to the pipeline you already built. + +--- + +## Expected Output +- Security scanning added to your `github-actions-capstone` repo (from Day 48) +- A markdown file: `day-49-devsecops.md` +- Screenshot of a security scan running in your pipeline + +--- + +## What is DevSecOps? + +Think of it like this: + +**Without DevSecOps:** +> You build the app → deploy it → a security team finds a vulnerability weeks later → you scramble to fix it + +**With DevSecOps:** +> You open a PR → the pipeline automatically checks for vulnerabilities → you fix it before it ever gets merged + +**That's it.** DevSecOps = adding security checks to the pipeline you already have. Not a separate process — just a few extra steps. + +--- + +## Key Principles (Keep These in Mind) + +1. **Catch problems early** — A vulnerability found in a PR takes 5 minutes to fix. The same vulnerability found in production takes days. + +2. **Automate the checks** — Don't rely on someone remembering to check. Let the pipeline do it every time. + +3. **Block on critical issues** — If a scan finds a serious vulnerability, the pipeline should fail — just like a failing test. + +4. **Never put secrets in code** — Use GitHub Secrets (you learned this on Day 44). No `.env` files, no hardcoded API keys. + +5. **Give only the access needed** — Your workflow doesn't need write access to everything. Limit permissions. + +--- + +## Challenge Tasks + +### Task 1: Scan Your Docker Image for Vulnerabilities +Your Docker image might use a base image with known security issues. Let's find out. 
+ +Add this step to your main branch pipeline (after Docker build, before deploy): +```yaml +- name: Scan Docker Image for Vulnerabilities + uses: aquasecurity/trivy-action@master + with: + image-ref: 'your-username/your-app:latest' + format: 'table' + exit-code: '1' + severity: 'CRITICAL,HIGH' +``` + +What this does: +- `trivy` scans your Docker image for known CVEs (Common Vulnerabilities and Exposures) +- `format: 'table'` prints a readable table in the logs +- `exit-code: '1'` means **fail the pipeline** if CRITICAL or HIGH vulnerabilities are found +- If it passes, your image is clean — proceed to push and deploy + +Push and check the Actions tab. Read the scan output. + +**Verify:** Can you see the vulnerability table in the logs? Did it pass or fail? + +Write in your notes: What CVEs (if any) were found? What base image are you using? + +--- + +### Task 2: Enable GitHub's Built-in Secret Scanning +GitHub can automatically detect if someone pushes a secret (API key, token, password) to your repo. + +1. Go to your repo → Settings → **Code security and analysis** +2. Enable **Secret scanning** +3. If available, also enable **Push protection** — this blocks the push entirely if a secret is detected + +That's it — no workflow changes needed. GitHub does this automatically. + +Write in your notes: +- What is the difference between secret scanning and push protection? +- What happens if GitHub detects a leaked AWS key in your repo? + +--- + +### Task 3: Scan Dependencies for Known Vulnerabilities +If your app uses packages (pip, npm, etc.), those packages might have known vulnerabilities. + +Add this to your **PR pipeline** (not the main pipeline): +```yaml +- name: Check Dependencies for Vulnerabilities + uses: actions/dependency-review-action@v4 + with: + fail-on-severity: critical +``` + +This checks any **new** dependencies added in the PR against a vulnerability database. If a dependency has a critical CVE, the PR check fails. + +Test it: +1. Open a PR that adds a package to your app +2. Check the Actions tab — did the dependency review run? + +**Verify:** Does the dependency review show up as a check on your PR? + +--- + +### Task 4: Add Permissions to Your Workflows +By default, workflows get broad permissions. Lock them down. + +Add this block near the top of your workflow files (after `on:`): +```yaml +permissions: + contents: read +``` + +If a workflow needs to comment on PRs, add: +```yaml +permissions: + contents: read + pull-requests: write +``` + +Update at least 2 of your existing workflow files with a `permissions` block. + +Write in your notes: Why is it a good practice to limit workflow permissions? What could go wrong if a compromised action has write access to your repo? + +--- + +### Task 5: See the Full Secure Pipeline +Look at what your pipeline does now: + +``` +PR opened + → build & test + → dependency vulnerability check ← NEW (Day 49) + → PR checks pass or fail + +Merge to main + → build & test + → Docker build + → Trivy image scan (fail on CRITICAL) ← NEW (Day 49) + → Docker push (only if scan passes) + → deploy + +Always active + → GitHub secret scanning ← NEW (Day 49) + → push protection for secrets ← NEW (Day 49) +``` + +Draw this diagram in your notes. You just built a **DevSecOps pipeline** — security is now part of your automation, not an afterthought. + +--- + +## Brownie Points (Optional — For the Curious) + +### Pin Actions to Commit SHAs +Tags like `@v4` can be moved by the action author. 
For extra security, pin to the exact commit: +```yaml +# Instead of this: +uses: actions/checkout@v4 + +# Use this: +uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1 +``` +This protects against supply chain attacks where a tag is silently changed. + +### Upload Scan Results to GitHub Security Tab +Add SARIF output to Trivy and upload it — your scan results will appear in the repo's **Security** tab: +```yaml +- uses: aquasecurity/trivy-action@master + with: + image-ref: 'your-username/your-app:latest' + format: 'sarif' + output: 'trivy-results.sarif' +- uses: github/codeql-action/upload-sarif@v3 + with: + sarif_file: 'trivy-results.sarif' +``` + +### Learn About OIDC (Keyless Authentication) +Instead of storing cloud credentials as long-lived secrets, GitHub Actions can use OIDC to get short-lived tokens automatically. Research: "GitHub Actions OIDC" — it's how production pipelines authenticate to AWS, GCP, and Azure without storing any keys. + +--- + +## Hints +- Trivy action docs: look up `aquasecurity/trivy-action` on GitHub +- `exit-code: '1'` = fail the step, `exit-code: '0'` = just warn +- Dependency review only works on `pull_request` events (not on push) +- Permissions block goes at the workflow level or the job level +- GitHub secret scanning is free for public repos + +--- + +## Documentation +Create `day-49-devsecops.md` with: +- What DevSecOps means in your own words (2-3 sentences) +- Screenshot of Trivy scan output in your pipeline +- Your updated pipeline diagram with security steps +- What you learned about secret scanning and dependency review + +--- + +## Submission +1. Add `day-49-devsecops.md` to `2026/day-49/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share your pipeline diagram on LinkedIn — "My CI/CD pipeline now scans for vulnerabilities automatically." Simple, powerful, and impressive. + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-50/README.md b/2026/day-50/README.md new file mode 100644 index 0000000000..a066eac943 --- /dev/null +++ b/2026/day-50/README.md @@ -0,0 +1,214 @@ +# Day 50 – Kubernetes Architecture and Cluster Setup + +## Task +You have been building and shipping containers with Docker. But what happens when you need to run hundreds of containers across multiple servers? You need an orchestrator. Today you start your Kubernetes journey — understand the architecture, set up a local cluster, and run your first `kubectl` commands. + +This is where things get real. + +--- + +## Expected Output +- A running local Kubernetes cluster (kind or minikube) +- A markdown file: `day-50-k8s-setup.md` +- Screenshot of `kubectl get nodes` showing your cluster is ready + +--- + +## Challenge Tasks + +### Task 1: Recall the Kubernetes Story +Before touching a terminal, write down from memory: + +1. Why was Kubernetes created? What problem does it solve that Docker alone cannot? +2. Who created Kubernetes and what was it inspired by? +3. What does the name "Kubernetes" mean? + +Do not look anything up yet. Write what you remember from the session, then verify against the official docs. + +--- + +### Task 2: Draw the Kubernetes Architecture +From memory, draw or describe the Kubernetes architecture. 
Your diagram should include: + +**Control Plane (Master Node):** +- API Server — the front door to the cluster, every command goes through it +- etcd — the database that stores all cluster state +- Scheduler — decides which node a new pod should run on +- Controller Manager — watches the cluster and makes sure the desired state matches reality + +**Worker Node:** +- kubelet — the agent on each node that talks to the API server and manages pods +- kube-proxy — handles networking rules so pods can communicate +- Container Runtime — the engine that actually runs containers (containerd, CRI-O) + +After drawing, verify your understanding: +- What happens when you run `kubectl apply -f pod.yaml`? Trace the request through each component. +- What happens if the API server goes down? +- What happens if a worker node goes down? + +--- + +### Task 3: Install kubectl +`kubectl` is the CLI tool you will use to talk to your Kubernetes cluster. + +Install it: +```bash +# macOS +brew install kubectl + +# Linux (amd64) +curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" +chmod +x kubectl +sudo mv kubectl /usr/local/bin/ + +# Windows (with chocolatey) +choco install kubernetes-cli +``` + +Verify: +```bash +kubectl version --client +``` + +--- + +### Task 4: Set Up Your Local Cluster +Choose **one** of the following. Both give you a fully functional Kubernetes cluster on your machine. + +**Option A: kind (Kubernetes in Docker)** +```bash +# Install kind +# macOS +brew install kind + +# Linux +curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64 +chmod +x ./kind +sudo mv ./kind /usr/local/bin/kind + +# Create a cluster +kind create cluster --name devops-cluster + +# Verify +kubectl cluster-info +kubectl get nodes +``` + +**Option B: minikube** +```bash +# Install minikube +# macOS +brew install minikube + +# Linux +curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 +sudo install minikube-linux-amd64 /usr/local/bin/minikube + +# Start a cluster +minikube start + +# Verify +kubectl cluster-info +kubectl get nodes +``` + +Write down: Which one did you choose and why? + +--- + +### Task 5: Explore Your Cluster +Now that your cluster is running, explore it: + +```bash +# See cluster info +kubectl cluster-info + +# List all nodes +kubectl get nodes + +# Get detailed info about your node +kubectl describe node + +# List all namespaces +kubectl get namespaces + +# See ALL pods running in the cluster (across all namespaces) +kubectl get pods -A +``` + +Look at the pods running in the `kube-system` namespace: +```bash +kubectl get pods -n kube-system +``` + +You should see pods like `etcd`, `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, `coredns`, and `kube-proxy`. These are the architecture components you drew in Task 2 — running as pods inside the cluster. + +**Verify:** Can you match each running pod in `kube-system` to a component in your architecture diagram? 
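A hedged way to make that matching concrete: kubeadm-based clusters (which is what kind creates) attach a `component` label to the control-plane static pods; other setups may label them differently.

```bash
# Show which control-plane component each kube-system pod belongs to
kubectl get pods -n kube-system -L component

# Drill into one component, e.g. the scheduler
kubectl describe pod -n kube-system -l component=kube-scheduler

# The kubelet and container runtime are NOT pods — if you used kind with the
# cluster name from Task 4, check them on the node container itself:
docker exec -it devops-cluster-control-plane systemctl status kubelet
```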
+ +--- + +### Task 6: Practice Cluster Lifecycle +Build muscle memory with cluster operations: + +```bash +# Delete your cluster +kind delete cluster --name devops-cluster +# (or: minikube delete) + +# Recreate it +kind create cluster --name devops-cluster +# (or: minikube start) + +# Verify it is back +kubectl get nodes +``` + +Try these useful commands: +```bash +# Check which cluster kubectl is connected to +kubectl config current-context + +# List all available contexts (clusters) +kubectl config get-contexts + +# See the full kubeconfig +kubectl config view +``` + +Write down: What is a kubeconfig? Where is it stored on your machine? + +--- + +## Hints +- kind requires Docker to be running (it creates clusters using containers) +- minikube can use Docker, VirtualBox, or other drivers +- The default kubeconfig file is at `~/.kube/config` +- `kubectl get pods -A` is short for `kubectl get pods --all-namespaces` +- If `kubectl` cannot connect, check if your cluster is running: `kind get clusters` or `minikube status` +- `-o wide` flag gives extra details: `kubectl get nodes -o wide` + +--- + +## Documentation +Create `day-50-k8s-setup.md` with: +- Kubernetes history in your own words (3-4 sentences) +- Your architecture diagram (text-based or image) +- Which tool you chose (kind/minikube) and why +- Screenshot of `kubectl get nodes` and `kubectl get pods -n kube-system` +- What each kube-system pod does + +--- + +## Submission +1. Add `day-50-k8s-setup.md` to `2026/day-50/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Started my Kubernetes journey today. Set up a local cluster, explored the architecture, and saw the control plane components running as actual pods. The orchestration chapter begins." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-51/README.md b/2026/day-51/README.md new file mode 100644 index 0000000000..ee93b6e8db --- /dev/null +++ b/2026/day-51/README.md @@ -0,0 +1,247 @@ +# Day 51 – Kubernetes Manifests and Your First Pods + +## Task +Yesterday you set up a cluster. Today you actually deploy something. You will learn the structure of a Kubernetes manifest file and use it to create Pods — the smallest deployable unit in Kubernetes. By the end of today, you should be able to write a Pod definition from scratch without looking at docs. + +--- + +## Expected Output +- At least 3 Pod manifests written by hand +- A markdown file: `day-51-pods.md` +- Screenshot of `kubectl get pods` showing your running pods + +--- + +## The Anatomy of a Kubernetes Manifest + +Every Kubernetes resource is defined using a YAML manifest with four required top-level fields: + +```yaml +apiVersion: v1 # Which API version to use +kind: Pod # What type of resource +metadata: # Name, labels, namespace + name: my-pod + labels: + app: my-app +spec: # The actual specification (what you want) + containers: + - name: my-container + image: nginx:latest + ports: + - containerPort: 80 +``` + +- `apiVersion` — tells Kubernetes which API group to use. For Pods, it is `v1`. +- `kind` — the resource type. Today it is `Pod`. Later you will use `Deployment`, `Service`, etc. +- `metadata` — the identity of your resource. `name` is required. `labels` are key-value pairs used for organization and selection. +- `spec` — the desired state. For a Pod, this means which containers to run, which images, which ports, etc. 
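If you are ever unsure which `apiVersion` a resource type uses or what its `spec` may contain, you do not have to memorize it; the cluster can tell you. These discovery commands are safe to run against any cluster:

```bash
# List resource types with their API group/version and short names
kubectl api-resources | grep -iE 'pods|deployments|services'

# Explain the schema of a resource, drilling down field by field
kubectl explain pod
kubectl explain pod.spec.containers
```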
+ +--- + +## Challenge Tasks + +### Task 1: Create Your First Pod (Nginx) +Create a file called `nginx-pod.yaml`: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: nginx-pod + labels: + app: nginx +spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 +``` + +Apply it: +```bash +kubectl apply -f nginx-pod.yaml +``` + +Verify: +```bash +kubectl get pods +kubectl get pods -o wide +``` + +Wait until the STATUS shows `Running`. Then explore: +```bash +# Detailed info about the pod +kubectl describe pod nginx-pod + +# Read the logs +kubectl logs nginx-pod + +# Get a shell inside the container +kubectl exec -it nginx-pod -- /bin/bash + +# Inside the container, run: +curl localhost:80 +exit +``` + +**Verify:** Can you see the Nginx welcome page when you curl from inside the pod? + +--- + +### Task 2: Create a Custom Pod (BusyBox) +Write a new manifest `busybox-pod.yaml` from scratch (do not copy-paste the nginx one): + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: busybox-pod + labels: + app: busybox + environment: dev +spec: + containers: + - name: busybox + image: busybox:latest + command: ["sh", "-c", "echo Hello from BusyBox && sleep 3600"] +``` + +Apply and verify: +```bash +kubectl apply -f busybox-pod.yaml +kubectl get pods +kubectl logs busybox-pod +``` + +Notice the `command` field — BusyBox does not run a long-lived server like Nginx. Without a command that keeps it running, the container would exit immediately and the pod would go into `CrashLoopBackOff`. + +**Verify:** Can you see "Hello from BusyBox" in the logs? + +--- + +### Task 3: Imperative vs Declarative +You have been using the declarative approach (writing YAML, then `kubectl apply`). Kubernetes also supports imperative commands: + +```bash +# Create a pod without a YAML file +kubectl run redis-pod --image=redis:latest + +# Check it +kubectl get pods +``` + +Now extract the YAML that Kubernetes generated: +```bash +kubectl get pod redis-pod -o yaml +``` + +Compare this output with your hand-written manifests. Notice how much extra metadata Kubernetes adds automatically (status, timestamps, uid, resource version). + +You can also use dry-run to generate YAML without creating anything: +```bash +kubectl run test-pod --image=nginx --dry-run=client -o yaml +``` + +This is a powerful trick — use it to quickly scaffold a manifest, then customize it. + +**Verify:** Save the dry-run output to a file and compare its structure with your nginx-pod.yaml. What fields are the same? What is different? + +--- + +### Task 4: Validate Before Applying +Before applying a manifest, you can validate it: + +```bash +# Check if the YAML is valid without actually creating the resource +kubectl apply -f nginx-pod.yaml --dry-run=client + +# Validate against the cluster's API (server-side validation) +kubectl apply -f nginx-pod.yaml --dry-run=server +``` + +Now intentionally break your YAML (remove the `image` field or add an invalid field) and run dry-run again. See what error you get. + +**Verify:** What error does Kubernetes give when the image field is missing? + +--- + +### Task 5: Pod Labels and Filtering +Labels are how Kubernetes organizes and selects resources. 
You added labels in your manifests — now use them: + +```bash +# List all pods with their labels +kubectl get pods --show-labels + +# Filter pods by label +kubectl get pods -l app=nginx +kubectl get pods -l environment=dev + +# Add a label to an existing pod +kubectl label pod nginx-pod environment=production + +# Verify +kubectl get pods --show-labels + +# Remove a label +kubectl label pod nginx-pod environment- +``` + +Write a manifest for a third pod with at least 3 labels (app, environment, team). Apply it and practice filtering. + +--- + +### Task 6: Clean Up +Delete all the pods you created: + +```bash +# Delete by name +kubectl delete pod nginx-pod +kubectl delete pod busybox-pod +kubectl delete pod redis-pod + +# Or delete using the manifest file +kubectl delete -f nginx-pod.yaml + +# Verify everything is gone +kubectl get pods +``` + +Notice that when you delete a standalone Pod, it is gone forever. There is no controller to recreate it. This is why in production you use Deployments (coming on Day 52) instead of bare Pods. + +--- + +## Hints +- `kubectl apply -f` creates or updates a resource from a file +- `kubectl get pods -o wide` shows the node and IP address +- `kubectl describe pod ` shows events — very useful for debugging +- `kubectl logs ` shows container stdout/stderr +- `kubectl exec -it -- /bin/sh` gives you a shell (use `/bin/sh` if `/bin/bash` is not available) +- Labels are just key-value pairs — they have no meaning to Kubernetes itself, only to selectors +- `--dry-run=client -o yaml` is your best friend for generating manifest templates + +--- + +## Documentation +Create `day-51-pods.md` with: +- The four required fields of a Kubernetes manifest and what each does +- Your nginx, busybox, and third pod manifests +- Difference between imperative (`kubectl run`) and declarative (`kubectl apply -f`) +- Screenshot of your pods running +- What happens when you delete a standalone Pod? + +--- + +## Submission +1. Add `day-51-pods.md` and your YAML files to `2026/day-51/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Wrote my first Kubernetes Pod manifests from scratch today. Created pods, got a shell inside them, and learned the difference between imperative and declarative approaches." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-52/README.md b/2026/day-52/README.md new file mode 100644 index 0000000000..e000fb2044 --- /dev/null +++ b/2026/day-52/README.md @@ -0,0 +1,269 @@ +# Day 52 – Kubernetes Namespaces and Deployments + +## Task +Yesterday you created standalone Pods. The problem? Delete a Pod and it is gone forever — no one recreates it. Today you fix that with Deployments, the real way to run applications in Kubernetes. You will also learn Namespaces, which let you organize and isolate resources inside a cluster. + +--- + +## Expected Output +- At least 2 namespaces created and used +- A Deployment running with multiple replicas +- A scaled Deployment and a rolling update performed +- A markdown file: `day-52-namespaces-deployments.md` +- Screenshot of `kubectl get deployments` and `kubectl get pods` across namespaces + +--- + +## Challenge Tasks + +### Task 1: Explore Default Namespaces +Kubernetes comes with built-in namespaces. 
List them: + +```bash +kubectl get namespaces +``` + +You should see at least: +- `default` — where your resources go if you do not specify a namespace +- `kube-system` — Kubernetes internal components (API server, scheduler, etc.) +- `kube-public` — publicly readable resources +- `kube-node-lease` — node heartbeat tracking + +Check what is running inside `kube-system`: +```bash +kubectl get pods -n kube-system +``` + +These are the control plane components keeping your cluster alive. Do not touch them. + +**Verify:** How many pods are running in `kube-system`? + +--- + +### Task 2: Create and Use Custom Namespaces +Create two namespaces — one for a development environment and one for staging: + +```bash +kubectl create namespace dev +kubectl create namespace staging +``` + +Verify they exist: +```bash +kubectl get namespaces +``` + +You can also create a namespace from a manifest: +```yaml +# namespace.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: production +``` + +```bash +kubectl apply -f namespace.yaml +``` + +Now run a pod in a specific namespace: +```bash +kubectl run nginx-dev --image=nginx:latest -n dev +kubectl run nginx-staging --image=nginx:latest -n staging +``` + +List pods across all namespaces: +```bash +kubectl get pods -A +``` + +Notice that `kubectl get pods` without `-n` only shows the `default` namespace. You must specify `-n ` or use `-A` to see everything. + +**Verify:** Does `kubectl get pods` show these pods? What about `kubectl get pods -A`? + +--- + +### Task 3: Create Your First Deployment +A Deployment tells Kubernetes: "I want X replicas of this Pod running at all times." If a Pod crashes, the Deployment controller recreates it automatically. + +Create a file `nginx-deployment.yaml`: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + namespace: dev + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.24 + ports: + - containerPort: 80 +``` + +Key differences from a standalone Pod: +- `kind: Deployment` instead of `kind: Pod` +- `apiVersion: apps/v1` instead of `v1` +- `replicas: 3` tells Kubernetes to maintain 3 identical pods +- `selector.matchLabels` connects the Deployment to its Pods +- `template` is the Pod template — the Deployment creates Pods using this blueprint + +Apply it: +```bash +kubectl apply -f nginx-deployment.yaml +``` + +Check the result: +```bash +kubectl get deployments -n dev +kubectl get pods -n dev +``` + +You should see 3 pods with names like `nginx-deployment-xxxxx-yyyyy`. + +**Verify:** What do the READY, UP-TO-DATE, and AVAILABLE columns mean in the deployment output? + +--- + +### Task 4: Self-Healing — Delete a Pod and Watch It Come Back +This is the key difference between a Deployment and a standalone Pod. + +```bash +# List pods +kubectl get pods -n dev + +# Delete one of the deployment's pods (use an actual pod name from your output) +kubectl delete pod -n dev + +# Immediately check again +kubectl get pods -n dev +``` + +The Deployment controller detects that only 2 of 3 desired replicas exist and immediately creates a new one. The deleted pod is replaced within seconds. + +**Verify:** Is the replacement pod's name the same as the one you deleted, or different? 
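A good way to actually see the self-healing happen is to watch the pod list in a second terminal while you delete. A minimal sketch; the pod name below is a placeholder, use one from your own `kubectl get pods -n dev` output:

```bash
# Terminal 1: watch the pod list update in real time (Ctrl+C to stop)
kubectl get pods -n dev -w

# Terminal 2: delete one replica
kubectl delete pod nginx-deployment-xxxxx-yyyyy -n dev

# The ReplicaSet the Deployment created is what notices the missing replica and replaces it
kubectl get replicasets -n dev
```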
+ +--- + +### Task 5: Scale the Deployment +Change the number of replicas: + +```bash +# Scale up to 5 +kubectl scale deployment nginx-deployment --replicas=5 -n dev +kubectl get pods -n dev + +# Scale down to 2 +kubectl scale deployment nginx-deployment --replicas=2 -n dev +kubectl get pods -n dev +``` + +Watch how Kubernetes creates or terminates pods to match the desired count. + +You can also scale by editing the manifest — change `replicas: 4` in your YAML file and run `kubectl apply -f nginx-deployment.yaml` again. + +**Verify:** When you scaled down from 5 to 2, what happened to the extra pods? + +--- + +### Task 6: Rolling Update +Update the Nginx image version to trigger a rolling update: + +```bash +kubectl set image deployment/nginx-deployment nginx=nginx:1.25 -n dev +``` + +Watch the rollout in real time: +```bash +kubectl rollout status deployment/nginx-deployment -n dev +``` + +Kubernetes replaces pods one by one — old pods are terminated only after new ones are healthy. This means zero downtime. + +Check the rollout history: +```bash +kubectl rollout history deployment/nginx-deployment -n dev +``` + +Now roll back to the previous version: +```bash +kubectl rollout undo deployment/nginx-deployment -n dev +kubectl rollout status deployment/nginx-deployment -n dev +``` + +Verify the image is back to the previous version: +```bash +kubectl describe deployment nginx-deployment -n dev | grep Image +``` + +**Verify:** What image version is running after the rollback? + +--- + +### Task 7: Clean Up +```bash +kubectl delete deployment nginx-deployment -n dev +kubectl delete pod nginx-dev -n dev +kubectl delete pod nginx-staging -n staging +kubectl delete namespace dev staging production +``` + +Deleting a namespace removes everything inside it. Be very careful with this in production. + +```bash +kubectl get namespaces +kubectl get pods -A +``` + +**Verify:** Are all your resources gone? + +--- + +## Hints +- `kubectl get -n ` — target a specific namespace +- `kubectl get -A` — list resources across all namespaces +- `selector.matchLabels` in a Deployment must match `template.metadata.labels` — if they do not match, the Deployment will not manage the Pods +- `kubectl scale deployment --replicas=N` — quick way to scale +- `kubectl set image` updates a container image without editing the YAML +- `kubectl rollout undo` rolls back to the previous revision +- `kubectl rollout history` shows past revisions of a Deployment +- Deployments create ReplicaSets behind the scenes — you can see them with `kubectl get replicasets -n ` + +--- + +## Documentation +Create `day-52-namespaces-deployments.md` with: +- What namespaces are and why you would use them +- Your Deployment manifest and an explanation of each section +- What happens when you delete a Pod managed by a Deployment vs a standalone Pod +- How scaling works (both imperative and declarative) +- How rolling updates and rollbacks work +- Screenshot of your Deployment and Pods running + +--- + +## Submission +1. Add `day-52-namespaces-deployments.md` and your YAML files to `2026/day-52/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Kubernetes Namespaces and Deployments today. Created self-healing deployments, scaled them up and down, and performed a zero-downtime rolling update with rollback." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-53/README.md b/2026/day-53/README.md new file mode 100644 index 0000000000..ba807cef34 --- /dev/null +++ b/2026/day-53/README.md @@ -0,0 +1,316 @@ +# Day 53 – Kubernetes Services + +## Task +You have Deployments running multiple Pods, but how do you actually talk to them? Pods get random IP addresses that change every time they restart. Services solve this by giving your Pods a stable network endpoint. Today you will create different types of Services and understand when to use each one. + +--- + +## Expected Output +- A Deployment exposed using ClusterIP, NodePort, and LoadBalancer services +- Verified Pod-to-Service communication from inside the cluster +- A markdown file: `day-53-services.md` +- Screenshot of `kubectl get services` showing your running services + +--- + +## Why Services? + +Every Pod gets its own IP address. But there are two problems: +1. Pod IPs are **not stable** — when a Pod restarts or gets replaced, it gets a new IP +2. A Deployment runs **multiple Pods** — which IP do you connect to? + +A Service solves both problems. It provides: +- A **stable IP and DNS name** that never changes +- **Load balancing** across all Pods that match its selector + +``` +[Client] --> [Service (stable IP)] --> [Pod 1] + --> [Pod 2] + --> [Pod 3] +``` + +--- + +## Challenge Tasks + +### Task 1: Deploy the Application +First, create a Deployment that you will expose with Services. Create `app-deployment.yaml`: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: web-app + labels: + app: web-app +spec: + replicas: 3 + selector: + matchLabels: + app: web-app + template: + metadata: + labels: + app: web-app + spec: + containers: + - name: nginx + image: nginx:1.25 + ports: + - containerPort: 80 +``` + +```bash +kubectl apply -f app-deployment.yaml +kubectl get pods -o wide +``` + +Note the individual Pod IPs. These will change if pods restart — that is the problem Services fix. + +**Verify:** Are all 3 pods running? Note down their IP addresses. + +--- + +### Task 2: ClusterIP Service (Internal Access) +ClusterIP is the default Service type. It gives your Pods a stable internal IP that is only reachable from within the cluster. + +Create `clusterip-service.yaml`: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: web-app-clusterip +spec: + type: ClusterIP + selector: + app: web-app + ports: + - port: 80 + targetPort: 80 +``` + +Key fields: +- `selector.app: web-app` — this Service routes traffic to all Pods with the label `app: web-app` +- `port: 80` — the port the Service listens on +- `targetPort: 80` — the port on the Pod to forward traffic to + +```bash +kubectl apply -f clusterip-service.yaml +kubectl get services +``` + +You should see `web-app-clusterip` with a CLUSTER-IP address. This IP is stable — it will not change even if Pods restart. + +Now test it from inside the cluster: +```bash +# Run a temporary pod to test connectivity +kubectl run test-client --image=busybox:latest --rm -it --restart=Never -- sh + +# Inside the test pod, run: +wget -qO- http://web-app-clusterip +exit +``` + +You should see the Nginx welcome page. The Service load-balanced your request to one of the 3 Pods. + +**Verify:** Does the Service respond? Try running the wget command multiple times — the Service distributes traffic across all healthy Pods. + +--- + +### Task 3: Discover Services with DNS +Kubernetes has a built-in DNS server. 
Every Service gets a DNS entry automatically:

```
<service-name>.<namespace>.svc.cluster.local
```

Test this:
```bash
kubectl run dns-test --image=busybox:latest --rm -it --restart=Never -- sh

# Inside the pod:
# Short name (works within the same namespace)
wget -qO- http://web-app-clusterip

# Full DNS name
wget -qO- http://web-app-clusterip.default.svc.cluster.local

# Look up the DNS entry
nslookup web-app-clusterip
exit
```

Both the short name and the full DNS name resolve to the same ClusterIP. In practice, you use the short name when communicating within the same namespace and the full name when reaching across namespaces.

**Verify:** What IP does `nslookup` return? Does it match the CLUSTER-IP from `kubectl get services`?

---

### Task 4: NodePort Service (External Access via Node)
A NodePort Service exposes your application on a port on every node in the cluster. This lets you access the Service from outside the cluster.

Create `nodeport-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
```

- `nodePort: 30080` — the port opened on every node (must be in the range 30000-32767)
- Traffic flow: `<node-ip>:30080` -> Service -> Pod:80

```bash
kubectl apply -f nodeport-service.yaml
kubectl get services
```

Access the service:
```bash
# If using Minikube
minikube service web-app-nodeport --url

# If using Kind, get the node IP first
kubectl get nodes -o wide
# Then curl <node-ip>:30080

# If using Docker Desktop
curl http://localhost:30080
```

**Verify:** Can you see the Nginx welcome page from your browser or terminal using the NodePort?

---

### Task 5: LoadBalancer Service (Cloud External Access)
In a cloud environment (AWS, GCP, Azure), a LoadBalancer Service provisions a real external load balancer that routes traffic to your nodes.

Create `loadbalancer-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 80
```

```bash
kubectl apply -f loadbalancer-service.yaml
kubectl get services
```

On a local cluster (Minikube, Kind, Docker Desktop), the EXTERNAL-IP will show `<pending>` because there is no cloud provider to create a real load balancer. This is expected.

If you are using Minikube:
```bash
# Minikube can simulate a LoadBalancer
minikube tunnel
# In another terminal, check again:
kubectl get services
```

In a real cloud cluster, the EXTERNAL-IP would be a public IP address or hostname provisioned by the cloud provider.

**Verify:** What does the EXTERNAL-IP column show? Why is it `<pending>` on a local cluster?
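Before comparing the three types side by side, it is worth confirming that all of them route to the same set of Pods. A quick check, assuming the service and label names used above:

```bash
# Each Service should list the same Pod IPs as its endpoints
kubectl get endpoints web-app-clusterip web-app-nodeport web-app-loadbalancer

# Cross-check against the actual Pod IPs
kubectl get pods -l app=web-app -o wide
```

An empty endpoints list means the Service selector does not match the Pod labels.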
+ +--- + +### Task 6: Understand the Service Types Side by Side +Check all three services: + +```bash +kubectl get services -o wide +``` + +Compare them: + +| Type | Accessible From | Use Case | +|------|----------------|----------| +| ClusterIP | Inside the cluster only | Internal communication between services | +| NodePort | Outside via `:` | Development, testing, direct node access | +| LoadBalancer | Outside via cloud load balancer | Production traffic in cloud environments | + +Each type builds on the previous one: +- LoadBalancer creates a NodePort, which creates a ClusterIP +- So a LoadBalancer service also has a ClusterIP and a NodePort + +Verify this: +```bash +kubectl describe service web-app-loadbalancer +``` + +You should see all three: a ClusterIP, a NodePort, and the LoadBalancer configuration. + +**Verify:** Does the LoadBalancer service also have a ClusterIP and NodePort assigned? + +--- + +### Task 7: Clean Up +```bash +kubectl delete -f app-deployment.yaml +kubectl delete -f clusterip-service.yaml +kubectl delete -f nodeport-service.yaml +kubectl delete -f loadbalancer-service.yaml + +kubectl get pods +kubectl get services +``` + +Only the built-in `kubernetes` service in the default namespace should remain. + +**Verify:** Is everything cleaned up? + +--- + +## Hints +- `selector` in a Service must match `labels` on the Pods — if they do not match, the Service routes traffic to nothing +- `kubectl get endpoints ` shows which Pod IPs a Service is currently routing to +- `port` is what the Service listens on; `targetPort` is what the Pod listens on — they do not have to be the same number +- NodePort range is 30000-32767; if you do not specify `nodePort`, Kubernetes picks one automatically +- Use `kubectl describe service ` to see the full configuration including Endpoints +- `kubectl get services -o wide` shows the selector each service uses +- To test ClusterIP services, you must test from inside the cluster (use a temporary pod) + +--- + +## Documentation +Create `day-53-services.md` with: +- What problem Services solve and how they relate to Pods and Deployments +- Your three Service manifests with an explanation of each type +- The difference between ClusterIP, NodePort, and LoadBalancer +- How Kubernetes DNS works for service discovery +- What Endpoints are and how to inspect them +- Screenshot of your services and the test output + +--- + +## Submission +1. Add `day-53-services.md` and your YAML files to `2026/day-53/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Kubernetes Services today — ClusterIP for internal traffic, NodePort for node-level access, and LoadBalancer for production. Services give Pods a stable identity and load balancing." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-54/README.md b/2026/day-54/README.md new file mode 100644 index 0000000000..29f5f8e18a --- /dev/null +++ b/2026/day-54/README.md @@ -0,0 +1,112 @@ +# Day 54 – Kubernetes ConfigMaps and Secrets + +## Task +Your application needs configuration — database URLs, feature flags, API keys. Hardcoding these into container images means rebuilding every time a value changes. Kubernetes solves this with ConfigMaps for non-sensitive config and Secrets for sensitive data. 
---

## Expected Output
- ConfigMaps created from literals and from a file
- Secrets created and consumed in a Pod
- A markdown file: `day-54-configmaps-secrets.md`

---

## Challenge Tasks

### Task 1: Create a ConfigMap from Literals
1. Use `kubectl create configmap` with `--from-literal` to create a ConfigMap called `app-config` with keys `APP_ENV=production`, `APP_DEBUG=false`, and `APP_PORT=8080`
2. Inspect it with `kubectl describe configmap app-config` and `kubectl get configmap app-config -o yaml`
3. Notice the data is stored as plain text — no encoding, no encryption

**Verify:** Can you see all three key-value pairs?

---

### Task 2: Create a ConfigMap from a File
1. Write a custom Nginx config file that adds a `/health` endpoint returning "healthy"
2. Create a ConfigMap from this file using `kubectl create configmap nginx-config --from-file=default.conf=<path-to-your-file>`
3. The key name (`default.conf`) becomes the filename when mounted into a Pod

**Verify:** Does `kubectl get configmap nginx-config -o yaml` show the file contents?

---

### Task 3: Use ConfigMaps in a Pod
1. Write a Pod manifest that uses `envFrom` with `configMapRef` to inject all keys from `app-config` as environment variables. Use a busybox container that prints the values.
2. Write a second Pod manifest that mounts `nginx-config` as a volume at `/etc/nginx/conf.d`. Use the nginx image.
3. Test that the mounted config works: `kubectl exec <nginx-pod-name> -- curl -s http://localhost/health`

Use environment variables for simple key-value settings. Use volume mounts for full config files. (If you get stuck, a combined sketch of all three injection styles appears just before the Hints section.)

**Verify:** Does the `/health` endpoint respond?

---

### Task 4: Create a Secret
1. Use `kubectl create secret generic db-credentials` with `--from-literal` to store `DB_USER=admin` and `DB_PASSWORD=s3cureP@ssw0rd`
2. Inspect with `kubectl get secret db-credentials -o yaml` — the values are base64-encoded
3. Decode a value: `echo '<base64-encoded-value>' | base64 --decode`

**base64 is encoding, not encryption.** Anyone with cluster access can decode Secrets. The real advantages are RBAC separation, tmpfs storage on nodes, and optional encryption at rest.

**Verify:** Can you decode the password back to plaintext?

---

### Task 5: Use Secrets in a Pod
1. Write a Pod manifest that injects `DB_USER` as an environment variable using `secretKeyRef`
2. In the same Pod, mount the entire `db-credentials` Secret as a volume at `/etc/db-credentials` with `readOnly: true`
3. Verify: each Secret key becomes a file, and the content is the decoded plaintext value

**Verify:** Are the mounted file values plaintext or base64?

---

### Task 6: Update a ConfigMap and Observe Propagation
1. Create a ConfigMap `live-config` with a key `message=hello`
2. Write a Pod that mounts this ConfigMap as a volume and reads the file in a loop every 5 seconds
3. Update the ConfigMap: `kubectl patch configmap live-config --type merge -p '{"data":{"message":"world"}}'`
4. Wait 30-60 seconds — the volume-mounted value updates automatically
5. Environment variables from earlier tasks do NOT update — they are set at pod startup only

**Verify:** Did the volume-mounted value change without a pod restart?

---

### Task 7: Clean Up
Delete all pods, ConfigMaps, and Secrets you created.
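If you get stuck on Tasks 3 and 5, here is a minimal Pod sketch that combines all three injection styles. The ConfigMap and Secret names come from the tasks above; the pod name, image, and mount paths are just illustrative choices:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo          # hypothetical name; use whatever you like
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    envFrom:
    - configMapRef:
        name: app-config     # every key becomes an environment variable
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:        # a single key pulled from the Secret
          name: db-credentials
          key: DB_USER
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
    - name: db-creds
      mountPath: /etc/db-credentials
      readOnly: true
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-config     # each key becomes a file in the mount path
  - name: db-creds
    secret:
      secretName: db-credentials
```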
+ +--- + +## Hints +- `--from-literal=KEY=VALUE` for command-line values, `--from-file=key=filename` for file contents +- `envFrom` injects all keys; `env` with `valueFrom` injects individual keys +- `echo -n 'value' | base64` — always use `-n` to avoid encoding a trailing newline +- Volume-mounted ConfigMaps/Secrets auto-update; environment variables do not +- `kubectl get secret -o jsonpath='{.data.KEY}' | base64 --decode` extracts and decodes a value + +--- + +## Documentation +Create `day-54-configmaps-secrets.md` with: +- What ConfigMaps and Secrets are and when to use each +- The difference between environment variables and volume mounts +- Why base64 is encoding, not encryption +- How ConfigMap updates propagate to volumes but not env vars + +--- + +## Submission +1. Add `day-54-configmaps-secrets.md` to `2026/day-54/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Kubernetes ConfigMaps and Secrets today. Injected config as environment variables and volume mounts, and discovered that base64 encoding is not encryption." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-55/README.md b/2026/day-55/README.md new file mode 100644 index 0000000000..fb47bfb946 --- /dev/null +++ b/2026/day-55/README.md @@ -0,0 +1,118 @@ +# Day 55 – Persistent Volumes (PV) and Persistent Volume Claims (PVC) + +## Task +Containers are ephemeral — when a Pod dies, everything inside it disappears. That is a serious problem for databases and anything that needs to survive a restart. Today you fix this with Persistent Volumes and Persistent Volume Claims. + +--- + +## Expected Output +- Data loss demonstrated with an ephemeral Pod +- A PV and PVC created, bound, and data persisting across Pod deletions +- A markdown file: `day-55-persistent-volumes.md` + +--- + +## Challenge Tasks + +### Task 1: See the Problem — Data Lost on Pod Deletion +1. Write a Pod manifest that uses an `emptyDir` volume and writes a timestamped message to `/data/message.txt` +2. Apply it, verify the data exists with `kubectl exec` +3. Delete the Pod, recreate it, check the file again — the old message is gone + +**Verify:** Is the timestamp the same or different after recreation? + +--- + +### Task 2: Create a PersistentVolume (Static Provisioning) +1. Write a PV manifest with `capacity: 1Gi`, `accessModes: ReadWriteOnce`, `persistentVolumeReclaimPolicy: Retain`, and `hostPath` pointing to `/tmp/k8s-pv-data` +2. Apply it and check `kubectl get pv` — status should be `Available` + +Access modes to know: +- `ReadWriteOnce (RWO)` — read-write by a single node +- `ReadOnlyMany (ROX)` — read-only by many nodes +- `ReadWriteMany (RWX)` — read-write by many nodes + +`hostPath` is fine for learning, not for production. + +**Verify:** What is the STATUS of the PV? + +--- + +### Task 3: Create a PersistentVolumeClaim +1. Write a PVC manifest requesting `500Mi` of storage with `ReadWriteOnce` access +2. Apply it and check both `kubectl get pvc` and `kubectl get pv` +3. Both should show `Bound` — Kubernetes matched them by capacity and access mode + +**Verify:** What does the VOLUME column in `kubectl get pvc` show? + +--- + +### Task 4: Use the PVC in a Pod — Data That Survives +1. Write a Pod manifest that mounts the PVC at `/data` using `persistentVolumeClaim.claimName` +2. Write data to `/data/message.txt`, then delete and recreate the Pod +3. 
Check the file — it should contain data from both Pods + +**Verify:** Does the file contain data from both the first and second Pod? + +--- + +### Task 5: StorageClasses and Dynamic Provisioning +1. Run `kubectl get storageclass` and `kubectl describe storageclass` +2. Note the provisioner, reclaim policy, and volume binding mode +3. With dynamic provisioning, developers only create PVCs — the StorageClass handles PV creation automatically + +**Verify:** What is the default StorageClass in your cluster? + +--- + +### Task 6: Dynamic Provisioning +1. Write a PVC manifest that includes `storageClassName: standard` (or your cluster's default) +2. Apply it — a PV should appear automatically in `kubectl get pv` +3. Use this PVC in a Pod, write data, verify it works + +**Verify:** How many PVs exist now? Which was manual, which was dynamic? + +--- + +### Task 7: Clean Up +1. Delete all pods first +2. Delete PVCs — check `kubectl get pv` to see what happened +3. The dynamic PV is gone (Delete reclaim policy). The manual PV shows `Released` (Retain policy). +4. Delete the remaining PV manually + +**Verify:** Which PV was auto-deleted and which was retained? Why? + +--- + +## Hints +- PVs are cluster-wide (not namespaced), PVCs are namespaced +- PV status: `Available` -> `Bound` -> `Released` +- If a PVC stays `Pending`, check for matching capacity and access modes +- `hostPath` data is lost if the Pod moves to a different node +- `storageClassName: ""` disables dynamic provisioning +- Reclaim policies: `Retain` (keep data) vs `Delete` (remove data) + +--- + +## Documentation +Create `day-55-persistent-volumes.md` with: +- Why containers need persistent storage +- What PVs and PVCs are and how they relate +- Static vs dynamic provisioning +- Access modes and reclaim policies + +--- + +## Submission +1. Add `day-55-persistent-volumes.md` to `2026/day-55/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Kubernetes Persistent Volumes and PVCs today. Proved container data is ephemeral, then fixed it with PVs. Also explored dynamic provisioning with StorageClasses." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-56/README.md b/2026/day-56/README.md new file mode 100644 index 0000000000..4265b5bc32 --- /dev/null +++ b/2026/day-56/README.md @@ -0,0 +1,135 @@ +# Day 56 – Kubernetes StatefulSets + +## Task +Deployments work great for stateless apps, but what about databases? You need stable pod names, ordered startup, and persistent storage per replica. Today you learn StatefulSets — the workload designed for stateful applications like MySQL, PostgreSQL, and Kafka. + +--- + +## Expected Output +- A StatefulSet with 3 replicas and stable pod names +- DNS resolution tested for individual pods +- Data persistence verified across pod deletion +- A markdown file: `day-56-statefulsets.md` + +--- + +## Challenge Tasks + +### Task 1: Understand the Problem +1. Create a Deployment with 3 replicas using nginx +2. Check the pod names — they are random (`app-xyz-abc`) +3. Delete a pod and notice the replacement gets a different random name + +This is fine for web servers but not for databases where you need stable identity. 
| Feature | Deployment | StatefulSet |
|---|---|---|
| Pod names | Random | Stable, ordered (`app-0`, `app-1`) |
| Startup order | All at once | Ordered: pod-0, then pod-1, then pod-2 |
| Storage | Shared PVC | Each pod gets its own PVC |
| Network identity | No stable hostname | Stable DNS per pod |

Delete the Deployment before moving on.

**Verify:** Why would random pod names be a problem for a database cluster?

---

### Task 2: Create a Headless Service
1. Write a Service manifest with `clusterIP: None` — this is a Headless Service
2. Set the selector to match the labels you will use on your StatefulSet pods
3. Apply it and confirm CLUSTER-IP shows `None`

A Headless Service creates individual DNS entries for each pod instead of load-balancing to one IP. StatefulSets require this.

**Verify:** What does the CLUSTER-IP column show?

---

### Task 3: Create a StatefulSet
1. Write a StatefulSet manifest with `serviceName` pointing to your Headless Service
2. Set replicas to 3, use the nginx image
3. Add a `volumeClaimTemplates` section requesting 100Mi of ReadWriteOnce storage
4. Apply and watch: `kubectl get pods -l <your-label> -w`

Observe ordered creation — `web-0` first, then `web-1` after `web-0` is Ready, then `web-2`.

Check the PVCs: `kubectl get pvc` — you should see `web-data-web-0`, `web-data-web-1`, `web-data-web-2` (names follow the pattern `<volumeClaimTemplate-name>-<pod-name>`). If you get stuck writing the manifest, a minimal sketch appears just before the Hints section.

**Verify:** What are the exact pod names and PVC names?

---

### Task 4: Stable Network Identity
Each StatefulSet pod gets a DNS name: `<pod-name>.<headless-service-name>.<namespace>.svc.cluster.local`

1. Run a temporary busybox pod and use `nslookup` to resolve `web-0.<headless-service-name>.default.svc.cluster.local`
2. Do the same for `web-1` and `web-2`
3. Confirm the IPs match `kubectl get pods -o wide`

**Verify:** Does the nslookup IP match the pod IP?

---

### Task 5: Stable Storage — Data Survives Pod Deletion
1. Write unique data to each pod: `kubectl exec web-0 -- sh -c "echo 'Data from web-0' > /usr/share/nginx/html/index.html"`
2. Delete `web-0`: `kubectl delete pod web-0`
3. Wait for it to come back, then check the data — it should still be "Data from web-0"

The new pod reconnected to the same PVC.

**Verify:** Is the data identical after pod recreation?

---

### Task 6: Ordered Scaling
1. Scale up to 5: `kubectl scale statefulset web --replicas=5` — pods create in order (web-3, then web-4)
2. Scale down to 3 — pods terminate in reverse order (web-4, then web-3)
3. Check `kubectl get pvc` — all five PVCs still exist. Kubernetes keeps them on scale-down so data is preserved if you scale back up.

**Verify:** After scaling down, how many PVCs exist?

---

### Task 7: Clean Up
1. Delete the StatefulSet and the Headless Service
2. Check `kubectl get pvc` — PVCs are still there (safety feature)
3. Delete PVCs manually

**Verify:** Were PVCs auto-deleted with the StatefulSet?
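If Tasks 2 and 3 were unclear, the following minimal sketch shows how the pieces fit together. The names `web` and `web-data` match the pod and PVC names referenced above, and the volume is mounted where Task 5 writes its data, but treat this as a starting point rather than the finished exercise:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                  # the Headless Service the StatefulSet points at
spec:
  clusterIP: None            # this is what makes it headless
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web           # must match the Headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: web-data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: web-data         # PVCs become web-data-web-0, web-data-web-1, ...
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Mi
```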
+ +--- + +## Hints +- `kubectl get sts` is the short name for StatefulSets +- `serviceName` must match an existing Headless Service +- Pod DNS: `...svc.cluster.local` +- PVC naming: `--` +- Pods create in order (0, 1, 2) and terminate in reverse (2, 1, 0) +- Scaling down does not delete PVCs — data is preserved +- Deleting a StatefulSet does not delete PVCs — clean up separately + +--- + +## Documentation +Create `day-56-statefulsets.md` with: +- What StatefulSets are and when to use them vs Deployments +- The comparison table +- How Headless Services, stable DNS, and volumeClaimTemplates work +- Screenshots of pods, PVCs, and DNS resolution + +--- + +## Submission +1. Add `day-56-statefulsets.md` to `2026/day-56/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Kubernetes StatefulSets today. Stable pod names, per-pod DNS, and persistent storage that survives deletion — now I understand why databases need StatefulSets." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-57/README.md b/2026/day-57/README.md new file mode 100644 index 0000000000..5337e1f1d4 --- /dev/null +++ b/2026/day-57/README.md @@ -0,0 +1,125 @@ +# Day 57 – Resource Requests, Limits, and Probes + +## Task +Your Pods are running, but Kubernetes has no idea how much CPU or memory they need — and no way to tell if they are actually healthy. Today you set resource requests and limits for smart scheduling, then add probes so Kubernetes can detect and recover from failures automatically. + +--- + +## Expected Output +- A Pod with CPU and memory requests and limits +- OOMKilled observed when exceeding memory limits +- Liveness, readiness, and startup probes tested +- A markdown file: `day-57-resources-probes.md` + +--- + +## Challenge Tasks + +### Task 1: Resource Requests and Limits +1. Write a Pod manifest with `resources.requests` (cpu: 100m, memory: 128Mi) and `resources.limits` (cpu: 250m, memory: 256Mi) +2. Apply and inspect with `kubectl describe pod` — look for the Requests, Limits, and QoS Class sections +3. Since requests and limits differ, the QoS class is `Burstable`. If equal, it would be `Guaranteed`. If missing, `BestEffort`. + +CPU is in millicores: `100m` = 0.1 CPU. Memory is in mebibytes: `128Mi`. + +**Requests** = guaranteed minimum (scheduler uses this for placement). **Limits** = maximum allowed (kubelet enforces at runtime). + +**Verify:** What QoS class does your Pod have? + +--- + +### Task 2: OOMKilled — Exceeding Memory Limits +1. Write a Pod manifest using the `polinux/stress` image with a memory limit of `100Mi` +2. Set the stress command to allocate 200M of memory: `command: ["stress"] args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]` +3. Apply and watch — the container gets killed immediately + +CPU is throttled when over limit. Memory is killed — no mercy. + +Check `kubectl describe pod` for `Reason: OOMKilled` and `Exit Code: 137` (128 + SIGKILL). + +**Verify:** What exit code does an OOMKilled container have? + +--- + +### Task 3: Pending Pod — Requesting Too Much +1. Write a Pod manifest requesting `cpu: 100` and `memory: 128Gi` +2. Apply and check — STATUS stays `Pending` forever +3. Run `kubectl describe pod` and read the Events — the scheduler says exactly why: insufficient resources + +**Verify:** What event message does the scheduler produce? + +--- + +### Task 4: Liveness Probe +A liveness probe detects stuck containers. If it fails, Kubernetes restarts the container. 
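The probe stanza the steps below ask for has a specific shape. Here is a minimal sketch using the same file path and timings, so you can focus on observing the restarts rather than fighting YAML syntax:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo        # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh", "-c", "touch /tmp/healthy && sleep 30 && rm -f /tmp/healthy && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # succeeds while the file exists
      periodSeconds: 5
      failureThreshold: 3                  # 3 failures in a row trigger a restart
```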
+ +1. Write a Pod manifest with a busybox container that creates `/tmp/healthy` on startup, then deletes it after 30 seconds +2. Add a liveness probe using `exec` that runs `cat /tmp/healthy`, with `periodSeconds: 5` and `failureThreshold: 3` +3. After the file is deleted, 3 consecutive failures trigger a restart. Watch with `kubectl get pod -w` + +**Verify:** How many times has the container restarted? + +--- + +### Task 5: Readiness Probe +A readiness probe controls traffic. Failure removes the Pod from Service endpoints but does NOT restart it. + +1. Write a Pod manifest with nginx and a `readinessProbe` using `httpGet` on path `/` port `80` +2. Expose it as a Service: `kubectl expose pod --port=80 --name=readiness-svc` +3. Check `kubectl get endpoints readiness-svc` — the Pod IP is listed +4. Break the probe: `kubectl exec -- rm /usr/share/nginx/html/index.html` +5. Wait 15 seconds — Pod shows `0/1` READY, endpoints are empty, but the container is NOT restarted + +**Verify:** When readiness failed, was the container restarted? + +--- + +### Task 6: Startup Probe +A startup probe gives slow-starting containers extra time. While it runs, liveness and readiness probes are disabled. + +1. Write a Pod manifest where the container takes 20 seconds to start (e.g., `sleep 20 && touch /tmp/started`) +2. Add a `startupProbe` checking for `/tmp/started` with `periodSeconds: 5` and `failureThreshold: 12` (60 second budget) +3. Add a `livenessProbe` that checks the same file — it only kicks in after startup succeeds + +**Verify:** What would happen if `failureThreshold` were 2 instead of 12? + +--- + +### Task 7: Clean Up +Delete all pods and services you created. + +--- + +## Hints +- CPU is compressible (throttled); memory is incompressible (OOMKilled) +- CPU: `1` = 1 core = `1000m`. Memory: `Mi` (mebibytes), `Gi` (gibibytes) +- QoS: Guaranteed (requests == limits), Burstable (requests < limits), BestEffort (none set) +- Probe types: `httpGet`, `exec`, `tcpSocket` +- Liveness failure = restart. Readiness failure = remove from endpoints. Startup failure = kill. +- `initialDelaySeconds`, `periodSeconds`, `failureThreshold` control probe timing +- Exit code 137 = OOMKilled (128 + SIGKILL) + +--- + +## Documentation +Create `day-57-resources-probes.md` with: +- Requests vs limits (scheduling vs enforcement) +- What happens when CPU or memory limits are exceeded +- Liveness vs readiness vs startup probes +- Screenshots of OOMKilled, Pending, and probe events + +--- + +## Submission +1. Add `day-57-resources-probes.md` to `2026/day-57/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Set resource requests and limits in Kubernetes today, watched a pod get OOMKilled, and added liveness, readiness, and startup probes for self-healing." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-58/README.md b/2026/day-58/README.md new file mode 100644 index 0000000000..60f02e560f --- /dev/null +++ b/2026/day-58/README.md @@ -0,0 +1,119 @@ +# Day 58 – Metrics Server and Horizontal Pod Autoscaler (HPA) + +## Task +Yesterday you set resource requests and limits. Today you put that to work. Install the Metrics Server so Kubernetes can see actual resource usage, then set up a Horizontal Pod Autoscaler that scales your app up under load and back down when things calm down. 
+ +--- + +## Expected Output +- Metrics Server installed and `kubectl top` returning data +- An HPA that auto-scales pods under load +- A markdown file: `day-58-metrics-hpa.md` + +--- + +## Challenge Tasks + +### Task 1: Install the Metrics Server +1. Check if it is already running: `kubectl get pods -n kube-system | grep metrics-server` +2. If not, install it: + - Minikube: `minikube addons enable metrics-server` + - Kind/kubeadm: apply the official manifest from the metrics-server GitHub releases +3. On local clusters, you may need the `--kubelet-insecure-tls` flag (never in production) +4. Wait 60 seconds, then verify: `kubectl top nodes` and `kubectl top pods -A` + +**Verify:** What is the current CPU and memory usage of your node? + +--- + +### Task 2: Explore kubectl top +1. Run `kubectl top nodes`, `kubectl top pods -A`, `kubectl top pods -A --sort-by=cpu` +2. `kubectl top` shows real-time usage, not requests or limits — these are different things +3. Data comes from the Metrics Server, which polls kubelets every 15 seconds + +**Verify:** Which pod is using the most CPU right now? + +--- + +### Task 3: Create a Deployment with CPU Requests +1. Write a Deployment manifest using the `registry.k8s.io/hpa-example` image (a CPU-intensive PHP-Apache server) +2. Set `resources.requests.cpu: 200m` — HPA needs this to calculate utilization percentages +3. Expose it as a Service: `kubectl expose deployment php-apache --port=80` + +Without CPU requests, HPA cannot work — this is the most common HPA setup mistake. + +**Verify:** What is the current CPU usage of the Pod? + +--- + +### Task 4: Create an HPA (Imperative) +1. Run: `kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10` +2. Check: `kubectl get hpa` and `kubectl describe hpa php-apache` +3. TARGETS may show `` initially — wait 30 seconds for metrics to arrive + +This scales up when average CPU exceeds 50% of requests, and down when it drops below. + +**Verify:** What does the TARGETS column show? + +--- + +### Task 5: Generate Load and Watch Autoscaling +1. Start a load generator: `kubectl run load-generator --image=busybox:1.36 --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"` +2. Watch HPA: `kubectl get hpa php-apache --watch` +3. Over 1-3 minutes, CPU climbs above 50%, replicas increase, CPU stabilizes +4. Stop the load: `kubectl delete pod load-generator` +5. Scale-down is slow (5-minute stabilization window) — you do not need to wait + +**Verify:** How many replicas did HPA scale to under load? + +--- + +### Task 6: Create an HPA from YAML (Declarative) +1. Delete the imperative HPA: `kubectl delete hpa php-apache` +2. Write an HPA manifest using `autoscaling/v2` API with CPU target at 50% utilization +3. Add a `behavior` section to control scale-up speed (no stabilization) and scale-down speed (300 second window) +4. Apply and verify with `kubectl describe hpa` + +`autoscaling/v2` supports multiple metrics and fine-grained scaling behavior that the imperative command cannot configure. + +**Verify:** What does the `behavior` section control? + +--- + +### Task 7: Clean Up +Delete the HPA, Service, Deployment, and load-generator pod. Leave the Metrics Server installed. + +--- + +## Hints +- HPA requires `resources.requests` — without them TARGETS shows `` +- `kubectl top` = actual usage. `kubectl describe pod` = configured requests/limits +- HPA checks every 15 seconds. Scale-up is fast, scale-down has a 5-minute stabilization window +- `autoscaling/v1` = CPU only. 
`autoscaling/v2` = CPU + memory + custom metrics +- Formula: `desiredReplicas = ceil(currentReplicas * (currentUsage / targetUsage))` +- HPA works with Deployments, StatefulSets, and ReplicaSets + +--- + +## Documentation +Create `day-58-metrics-hpa.md` with: +- What the Metrics Server is and why HPA needs it +- How HPA calculates desired replicas +- The difference between `autoscaling/v1` and `v2` +- Screenshots of `kubectl top`, HPA events, and pod scaling + +--- + +## Submission +1. Add `day-58-metrics-hpa.md` to `2026/day-58/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Set up Kubernetes HPA today. Watched my app auto-scale from 1 to multiple replicas under load, then scale back down. This is how production handles variable traffic." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-59/README.md b/2026/day-59/README.md new file mode 100644 index 0000000000..0aac940c86 --- /dev/null +++ b/2026/day-59/README.md @@ -0,0 +1,129 @@ +# Day 59 – Helm — Kubernetes Package Manager + +## Task +Over the past eight days you have written Deployments, Services, ConfigMaps, Secrets, PVCs, and more — all as individual YAML files. For a real application you might have dozens of these. Helm is the package manager for Kubernetes, like apt for Ubuntu. Today you install charts, customize them, and create your own. + +--- + +## Expected Output +- Helm installed and a chart deployed from Bitnami +- A release customized, upgraded, and rolled back +- A custom chart created and installed +- A markdown file: `day-59-helm.md` + +--- + +## Challenge Tasks + +### Task 1: Install Helm +1. Install Helm (brew, curl script, or chocolatey depending on your OS) +2. Verify with `helm version` and `helm env` + +Three core concepts: +- **Chart** — a package of Kubernetes manifest templates +- **Release** — a specific installation of a chart in your cluster +- **Repository** — a collection of charts (like a package repo) + +**Verify:** What version of Helm is installed? + +--- + +### Task 2: Add a Repository and Search +1. Add the Bitnami repository: `helm repo add bitnami https://charts.bitnami.com/bitnami` +2. Update: `helm repo update` +3. Search: `helm search repo nginx` and `helm search repo bitnami` + +**Verify:** How many charts does Bitnami have? + +--- + +### Task 3: Install a Chart +1. Deploy nginx: `helm install my-nginx bitnami/nginx` +2. Check what was created: `kubectl get all` +3. Inspect the release: `helm list`, `helm status my-nginx`, `helm get manifest my-nginx` + +One command replaced writing a Deployment, Service, and ConfigMap by hand. + +**Verify:** How many Pods are running? What Service type was created? + +--- + +### Task 4: Customize with Values +1. View defaults: `helm show values bitnami/nginx` +2. Install a custom release with `--set replicaCount=3 --set service.type=NodePort` +3. Create a `custom-values.yaml` file with replicaCount, service type, and resource limits +4. Install another release using `-f custom-values.yaml` +5. Check overrides: `helm get values ` + +**Verify:** Does the values file release have the correct replicas and service type? + +--- + +### Task 5: Upgrade and Rollback +1. Upgrade: `helm upgrade my-nginx bitnami/nginx --set replicaCount=5` +2. Check history: `helm history my-nginx` +3. Rollback: `helm rollback my-nginx 1` +4. 
Check history again — rollback creates a new revision (3), not overwriting revision 2 + +Same concept as Deployment rollouts from Day 52, but at the full stack level. + +**Verify:** How many revisions after the rollback? + +--- + +### Task 6: Create Your Own Chart +1. Scaffold: `helm create my-app` +2. Explore the directory: `Chart.yaml`, `values.yaml`, `templates/deployment.yaml` +3. Look at the Go template syntax in templates: `{{ .Values.replicaCount }}`, `{{ .Chart.Name }}` +4. Edit `values.yaml` — set replicaCount to 3 and image to nginx:1.25 +5. Validate: `helm lint my-app` +6. Preview: `helm template my-release ./my-app` +7. Install: `helm install my-release ./my-app` +8. Upgrade: `helm upgrade my-release ./my-app --set replicaCount=5` + +**Verify:** After installing, 3 replicas? After upgrading, 5? + +--- + +### Task 7: Clean Up +1. Uninstall all releases: `helm uninstall ` for each +2. Remove chart directory and values file +3. Use `--keep-history` if you want to retain release history for auditing + +**Verify:** Does `helm list` show zero releases? + +--- + +## Hints +- `helm show values ` — see what you can customize +- `--set key=value` for single overrides, `-f values.yaml` for files +- Nested values use dots: `--set service.type=NodePort` +- `helm get values ` shows overrides, `--all` for everything +- `helm template` renders without installing — great for debugging +- `helm lint` validates chart structure before installing +- Templates: `{{ .Values.key }}`, `{{ .Chart.Name }}`, `{{ .Release.Name }}` + +--- + +## Documentation +Create `day-59-helm.md` with: +- What Helm is and the three core concepts +- How to install, customize, upgrade, and rollback +- The structure of a Helm chart and how Go templating works +- Your `custom-values.yaml` with explanations + +--- + +## Submission +1. Add `day-59-helm.md` and `custom-values.yaml` to `2026/day-59/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Learned Helm today — deployed charts, customized with values, performed rollbacks, and created my own chart from scratch. One command replaces dozens of YAML files." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-60/README.md b/2026/day-60/README.md new file mode 100644 index 0000000000..8d631d92c0 --- /dev/null +++ b/2026/day-60/README.md @@ -0,0 +1,128 @@ +# Day 60 – Capstone: Deploy WordPress + MySQL on Kubernetes + +## Task +Ten days of Kubernetes — clusters, Pods, Deployments, Services, ConfigMaps, Secrets, storage, StatefulSets, resource management, autoscaling, and Helm. Today you put it all together. Deploy a real WordPress + MySQL application using every major concept you have learned. + +--- + +## Expected Output +- A complete WordPress + MySQL stack in a `capstone` namespace +- Self-healing and data persistence verified +- A markdown file: `day-60-capstone.md` +- Screenshot of the running WordPress site and `kubectl get all -n capstone` + +--- + +## Challenge Tasks + +### Task 1: Create the Namespace (Day 52) +1. Create a `capstone` namespace +2. Set it as your default: `kubectl config set-context --current --namespace=capstone` + +--- + +### Task 2: Deploy MySQL (Days 54-56) +1. Create a Secret with `MYSQL_ROOT_PASSWORD`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD` using `stringData` +2. Create a Headless Service (`clusterIP: None`) for MySQL on port 3306 +3. 
Create a StatefulSet for MySQL with: + - Image: `mysql:8.0` + - `envFrom` referencing the Secret + - Resource requests (cpu: 250m, memory: 512Mi) and limits (cpu: 500m, memory: 1Gi) + - A `volumeClaimTemplates` section requesting 1Gi of storage, mounted at `/var/lib/mysql` +4. Verify MySQL works: `kubectl exec -it mysql-0 -- mysql -u -p -e "SHOW DATABASES;"` + +**Verify:** Can you see the `wordpress` database? + +--- + +### Task 3: Deploy WordPress (Days 52, 54, 57) +1. Create a ConfigMap with `WORDPRESS_DB_HOST` set to `mysql-0.mysql.capstone.svc.cluster.local:3306` and `WORDPRESS_DB_NAME` +2. Create a Deployment with 2 replicas using `wordpress:latest` that: + - Uses `envFrom` for the ConfigMap + - Uses `secretKeyRef` for `WORDPRESS_DB_USER` and `WORDPRESS_DB_PASSWORD` from the MySQL Secret + - Has resource requests and limits + - Has a liveness probe and readiness probe on `/wp-login.php` port 80 +3. Wait until both pods show `1/1 Running` + +**Verify:** Are both WordPress pods running and ready? + +--- + +### Task 4: Expose WordPress (Day 53) +1. Create a NodePort Service on port 30080 targeting the WordPress pods +2. Access WordPress in your browser: + - Minikube: `minikube service wordpress -n capstone` + - Kind: `kubectl port-forward svc/wordpress 8080:80 -n capstone` +3. Complete the setup wizard and create a blog post + +**Verify:** Can you see the WordPress setup page? + +--- + +### Task 5: Test Self-Healing and Persistence +1. Delete a WordPress pod — watch the Deployment recreate it within seconds. Refresh the site. +2. Delete the MySQL pod: `kubectl delete pod mysql-0 -n capstone` — watch the StatefulSet recreate it +3. After MySQL recovers, refresh WordPress — your blog post should still be there + +**Verify:** After deleting both pods, is your blog post still there? + +--- + +### Task 6: Set Up HPA (Day 58) +1. Write an HPA manifest targeting the WordPress Deployment with CPU at 50%, min 2, max 10 replicas +2. Apply and check: `kubectl get hpa -n capstone` +3. Run `kubectl get all -n capstone` for the complete picture + +**Verify:** Does the HPA show correct min/max and target? + +--- + +### Task 7: (Bonus) Compare with Helm (Day 59) +1. Install WordPress using `helm install wp-helm bitnami/wordpress` in a separate namespace +2. Compare: how many resources did each approach create? Which gives more control? +3. Clean up the Helm deployment + +--- + +### Task 8: Clean Up and Reflect +1. Take a final look: `kubectl get all -n capstone` +2. Count the concepts you used: Namespace, Secret, ConfigMap, PVC, StatefulSet, Headless Service, Deployment, NodePort Service, Resource Limits, Probes, HPA, Helm — twelve concepts in one deployment +3. Delete the namespace: `kubectl delete namespace capstone` +4. Reset default: `kubectl config set-context --current --namespace=default` + +**Verify:** Did deleting the namespace remove everything? 
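Two pieces of this capstone have no worked example earlier in the series: the `stringData` Secret from Task 2 and the `autoscaling/v2` HPA from Task 6. A minimal sketch of each; the resource names are assumptions and the credential values are obvious placeholders to change:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: capstone
type: Opaque
stringData:                  # plain text here; Kubernetes stores it base64-encoded
  MYSQL_ROOT_PASSWORD: change-me-root
  MYSQL_DATABASE: wordpress
  MYSQL_USER: wordpress
  MYSQL_PASSWORD: change-me-user
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
  namespace: capstone
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress          # must match your WordPress Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```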
+ +--- + +## Hints +- If MySQL takes long to start, check: `kubectl logs mysql-0 -n capstone` +- `WORDPRESS_DB_HOST` must match the StatefulSet DNS pattern: `...svc.cluster.local` +- WordPress probes may fail initially — `initialDelaySeconds` gives it time to boot +- If PVC stays Pending, check `kubectl get storageclass` +- `nodePort` must be in range 30000-32767 +- The Bitnami chart uses MariaDB instead of MySQL — compatible but not identical + +--- + +## Documentation +Create `day-60-capstone.md` with: +- Architecture of your deployment (which resources connect to which) +- Results of self-healing and persistence tests +- A table mapping each concept to the day you learned it +- Reflection: what was hardest, what clicked, what you would add for production + +--- + +## Submission +1. Add `day-60-capstone.md` to `2026/day-60/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Completed the Kubernetes capstone — deployed WordPress + MySQL using twelve K8s concepts: Namespaces, Deployments, StatefulSets, Services, ConfigMaps, Secrets, PVCs, resource limits, probes, and HPA. Ten days of learning, one real application." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-61/README.md b/2026/day-61/README.md new file mode 100644 index 0000000000..152b4c0b33 --- /dev/null +++ b/2026/day-61/README.md @@ -0,0 +1,177 @@ +# Day 61 -- Introduction to Terraform and Your First AWS Infrastructure + +## Task +You have been deploying containers, writing CI/CD pipelines, and orchestrating workloads on Kubernetes. But who creates the servers, networks, and clusters underneath? Today you start your Infrastructure as Code journey with Terraform -- the tool that lets you define, provision, and manage cloud infrastructure by writing code. + +By the end of today, you will have created real AWS resources using nothing but a `.tf` file and a terminal. + +--- + +## Expected Output +- Terraform installed and working on your machine +- AWS CLI configured with valid credentials +- An S3 bucket and EC2 instance created and destroyed via Terraform +- A markdown file: `day-61-terraform-intro.md` + +--- + +## Challenge Tasks + +### Task 1: Understand Infrastructure as Code +Before touching the terminal, research and write short notes on: + +1. What is Infrastructure as Code (IaC)? Why does it matter in DevOps? +2. What problems does IaC solve compared to manually creating resources in the AWS console? +3. How is Terraform different from AWS CloudFormation, Ansible, and Pulumi? +4. What does it mean that Terraform is "declarative" and "cloud-agnostic"? + +Write this in your own words -- not copy-pasted definitions. + +--- + +### Task 2: Install Terraform and Configure AWS +1. Install Terraform: +```bash +# macOS +brew tap hashicorp/tap +brew install hashicorp/tap/terraform + +# Linux (amd64) +wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg +echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list +sudo apt update && sudo apt install terraform + +# Windows +choco install terraform +``` + +2. Verify: +```bash +terraform -version +``` + +3. Install and configure the AWS CLI: +```bash +aws configure +# Enter your Access Key ID, Secret Access Key, default region (e.g., ap-south-1), output format (json) +``` + +4. 
Verify AWS access: +```bash +aws sts get-caller-identity +``` + +You should see your AWS account ID and ARN. + +--- + +### Task 3: Your First Terraform Config -- Create an S3 Bucket +Create a project directory and write your first Terraform config: + +```bash +mkdir terraform-basics && cd terraform-basics +``` + +Create a file called `main.tf` with: +1. A `terraform` block with `required_providers` specifying the `aws` provider +2. A `provider "aws"` block with your region +3. A `resource "aws_s3_bucket"` that creates a bucket with a globally unique name + +Run the Terraform lifecycle: +```bash +terraform init # Download the AWS provider +terraform plan # Preview what will be created +terraform apply # Create the bucket (type 'yes' to confirm) +``` + +Go to the AWS S3 console and verify your bucket exists. + +**Document:** What did `terraform init` download? What does the `.terraform/` directory contain? + +--- + +### Task 4: Add an EC2 Instance +In the same `main.tf`, add: +1. A `resource "aws_instance"` using AMI `ami-0f5ee92e2d63afc18` (Amazon Linux 2 in ap-south-1 -- use the correct AMI for your region) +2. Set instance type to `t2.micro` +3. Add a tag: `Name = "TerraWeek-Day1"` + +Run: +```bash +terraform plan # You should see 1 resource to add (bucket already exists) +terraform apply +``` + +Go to the AWS EC2 console and verify your instance is running with the correct name tag. + +**Document:** How does Terraform know the S3 bucket already exists and only the EC2 instance needs to be created? + +--- + +### Task 5: Understand the State File +Terraform tracks everything it creates in a state file. Time to inspect it. + +1. Open `terraform.tfstate` in your editor -- read the JSON structure +2. Run these commands and document what each returns: +```bash +terraform show # Human-readable view of current state +terraform state list # List all resources Terraform manages +terraform state show aws_s3_bucket. # Detailed view of a specific resource +terraform state show aws_instance. +``` + +3. Answer these questions in your notes: + - What information does the state file store about each resource? + - Why should you never manually edit the state file? + - Why should the state file not be committed to Git? + +--- + +### Task 6: Modify, Plan, and Destroy +1. Change the EC2 instance tag from `"TerraWeek-Day1"` to `"TerraWeek-Modified"` in your `main.tf` +2. Run `terraform plan` and read the output carefully: + - What do the `~`, `+`, and `-` symbols mean? + - Is this an in-place update or a destroy-and-recreate? +3. Apply the change +4. Verify the tag changed in the AWS console +5. Finally, destroy everything: +```bash +terraform destroy +``` +6. 
Verify in the AWS console -- both the S3 bucket and EC2 instance should be gone + +--- + +## Hints +- S3 bucket names must be globally unique -- use something like `terraweek--2026` +- AMI IDs are region-specific -- search "Amazon Linux 2 AMI" in your region's EC2 launch wizard +- `terraform fmt` auto-formats your `.tf` files -- run it before committing +- `terraform validate` checks for syntax errors without connecting to AWS +- The `.terraform/` directory contains downloaded provider plugins +- Add `*.tfstate`, `*.tfstate.backup`, and `.terraform/` to your `.gitignore` + +--- + +## Documentation +Create `day-61-terraform-intro.md` with: +- IaC explanation in your own words (3-4 sentences) +- Screenshot of `terraform apply` creating your S3 bucket and EC2 instance +- Screenshot of the resources in the AWS console +- What each Terraform command does (init, plan, apply, destroy, show, state list) +- What the state file contains and why it matters + +--- + +## Submission +1. Add `day-61-terraform-intro.md` to `2026/day-61/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Started the TerraWeek Challenge -- installed Terraform, created my first S3 bucket and EC2 instance using code, and destroyed it all with one command. Infrastructure as Code just clicked." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-62/README.md b/2026/day-62/README.md new file mode 100644 index 0000000000..ec28c27fdc --- /dev/null +++ b/2026/day-62/README.md @@ -0,0 +1,152 @@ +# Day 62 -- Providers, Resources and Dependencies + +## Task +Yesterday you created standalone resources. But real infrastructure is connected -- a server lives inside a subnet, a subnet lives inside a VPC, a security group controls what traffic gets in. Today you build a complete networking stack on AWS and learn how Terraform figures out what to create first. + +Understanding dependencies is what separates a Terraform beginner from someone who can build production infrastructure. + +--- + +## Expected Output +- A VPC with subnet, internet gateway, route table, security group, and an EC2 instance -- all created via Terraform +- A dependency graph visualized with `terraform graph` +- A markdown file: `day-62-providers-resources.md` + +--- + +## Challenge Tasks + +### Task 1: Explore the AWS Provider +1. Create a new project directory: `terraform-aws-infra` +2. Write a `providers.tf` file: + - Define the `terraform` block with `required_providers` pinning the AWS provider to version `~> 5.0` + - Define the `provider "aws"` block with your region +3. Run `terraform init` and check the output -- what version was installed? +4. Read the provider lock file `.terraform.lock.hcl` -- what does it do? + +**Document:** What does `~> 5.0` mean? How is it different from `>= 5.0` and `= 5.0.0`? + +--- + +### Task 2: Build a VPC from Scratch +Create a `main.tf` and define these resources one by one: + +1. `aws_vpc` -- CIDR block `10.0.0.0/16`, tag it `"TerraWeek-VPC"` +2. `aws_subnet` -- CIDR block `10.0.1.0/24`, reference the VPC ID from step 1, enable public IP on launch, tag it `"TerraWeek-Public-Subnet"` +3. `aws_internet_gateway` -- attach it to the VPC +4. `aws_route_table` -- create it in the VPC, add a route for `0.0.0.0/0` pointing to the internet gateway +5. `aws_route_table_association` -- associate the route table with the subnet + +Run `terraform plan` -- you should see 5 resources to create. 
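+If your plan shows a different count, compare your file against this minimal sketch of the five resources -- resource names such as `main`, `public`, and `gw` are only placeholders, and tags are trimmed for brevity:
+```hcl
+resource "aws_vpc" "main" {
+  cidr_block = "10.0.0.0/16"
+  tags       = { Name = "TerraWeek-VPC" }
+}
+
+resource "aws_subnet" "public" {
+  vpc_id                  = aws_vpc.main.id
+  cidr_block              = "10.0.1.0/24"
+  map_public_ip_on_launch = true
+  tags                    = { Name = "TerraWeek-Public-Subnet" }
+}
+
+resource "aws_internet_gateway" "gw" {
+  vpc_id = aws_vpc.main.id
+}
+
+resource "aws_route_table" "public" {
+  vpc_id = aws_vpc.main.id
+
+  route {
+    cidr_block = "0.0.0.0/0"
+    gateway_id = aws_internet_gateway.gw.id
+  }
+}
+
+resource "aws_route_table_association" "public" {
+  subnet_id      = aws_subnet.public.id
+  route_table_id = aws_route_table.public.id
+}
+```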
+ +**Verify:** Apply and check the AWS VPC console. Can you see all five resources connected? + +--- + +### Task 3: Understand Implicit Dependencies +Look at your `main.tf` carefully: + +1. The subnet references `aws_vpc.main.id` -- this is an implicit dependency +2. The internet gateway references the VPC ID -- another implicit dependency +3. The route table association references both the route table and the subnet + +Answer these questions: +- How does Terraform know to create the VPC before the subnet? +- What would happen if you tried to create the subnet before the VPC existed? +- Find all implicit dependencies in your config and list them + +--- + +### Task 4: Add a Security Group and EC2 Instance +Add to your config: + +1. `aws_security_group` in the VPC: + - Ingress rule: allow SSH (port 22) from `0.0.0.0/0` + - Ingress rule: allow HTTP (port 80) from `0.0.0.0/0` + - Egress rule: allow all outbound traffic + - Tag: `"TerraWeek-SG"` + +2. `aws_instance` in the subnet: + - Use Amazon Linux 2 AMI for your region + - Instance type: `t2.micro` + - Associate the security group + - Set `associate_public_ip_address = true` + - Tag: `"TerraWeek-Server"` + +Apply and verify -- your EC2 instance should have a public IP and be reachable. + +--- + +### Task 5: Explicit Dependencies with depends_on +Sometimes Terraform cannot detect a dependency automatically. + +1. Add a second `aws_s3_bucket` resource for application logs +2. Add `depends_on = [aws_instance.main]` to the S3 bucket -- even though there is no direct reference, you want the bucket created only after the instance +3. Run `terraform plan` and observe the order + +Now visualize the entire dependency tree: +```bash +terraform graph | dot -Tpng > graph.png +``` +If you don't have `dot` (Graphviz) installed, use: +```bash +terraform graph +``` +and paste the output into an online Graphviz viewer. + +**Document:** When would you use `depends_on` in real projects? Give two examples. + +--- + +### Task 6: Lifecycle Rules and Destroy +1. Add a `lifecycle` block to your EC2 instance: +```hcl +lifecycle { + create_before_destroy = true +} +``` +2. Change the AMI ID to a different one and run `terraform plan` -- observe that Terraform plans to create the new instance before destroying the old one + +3. Destroy everything: +```bash +terraform destroy +``` +4. Watch the destroy order -- Terraform destroys in reverse dependency order. Verify in the AWS console that everything is cleaned up. + +**Document:** What are the three lifecycle arguments (`create_before_destroy`, `prevent_destroy`, `ignore_changes`) and when would you use each? + +--- + +## Hints +- `aws_vpc.main.id` syntax: `..` +- Use `terraform fmt` to keep your HCL clean +- CIDR `10.0.0.0/16` gives you 65,536 IPs, `10.0.1.0/24` gives you 256 +- If you cannot SSH into the instance, check: security group rules, public IP, route table, internet gateway +- `terraform graph` outputs DOT format -- paste it into webgraphviz.com if you don't have Graphviz +- Always destroy resources when done to avoid AWS charges + +--- + +## Documentation +Create `day-62-providers-resources.md` with: +- Your full `main.tf` with comments explaining each resource +- Screenshot of `terraform apply` output +- Screenshot of the VPC and its resources in the AWS console +- The dependency graph (image or text) +- Explanation of implicit vs explicit dependencies in your own words + +--- + +## Submission +1. Add `day-62-providers-resources.md` to `2026/day-62/` +2. 
Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Built a complete AWS networking stack with Terraform today -- VPC, subnets, internet gateway, route tables, security groups, and an EC2 instance. All connected through dependency graphs. Terraform decides the order, you define the desired state." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-63/README.md b/2026/day-63/README.md new file mode 100644 index 0000000000..9749d624c7 --- /dev/null +++ b/2026/day-63/README.md @@ -0,0 +1,219 @@ +# Day 63 -- Variables, Outputs, Data Sources and Expressions + +## Task +Your Day 62 config works, but it is full of hardcoded values -- region, CIDR blocks, AMI IDs, instance types, tags. Change the region and everything breaks. Today you make your Terraform configs dynamic, reusable, and environment-aware. + +This is the difference between a config that works once and a config you can use across projects. + +--- + +## Expected Output +- A fully parameterized Terraform config with no hardcoded values +- Separate `.tfvars` files for different environments +- Outputs printed after every apply +- A markdown file: `day-63-variables-outputs.md` + +--- + +## Challenge Tasks + +### Task 1: Extract Variables +Take your Day 62 infrastructure config and refactor it: + +1. Create a `variables.tf` file with input variables for: + - `region` (string, default: your preferred region) + - `vpc_cidr` (string, default: `"10.0.0.0/16"`) + - `subnet_cidr` (string, default: `"10.0.1.0/24"`) + - `instance_type` (string, default: `"t2.micro"`) + - `project_name` (string, no default -- force the user to provide it) + - `environment` (string, default: `"dev"`) + - `allowed_ports` (list of numbers, default: `[22, 80, 443]`) + - `extra_tags` (map of strings, default: `{}`) + +2. Replace every hardcoded value in `main.tf` with `var.` references +3. Run `terraform plan` -- it should prompt you for `project_name` since it has no default + +**Document:** What are the five variable types in Terraform? (`string`, `number`, `bool`, `list`, `map`) + +--- + +### Task 2: Variable Files and Precedence +1. Create `terraform.tfvars`: +```hcl +project_name = "terraweek" +environment = "dev" +instance_type = "t2.micro" +``` + +2. Create `prod.tfvars`: +```hcl +project_name = "terraweek" +environment = "prod" +instance_type = "t3.small" +vpc_cidr = "10.1.0.0/16" +subnet_cidr = "10.1.1.0/24" +``` + +3. Apply with the default file: +```bash +terraform plan # Uses terraform.tfvars automatically +``` + +4. Apply with the prod file: +```bash +terraform plan -var-file="prod.tfvars" # Uses prod.tfvars +``` + +5. Override with CLI: +```bash +terraform plan -var="instance_type=t2.nano" # CLI overrides everything +``` + +6. Set an environment variable: +```bash +export TF_VAR_environment="staging" +terraform plan # env var overrides default but not tfvars +``` + +**Document:** Write the variable precedence order from lowest to highest priority. + +--- + +### Task 3: Add Outputs +Create an `outputs.tf` file with outputs for: + +1. `vpc_id` -- the VPC ID +2. `subnet_id` -- the public subnet ID +3. `instance_id` -- the EC2 instance ID +4. `instance_public_ip` -- the public IP of the EC2 instance +5. `instance_public_dns` -- the public DNS name +6. 
`security_group_id` -- the security group ID + +Apply your config and verify the outputs are printed at the end: +```bash +terraform apply + +# After apply, you can also run: +terraform output # Show all outputs +terraform output instance_public_ip # Show a specific output +terraform output -json # JSON format for scripting +``` + +**Verify:** Does `terraform output instance_public_ip` return the correct IP? + +--- + +### Task 4: Use Data Sources +Stop hardcoding the AMI ID. Use a data source to fetch it dynamically. + +1. Add a `data "aws_ami"` block that: + - Filters for Amazon Linux 2 images + - Filters for `hvm` virtualization and `gp2` root device + - Uses `owners = ["amazon"]` + - Sets `most_recent = true` + +2. Replace the hardcoded AMI in your `aws_instance` with `data.aws_ami.amazon_linux.id` + +3. Add a `data "aws_availability_zones"` block to fetch available AZs in your region + +4. Use the first AZ in your subnet: `data.aws_availability_zones.available.names[0]` + +Apply and verify -- your config now works in any region without changing the AMI. + +**Document:** What is the difference between a `resource` and a `data` source? + +--- + +### Task 5: Use Locals for Dynamic Values +1. Add a `locals` block: +```hcl +locals { + name_prefix = "${var.project_name}-${var.environment}" + common_tags = { + Project = var.project_name + Environment = var.environment + ManagedBy = "Terraform" + } +} +``` + +2. Replace all Name tags with `local.name_prefix`: + - VPC: `"${local.name_prefix}-vpc"` + - Subnet: `"${local.name_prefix}-subnet"` + - Instance: `"${local.name_prefix}-server"` + +3. Merge common tags with resource-specific tags: +```hcl +tags = merge(local.common_tags, { + Name = "${local.name_prefix}-server" +}) +``` + +Apply and check the tags in the AWS console -- every resource should have consistent tagging. + +--- + +### Task 6: Built-in Functions and Conditional Expressions +Practice these in `terraform console`: +```bash +terraform console +``` + +1. **String functions:** + - `upper("terraweek")` -> `"TERRAWEEK"` + - `join("-", ["terra", "week", "2026"])` -> `"terra-week-2026"` + - `format("arn:aws:s3:::%s", "my-bucket")` + +2. **Collection functions:** + - `length(["a", "b", "c"])` -> `3` + - `lookup({dev = "t2.micro", prod = "t3.small"}, "dev")` -> `"t2.micro"` + - `toset(["a", "b", "a"])` -> removes duplicates + +3. **Networking function:** + - `cidrsubnet("10.0.0.0/16", 8, 1)` -> `"10.0.1.0/24"` + +4. **Conditional expression** -- add this to your config: +```hcl +instance_type = var.environment == "prod" ? "t3.small" : "t2.micro" +``` + +Apply with `environment = "prod"` and verify the instance type changes. + +**Document:** Pick five functions you find most useful and explain what each does. + +--- + +## Hints +- `terraform.tfvars` is loaded automatically. 
Any other `.tfvars` file needs `-var-file` +- Variable precedence (low to high): default -> `terraform.tfvars` -> `*.auto.tfvars` -> `-var-file` -> `-var` flag -> `TF_VAR_*` env vars +- `terraform console` is an interactive REPL for testing expressions and functions +- Data sources are read-only -- they fetch information, they don't create resources +- `merge()` combines two maps -- great for tags +- `terraform output -json` is useful when piping output into other scripts + +--- + +## Documentation +Create `day-63-variables-outputs.md` with: +- Your `variables.tf` with all variable types +- Both `.tfvars` files (dev and prod) +- Screenshot of outputs after `terraform apply` +- Explanation of variable precedence with examples +- Five built-in functions you found most useful +- The difference between `variable`, `local`, `output`, and `data` + +--- + +## Submission +1. Add `day-63-variables-outputs.md` to `2026/day-63/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Made my Terraform configs fully dynamic today -- variables for every environment, data sources for AMI lookups, locals for consistent tagging, and conditional expressions for environment-specific sizing. Zero hardcoded values." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-64/README.md b/2026/day-64/README.md new file mode 100644 index 0000000000..397b4c8b96 --- /dev/null +++ b/2026/day-64/README.md @@ -0,0 +1,214 @@ +# Day 64 -- Terraform State Management and Remote Backends + +## Task +The state file is the single most important thing in Terraform. It is the source of truth -- the map between your `.tf` files and what actually exists in the cloud. Lose it and Terraform forgets everything. Corrupt it and your next apply could destroy production. + +Today you learn to manage state like a professional -- remote backends, locking, importing existing resources, and handling drift. + +--- + +## Expected Output +- Terraform state migrated from local to S3 remote backend with DynamoDB locking +- An existing AWS resource imported into Terraform state +- State drift simulated and reconciled +- A markdown file: `day-64-state-management.md` + +--- + +## Challenge Tasks + +### Task 1: Inspect Your Current State +Use your Day 63 config (or create a small config with a VPC and EC2 instance). Apply it and then explore the state: + +```bash +terraform show # Full state in human-readable format +terraform state list # All resources tracked by Terraform +terraform state show aws_instance. # Every attribute of the instance +terraform state show aws_vpc. # Every attribute of the VPC +``` + +Answer: +1. How many resources does Terraform track? +2. What attributes does the state store for an EC2 instance? (hint: way more than what you defined) +3. Open `terraform.tfstate` in an editor -- find the `serial` number. What does it represent? + +--- + +### Task 2: Set Up S3 Remote Backend +Storing state locally is dangerous -- one deleted file and you lose everything. Time to move it to S3. + +1. 
First, create the backend infrastructure (do this manually or in a separate Terraform config): +```bash +# Create S3 bucket for state storage +aws s3api create-bucket \ + --bucket terraweek-state- \ + --region ap-south-1 \ + --create-bucket-configuration LocationConstraint=ap-south-1 + +# Enable versioning (so you can recover previous state) +aws s3api put-bucket-versioning \ + --bucket terraweek-state- \ + --versioning-configuration Status=Enabled + +# Create DynamoDB table for state locking +aws dynamodb create-table \ + --table-name terraweek-state-lock \ + --attribute-definitions AttributeName=LockID,AttributeType=S \ + --key-schema AttributeName=LockID,KeyType=HASH \ + --billing-mode PAY_PER_REQUEST \ + --region ap-south-1 +``` + +2. Add the backend block to your Terraform config: +```hcl +terraform { + backend "s3" { + bucket = "terraweek-state-" + key = "dev/terraform.tfstate" + region = "ap-south-1" + dynamodb_table = "terraweek-state-lock" + encrypt = true + } +} +``` + +3. Run: +```bash +terraform init +``` +Terraform will ask: "Do you want to copy existing state to the new backend?" -- say yes. + +4. Verify: + - Check the S3 bucket -- you should see `dev/terraform.tfstate` + - Your local `terraform.tfstate` should now be empty or gone + - Run `terraform plan` -- it should show no changes (state migrated correctly) + +--- + +### Task 3: Test State Locking +State locking prevents two people from running `terraform apply` at the same time and corrupting the state. + +1. Open **two terminals** in the same project directory +2. In Terminal 1, run: +```bash +terraform apply +``` +3. While Terminal 1 is waiting for confirmation, in Terminal 2 run: +```bash +terraform plan +``` +4. Terminal 2 should show a **lock error** with a Lock ID + +**Document:** What is the error message? Why is locking critical for team environments? + +5. After the test, if you get stuck with a stale lock: +```bash +terraform force-unlock +``` + +--- + +### Task 4: Import an Existing Resource +Not everything starts with Terraform. Sometimes resources already exist in AWS and you need to bring them under Terraform management. + +1. Manually create an S3 bucket in the AWS console -- name it `terraweek-import-test-` +2. Write a `resource "aws_s3_bucket"` block in your config for this bucket (just the bucket name, nothing else) +3. Import it: +```bash +terraform import aws_s3_bucket.imported terraweek-import-test- +``` +4. Run `terraform plan`: + - If you see "No changes" -- the import was perfect + - If you see changes -- your config does not match reality. Update your config to match, then plan again until you get "No changes" + +5. Run `terraform state list` -- the imported bucket should now appear alongside your other resources + +**Document:** What is the difference between `terraform import` and creating a resource from scratch? + +--- + +### Task 5: State Surgery -- mv and rm +Sometimes you need to rename a resource or remove it from state without destroying it in AWS. + +1. **Rename a resource in state:** +```bash +terraform state list # Note the current resource names +terraform state mv aws_s3_bucket.imported aws_s3_bucket.logs_bucket +``` +Update your `.tf` file to match the new name. Run `terraform plan` -- it should show no changes. + +2. **Remove a resource from state (without destroying it):** +```bash +terraform state rm aws_s3_bucket.logs_bucket +``` +Run `terraform plan` -- Terraform no longer knows about the bucket, but it still exists in AWS. + +3. 
**Re-import it** to bring it back: +```bash +terraform import aws_s3_bucket.logs_bucket terraweek-import-test- +``` + +**Document:** When would you use `state mv` in a real project? When would you use `state rm`? + +--- + +### Task 6: Simulate and Fix State Drift +State drift happens when someone changes infrastructure outside of Terraform -- through the AWS console, CLI, or another tool. + +1. Apply your full config so everything is in sync +2. Go to the **AWS console** and manually: + - Change the Name tag of your EC2 instance to `"ManuallyChanged"` + - Change the instance type if it's stopped (or add a new tag) +3. Run: +```bash +terraform plan +``` +You should see a **diff** -- Terraform detects that reality no longer matches the desired state. + +4. You have two choices: + - **Option A:** Run `terraform apply` to force reality back to match your config (reconcile) + - **Option B:** Update your `.tf` files to match the manual change (accept the drift) + +5. Choose Option A -- apply and verify the tags are restored. + +6. Run `terraform plan` again -- it should show "No changes." Drift resolved. + +**Document:** How do teams prevent state drift in production? (hint: restrict console access, use CI/CD for all changes) + +--- + +## Hints +- S3 bucket names must be globally unique +- DynamoDB table must have a `LockID` string key -- this is what Terraform uses for locking +- `terraform init -migrate-state` explicitly triggers state migration +- `terraform refresh` (or `terraform apply -refresh-only`) updates state to match real infrastructure without making changes +- State locking only works with backends that support it (S3+DynamoDB, Consul, Terraform Cloud) +- `terraform force-unlock` should only be used when you are sure no other operation is running +- Always version your S3 bucket so you can recover a previous state file if something goes wrong + +--- + +## Documentation +Create `day-64-state-management.md` with: +- Diagram: local state vs remote state setup +- Screenshot of state file in S3 bucket +- Screenshot of the lock error from Task 3 +- Steps you followed for `terraform import` and the result +- Explanation of state drift with your real example +- When to use: `state mv`, `state rm`, `import`, `force-unlock`, `refresh` + +--- + +## Submission +1. Add `day-64-state-management.md` to `2026/day-64/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Mastered Terraform state today -- migrated to S3 remote backend with DynamoDB locking, imported existing AWS resources, performed state surgery, and simulated drift. State management is the foundation of reliable infrastructure as code." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-65/README.md b/2026/day-65/README.md new file mode 100644 index 0000000000..5c9748866e --- /dev/null +++ b/2026/day-65/README.md @@ -0,0 +1,250 @@ +# Day 65 -- Terraform Modules: Build Reusable Infrastructure + +## Task +You have been writing everything in one big `main.tf` file. That works for learning, but in real teams you manage dozens of environments with hundreds of resources. Copy-pasting configs across projects is a recipe for disaster. + +Today you learn Terraform modules -- the way to package, reuse, and share infrastructure code. Think of modules as functions in programming. Write once, call many times. 
+ +--- + +## Expected Output +- A custom EC2 module you built from scratch +- A custom security group module wired into the EC2 module +- A VPC created using the official public registry module +- A markdown file: `day-65-modules.md` + +--- + +## Challenge Tasks + +### Task 1: Understand Module Structure +A Terraform module is just a directory with `.tf` files. Create this structure: + +``` +terraform-modules/ + main.tf # Root module -- calls child modules + variables.tf # Root variables + outputs.tf # Root outputs + providers.tf # Provider config + modules/ + ec2-instance/ + main.tf # EC2 resource definition + variables.tf # Module inputs + outputs.tf # Module outputs + security-group/ + main.tf # Security group resource definition + variables.tf # Module inputs + outputs.tf # Module outputs +``` + +Create all the directories and empty files. This is the standard layout every Terraform project follows. + +**Document:** What is the difference between a "root module" and a "child module"? + +--- + +### Task 2: Build a Custom EC2 Module +Create `modules/ec2-instance/`: + +1. **`variables.tf`** -- define inputs: + - `ami_id` (string) + - `instance_type` (string, default: `"t2.micro"`) + - `subnet_id` (string) + - `security_group_ids` (list of strings) + - `instance_name` (string) + - `tags` (map of strings, default: `{}`) + +2. **`main.tf`** -- define the resource: + - `aws_instance` using all the variables + - Merge the Name tag with additional tags + +3. **`outputs.tf`** -- expose: + - `instance_id` + - `public_ip` + - `private_ip` + +Do NOT apply yet -- just write the module. + +--- + +### Task 3: Build a Custom Security Group Module +Create `modules/security-group/`: + +1. **`variables.tf`** -- define inputs: + - `vpc_id` (string) + - `sg_name` (string) + - `ingress_ports` (list of numbers, default: `[22, 80]`) + - `tags` (map of strings, default: `{}`) + +2. **`main.tf`** -- define the resource: + - `aws_security_group` in the given VPC + - Use `dynamic "ingress"` block to create rules from the `ingress_ports` list + - Allow all egress + +3. **`outputs.tf`** -- expose: + - `sg_id` + +This is your first time using a `dynamic` block -- it loops over a list to generate repeated nested blocks. + +--- + +### Task 4: Call Your Modules from Root +In the root `main.tf`, wire everything together: + +1. Create a VPC and subnet directly (or reuse your Day 62 config) +2. Call the security group module: +```hcl +module "web_sg" { + source = "./modules/security-group" + vpc_id = aws_vpc.main.id + sg_name = "terraweek-web-sg" + ingress_ports = [22, 80, 443] + tags = local.common_tags +} +``` + +3. Call the EC2 module -- deploy **two instances** with different names using the same module: +```hcl +module "web_server" { + source = "./modules/ec2-instance" + ami_id = data.aws_ami.amazon_linux.id + instance_type = "t2.micro" + subnet_id = aws_subnet.public.id + security_group_ids = [module.web_sg.sg_id] + instance_name = "terraweek-web" + tags = local.common_tags +} + +module "api_server" { + source = "./modules/ec2-instance" + ami_id = data.aws_ami.amazon_linux.id + instance_type = "t2.micro" + subnet_id = aws_subnet.public.id + security_group_ids = [module.web_sg.sg_id] + instance_name = "terraweek-api" + tags = local.common_tags +} +``` + +4. Add root outputs that reference module outputs: +```hcl +output "web_server_ip" { + value = module.web_server.public_ip +} + +output "api_server_ip" { + value = module.api_server.public_ip +} +``` + +5. 
Apply: +```bash +terraform init # Downloads/links the local modules +terraform plan # Should show all resources from both module calls +terraform apply +``` + +**Verify:** Two EC2 instances running, same security group, different names. Check the AWS console. + +--- + +### Task 5: Use a Public Registry Module +Instead of building your own VPC from scratch, use the official module from the Terraform Registry. + +1. Replace your hand-written VPC resources with: +```hcl +module "vpc" { + source = "terraform-aws-modules/vpc/aws" + version = "~> 5.0" + + name = "terraweek-vpc" + cidr = "10.0.0.0/16" + + azs = ["ap-south-1a", "ap-south-1b"] + public_subnets = ["10.0.1.0/24", "10.0.2.0/24"] + private_subnets = ["10.0.3.0/24", "10.0.4.0/24"] + + enable_nat_gateway = false + enable_dns_hostnames = true + + tags = local.common_tags +} +``` + +2. Update your EC2 and SG module calls to reference `module.vpc.vpc_id` and `module.vpc.public_subnets[0]` + +3. Run: +```bash +terraform init # Downloads the registry module +terraform plan +terraform apply +``` + +4. Compare: how many resources did the VPC module create vs your hand-written VPC from Day 62? + +**Document:** Where does Terraform download registry modules to? Check `.terraform/modules/`. + +--- + +### Task 6: Module Versioning and Best Practices +1. Pin your registry module version explicitly: + - `version = "5.1.0"` -- exact version + - `version = "~> 5.0"` -- any 5.x version + - `version = ">= 5.0, < 6.0"` -- range + +2. Run `terraform init -upgrade` to check for newer versions + +3. Check the state to see how modules appear: +```bash +terraform state list +``` +Notice the `module.vpc.`, `module.web_server.`, `module.web_sg.` prefixes. + +4. Destroy everything: +```bash +terraform destroy +``` + +**Document:** Write down five module best practices: +- Always pin versions for registry modules +- Keep modules focused -- one concern per module +- Use variables for everything, hardcode nothing +- Always define outputs so callers can reference resources +- Add a README.md to every custom module + +--- + +## Hints +- `terraform init` must be re-run after adding a new module source +- Module outputs are accessed as `module..` +- `dynamic` blocks use `content {}` inside to define the repeated block +- Registry modules document all inputs and outputs on registry.terraform.io +- Local modules use `source = "./modules/"`, registry modules use `source = "//"` +- `terraform get` downloads modules without full init + +--- + +## Documentation +Create `day-65-modules.md` with: +- Your custom module structure (directory tree) +- The `variables.tf`, `main.tf`, and `outputs.tf` for your EC2 module +- Root `main.tf` showing how you call both custom and registry modules +- Screenshot of both EC2 instances running from the same module +- Comparison: hand-written VPC vs registry VPC module (resources created) +- Five module best practices in your own words + +--- + +## Submission +1. Add `day-65-modules.md` to `2026/day-65/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Built my first custom Terraform modules today -- EC2 and security group modules called multiple times with different configs. Then replaced 50 lines of VPC code with one registry module. Modules are the key to scalable infrastructure as code." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-66/README.md b/2026/day-66/README.md new file mode 100644 index 0000000000..9b68645cd9 --- /dev/null +++ b/2026/day-66/README.md @@ -0,0 +1,285 @@ +# Day 66 -- Provision an EKS Cluster with Terraform Modules + +## Task +You built Kubernetes clusters manually in the Kubernetes week. Today you provision one the DevOps way -- fully automated, repeatable, and destroyable with a single command. You will use Terraform registry modules to create an AWS EKS cluster with a managed node group, connect kubectl, and deploy a workload. + +This is what infrastructure teams do every day in production. + +--- + +## Expected Output +- A running EKS cluster on AWS provisioned entirely through Terraform +- kubectl connected to the cluster with nodes visible +- An Nginx deployment running on the cluster +- A markdown file: `day-66-eks-terraform.md` +- Everything destroyed cleanly after the exercise + +--- + +## Challenge Tasks + +### Task 1: Project Setup +Create a new project directory with proper file structure: + +``` +terraform-eks/ + providers.tf # Provider and backend config + vpc.tf # VPC module call + eks.tf # EKS module call + variables.tf # All input variables + outputs.tf # Cluster outputs + terraform.tfvars # Variable values +``` + +In `providers.tf`: +1. Pin the AWS provider to `~> 5.0` +2. Pin the Kubernetes provider (you will need it later) +3. Set your region + +In `variables.tf`, define: +- `region` (string) +- `cluster_name` (string, default: `"terraweek-eks"`) +- `cluster_version` (string, default: `"1.31"`) +- `node_instance_type` (string, default: `"t3.medium"`) +- `node_desired_count` (number, default: `2`) +- `vpc_cidr` (string, default: `"10.0.0.0/16"`) + +--- + +### Task 2: Create the VPC with Registry Module +EKS requires a VPC with both public and private subnets across multiple availability zones. + +In `vpc.tf`, use the `terraform-aws-modules/vpc/aws` module: +1. CIDR: `var.vpc_cidr` +2. At least 2 availability zones +3. 2 public subnets and 2 private subnets +4. Enable NAT gateway (single NAT to save cost): `enable_nat_gateway = true`, `single_nat_gateway = true` +5. Enable DNS hostnames: `enable_dns_hostnames = true` +6. Add the required EKS tags on subnets: +```hcl +public_subnet_tags = { + "kubernetes.io/role/elb" = 1 +} + +private_subnet_tags = { + "kubernetes.io/role/internal-elb" = 1 +} +``` + +Run `terraform init` and `terraform plan` to verify the VPC config before moving on. + +**Document:** Why does EKS need both public and private subnets? What do the subnet tags do? + +--- + +### Task 3: Create the EKS Cluster with Registry Module +In `eks.tf`, use the `terraform-aws-modules/eks/aws` module: + +```hcl +module "eks" { + source = "terraform-aws-modules/eks/aws" + version = "~> 20.0" + + cluster_name = var.cluster_name + cluster_version = var.cluster_version + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnets + + cluster_endpoint_public_access = true + + eks_managed_node_groups = { + terraweek_nodes = { + ami_type = "AL2_x86_64" + instance_types = [var.node_instance_type] + + min_size = 1 + max_size = 3 + desired_size = var.node_desired_count + } + } + + tags = { + Environment = "dev" + Project = "TerraWeek" + ManagedBy = "Terraform" + } +} +``` + +Run: +```bash +terraform init # Download EKS module and its dependencies +terraform plan # Review -- this will create 30+ resources +``` + +Review the plan carefully before applying. You should see: EKS cluster, IAM roles, node group, security groups, and more. 
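+Task 1 asked you to pin the Kubernetes provider as well. You will not strictly need it for this exercise (kubectl handles the deployment in Task 5), but one common way to wire it to the cluster is to feed it the EKS module outputs. This sketch assumes the module version you pinned exposes `cluster_name`, `cluster_endpoint`, and `cluster_certificate_authority_data` -- check the registry documentation for your exact version:
+```hcl
+data "aws_eks_cluster_auth" "this" {
+  name = module.eks.cluster_name
+}
+
+provider "kubernetes" {
+  host                   = module.eks.cluster_endpoint
+  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+  token                  = data.aws_eks_cluster_auth.this.token
+}
+```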
+ +--- + +### Task 4: Apply and Connect kubectl +1. Apply the config: +```bash +terraform apply +``` +This will take 10-15 minutes. EKS cluster creation is slow -- be patient. + +2. Add outputs in `outputs.tf`: +```hcl +output "cluster_name" { + value = module.eks.cluster_name +} + +output "cluster_endpoint" { + value = module.eks.cluster_endpoint +} + +output "cluster_region" { + value = var.region +} +``` + +3. Update your kubeconfig: +```bash +aws eks update-kubeconfig --name terraweek-eks --region +``` + +4. Verify: +```bash +kubectl get nodes +kubectl get pods -A +kubectl cluster-info +``` + +**Verify:** Do you see 2 nodes in `Ready` state? Can you see the kube-system pods running? + +--- + +### Task 5: Deploy a Workload on the Cluster +Your Terraform-provisioned cluster is live. Deploy something on it. + +1. Create a file `k8s/nginx-deployment.yaml`: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-terraweek + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx-service +spec: + type: LoadBalancer + selector: + app: nginx + ports: + - port: 80 + targetPort: 80 +``` + +2. Apply: +```bash +kubectl apply -f k8s/nginx-deployment.yaml +``` + +3. Wait for the LoadBalancer to get an external IP: +```bash +kubectl get svc nginx-service -w +``` + +4. Access the Nginx page via the LoadBalancer URL + +5. Verify the full picture: +```bash +kubectl get nodes +kubectl get deployments +kubectl get pods +kubectl get svc +``` + +**Verify:** Can you access the Nginx welcome page through the LoadBalancer URL? + +--- + +### Task 6: Destroy Everything +This is the most important step. EKS clusters cost money. Clean up completely. + +1. First, remove the Kubernetes resources (so the AWS LoadBalancer gets deleted): +```bash +kubectl delete -f k8s/nginx-deployment.yaml +``` + +2. Wait for the LoadBalancer to be fully removed (check EC2 > Load Balancers in AWS console) + +3. Destroy all Terraform resources: +```bash +terraform destroy +``` +This will take 10-15 minutes. + +4. Verify in the AWS console: + - EKS clusters: empty + - EC2 instances: no node group instances + - VPC: the terraweek VPC should be gone + - NAT Gateways: deleted + - Elastic IPs: released + +**Verify:** Is your AWS account completely clean? No leftover resources? + +--- + +## Hints +- EKS creation takes 10-15 minutes, destruction takes about the same -- plan your time +- Always delete Kubernetes LoadBalancer services before `terraform destroy`, otherwise the ELB will block VPC deletion +- If `terraform destroy` gets stuck, check for leftover ENIs or security groups in the VPC +- `t3.medium` is the minimum recommended instance type for EKS nodes +- The EKS module creates IAM roles automatically -- you don't need to create them manually +- If you see `Unauthorized` with kubectl, re-run the `aws eks update-kubeconfig` command +- Use `kubectl get events --sort-by=.metadata.creationTimestamp` to debug pod issues +- Cost warning: NAT Gateway charges ~$0.045/hour. Destroy when done. 
+ +--- + +## Documentation +Create `day-66-eks-terraform.md` with: +- Your complete file structure and key config files +- Screenshot of `terraform apply` completing +- Screenshot of `kubectl get nodes` showing the managed node group +- Screenshot of Nginx running on the cluster +- How many resources Terraform created in total (check the apply output) +- The destroy process and verification +- Reflection: compare this to manually setting up a cluster with kind/minikube (Day 50) + +--- + +## Submission +1. Add `day-66-eks-terraform.md` to `2026/day-66/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Provisioned a full AWS EKS cluster with Terraform modules today -- VPC, subnets, NAT gateway, IAM roles, node groups, the works. 30+ resources created with one command, deployed Nginx on it, and destroyed everything cleanly. This is real-world infrastructure as code." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-67/README.md b/2026/day-67/README.md new file mode 100644 index 0000000000..589e586729 --- /dev/null +++ b/2026/day-67/README.md @@ -0,0 +1,326 @@ +# Day 67 -- TerraWeek Capstone: Multi-Environment Infrastructure with Workspaces and Modules + +## Task +Seven days of Terraform -- HCL, providers, resources, dependencies, variables, outputs, data sources, state management, remote backends, custom modules, registry modules, and a full EKS cluster. Today you put it all together in one production-grade project. + +Build a multi-environment AWS infrastructure using custom modules and Terraform workspaces. One codebase, three environments -- dev, staging, and prod. This is how infrastructure teams operate at scale. + +--- + +## Expected Output +- A complete Terraform project with custom modules and proper file structure +- Three separate environments (dev, staging, prod) deployed using workspaces +- Each environment with its own VPC, security group, and EC2 instance with different sizing +- A markdown file: `day-67-terraweek-capstone.md` +- Everything destroyed cleanly after verification + +--- + +## Challenge Tasks + +### Task 1: Learn Terraform Workspaces +Before building the project, understand workspaces: + +```bash +mkdir terraweek-capstone && cd terraweek-capstone +terraform init + +# See current workspace +terraform workspace show # default + +# Create new workspaces +terraform workspace new dev +terraform workspace new staging +terraform workspace new prod + +# List all workspaces +terraform workspace list + +# Switch between them +terraform workspace select dev +terraform workspace select staging +terraform workspace select prod +``` + +Answer: +1. What does `terraform.workspace` return inside a config? +2. Where does each workspace store its state file? +3. How is this different from using separate directories per environment? 
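+To answer question 1 hands-on, `terraform console` evaluates `terraform.workspace` directly, and the value can drive expressions -- the `lookup()` call below is only an illustration, not part of the capstone config:
+```bash
+terraform workspace select dev
+terraform console
+> terraform.workspace
+"dev"
+> lookup({ dev = "t2.micro", prod = "t3.small" }, terraform.workspace, "t2.micro")
+"t2.micro"
+```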
+ +--- + +### Task 2: Set Up the Project Structure +Create this layout: + +``` +terraweek-capstone/ + main.tf # Root module -- calls child modules + variables.tf # Root variables + outputs.tf # Root outputs + providers.tf # AWS provider and backend + locals.tf # Local values using workspace + dev.tfvars # Dev environment values + staging.tfvars # Staging environment values + prod.tfvars # Prod environment values + .gitignore # Ignore state, .terraform, tfvars with secrets + modules/ + vpc/ + main.tf + variables.tf + outputs.tf + security-group/ + main.tf + variables.tf + outputs.tf + ec2-instance/ + main.tf + variables.tf + outputs.tf +``` + +Create the `.gitignore`: +``` +.terraform/ +*.tfstate +*.tfstate.backup +*.tfvars +.terraform.lock.hcl +``` + +**Document:** Why is this file structure considered best practice? + +--- + +### Task 3: Build the Custom Modules +Create three focused modules: + +**Module 1: `modules/vpc/`** +- Input: `cidr`, `public_subnet_cidr`, `environment`, `project_name` +- Resources: VPC, public subnet, internet gateway, route table, route table association +- Output: `vpc_id`, `subnet_id` +- All resources tagged with environment and project name + +**Module 2: `modules/security-group/`** +- Input: `vpc_id`, `ingress_ports`, `environment`, `project_name` +- Resources: Security group with dynamic ingress rules, allow all egress +- Output: `sg_id` + +**Module 3: `modules/ec2-instance/`** +- Input: `ami_id`, `instance_type`, `subnet_id`, `security_group_ids`, `environment`, `project_name` +- Resources: EC2 instance with tags +- Output: `instance_id`, `public_ip` + +Write and validate each module: +```bash +terraform validate +``` + +--- + +### Task 4: Wire It All Together with Workspace-Aware Config +In the root module, use `terraform.workspace` to drive environment-specific behavior. + +**`locals.tf`:** +```hcl +locals { + environment = terraform.workspace + name_prefix = "${var.project_name}-${local.environment}" + + common_tags = { + Project = var.project_name + Environment = local.environment + ManagedBy = "Terraform" + Workspace = terraform.workspace + } +} +``` + +**`variables.tf`:** +```hcl +variable "project_name" { + type = string + default = "terraweek" +} + +variable "vpc_cidr" { + type = string +} + +variable "subnet_cidr" { + type = string +} + +variable "instance_type" { + type = string +} + +variable "ingress_ports" { + type = list(number) + default = [22, 80] +} +``` + +**`main.tf`** -- call all three modules, passing workspace-aware names and variables. + +**Environment-specific tfvars:** + +`dev.tfvars`: +```hcl +vpc_cidr = "10.0.0.0/16" +subnet_cidr = "10.0.1.0/24" +instance_type = "t2.micro" +ingress_ports = [22, 80] +``` + +`staging.tfvars`: +```hcl +vpc_cidr = "10.1.0.0/16" +subnet_cidr = "10.1.1.0/24" +instance_type = "t2.small" +ingress_ports = [22, 80, 443] +``` + +`prod.tfvars`: +```hcl +vpc_cidr = "10.2.0.0/16" +subnet_cidr = "10.2.1.0/24" +instance_type = "t3.small" +ingress_ports = [80, 443] +``` + +Notice: dev allows SSH, prod does not. Different CIDRs prevent overlap. Instance types scale up per environment. 
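+The root `main.tf` is left for you to write; as a reference shape, here is a minimal sketch that assumes the module input and output names from Task 3, plus an Amazon Linux 2 AMI data source like the one from Day 63:
+```hcl
+data "aws_ami" "amazon_linux" {
+  most_recent = true
+  owners      = ["amazon"]
+
+  filter {
+    name   = "name"
+    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
+  }
+}
+
+module "vpc" {
+  source             = "./modules/vpc"
+  cidr               = var.vpc_cidr
+  public_subnet_cidr = var.subnet_cidr
+  environment        = local.environment
+  project_name       = var.project_name
+}
+
+module "security_group" {
+  source        = "./modules/security-group"
+  vpc_id        = module.vpc.vpc_id
+  ingress_ports = var.ingress_ports
+  environment   = local.environment
+  project_name  = var.project_name
+}
+
+module "server" {
+  source             = "./modules/ec2-instance"
+  ami_id             = data.aws_ami.amazon_linux.id
+  instance_type      = var.instance_type
+  subnet_id          = module.vpc.subnet_id
+  security_group_ids = [module.security_group.sg_id]
+  environment        = local.environment
+  project_name       = var.project_name
+}
+```
+Because `local.environment` comes from `terraform.workspace`, switching workspaces changes the tags and the state file while the tfvars file changes the sizing -- the module calls themselves never change.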
+ +--- + +### Task 5: Deploy All Three Environments +Deploy each environment using its workspace and tfvars file: + +**Dev:** +```bash +terraform workspace select dev +terraform plan -var-file="dev.tfvars" +terraform apply -var-file="dev.tfvars" +``` + +**Staging:** +```bash +terraform workspace select staging +terraform plan -var-file="staging.tfvars" +terraform apply -var-file="staging.tfvars" +``` + +**Prod:** +```bash +terraform workspace select prod +terraform plan -var-file="prod.tfvars" +terraform apply -var-file="prod.tfvars" +``` + +After all three are deployed, verify: +```bash +# Check each workspace's resources +terraform workspace select dev && terraform output +terraform workspace select staging && terraform output +terraform workspace select prod && terraform output +``` + +Go to the AWS console and verify: +- Three separate VPCs with different CIDR ranges +- Three EC2 instances with different instance types +- Different Name tags per environment: `terraweek-dev-server`, `terraweek-staging-server`, `terraweek-prod-server` + +**Verify:** Are all three environments completely isolated from each other? + +--- + +### Task 6: Document Best Practices +Write down everything you have learned this week as a Terraform best practices guide: + +1. **File structure** -- separate files for providers, variables, outputs, main, locals +2. **State management** -- always use remote backend, enable locking, enable versioning +3. **Variables** -- never hardcode, use tfvars per environment, validate with `validation` blocks +4. **Modules** -- one concern per module, always define inputs/outputs, pin registry module versions +5. **Workspaces** -- use for environment isolation, reference `terraform.workspace` in configs +6. **Security** -- .gitignore for state and tfvars, encrypt state at rest, restrict backend access +7. **Commands** -- always run `plan` before `apply`, use `fmt` and `validate` before committing +8. **Tagging** -- tag every resource with project, environment, and managed-by +9. **Naming** -- consistent prefix pattern: `--` +10. **Cleanup** -- always `terraform destroy` non-production environments when not in use + +--- + +### Task 7: Destroy All Environments +Clean up all three environments in reverse order: + +```bash +terraform workspace select prod +terraform destroy -var-file="prod.tfvars" + +terraform workspace select staging +terraform destroy -var-file="staging.tfvars" + +terraform workspace select dev +terraform destroy -var-file="dev.tfvars" +``` + +Verify in the AWS console -- all VPCs, instances, security groups, and gateways should be gone. + +Delete the workspaces: +```bash +terraform workspace select default +terraform workspace delete dev +terraform workspace delete staging +terraform workspace delete prod +``` + +**Verify:** Is your AWS account completely clean? 
+ +--- + +## Hints +- Each workspace has its own state file -- `terraform.tfstate.d//terraform.tfstate` +- `terraform.workspace` is a built-in variable available in any config +- You cannot delete a workspace you are currently on -- switch to `default` first +- Different VPC CIDRs per environment prevent accidental peering conflicts +- `terraform plan -var-file` does NOT auto-load `terraform.tfvars` when you specify `-var-file` +- If you forget which workspace you are on: `terraform workspace show` +- Workspaces work with remote backends too -- S3 key becomes `env://terraform.tfstate` + +--- + +## Documentation +Create `day-67-terraweek-capstone.md` with: +- Your complete project structure (directory tree) +- All three custom module configs +- Root `main.tf` showing workspace-aware module calls +- All three tfvars files with the differences highlighted +- Screenshot of all three environments running simultaneously in AWS +- Screenshot of `terraform output` from each workspace +- Your Terraform best practices guide (Task 6) +- A table mapping each TerraWeek day to the concepts learned: + +| Day | Concepts | +|-----|----------| +| 61 | IaC, HCL, init/plan/apply/destroy, state basics | +| 62 | Providers, resources, dependencies, lifecycle | +| 63 | Variables, outputs, data sources, locals, functions | +| 64 | Remote backend, locking, import, drift | +| 65 | Custom modules, registry modules, versioning | +| 66 | EKS with modules, real-world provisioning | +| 67 | Workspaces, multi-env, capstone project | + +--- + +## Submission +1. Add `day-67-terraweek-capstone.md` to `2026/day-67/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Completed the TerraWeek Challenge -- seven days from terraform init to a full multi-environment infrastructure project. Custom modules for VPC, security groups, and EC2. Three environments deployed with workspaces. One codebase, three isolated environments, zero console clicks." + +`#90DaysOfDevOps` `#TerraWeek` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-68/README.md b/2026/day-68/README.md new file mode 100644 index 0000000000..10cbc5c279 --- /dev/null +++ b/2026/day-68/README.md @@ -0,0 +1,242 @@ +# Day 68 -- Introduction to Ansible and Inventory Setup + +## Task +Terraform provisions infrastructure. But who installs packages, configures services, manages users, and keeps servers in the desired state after they exist? That is the job of a configuration management tool, and Ansible is the industry standard. + +Today you install Ansible, set up an inventory of servers, and run your first ad-hoc commands -- all without installing a single agent on the target machines. Ansible is agentless. SSH is all it needs. + +--- + +## Expected Output +- Ansible installed on your control node +- 2-3 EC2 instances running as managed nodes +- A working inventory file with grouped hosts +- Successful ad-hoc commands run against remote servers +- A markdown file: `day-68-ansible-intro.md` + +--- + +## Challenge Tasks + +### Task 1: Understand Ansible +Research and write short notes on: + +1. What is configuration management? Why do we need it? +2. How is Ansible different from Chef, Puppet, and Salt? +3. What does "agentless" mean? How does Ansible connect to managed nodes? +4. 
Draw or describe the Ansible architecture: + - **Control Node** -- the machine where Ansible runs (your laptop or a jump server) + - **Managed Nodes** -- the servers Ansible configures (your EC2 instances) + - **Inventory** -- the list of managed nodes + - **Modules** -- units of work Ansible executes (install a package, copy a file, start a service) + - **Playbooks** -- YAML files that define what to do on which hosts + +--- + +### Task 2: Set Up Your Lab Environment +You need 2-3 EC2 instances to practice on. Choose one approach: + +**Option A: Use Terraform (recommended -- you just learned this)** +Use your TerraWeek skills to provision 3 EC2 instances with: +- Amazon Linux 2 or Ubuntu 22.04 +- `t2.micro` instance type +- A security group allowing SSH (port 22) +- A key pair for SSH access + +**Option B: Launch manually from AWS Console** +Create 3 instances with the same specs above. + +Label them mentally: +- **Instance 1:** web server +- **Instance 2:** app server +- **Instance 3:** db server + +Verify you can SSH into each one from your control node: +```bash +ssh -i ~/your-key.pem ec2-user@ +ssh -i ~/your-key.pem ec2-user@ +ssh -i ~/your-key.pem ec2-user@ +``` + +--- + +### Task 3: Install Ansible +Install Ansible on your **control node** (your laptop or one dedicated EC2 instance): + +```bash +# macOS +brew install ansible + +# Ubuntu/Debian +sudo apt update +sudo apt install ansible -y + +# Amazon Linux / RHEL +sudo yum install ansible -y +# or +pip3 install ansible + +# Verify +ansible --version +``` + +Confirm the output shows the Ansible version, config file path, and Python version. + +**Document:** On which machine did you install Ansible? Why is it only needed on the control node? + +--- + +### Task 4: Create Your Inventory File +The inventory tells Ansible which servers to manage. Create a project directory and your first inventory: + +```bash +mkdir ansible-practice && cd ansible-practice +``` + +Create a file called `inventory.ini`: +```ini +[web] +web-server ansible_host= + +[app] +app-server ansible_host= + +[db] +db-server ansible_host= + +[all:vars] +ansible_user=ec2-user +ansible_ssh_private_key_file=~/your-key.pem +``` + +Verify Ansible can reach all hosts: +```bash +ansible all -i inventory.ini -m ping +``` + +You should see green `SUCCESS` with `"ping": "pong"` for each host. + +**Troubleshoot:** If ping fails: +- Check the SSH key path and permissions (`chmod 400 your-key.pem`) +- Check the security group allows SSH from your IP +- Check the `ansible_user` matches your AMI (ec2-user for Amazon Linux, ubuntu for Ubuntu) + +--- + +### Task 5: Run Ad-Hoc Commands +Ad-hoc commands let you run quick one-off tasks without writing a playbook. + +1. **Check uptime on all servers:** +```bash +ansible all -i inventory.ini -m command -a "uptime" +``` + +2. **Check free memory on web servers only:** +```bash +ansible web -i inventory.ini -m command -a "free -h" +``` + +3. **Check disk space on all servers:** +```bash +ansible all -i inventory.ini -m command -a "df -h" +``` + +4. **Install a package on the web group:** +```bash +ansible web -i inventory.ini -m yum -a "name=git state=present" --become +``` +(Use `apt` instead of `yum` if running Ubuntu) + +5. **Copy a file to all servers:** +```bash +echo "Hello from Ansible" > hello.txt +ansible all -i inventory.ini -m copy -a "src=hello.txt dest=/tmp/hello.txt" +``` + +6. **Verify the file was copied:** +```bash +ansible all -i inventory.ini -m command -a "cat /tmp/hello.txt" +``` + +**Document:** What does `--become` do? 
When do you need it? + +--- + +### Task 6: Explore Inventory Groups and Patterns +1. **Create a group of groups** -- add this to your `inventory.ini`: +```ini +[application:children] +web +app + +[all_servers:children] +application +db +``` + +2. Run commands against different groups: +```bash +ansible application -i inventory.ini -m ping # web + app servers +ansible db -i inventory.ini -m ping # only db server +ansible all_servers -i inventory.ini -m ping # everything +``` + +3. **Use patterns:** +```bash +ansible 'web:app' -i inventory.ini -m ping # OR: web or app +ansible 'all:!db' -i inventory.ini -m ping # NOT: all except db +``` + +4. **Create an `ansible.cfg`** to avoid typing `-i inventory.ini` every time: +```ini +[defaults] +inventory = inventory.ini +host_key_checking = False +remote_user = ec2-user +private_key_file = ~/your-key.pem +``` + +Now you can simply run: +```bash +ansible all -m ping +``` + +**Verify:** Does `ansible all -m ping` work without specifying the inventory file? + +--- + +## Hints +- Ansible uses SSH by default -- no agent installation needed on managed nodes +- `ansible.cfg` is read from the current directory first, then `~/.ansible.cfg`, then `/etc/ansible/ansible.cfg` +- `-m` specifies the module, `-a` specifies the module arguments +- `--become` escalates to root (like `sudo`) -- needed for package installation and service management +- `command` module runs simple commands, `shell` module supports pipes and redirects +- Host key checking can cause issues on first connection -- `host_key_checking = False` in config helps during practice +- Ad-hoc commands are great for quick tasks, but playbooks are better for anything repeatable + +--- + +## Documentation +Create `day-68-ansible-intro.md` with: +- Ansible architecture in your own words +- How you set up your lab (Terraform or manual, with instance details) +- Your `inventory.ini` file (redact IPs if sharing publicly) +- Screenshot of `ansible all -m ping` with all green results +- Five ad-hoc commands you ran and their outputs +- Difference between `command` and `shell` modules + +--- + +## Submission +1. Add `day-68-ansible-intro.md` to `2026/day-68/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Started the Ansible journey today -- set up a control node, created an inventory with three EC2 instances, and ran ad-hoc commands to manage all servers from one terminal. No agents installed anywhere. Ansible just works over SSH." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-69/README.md b/2026/day-69/README.md new file mode 100644 index 0000000000..916700b64d --- /dev/null +++ b/2026/day-69/README.md @@ -0,0 +1,349 @@ +# Day 69 -- Ansible Playbooks and Modules + +## Task +Ad-hoc commands are useful for quick checks, but real automation lives in playbooks. A playbook is a YAML file that describes the desired state of your servers -- which packages to install, which services to run, which files to place where. You write it once, run it a hundred times, and get the same result every time. + +Today you write your first playbooks and learn the modules that you will use on every project. 
+ +--- + +## Expected Output +- Multiple playbooks that install packages, manage services, and configure files +- A clear understanding of plays, tasks, modules, and handlers +- A markdown file: `day-69-playbooks.md` + +--- + +## Challenge Tasks + +### Task 1: Your First Playbook +Create `install-nginx.yml`: + +```yaml +--- +- name: Install and start Nginx on web servers + hosts: web + become: true + + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + + - name: Start and enable Nginx + service: + name: nginx + state: started + enabled: true + + - name: Create a custom index page + copy: + content: "

<h1>Deployed by Ansible - TerraWeek Server</h1>

" + dest: /usr/share/nginx/html/index.html +``` + +(Use `apt` instead of `yum` if your instances run Ubuntu) + +Run it: +```bash +ansible-playbook install-nginx.yml +``` + +Read the output carefully -- every task shows `changed`, `ok`, or `failed`. + +Now run it **again**. Notice that tasks show `ok` instead of `changed`. This is **idempotency** -- Ansible only makes changes when needed. + +**Verify:** Curl the web server's public IP. Do you see your custom page? + +--- + +### Task 2: Understand the Playbook Structure +Open your playbook and annotate each part in your notes: + +```yaml +--- # YAML document start +- name: Play name # PLAY -- targets a group of hosts + hosts: web # Which inventory group to run on + become: true # Run tasks as root (sudo) + + tasks: # List of TASKS in this play + - name: Task name # TASK -- one unit of work + module_name: # MODULE -- what Ansible does + key: value # Module arguments +``` + +Answer: +1. What is the difference between a play and a task? +2. Can you have multiple plays in one playbook? +3. What does `become: true` do at the play level vs the task level? +4. What happens if a task fails -- do remaining tasks still run? + +--- + +### Task 3: Learn the Essential Modules +Practice each of these modules by writing a playbook called `essential-modules.yml` with multiple tasks: + +1. **`yum`/`apt`** -- Install and remove packages: +```yaml +- name: Install multiple packages + yum: + name: + - git + - curl + - wget + - tree + state: present +``` + +2. **`service`** -- Manage services: +```yaml +- name: Ensure Nginx is running + service: + name: nginx + state: started + enabled: true +``` + +3. **`copy`** -- Copy files from control node to managed nodes: +```yaml +- name: Copy config file + copy: + src: files/app.conf + dest: /etc/app.conf + owner: root + group: root + mode: '0644' +``` + +4. **`file`** -- Create directories and manage permissions: +```yaml +- name: Create application directory + file: + path: /opt/myapp + state: directory + owner: ec2-user + mode: '0755' +``` + +5. **`command`** -- Run a command (no shell features): +```yaml +- name: Check disk space + command: df -h + register: disk_output + +- name: Print disk space + debug: + var: disk_output.stdout_lines +``` + +6. **`shell`** -- Run a command with shell features (pipes, redirects): +```yaml +- name: Count running processes + shell: ps aux | wc -l + register: process_count + +- name: Show process count + debug: + msg: "Total processes: {{ process_count.stdout }}" +``` + +7. **`lineinfile`** -- Add or modify a single line in a file: +```yaml +- name: Set timezone in environment + lineinfile: + path: /etc/environment + line: 'TZ=Asia/Kolkata' + create: true +``` + +Create a `files/` directory with a sample `app.conf` file for the copy task. Run the playbook against all servers. + +**Document:** What is the difference between `command` and `shell`? When should you use each? + +--- + +### Task 4: Handlers -- Restart Services Only When Needed +Handlers are tasks that run only when triggered by a `notify`. This avoids unnecessary service restarts. + +Create `nginx-config.yml`: +```yaml +--- +- name: Configure Nginx with a custom config + hosts: web + become: true + + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + + - name: Deploy Nginx config + copy: + src: files/nginx.conf + dest: /etc/nginx/nginx.conf + owner: root + mode: '0644' + notify: Restart Nginx + + - name: Deploy custom index page + copy: + content: "

<h1>Managed by Ansible</h1><p>Server: {{ inventory_hostname }}</p>

" + dest: /usr/share/nginx/html/index.html + + - name: Ensure Nginx is running + service: + name: nginx + state: started + enabled: true + + handlers: + - name: Restart Nginx + service: + name: nginx + state: restarted +``` + +Create `files/nginx.conf` with a basic Nginx config. + +Run the playbook: +- First run: handler triggers because the config file is new +- Second run: handler does NOT trigger because nothing changed + +**Verify:** Run it twice and compare the output. Does the handler run both times? + +--- + +### Task 5: Dry Run, Diff, and Verbosity +Before running playbooks on production, always preview changes first. + +1. **Dry run (check mode)** -- shows what would change without changing anything: +```bash +ansible-playbook install-nginx.yml --check +``` + +2. **Diff mode** -- shows the actual file differences: +```bash +ansible-playbook nginx-config.yml --check --diff +``` + +3. **Verbosity** -- increase output detail for debugging: +```bash +ansible-playbook install-nginx.yml -v # verbose +ansible-playbook install-nginx.yml -vv # more verbose +ansible-playbook install-nginx.yml -vvv # connection debugging +``` + +4. **Limit to specific hosts:** +```bash +ansible-playbook install-nginx.yml --limit web-server +``` + +5. **List what would be affected without running:** +```bash +ansible-playbook install-nginx.yml --list-hosts +ansible-playbook install-nginx.yml --list-tasks +``` + +**Document:** Why is `--check --diff` the most important flag combination for production use? + +--- + +### Task 6: Multiple Plays in One Playbook +Write `multi-play.yml` with separate plays for each server group: + +```yaml +--- +- name: Configure web servers + hosts: web + become: true + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + - name: Start Nginx + service: + name: nginx + state: started + enabled: true + +- name: Configure app servers + hosts: app + become: true + tasks: + - name: Install Node.js dependencies + yum: + name: + - gcc + - make + state: present + - name: Create app directory + file: + path: /opt/app + state: directory + mode: '0755' + +- name: Configure database servers + hosts: db + become: true + tasks: + - name: Install MySQL client + yum: + name: mysql + state: present + - name: Create data directory + file: + path: /var/lib/appdata + state: directory + mode: '0700' +``` + +Run it: +```bash +ansible-playbook multi-play.yml +``` + +Watch the output -- each play targets a different group, and tasks run only on the relevant hosts. + +**Verify:** Is Nginx only installed on web servers? Is MySQL only on db servers? 
+ +--- + +## Hints +- YAML indentation matters -- use 2 spaces, never tabs +- `state: present` means "install if not already installed", `state: absent` means "remove" +- `state: started` means "start if not running", `state: restarted` means "always restart" +- Handlers run once at the end of all tasks, even if notified multiple times +- `register` saves a task's output to a variable, `debug` prints it +- `{{ inventory_hostname }}` is a built-in variable that returns the current host's name +- `ansible-playbook --syntax-check playbook.yml` validates YAML syntax before running +- Always test with `--check --diff` before applying to production + +--- + +## Documentation +Create `day-69-playbooks.md` with: +- Your first playbook with annotations explaining each section +- All seven module examples with what each does +- Screenshot of the playbook run showing changed vs ok tasks +- Screenshot proving idempotency (second run shows all ok) +- How handlers work with a before/after comparison +- Difference between `--check`, `--diff`, and `-v` + +--- + +## Submission +1. Add `day-69-playbooks.md` to `2026/day-69/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Wrote my first Ansible playbooks today -- installed Nginx, managed services, copied files, and learned handlers. Ran the same playbook twice and it made zero changes the second time. Idempotency is beautiful." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-70/README.md b/2026/day-70/README.md new file mode 100644 index 0000000000..9e6d12869f --- /dev/null +++ b/2026/day-70/README.md @@ -0,0 +1,407 @@ +# Day 70 -- Variables, Facts, Conditionals and Loops + +## Task +Your playbooks work, but they are static -- same packages, same config, same behavior on every server. Real infrastructure is not like that. Web servers need Nginx, app servers need Node.js, production gets more memory than dev. Today you make your playbooks smart. + +Variables, facts, conditionals, and loops turn a rigid script into flexible automation that adapts to each host, each group, and each environment. + +--- + +## Expected Output +- Playbooks using variables from multiple sources +- Conditional tasks that run only on specific OS or groups +- Loops that install packages and create users dynamically +- A markdown file: `day-70-variables-loops.md` + +--- + +## Challenge Tasks + +### Task 1: Variables in Playbooks +Create `variables-demo.yml`: + +```yaml +--- +- name: Variable demo + hosts: all + become: true + + vars: + app_name: terraweek-app + app_port: 8080 + app_dir: "/opt/{{ app_name }}" + packages: + - git + - curl + - wget + + tasks: + - name: Print app details + debug: + msg: "Deploying {{ app_name }} on port {{ app_port }} to {{ app_dir }}" + + - name: Create application directory + file: + path: "{{ app_dir }}" + state: directory + mode: '0755' + + - name: Install required packages + yum: + name: "{{ packages }}" + state: present +``` + +Run it and verify the variables resolve correctly. + +Now, override a variable from the command line: +```bash +ansible-playbook variables-demo.yml -e "app_name=my-custom-app app_port=9090" +``` + +**Verify:** Does the CLI variable override the playbook variable? + +--- + +### Task 2: group_vars and host_vars +Variables should not live inside playbooks. Move them to dedicated files. 
+ +Create this structure: +``` +ansible-practice/ + inventory.ini + ansible.cfg + group_vars/ + all.yml + web.yml + db.yml + host_vars/ + web-server.yml + playbooks/ + site.yml +``` + +**`group_vars/all.yml`** -- applies to every host: +```yaml +--- +ntp_server: pool.ntp.org +app_env: development +common_packages: + - vim + - htop + - tree +``` + +**`group_vars/web.yml`** -- applies only to the web group: +```yaml +--- +http_port: 80 +max_connections: 1000 +web_packages: + - nginx +``` + +**`group_vars/db.yml`** -- applies only to the db group: +```yaml +--- +db_port: 3306 +db_packages: + - mysql-server +``` + +**`host_vars/web-server.yml`** -- applies only to this specific host: +```yaml +--- +max_connections: 2000 +custom_message: "This is the primary web server" +``` + +Write a playbook `site.yml` that uses these variables: +```yaml +--- +- name: Apply common config + hosts: all + become: true + tasks: + - name: Install common packages + yum: + name: "{{ common_packages }}" + state: present + - name: Show environment + debug: + msg: "Environment: {{ app_env }}" + +- name: Configure web servers + hosts: web + become: true + tasks: + - name: Show web config + debug: + msg: "HTTP port: {{ http_port }}, Max connections: {{ max_connections }}" + - name: Show host-specific message + debug: + msg: "{{ custom_message }}" +``` + +Run it and observe which variables apply to which hosts. + +**Document:** What is the variable precedence? (hint: host_vars > group_vars > playbook vars, and `-e` overrides everything) + +--- + +### Task 3: Ansible Facts -- Gathering System Information +Ansible automatically collects "facts" about each managed node -- OS, IP, memory, CPU, disks, and hundreds more. + +1. **See all facts for a host:** +```bash +ansible web-server -m setup +``` + +2. **Filter specific facts:** +```bash +ansible web-server -m setup -a "filter=ansible_os_family" +ansible web-server -m setup -a "filter=ansible_distribution*" +ansible web-server -m setup -a "filter=ansible_memtotal_mb" +ansible web-server -m setup -a "filter=ansible_default_ipv4" +``` + +3. **Use facts in a playbook** -- create `facts-demo.yml`: +```yaml +--- +- name: Facts demo + hosts: all + tasks: + - name: Show OS info + debug: + msg: > + Hostname: {{ ansible_hostname }}, + OS: {{ ansible_distribution }} {{ ansible_distribution_version }}, + RAM: {{ ansible_memtotal_mb }}MB, + IP: {{ ansible_default_ipv4.address }} + + - name: Show all network interfaces + debug: + var: ansible_interfaces +``` + +Run it and observe the facts printed for each host. + +**Document:** Name five facts you would use in real playbooks and why. + +--- + +### Task 4: Conditionals with when +Tasks should not always run on every host. Use `when` to control execution. 
+ +Create `conditional-demo.yml`: + +```yaml +--- +- name: Conditional tasks demo + hosts: all + become: true + + tasks: + - name: Install Nginx (only on web servers) + yum: + name: nginx + state: present + when: "'web' in group_names" + + - name: Install MySQL (only on db servers) + yum: + name: mysql-server + state: present + when: "'db' in group_names" + + - name: Show warning on low memory hosts + debug: + msg: "WARNING: This host has less than 1GB RAM" + when: ansible_memtotal_mb < 1024 + + - name: Run only on Amazon Linux + debug: + msg: "This is an Amazon Linux machine" + when: ansible_distribution == "Amazon" + + - name: Run only on Ubuntu + debug: + msg: "This is an Ubuntu machine" + when: ansible_distribution == "Ubuntu" + + - name: Run only in production + debug: + msg: "Production settings applied" + when: app_env == "production" + + - name: Multiple conditions (AND) + debug: + msg: "Web server with enough memory" + when: + - "'web' in group_names" + - ansible_memtotal_mb >= 512 + + - name: OR condition + debug: + msg: "Either web or app server" + when: "'web' in group_names or 'app' in group_names" +``` + +Run it and observe which tasks are skipped on which hosts. + +**Verify:** Are tasks correctly skipping on hosts that don't match the condition? + +--- + +### Task 5: Loops +Create `loops-demo.yml`: + +```yaml +--- +- name: Loops demo + hosts: all + become: true + + vars: + users: + - name: deploy + groups: wheel + - name: monitor + groups: wheel + - name: appuser + groups: users + + directories: + - /opt/app/logs + - /opt/app/config + - /opt/app/data + - /opt/app/tmp + + tasks: + - name: Create multiple users + user: + name: "{{ item.name }}" + groups: "{{ item.groups }}" + state: present + loop: "{{ users }}" + + - name: Create multiple directories + file: + path: "{{ item }}" + state: directory + mode: '0755' + loop: "{{ directories }}" + + - name: Install multiple packages + yum: + name: "{{ item }}" + state: present + loop: + - git + - curl + - unzip + - jq + + - name: Print each user created + debug: + msg: "Created user {{ item.name }} in group {{ item.groups }}" + loop: "{{ users }}" +``` + +Run it and observe the loop output -- each iteration is shown separately. + +**Document:** What is the difference between `loop` and the older `with_items`? 
(hint: `loop` is the modern recommended syntax) + +--- + +### Task 6: Register, Debug, and Combine Everything +Build a real-world playbook `server-report.yml` that combines variables, facts, conditionals, and register: + +```yaml +--- +- name: Server Health Report + hosts: all + + tasks: + - name: Check disk space + command: df -h / + register: disk_result + + - name: Check memory + command: free -m + register: memory_result + + - name: Check running services + shell: systemctl list-units --type=service --state=running | head -20 + register: services_result + + - name: Generate report + debug: + msg: + - "========== {{ inventory_hostname }} ==========" + - "OS: {{ ansible_distribution }} {{ ansible_distribution_version }}" + - "IP: {{ ansible_default_ipv4.address }}" + - "RAM: {{ ansible_memtotal_mb }}MB" + - "Disk: {{ disk_result.stdout_lines[1] }}" + - "Running services (first 20): {{ services_result.stdout_lines | length }}" + + - name: Flag if disk is critically low + debug: + msg: "ALERT: Check disk space on {{ inventory_hostname }}" + when: "'9[0-9]%' in disk_result.stdout or '100%' in disk_result.stdout" + + - name: Save report to file + copy: + content: | + Server: {{ inventory_hostname }} + OS: {{ ansible_distribution }} {{ ansible_distribution_version }} + IP: {{ ansible_default_ipv4.address }} + RAM: {{ ansible_memtotal_mb }}MB + Disk: {{ disk_result.stdout }} + Checked at: {{ ansible_date_time.iso8601 }} + dest: "/tmp/server-report-{{ inventory_hostname }}.txt" + become: true +``` + +Run it and verify the report file is created on each server. + +**Verify:** SSH into a server and read `/tmp/server-report-*.txt`. Does it contain accurate information? + +--- + +## Hints +- Variable precedence (simplified, low to high): role defaults -> group_vars/all -> group_vars/ -> host_vars/ -> playbook vars -> task vars -> extra vars (`-e`) +- `group_names` is a built-in variable containing the groups the current host belongs to +- `inventory_hostname` is the name of the host as defined in the inventory +- `when` conditions do not need `{{ }}` -- you reference variables directly: `when: app_env == "production"` +- `register` stores the entire result object including `stdout`, `stderr`, `rc` (return code), and `stdout_lines` +- `loop` replaces `with_items`, `with_dict`, `with_file` from older Ansible versions +- Use `ansible -m setup -a "filter="` to quickly find fact names +- `debug` with `var` shows the raw variable, `msg` shows a formatted string + +--- + +## Documentation +Create `day-70-variables-loops.md` with: +- Your `group_vars/` and `host_vars/` directory structure +- How variable precedence works with examples from your test +- Five useful Ansible facts and where you would use them +- Conditional playbook with screenshot showing skipped vs executed tasks +- Loop playbook with screenshot showing multiple iterations +- The server report output from Task 6 + +--- + +## Submission +1. Add `day-70-variables-loops.md` to `2026/day-70/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Made Ansible playbooks smart today -- variables from group_vars and host_vars, OS-based conditionals, loops for bulk operations, and facts-driven server reports. Same playbook, different behavior per host. This is how real configuration management works." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-71/README.md b/2026/day-71/README.md new file mode 100644 index 0000000000..4e9bcd08f1 --- /dev/null +++ b/2026/day-71/README.md @@ -0,0 +1,428 @@ +# Day 71 -- Roles, Galaxy, Templates and Vault + +## Task +Your playbooks are getting bigger. Tasks, variables, handlers, files -- all living in one YAML file that grows longer every day. In real projects, you manage dozens of servers with different roles -- web servers, databases, monitoring agents, load balancers. You need a way to organize, reuse, and share automation. + +Today you learn Ansible Roles (the standard way to structure automation), Jinja2 Templates (dynamic config files), Ansible Galaxy (the community marketplace), and Ansible Vault (secrets management). + +--- + +## Expected Output +- A custom Ansible role built from scratch +- A Jinja2 template rendering dynamic config files +- A role installed from Ansible Galaxy +- Secrets encrypted with Ansible Vault +- A markdown file: `day-71-roles-templates-vault.md` + +--- + +## Challenge Tasks + +### Task 1: Jinja2 Templates +Templates let you generate config files dynamically using variables and facts. + +1. Create `templates/nginx-vhost.conf.j2`: +```jinja2 +# Managed by Ansible -- do not edit manually +server { + listen {{ http_port | default(80) }}; + server_name {{ ansible_hostname }}; + + root /var/www/{{ app_name }}; + index index.html; + + location / { + try_files $uri $uri/ =404; + } + + access_log /var/log/nginx/{{ app_name }}_access.log; + error_log /var/log/nginx/{{ app_name }}_error.log; +} +``` + +2. Create a playbook `template-demo.yml`: +```yaml +--- +- name: Deploy Nginx with template + hosts: web + become: true + vars: + app_name: terraweek-app + http_port: 80 + + tasks: + - name: Install Nginx + yum: + name: nginx + state: present + + - name: Create web root + file: + path: "/var/www/{{ app_name }}" + state: directory + mode: '0755' + + - name: Deploy vhost config from template + template: + src: templates/nginx-vhost.conf.j2 + dest: "/etc/nginx/conf.d/{{ app_name }}.conf" + owner: root + mode: '0644' + notify: Restart Nginx + + - name: Deploy index page + copy: + content: "

<h1>{{ app_name }}</h1><p>Host: {{ ansible_hostname }} | IP: {{ ansible_default_ipv4.address }}</p>

" + dest: "/var/www/{{ app_name }}/index.html" + + handlers: + - name: Restart Nginx + service: + name: nginx + state: restarted +``` + +Run it with `--diff` to see the rendered template: +```bash +ansible-playbook template-demo.yml --diff +``` + +**Verify:** SSH into the web server and read the generated config. Are the variables replaced with actual values? + +--- + +### Task 2: Understand the Role Structure +An Ansible role has a fixed directory structure. Each directory has a specific purpose: + +``` +roles/ + webserver/ + tasks/ + main.yml # The main task list + handlers/ + main.yml # Handlers (restart services, etc.) + templates/ + nginx.conf.j2 # Jinja2 templates + files/ + index.html # Static files to copy + vars/ + main.yml # Role variables (high priority) + defaults/ + main.yml # Default variables (low priority, easily overridden) + meta/ + main.yml # Role metadata and dependencies +``` + +Every directory contains a `main.yml` that Ansible loads automatically. You only create the directories you need. + +Generate a skeleton with: +```bash +ansible-galaxy init roles/webserver +``` + +Explore the generated directory. Read the README.md that Galaxy creates. + +**Document:** What is the difference between `vars/main.yml` and `defaults/main.yml`? + +--- + +### Task 3: Build a Custom Webserver Role +Build a complete `webserver` role from scratch: + +**`roles/webserver/defaults/main.yml`:** +```yaml +--- +http_port: 80 +app_name: myapp +max_connections: 512 +``` + +**`roles/webserver/tasks/main.yml`:** +```yaml +--- +- name: Install Nginx + yum: + name: nginx + state: present + +- name: Deploy Nginx config + template: + src: nginx.conf.j2 + dest: /etc/nginx/nginx.conf + owner: root + mode: '0644' + notify: Restart Nginx + +- name: Deploy vhost config + template: + src: vhost.conf.j2 + dest: "/etc/nginx/conf.d/{{ app_name }}.conf" + owner: root + mode: '0644' + notify: Restart Nginx + +- name: Create web root + file: + path: "/var/www/{{ app_name }}" + state: directory + mode: '0755' + +- name: Deploy index page + template: + src: index.html.j2 + dest: "/var/www/{{ app_name }}/index.html" + mode: '0644' + +- name: Start and enable Nginx + service: + name: nginx + state: started + enabled: true +``` + +**`roles/webserver/handlers/main.yml`:** +```yaml +--- +- name: Restart Nginx + service: + name: nginx + state: restarted +``` + +**`roles/webserver/templates/index.html.j2`:** +```html +

<h1>{{ app_name }}</h1>
+<p>Server: {{ ansible_hostname }}</p>
+<p>IP: {{ ansible_default_ipv4.address }}</p>
+<p>Environment: {{ app_env | default('development') }}</p>
+<p>Managed by Ansible</p>

+``` + +Create the `vhost.conf.j2` and `nginx.conf.j2` templates yourself based on what you learned in Task 1. + +Now call the role from a playbook `site.yml`: +```yaml +--- +- name: Configure web servers + hosts: web + become: true + roles: + - role: webserver + vars: + app_name: terraweek + http_port: 80 +``` + +Run it: +```bash +ansible-playbook site.yml +``` + +**Verify:** Curl the web server. Does the custom page load? + +--- + +### Task 4: Ansible Galaxy -- Use Community Roles +Ansible Galaxy is a marketplace of pre-built roles. + +1. **Search for roles:** +```bash +ansible-galaxy search nginx --platforms EL +ansible-galaxy search mysql +``` + +2. **Install a role from Galaxy:** +```bash +ansible-galaxy install geerlingguy.docker +``` + +3. **Check where it was installed:** +```bash +ansible-galaxy list +``` + +4. **Use the installed role** -- create `docker-setup.yml`: +```yaml +--- +- name: Install Docker using Galaxy role + hosts: app + become: true + roles: + - geerlingguy.docker +``` + +Run it -- Docker gets installed with a single role call. + +5. **Use a requirements file** for managing multiple roles. Create `requirements.yml`: +```yaml +--- +roles: + - name: geerlingguy.docker + version: "7.4.1" + - name: geerlingguy.ntp +``` + +Install all at once: +```bash +ansible-galaxy install -r requirements.yml +``` + +**Document:** Why use a `requirements.yml` instead of installing roles manually? + +--- + +### Task 5: Ansible Vault -- Encrypt Secrets +Never put passwords, API keys, or tokens in plain text. Ansible Vault encrypts sensitive data. + +1. **Create an encrypted file:** +```bash +ansible-vault create group_vars/db/vault.yml +``` +It will ask for a vault password, then open an editor. Add: +```yaml +vault_db_password: SuperSecretP@ssw0rd +vault_db_root_password: R00tP@ssw0rd123 +vault_api_key: sk-abc123xyz789 +``` +Save and exit. Open the file with `cat` -- it is fully encrypted. + +2. **Edit an encrypted file:** +```bash +ansible-vault edit group_vars/db/vault.yml +``` + +3. **View without editing:** +```bash +ansible-vault view group_vars/db/vault.yml +``` + +4. **Encrypt an existing file:** +```bash +ansible-vault encrypt group_vars/db/secrets.yml +``` + +5. **Use vault variables in a playbook** -- create `db-setup.yml`: +```yaml +--- +- name: Configure database + hosts: db + become: true + + tasks: + - name: Show DB password (never do this in production) + debug: + msg: "DB password is set: {{ vault_db_password | length > 0 }}" +``` + +Run with the vault password: +```bash +ansible-playbook db-setup.yml --ask-vault-pass +``` + +6. **Use a password file** (better for CI/CD): +```bash +echo "YourVaultPassword" > .vault_pass +chmod 600 .vault_pass +echo ".vault_pass" >> .gitignore + +ansible-playbook db-setup.yml --vault-password-file .vault_pass +``` + +Or set it in `ansible.cfg`: +```ini +[defaults] +vault_password_file = .vault_pass +``` + +**Document:** Why is `--vault-password-file` better than `--ask-vault-pass` for automated pipelines? 
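For context before Task 6: a rough sketch of how a pipeline step does this non-interactively. The `ANSIBLE_VAULT_PASSWORD` environment variable is hypothetical here -- in practice it would be injected from your CI system's secret store:

```bash
# CI job step (sketch): materialise the vault password from a secret, run, clean up
echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
chmod 600 .vault_pass
ansible-playbook db-setup.yml --vault-password-file .vault_pass
rm -f .vault_pass
```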
+ +--- + +### Task 6: Combine Roles, Templates, and Vault +Write a complete `site.yml` that uses everything you learned today: + +```yaml +--- +- name: Configure web servers + hosts: web + become: true + roles: + - role: webserver + vars: + app_name: terraweek + http_port: 80 + +- name: Configure app servers with Docker + hosts: app + become: true + roles: + - geerlingguy.docker + +- name: Configure database servers + hosts: db + become: true + tasks: + - name: Create DB config with secrets + template: + src: templates/db-config.j2 + dest: /etc/db-config.env + owner: root + mode: '0600' +``` + +Create `templates/db-config.j2`: +```jinja2 +# Database Configuration -- Managed by Ansible +DB_HOST={{ ansible_default_ipv4.address }} +DB_PORT={{ db_port | default(3306) }} +DB_PASSWORD={{ vault_db_password }} +DB_ROOT_PASSWORD={{ vault_db_root_password }} +``` + +Run: +```bash +ansible-playbook site.yml +``` + +**Verify:** SSH into the db server and check `/etc/db-config.env`. Are the secrets rendered correctly? Is the file permission `600`? + +--- + +## Hints +- Templates use `.j2` extension by convention (Jinja2) +- In templates, `{{ variable }}` renders a value, `{% if %}` is a conditional, `{% for %}` is a loop +- `| default(value)` is a Jinja2 filter that provides a fallback if the variable is undefined +- Role `defaults/` has the lowest priority -- callers can easily override these values +- Role `vars/` has high priority -- use it for values that should not be overridden +- `ansible-galaxy init` creates the full skeleton, but you can delete directories you don't use +- Vault-encrypted files are normal YAML after decryption -- Ansible handles it transparently +- Never commit `.vault_pass` to Git -- always add it to `.gitignore` +- Use `ansible-vault encrypt_string` to encrypt a single value inline instead of a whole file + +--- + +## Documentation +Create `day-71-roles-templates-vault.md` with: +- Your webserver role directory structure +- The Jinja2 templates you created and the rendered output +- Screenshot of the role running successfully +- How you installed and used a Galaxy role +- Vault workflow: create, edit, view, encrypt, decrypt +- Screenshot of the encrypted vault file contents +- When to use roles vs playbooks vs ad-hoc commands + +--- + +## Submission +1. Add `day-71-roles-templates-vault.md` to `2026/day-71/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Built my first Ansible role today -- organized tasks, templates, handlers, and defaults into a reusable structure. Used Galaxy to install community roles, Jinja2 for dynamic configs, and Vault to encrypt secrets. This is production-grade automation." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-72/README.md b/2026/day-72/README.md new file mode 100644 index 0000000000..59608f5cb5 --- /dev/null +++ b/2026/day-72/README.md @@ -0,0 +1,436 @@ +# Day 72 -- Ansible Project: Automate Docker and Nginx Deployment + +## Task +Five days of Ansible -- inventory, ad-hoc commands, playbooks, modules, handlers, variables, facts, conditionals, loops, roles, templates, Galaxy, and Vault. Today you put it all together and build what you would actually do on the job. + +Automate a complete deployment: install Docker, pull and run a containerized application, set up Nginx as a reverse proxy in front of it, and manage everything through Ansible roles. One command to go from a fresh server to a fully running, production-style setup. 
+ +--- + +## Expected Output +- A complete Ansible project with custom roles for Docker and Nginx +- Docker containers running on managed nodes, deployed entirely through Ansible +- Nginx configured as a reverse proxy to the container +- Vault-encrypted Docker Hub credentials +- A markdown file: `day-72-ansible-project.md` +- A running app accessible through Nginx on port 80 + +--- + +## Challenge Tasks + +### Task 1: Plan the Project Structure +Create the complete project layout: + +``` +ansible-docker-project/ + ansible.cfg + inventory.ini + site.yml # Master playbook + group_vars/ + all.yml # Common variables + web/ + vars.yml # Nginx variables + vault.yml # Encrypted Docker Hub credentials + roles/ + common/ # Shared setup for all servers + tasks/main.yml + docker/ # Docker installation and container management + tasks/main.yml + templates/ + docker-compose.yml.j2 + handlers/main.yml + defaults/main.yml + nginx/ # Nginx reverse proxy + tasks/main.yml + templates/ + nginx.conf.j2 + app-proxy.conf.j2 + handlers/main.yml + defaults/main.yml +``` + +Generate the role skeletons: +```bash +mkdir -p ansible-docker-project/roles +cd ansible-docker-project +ansible-galaxy init roles/common +ansible-galaxy init roles/docker +ansible-galaxy init roles/nginx +``` + +Set up your `ansible.cfg` and `inventory.ini` using what you built on Day 68. + +--- + +### Task 2: Build the Common Role +The `common` role runs on every server -- baseline packages and setup. + +**`roles/common/tasks/main.yml`:** +```yaml +--- +- name: Update package cache + yum: + update_cache: true + tags: common + +- name: Install common packages + yum: + name: "{{ common_packages }}" + state: present + tags: common + +- name: Set hostname + hostname: + name: "{{ inventory_hostname }}" + tags: common + +- name: Set timezone + timezone: + name: "{{ timezone }}" + tags: common + +- name: Create deploy user + user: + name: deploy + groups: wheel + shell: /bin/bash + state: present + tags: common +``` + +(Use `apt` instead of `yum` if your instances run Ubuntu) + +**`group_vars/all.yml`:** +```yaml +--- +timezone: Asia/Kolkata +project_name: devops-app +app_env: development +common_packages: + - vim + - curl + - wget + - git + - htop + - tree + - jq + - unzip +``` + +--- + +### Task 3: Build the Docker Role +This role installs Docker, starts the service, pulls images, and runs containers. + +**`roles/docker/defaults/main.yml`:** +```yaml +--- +docker_app_image: nginx +docker_app_tag: latest +docker_app_name: myapp +docker_app_port: 8080 +docker_container_port: 80 +``` + +**`roles/docker/tasks/main.yml`:** +Write tasks that: +1. Install Docker dependencies (`yum-utils`, `device-mapper-persistent-data`, `lvm2`) +2. Add the Docker CE repository +3. Install Docker CE +4. Start and enable the Docker service +5. Add the `deploy` user to the `docker` group +6. Install Docker Compose (via pip or direct download) +7. Log in to Docker Hub using vault-encrypted credentials: +```yaml +- name: Log in to Docker Hub + community.docker.docker_login: + username: "{{ vault_docker_username }}" + password: "{{ vault_docker_password }}" + become_user: deploy + when: vault_docker_username is defined +``` +8. Pull the application image: +```yaml +- name: Pull application image + community.docker.docker_image: + name: "{{ docker_app_image }}" + tag: "{{ docker_app_tag }}" + source: pull +``` +9. 
Run the container: +```yaml +- name: Run application container + community.docker.docker_container: + name: "{{ docker_app_name }}" + image: "{{ docker_app_image }}:{{ docker_app_tag }}" + state: started + restart_policy: always + ports: + - "{{ docker_app_port }}:{{ docker_container_port }}" +``` +10. Verify the container is running: +```yaml +- name: Wait for container to be healthy + uri: + url: "http://localhost:{{ docker_app_port }}" + status_code: 200 + retries: 5 + delay: 3 + register: health_check + until: health_check.status == 200 +``` + +Tag all tasks with `docker`. + +**`roles/docker/handlers/main.yml`:** +```yaml +--- +- name: Restart Docker + service: + name: docker + state: restarted +``` + +**Install the required Ansible collection** (needed for `community.docker` modules): +```bash +ansible-galaxy collection install community.docker +``` + +--- + +### Task 4: Build the Nginx Role +This role installs Nginx and configures it as a reverse proxy to the Docker container. + +**`roles/nginx/defaults/main.yml`:** +```yaml +--- +nginx_http_port: 80 +nginx_upstream_port: 8080 +nginx_server_name: "_" +``` + +**`roles/nginx/tasks/main.yml`:** +Write tasks that: +1. Install Nginx +2. Remove the default Nginx site config +3. Deploy the main Nginx config from a template +4. Deploy the reverse proxy config from a template +5. Test Nginx config before reloading: +```yaml +- name: Test Nginx configuration + command: nginx -t + changed_when: false +``` +6. Start and enable Nginx +7. Use a handler to reload Nginx when any config changes + +Tag all tasks with `nginx`. + +**`roles/nginx/templates/app-proxy.conf.j2`:** +```nginx +# Reverse Proxy to Docker Container -- Managed by Ansible +upstream docker_app { + server 127.0.0.1:{{ nginx_upstream_port }}; +} + +server { + listen {{ nginx_http_port }}; + server_name {{ nginx_server_name }}; + + location / { + proxy_pass http://docker_app; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + } + + location /health { + access_log off; + return 200 'OK'; + add_header Content-Type text/plain; + } + +{% if app_env == 'production' %} + access_log /var/log/nginx/{{ project_name }}_access.log; + error_log /var/log/nginx/{{ project_name }}_error.log; +{% else %} + access_log /var/log/nginx/{{ project_name }}_access.log; + error_log /var/log/nginx/{{ project_name }}_error.log debug; +{% endif %} +} +``` + +**`roles/nginx/handlers/main.yml`:** +```yaml +--- +- name: Reload Nginx + service: + name: nginx + state: reloaded + +- name: Restart Nginx + service: + name: nginx + state: restarted +``` + +--- + +### Task 5: Encrypt Docker Hub Credentials with Vault +1. Create the vault file: +```bash +ansible-vault create group_vars/web/vault.yml +``` +Add: +```yaml +vault_docker_username: your-dockerhub-username +vault_docker_password: your-dockerhub-token +``` + +2. Create a vault password file for convenience: +```bash +echo "YourVaultPassword" > .vault_pass +chmod 600 .vault_pass +echo ".vault_pass" >> .gitignore +``` + +3. 
Reference it in `ansible.cfg`: +```ini +[defaults] +inventory = inventory.ini +host_key_checking = False +vault_password_file = .vault_pass +``` + +--- + +### Task 6: Write the Master Playbook and Deploy +**`site.yml`:** +```yaml +--- +- name: Apply common configuration + hosts: all + become: true + roles: + - common + tags: common + +- name: Install Docker and run containers + hosts: web + become: true + roles: + - docker + tags: docker + +- name: Configure Nginx reverse proxy + hosts: web + become: true + roles: + - nginx + tags: nginx +``` + +Deploy the full stack: +```bash +# Dry run first -- always +ansible-playbook site.yml --check --diff + +# Full deploy +ansible-playbook site.yml +``` + +Use tags for selective execution: +```bash +# Only set up Docker and containers +ansible-playbook site.yml --tags docker + +# Only update Nginx config +ansible-playbook site.yml --tags nginx + +# Skip common setup +ansible-playbook site.yml --skip-tags common +``` + +**Verify:** +1. Curl the server on port 8080 -- does the Docker container respond directly? +2. Curl the server on port 80 -- does Nginx reverse proxy the request to the container? +3. Check `docker ps` on the server -- is the container running with the correct port mapping? + +--- + +### Task 7: Bonus -- Deploy a Different App and Re-Run +Change the Docker image to something else. Update `group_vars/all.yml` or pass extra vars: + +```bash +ansible-playbook site.yml --tags docker \ + -e "docker_app_image=httpd docker_app_tag=latest docker_app_name=apache-app" +``` + +The old container should be replaced with the new one. Nginx still proxies traffic -- no config change needed. + +Now run the full playbook one more time: +```bash +ansible-playbook site.yml +``` + +The output should show mostly `ok` with zero or minimal `changed`. This proves your entire setup is **idempotent**. + +**Reflect and document:** +1. How many total tasks ran? +2. Map each Ansible concept to the day you learned it: + +| Day | Concept Used | +|-----|-------------| +| 68 | Inventory, ad-hoc commands, SSH setup | +| 69 | Playbooks, modules, handlers | +| 70 | Variables, facts, conditionals, loops | +| 71 | Roles, templates, Galaxy, Vault | +| 72 | Everything combined in one project | + +3. What would you add for production? (SSL with certbot, monitoring, log rotation, multi-container Compose) +4. Clean up your EC2 instances when done. If you used Terraform: `terraform destroy`. If manual: terminate from the console. 
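A quick way to eyeball that idempotent final run: the `PLAY RECAP` at the end of the output reports per-host counters, so a minimal sketch is to filter for them -- every host should show `changed=0` on the re-run:

```bash
ansible-playbook site.yml | grep "changed="
```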
+ +--- + +## Hints +- Install `community.docker` collection before running: `ansible-galaxy collection install community.docker` +- If `community.docker` modules are not available, you can use `command` or `shell` with `docker run` as a fallback +- Nginx and the Docker container run on the same server -- Nginx listens on port 80, container on port 8080 +- `nginx -t` tests the config without reloading -- always run this before a reload +- `restart_policy: always` ensures the container restarts after a server reboot +- Tags let you update just Docker containers or just Nginx config independently +- `--check --diff` is your best friend before any deployment +- If the container port conflicts with another service, change `docker_app_port` in defaults +- The `uri` module is a clean way to health-check without installing curl on the managed node + +--- + +## Documentation +Create `day-72-ansible-project.md` with: +- Your complete project directory structure +- Key files: `site.yml`, each role's `tasks/main.yml`, the Nginx reverse proxy template +- Screenshot of `ansible-playbook site.yml` running end-to-end +- Screenshot proving idempotency (second run with all ok) +- Screenshot of `docker ps` on the server showing the running container +- Screenshot of curling port 80 through Nginx +- How you used tags for selective deployment +- How Vault protected Docker Hub credentials +- Architecture: Ansible -> Server [Nginx:80 -> Docker Container:8080] + +--- + +## Submission +1. Add `day-72-ansible-project.md` to `2026/day-72/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Completed the Ansible block -- automated a full Docker + Nginx deployment with custom roles. Docker installed, container running, Nginx reverse-proxying, secrets encrypted with Vault. One command sets up the entire server. Five days from zero to production-grade automation." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-73/README.md b/2026/day-73/README.md new file mode 100644 index 0000000000..4e6cf9b708 --- /dev/null +++ b/2026/day-73/README.md @@ -0,0 +1,296 @@ +# Day 73 -- Introduction to Observability and Prometheus + +## Task +You have built infrastructure with Terraform, configured servers with Ansible, and containerized applications with Docker. But once everything is running -- how do you know it is healthy? How do you find out why something broke at 3 AM? + +That is where observability comes in. Today you learn the three pillars of observability -- metrics, logs, and traces -- and set up Prometheus, the most widely used metrics collection tool in the DevOps ecosystem. + +--- + +## Expected Output +- Clear understanding of observability vs traditional monitoring +- Prometheus running in a Docker container +- A working `prometheus.yml` with scrape targets +- Prometheus scraping its own metrics and responding to PromQL queries +- A markdown file: `day-73-observability-prometheus.md` + +--- + +## Challenge Tasks + +### Task 1: Understand Observability +Research and write short notes on: + +1. What is observability? How is it different from traditional monitoring? + - **Monitoring** tells you _when_ something is wrong (alerts, thresholds) + - **Observability** tells you _why_ something is wrong (explore, query, correlate) + +2. The three pillars of observability: + - **Metrics** -- numerical measurements over time (CPU usage, request count, error rate). 
Tools: Prometheus, Datadog, CloudWatch + - **Logs** -- timestamped text records of events (application output, error messages). Tools: Loki, ELK Stack, Fluentd + - **Traces** -- the journey of a single request across multiple services. Tools: OpenTelemetry, Jaeger, Zipkin + +3. Why do DevOps engineers need all three? + - Metrics tell you _what_ is broken (high error rate on `/api/users`) + - Logs tell you _why_ it broke (stack trace showing a database timeout) + - Traces tell you _where_ it broke (the payment service call took 12 seconds) + +4. Draw or describe this architecture -- this is what you will build over the next 5 days: + ``` + [Your App] --> metrics --> [Prometheus] --> [Grafana Dashboards] + [Your App] --> logs --> [Promtail] --> [Loki] --> [Grafana] + [Your App] --> traces --> [OTEL Collector] --> [Grafana/Debug] + [Host] --> metrics --> [Node Exporter] --> [Prometheus] + [Docker] --> metrics --> [cAdvisor] --> [Prometheus] + ``` + +--- + +### Task 2: Set Up Prometheus with Docker +Create a project directory for this entire observability block -- you will keep adding to it over the next 5 days. + +```bash +mkdir observability-stack && cd observability-stack +``` + +Create a `prometheus.yml` configuration file: +```yaml +global: + scrape_interval: 15s + evaluation_interval: 15s + +scrape_configs: + - job_name: "prometheus" + static_configs: + - targets: ["localhost:9090"] +``` + +This tells Prometheus to scrape its own metrics every 15 seconds. + +Create a `docker-compose.yml` to run Prometheus: +```yaml +services: + prometheus: + image: prom/prometheus:latest + container_name: prometheus + ports: + - "9090:9090" + volumes: + - ./prometheus.yml:/etc/prometheus/prometheus.yml + - prometheus_data:/prometheus + command: + - '--config.file=/etc/prometheus/prometheus.yml' + restart: unless-stopped + +volumes: + prometheus_data: +``` + +Start Prometheus: +```bash +docker compose up -d +``` + +Open `http://localhost:9090` in your browser. You should see the Prometheus web UI. + +**Verify:** Go to Status > Targets. You should see one target (`prometheus`) with state `UP`. + +--- + +### Task 3: Understand Prometheus Concepts +Explore the Prometheus UI and understand these concepts: + +1. **Scrape targets** -- endpoints that Prometheus pulls metrics from at regular intervals (pull-based model) +2. **Metrics types:** + - `Counter` -- only goes up (total requests served, total errors) + - `Gauge` -- goes up and down (current CPU usage, memory in use, active connections) + - `Histogram` -- distribution of values in buckets (request duration: how many took <100ms, <500ms, <1s) + - `Summary` -- similar to histogram but calculates percentiles on the client side +3. **Labels** -- key-value pairs that add dimensions to metrics (e.g., `http_requests_total{method="GET", status="200"}`) +4. **Time series** -- a unique combination of metric name + labels + +Go to the Prometheus UI graph page (`http://localhost:9090/graph`) and run these queries: + +``` +# How many metrics is Prometheus collecting about itself? +count({__name__=~".+"}) + +# How much memory is Prometheus using? +process_resident_memory_bytes + +# Total HTTP requests to the Prometheus server +prometheus_http_requests_total + +# Break it down by handler +prometheus_http_requests_total{handler="/api/v1/query"} +``` + +**Document:** What is the difference between a counter and a gauge? Give one real-world example of each. 
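The histogram type is easiest to grasp with a concrete query. A minimal sketch, assuming Prometheus's own request-duration histogram (`prometheus_http_request_duration_seconds`) -- if the name differs in your version, browse `http://localhost:9090/metrics` to find it:

```promql
# Raw histogram buckets: each series counts observations <= its "le" label
prometheus_http_request_duration_seconds_bucket

# Estimate the 95th percentile request duration from those buckets
histogram_quantile(0.95, sum(rate(prometheus_http_request_duration_seconds_bucket[5m])) by (le))
```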
+ +--- + +### Task 4: Learn PromQL Basics +PromQL (Prometheus Query Language) is how you ask questions about your metrics. Run these queries in the Prometheus UI: + +1. **Instant vector** -- current value of a metric: +```promql +up +``` +This returns 1 (up) or 0 (down) for each scrape target. + +2. **Range vector** -- values over a time window: +```promql +prometheus_http_requests_total[5m] +``` +Returns all values from the last 5 minutes. + +3. **Rate** -- per-second rate of a counter over a time window: +```promql +rate(prometheus_http_requests_total[5m]) +``` +This is the most common function you will use. Counters always go up -- `rate()` converts them to a useful per-second speed. + +4. **Aggregation** -- sum across all label combinations: +```promql +sum(rate(prometheus_http_requests_total[5m])) +``` + +5. **Filter by label:** +```promql +prometheus_http_requests_total{code="200"} +prometheus_http_requests_total{code!="200"} +``` + +6. **Arithmetic:** +```promql +process_resident_memory_bytes / 1024 / 1024 +``` +This converts bytes to megabytes. + +7. **Top-K:** +```promql +topk(5, prometheus_http_requests_total) +``` + +**Try this exercise:** Write a PromQL query that shows the per-second rate of non-200 HTTP requests to Prometheus over the last 5 minutes. (Hint: use `rate()` with a label filter on `code!="200"`) + +--- + +### Task 5: Add a Sample Application as a Scrape Target +Prometheus needs something to monitor. Add a simple metrics-generating service. + +Update your `docker-compose.yml` to include a sample app that exposes Prometheus metrics: +```yaml +services: + prometheus: + image: prom/prometheus:latest + container_name: prometheus + ports: + - "9090:9090" + volumes: + - ./prometheus.yml:/etc/prometheus/prometheus.yml + - prometheus_data:/prometheus + command: + - '--config.file=/etc/prometheus/prometheus.yml' + restart: unless-stopped + + notes-app: + image: trainwithshubham/notes-app:latest + container_name: notes-app + ports: + - "8000:8000" + restart: unless-stopped + +volumes: + prometheus_data: +``` + +Update `prometheus.yml` to scrape the app: +```yaml +global: + scrape_interval: 15s + evaluation_interval: 15s + +scrape_configs: + - job_name: "prometheus" + static_configs: + - targets: ["localhost:9090"] + + - job_name: "notes-app" + static_configs: + - targets: ["notes-app:8000"] +``` + +Restart the stack: +```bash +docker compose up -d +``` + +Go back to Status > Targets. You should now see two targets. Generate some traffic to the app: +```bash +curl http://localhost:8000 +curl http://localhost:8000 +curl http://localhost:8000 +``` + +**Note:** Not all applications expose Prometheus metrics natively. In later days you will learn how Node Exporter, cAdvisor, and OTEL Collector act as metric exporters for systems that do not have built-in Prometheus support. + +--- + +### Task 6: Explore Data Retention and Storage +Understand how Prometheus stores data: + +1. Check how much disk space Prometheus is using: +```bash +docker exec prometheus du -sh /prometheus +``` + +2. Prometheus stores data in a local time-series database (TSDB). Default retention is 15 days. You can change it: +```yaml +command: + - '--config.file=/etc/prometheus/prometheus.yml' + - '--storage.tsdb.retention.time=30d' + - '--storage.tsdb.retention.size=1GB' +``` + +3. Check the TSDB status in the UI: Status > TSDB Status + +**Document:** What happens when retention is exceeded? Why is a volume mount important for Prometheus data? 
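To confirm which retention settings the running server actually has (rather than what you think you configured), query the status API -- a small sketch, assuming `jq` is installed on your machine:

```bash
# Retention flags the server was started with
curl -s http://localhost:9090/api/v1/status/flags | jq '.data["storage.tsdb.retention.time"], .data["storage.tsdb.retention.size"]'

# TSDB head statistics -- the same data behind Status > TSDB Status
curl -s http://localhost:9090/api/v1/status/tsdb | jq '.data.headStats'
```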
+ +--- + +## Hints +- Prometheus uses a **pull model** -- it scrapes targets at regular intervals, unlike push-based systems +- The `up` metric is automatically created for every scrape target -- 1 means healthy, 0 means the target is unreachable +- `rate()` only works on counters, not gauges -- applying rate to a gauge gives meaningless results +- Always use `rate()` before `sum()` when aggregating counters: `sum(rate(...))` not `rate(sum(...))` +- If a target shows as DOWN in Status > Targets, check: is the container running? Is the port correct? Are they on the same Docker network? +- `prometheus.yml` changes require a restart or a POST to `/-/reload` (if `--web.enable-lifecycle` flag is set) +- Reference repo for the full stack: https://github.com/LondheShubham153/observability-for-devops + +--- + +## Documentation +Create `day-73-observability-prometheus.md` with: +- The three pillars of observability in your own words +- Your `prometheus.yml` and `docker-compose.yml` +- Screenshot of Prometheus Targets page showing all targets UP +- Five PromQL queries you ran and what they returned +- Explanation of counter vs gauge with examples +- Architecture diagram of what you will build over days 73-77 + +--- + +## Submission +1. Add `day-73-observability-prometheus.md` to `2026/day-73/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Started the observability block today -- learned the three pillars (metrics, logs, traces), set up Prometheus in Docker, wrote my first PromQL queries, and started monitoring a sample app. Observability is what separates running services from actually understanding them." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-74/README.md b/2026/day-74/README.md new file mode 100644 index 0000000000..2606ef49b4 --- /dev/null +++ b/2026/day-74/README.md @@ -0,0 +1,350 @@ +# Day 74 -- Node Exporter, cAdvisor, and Grafana Dashboards + +## Task +Prometheus is running and you can query metrics. But right now it is only monitoring itself. In production, you need to monitor two critical things: the **host machine** (CPU, memory, disk, network) and the **Docker containers** running on it. + +Today you add Node Exporter for host metrics, cAdvisor for container metrics, and set up Grafana to visualize everything in dashboards instead of raw PromQL. + +--- + +## Expected Output +- Node Exporter running and scraped by Prometheus +- cAdvisor running and scraped by Prometheus +- Grafana running with Prometheus configured as a datasource +- At least one custom Grafana dashboard with CPU, memory, and container panels +- A markdown file: `day-74-exporters-grafana.md` + +--- + +## Challenge Tasks + +### Task 1: Add Node Exporter for Host Metrics +Node Exporter exposes Linux system metrics (CPU, memory, disk, filesystem, network) in Prometheus format. 
+ +Update your `docker-compose.yml` from Day 73 -- add the Node Exporter service: +```yaml + node-exporter: + image: prom/node-exporter:latest + container_name: node-exporter + ports: + - "9100:9100" + volumes: + - /proc:/host/proc:ro + - /sys:/host/sys:ro + - /:/rootfs:ro + command: + - '--path.procfs=/host/proc' + - '--path.sysfs=/host/sys' + - '--path.rootfs=/rootfs' + - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)' + restart: unless-stopped +``` + +**Why these volume mounts?** +- `/proc` -- kernel and process information (CPU stats, memory info) +- `/sys` -- hardware and driver details +- `/` -- filesystem usage (disk space) + +All mounted read-only (`ro`) -- Node Exporter only reads, never modifies. + +Add it as a scrape target in `prometheus.yml`: +```yaml +scrape_configs: + - job_name: "prometheus" + static_configs: + - targets: ["localhost:9090"] + + - job_name: "node-exporter" + static_configs: + - targets: ["node-exporter:9100"] +``` + +Restart the stack: +```bash +docker compose up -d +``` + +Verify Node Exporter is healthy: +```bash +curl http://localhost:9100/metrics | head -20 +``` + +Check Prometheus Targets page -- `node-exporter` should show as `UP`. + +Run these queries in Prometheus to see host metrics: +```promql +# CPU: percentage of time spent idle (per core) +node_cpu_seconds_total{mode="idle"} + +# Memory: total vs available +node_memory_MemTotal_bytes +node_memory_MemAvailable_bytes + +# Memory usage percentage +(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 + +# Disk: filesystem usage percentage +(1 - node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 + +# Network: bytes received per second +rate(node_network_receive_bytes_total[5m]) +``` + +--- + +### Task 2: Add cAdvisor for Container Metrics +cAdvisor (Container Advisor) monitors resource usage and performance of running Docker containers. + +Add it to your `docker-compose.yml`: +```yaml + cadvisor: + image: gcr.io/cadvisor/cadvisor:latest + container_name: cadvisor + ports: + - "8080:8080" + volumes: + - /var/run/docker.sock:/var/run/docker.sock:ro + - /sys:/sys:ro + - /var/lib/docker/:/var/lib/docker:ro + restart: unless-stopped +``` + +**Why these volume mounts?** +- Docker socket (`docker.sock`) -- lets cAdvisor discover and query running containers +- `/sys` -- kernel-level container stats (cgroups) +- `/var/lib/docker/` -- container filesystem information + +Add cAdvisor as a Prometheus scrape target: +```yaml + - job_name: "cadvisor" + static_configs: + - targets: ["cadvisor:8080"] +``` + +Restart and verify: +```bash +docker compose up -d +``` + +Open `http://localhost:8080` to see the cAdvisor web UI. Click on Docker Containers to see per-container stats. + +Run these queries in Prometheus: +```promql +# CPU usage per container (in seconds) +rate(container_cpu_usage_seconds_total{name!=""}[5m]) + +# Memory usage per container +container_memory_usage_bytes{name!=""} + +# Network received bytes per container +rate(container_network_receive_bytes_total{name!=""}[5m]) + +# Which container is using the most memory? +topk(3, container_memory_usage_bytes{name!=""}) +``` + +The `{name!=""}` filter removes aggregated/system-level entries and shows only named containers. + +**Document:** What is the difference between Node Exporter and cAdvisor? When would you use each? + +--- + +### Task 3: Set Up Grafana +Grafana is the visualization layer. 
It connects to Prometheus (and later Loki) and lets you build dashboards, set alerts, and share views with your team. + +Add Grafana to your `docker-compose.yml`: +```yaml + grafana: + image: grafana/grafana-enterprise:latest + container_name: grafana + ports: + - "3000:3000" + volumes: + - grafana_data:/var/lib/grafana + environment: + - GF_SECURITY_ADMIN_USER=admin + - GF_SECURITY_ADMIN_PASSWORD=admin123 + restart: unless-stopped +``` + +Add the volume at the bottom of your compose file: +```yaml +volumes: + prometheus_data: + grafana_data: +``` + +Restart: +```bash +docker compose up -d +``` + +Open `http://localhost:3000`. Log in with `admin` / `admin123`. + +**Add Prometheus as a datasource:** +1. Go to Connections > Data Sources > Add data source +2. Select Prometheus +3. Set URL to `http://prometheus:9090` (use the container name, not localhost -- they are on the same Docker network) +4. Click Save & Test -- you should see "Successfully queried the Prometheus API" + +--- + +### Task 4: Build Your First Dashboard +Create a dashboard that shows the health of your system at a glance. + +1. Go to Dashboards > New Dashboard > Add Visualization +2. Select Prometheus as the datasource + +**Panel 1 -- CPU Usage (Gauge):** +```promql +100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) +``` +- Visualization: Gauge +- Title: "CPU Usage %" +- Set thresholds: green < 60, yellow < 80, red >= 80 + +**Panel 2 -- Memory Usage (Gauge):** +```promql +(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 +``` +- Visualization: Gauge +- Title: "Memory Usage %" + +**Panel 3 -- Container CPU Usage (Time Series):** +```promql +rate(container_cpu_usage_seconds_total{name!=""}[5m]) * 100 +``` +- Visualization: Time series +- Title: "Container CPU Usage" +- Legend: `{{name}}` + +**Panel 4 -- Container Memory Usage (Bar Chart):** +```promql +container_memory_usage_bytes{name!=""} / 1024 / 1024 +``` +- Visualization: Bar chart +- Title: "Container Memory (MB)" +- Legend: `{{name}}` + +**Panel 5 -- Disk Usage (Stat):** +```promql +(1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 +``` +- Visualization: Stat +- Title: "Disk Usage %" + +Save the dashboard as "DevOps Observability Overview". + +--- + +### Task 5: Auto-Provision Datasources with YAML +In production, you do not click through the UI to add datasources. You provision them with configuration files so the setup is repeatable. + +Create the provisioning directory structure: +```bash +mkdir -p grafana/provisioning/datasources +mkdir -p grafana/provisioning/dashboards +``` + +Create `grafana/provisioning/datasources/datasources.yml`: +```yaml +apiVersion: 1 + +datasources: + - name: Prometheus + type: prometheus + access: proxy + url: http://prometheus:9090 + isDefault: true + editable: false +``` + +Update the Grafana service in `docker-compose.yml` to mount the provisioning directory: +```yaml + grafana: + image: grafana/grafana-enterprise:latest + container_name: grafana + ports: + - "3000:3000" + volumes: + - grafana_data:/var/lib/grafana + - ./grafana/provisioning:/etc/grafana/provisioning + environment: + - GF_SECURITY_ADMIN_USER=admin + - GF_SECURITY_ADMIN_PASSWORD=admin123 + restart: unless-stopped +``` + +Restart Grafana: +```bash +docker compose up -d grafana +``` + +Check Connections > Data Sources -- Prometheus should already be there without any manual setup. 
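+
+The `dashboards` provisioning directory you created can work the same way. A minimal sketch of `grafana/provisioning/dashboards/dashboards.yml` -- the provider name and path here are assumptions, not from the reference repo; any dashboard JSON dropped into that path is loaded at startup:
+```yaml
+apiVersion: 1
+
+providers:
+  - name: default              # arbitrary provider name
+    orgId: 1
+    folder: ""                 # load into the General folder
+    type: file
+    disableDeletion: false
+    updateIntervalSeconds: 30  # how often Grafana rescans the path
+    options:
+      path: /etc/grafana/provisioning/dashboards
+```
+Export a dashboard as JSON (Share > Export) and save it into that directory to keep dashboards in version control alongside the datasource config.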
+ +**Document:** Why is provisioning datasources via YAML better than configuring them manually through the UI? + +--- + +### Task 6: Import a Community Dashboard +The Grafana community maintains thousands of pre-built dashboards. Import one for Node Exporter: + +1. Go to Dashboards > New > Import +2. Enter dashboard ID: **1860** (Node Exporter Full) +3. Select your Prometheus datasource +4. Click Import + +Explore the imported dashboard. It has dozens of panels covering CPU, memory, disk, network, and more -- all built on the same Node Exporter metrics you queried manually. + +**Try another one:** Import dashboard ID **193** (Docker monitoring via cAdvisor). Select Prometheus as the datasource and explore container-level stats. + +**Your full `docker-compose.yml` should now have these services:** +- `prometheus` +- `node-exporter` +- `cadvisor` +- `grafana` +- `notes-app` (from Day 73) + +Verify all are running: +```bash +docker compose ps +``` + +--- + +## Hints +- Node Exporter metrics start with `node_` -- use this prefix to explore in Prometheus +- cAdvisor metrics start with `container_` -- filter with `{name!=""}` to skip aggregated entries +- Grafana uses `http://prometheus:9090` (container name) not `http://localhost:9090` because containers communicate over Docker's internal network +- If Grafana panels show "No data", check: is the datasource configured? Is the PromQL query valid? Try the same query in Prometheus UI first +- Dashboard ID 1860 is the gold standard Node Exporter dashboard -- almost every team uses it +- On macOS with Docker Desktop, some Node Exporter metrics may be limited because Docker runs in a Linux VM, not directly on the host +- Reference repo: https://github.com/LondheShubham153/observability-for-devops -- check `grafana/provisioning/` for provisioning examples + +--- + +## Documentation +Create `day-74-exporters-grafana.md` with: +- Your updated `docker-compose.yml` and `prometheus.yml` with all services +- Difference between Node Exporter and cAdvisor (when to use which) +- Screenshot of Prometheus Targets page with all 3+ targets UP +- Screenshot of your custom Grafana dashboard +- Screenshot of the imported Node Exporter Full dashboard (ID 1860) +- PromQL queries for CPU, memory, disk, and container metrics +- How datasource provisioning works via YAML + +--- + +## Submission +1. Add `day-74-exporters-grafana.md` to `2026/day-74/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Added Node Exporter for host metrics and cAdvisor for container metrics to my observability stack. Built my first Grafana dashboard from scratch -- CPU, memory, disk, and per-container resource usage all in one view. Imported the community Node Exporter dashboard (ID 1860) and it is packed with insights." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-75/README.md b/2026/day-75/README.md new file mode 100644 index 0000000000..1c6c8b355c --- /dev/null +++ b/2026/day-75/README.md @@ -0,0 +1,349 @@ +# Day 75 -- Log Management with Loki and Promtail + +## Task +Metrics tell you _what_ is broken. Logs tell you _why_. Yesterday you built the metrics pipeline with Prometheus, Node Exporter, cAdvisor, and Grafana. Today you add the second pillar of observability -- logs. + +You will set up Grafana Loki (a log aggregation system built by the Grafana team) and Promtail (the agent that ships logs to Loki). 
By the end of today, your Grafana instance will show both metrics and logs side by side. + +--- + +## Expected Output +- Loki running as a log storage backend +- Promtail collecting Docker container logs and shipping them to Loki +- Loki added as a datasource in Grafana +- LogQL queries running in Grafana Explore +- A markdown file: `day-75-loki-promtail.md` + +--- + +## Challenge Tasks + +### Task 1: Understand the Logging Pipeline +Before writing any config, understand how the pieces fit together: + +``` +[Docker Containers] + | + | (write JSON logs to /var/lib/docker/containers/) + v + [Promtail] + | + | (reads log files, adds labels, pushes to Loki) + v + [Loki] + | + | (stores logs, indexes by labels) + v + [Grafana] + | + | (queries Loki with LogQL, displays logs) + v + [You] +``` + +Key differences from the ELK stack: +- Loki does **not** index the full text of logs -- it only indexes labels (like container name, job, filename) +- This makes Loki much cheaper to run and simpler to operate +- Think of it as "Prometheus, but for logs" -- same label-based approach + +**Document:** Why does Loki only index labels instead of full text? What is the trade-off? + +--- + +### Task 2: Add Loki to the Stack +Create the Loki configuration file. + +```bash +mkdir -p loki +``` + +Create `loki/loki-config.yml`: +```yaml +auth_enabled: false + +server: + http_listen_port: 3100 + +common: + ring: + instance_addr: 127.0.0.1 + kvstore: + store: inmemory + replication_factor: 1 + path_prefix: /loki + +schema_config: + configs: + - from: 2020-10-24 + store: tsdb + object_store: filesystem + schema: v13 + index: + prefix: index_ + period: 24h + +storage_config: + filesystem: + directory: /loki/chunks +``` + +**What this config does:** +- `auth_enabled: false` -- single-tenant mode, no authentication needed +- `store: tsdb` -- uses Loki's time-series database for indexing +- `object_store: filesystem` -- stores log chunks on local disk +- `replication_factor: 1` -- single instance, no replication (fine for learning) + +Add Loki to your `docker-compose.yml`: +```yaml + loki: + image: grafana/loki:latest + container_name: loki + ports: + - "3100:3100" + volumes: + - ./loki/loki-config.yml:/etc/loki/loki-config.yml + - loki_data:/loki + command: -config.file=/etc/loki/loki-config.yml + restart: unless-stopped +``` + +Add `loki_data` to your volumes section: +```yaml +volumes: + prometheus_data: + grafana_data: + loki_data: +``` + +Start Loki: +```bash +docker compose up -d loki +``` + +Verify Loki is running: +```bash +curl http://localhost:3100/ready +``` + +You should see `ready`. + +--- + +### Task 3: Add Promtail to Collect Container Logs +Promtail is the log collection agent. It reads Docker container log files from the host and pushes them to Loki. 
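+
+It helps to see what Promtail actually reads. With Docker's default `json-file` logging driver, each container log line is stored as one JSON object in `/var/lib/docker/containers/<container-id>/<container-id>-json.log`. An illustrative line (made-up values) looks like this -- the `docker` pipeline stage in the config below unwraps it into timestamp, stream, and message:
+```json
+{"log":"\"GET / HTTP/1.1\" 200 5321\n","stream":"stdout","time":"2026-01-15T10:23:45.123456789Z"}
+```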
+ +```bash +mkdir -p promtail +``` + +Create `promtail/promtail-config.yml`: +```yaml +server: + http_listen_port: 9080 + grpc_listen_port: 0 + +positions: + filename: /tmp/positions.yaml + +clients: + - url: http://loki:3100/loki/api/v1/push + +scrape_configs: + - job_name: docker + static_configs: + - targets: + - localhost + labels: + job: docker + __path__: /var/lib/docker/containers/*/*-json.log + pipeline_stages: + - docker: {} +``` + +**What this config does:** +- `positions` -- tracks which log lines have already been shipped (like a bookmark) +- `clients` -- where to send logs (Loki endpoint) +- `__path__` -- the glob pattern to find Docker JSON log files on the host +- `pipeline_stages: docker: {}` -- parses the Docker JSON log format and extracts timestamp, stream (stdout/stderr), and the log message + +Add Promtail to your `docker-compose.yml`: +```yaml + promtail: + image: grafana/promtail:latest + container_name: promtail + volumes: + - ./promtail/promtail-config.yml:/etc/promtail/promtail-config.yml + - /var/lib/docker/containers:/var/lib/docker/containers:ro + - /var/run/docker.sock:/var/run/docker.sock + command: -config.file=/etc/promtail/promtail-config.yml + restart: unless-stopped +``` + +**Why these volume mounts?** +- `/var/lib/docker/containers` -- where Docker stores container log files (read-only) +- `/var/run/docker.sock` -- lets Promtail discover container metadata (names, labels) + +Restart the stack: +```bash +docker compose up -d +``` + +Generate some logs by hitting the notes app: +```bash +for i in $(seq 1 20); do curl -s http://localhost:8000 > /dev/null; done +``` + +--- + +### Task 4: Add Loki as a Grafana Datasource +You can add it manually through the UI or auto-provision it with YAML. + +**Option A -- Provision via YAML (recommended):** + +Update `grafana/provisioning/datasources/datasources.yml`: +```yaml +apiVersion: 1 + +datasources: + - name: Prometheus + type: prometheus + access: proxy + url: http://prometheus:9090 + isDefault: true + editable: false + + - name: Loki + type: loki + access: proxy + url: http://loki:3100 + editable: false +``` + +Restart Grafana to pick up the new datasource: +```bash +docker compose restart grafana +``` + +**Option B -- Manual UI setup:** +1. Go to Connections > Data Sources > Add data source +2. Select Loki +3. URL: `http://loki:3100` +4. Save & Test + +Either way, you should now have two datasources in Grafana: Prometheus and Loki. + +--- + +### Task 5: Query Logs with LogQL +LogQL is Loki's query language -- similar to PromQL but for logs. + +Go to Grafana > Explore (compass icon). Select Loki as the datasource. + +1. **Stream selector** -- filter logs by labels: +```logql +{job="docker"} +``` +This shows all Docker container logs. + +2. **Filter by container name:** +```logql +{container_name="prometheus"} +``` + +3. **Keyword search** -- filter log lines by content: +```logql +{job="docker"} |= "error" +``` +`|=` means "line contains". This finds all log lines with the word "error". + +4. **Negative filter:** +```logql +{job="docker"} != "health" +``` +Excludes lines containing "health" (useful to filter out health check noise). + +5. **Regex filter:** +```logql +{job="docker"} |~ "status=[45]\\d{2}" +``` +Finds lines with HTTP 4xx or 5xx status codes. + +6. **Log metric queries** -- count log lines over time: +```logql +count_over_time({job="docker"}[5m]) +``` + +7. **Rate of logs per second:** +```logql +rate({job="docker"}[5m]) +``` + +8. 
**Top containers by log volume:** +```logql +topk(5, sum by (container_name) (rate({job="docker"}[5m]))) +``` + +**Exercise:** Write a LogQL query that finds all error logs from the notes-app container in the last 1 hour. Then write another query that counts how many error lines per minute. + +--- + +### Task 6: Correlate Metrics and Logs in Grafana +The real power of observability is correlation -- seeing metrics and logs together. + +1. **Add a logs panel to your dashboard:** + - Open the dashboard you built on Day 74 + - Add a new panel + - Select Loki as the datasource + - Query: `{job="docker"}` + - Visualization: Logs + - Title: "Container Logs" + +2. **Use the Explore split view:** + - Go to Explore + - Click the split button (two panels side by side) + - Left panel: Prometheus -- `rate(container_cpu_usage_seconds_total{name="notes-app"}[5m])` + - Right panel: Loki -- `{container_name="notes-app"}` + - Now you can see CPU spikes and the corresponding log output at the same time + +3. **Time sync:** Click on a spike in the metrics graph and both panels will zoom to that time range. This is how you debug in production -- you see a metric anomaly and immediately check the logs from that exact moment. + +**Document:** How does having metrics and logs in the same tool (Grafana) help during incident response compared to checking separate systems? + +--- + +## Hints +- Loki labels are like Prometheus labels -- keep cardinality low (container name and job are good; user ID or request ID as labels would kill performance) +- `|=` is case-sensitive. Use `|~ "(?i)error"` for case-insensitive matching +- If you see no logs in Grafana, check: is Promtail running? Is it reading from the correct path? Check Promtail targets at `http://localhost:9080/targets` +- On macOS with Docker Desktop, the Docker log path (`/var/lib/docker/containers/`) is inside the Docker VM -- Promtail needs to run as a container to access it +- Loki is not a replacement for full-text search engines (Elasticsearch). It trades search power for simplicity and cost +- `positions.yaml` tracks read progress -- if you delete it, Promtail re-reads all logs +- Reference repo: https://github.com/LondheShubham153/observability-for-devops -- check `loki/` and `promtail/` directories + +--- + +## Documentation +Create `day-75-loki-promtail.md` with: +- Architecture diagram: Docker containers -> Promtail -> Loki -> Grafana +- Your `loki-config.yml` and `promtail-config.yml` with explanations +- Updated `docker-compose.yml` with all services so far +- Screenshot of Grafana Explore showing logs from Loki +- Five LogQL queries you ran and what they returned +- Screenshot showing metrics and logs side by side in Grafana +- Comparison: Loki vs ELK stack (when would you use each?) + +--- + +## Submission +1. Add `day-75-loki-promtail.md` to `2026/day-75/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Added the second pillar of observability today -- logs. Set up Loki and Promtail to collect all Docker container logs, queried them with LogQL in Grafana, and correlated metrics with logs side by side. When a CPU spike happens, I can now instantly see the exact log lines from that moment." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! 
+**TrainWithShubham** diff --git a/2026/day-76/README.md b/2026/day-76/README.md new file mode 100644 index 0000000000..39ccdc164e --- /dev/null +++ b/2026/day-76/README.md @@ -0,0 +1,444 @@ +# Day 76 -- OpenTelemetry and Alerting + +## Task +You have metrics (Prometheus) and logs (Loki). Today you add the third pillar -- traces -- using OpenTelemetry, the industry-standard framework for collecting telemetry data. Then you set up alerting so your system notifies you when something goes wrong, instead of you staring at dashboards all day. + +By the end of today, your observability stack covers all three pillars and actively alerts on problems. + +--- + +## Expected Output +- OpenTelemetry Collector running and exporting metrics to Prometheus +- OTLP traces sent to the collector and visible in debug output +- Prometheus alerting rules configured for critical conditions +- Grafana alert rules with notification contacts +- A markdown file: `day-76-otel-alerting.md` + +--- + +## Challenge Tasks + +### Task 1: Understand OpenTelemetry +Research and write notes on: + +1. **What is OpenTelemetry (OTEL)?** + - A vendor-neutral, open-source framework for generating, collecting, and exporting telemetry data (metrics, logs, traces) + - It is not a backend -- it collects and ships data to backends like Prometheus, Jaeger, Loki, Datadog + +2. **What is the OTEL Collector?** + - A standalone service that receives, processes, and exports telemetry + - Three components in the pipeline: + - **Receivers** -- accept data (OTLP, Prometheus, Jaeger formats) + - **Processors** -- transform data (batching, filtering, sampling) + - **Exporters** -- send data to backends (Prometheus, debug console, Jaeger) + +3. **What is OTLP?** + - OpenTelemetry Protocol -- the standard wire format for sending telemetry + - Supports gRPC (port 4317) and HTTP (port 4318) + +4. 
**What are distributed traces?** + - A trace tracks a single request as it travels through multiple services + - Each step in the trace is called a **span** + - Spans have: trace ID, span ID, parent span ID, start time, duration, attributes + - Example: User request -> API Gateway (span 1) -> Auth Service (span 2) -> Database (span 3) + +--- + +### Task 2: Add the OpenTelemetry Collector +Create the collector configuration: + +```bash +mkdir -p otel-collector +``` + +Create `otel-collector/otel-collector-config.yml`: +```yaml +receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + +processors: + batch: + +exporters: + prometheus: + endpoint: "0.0.0.0:8889" + debug: + verbosity: detailed + +service: + pipelines: + metrics: + receivers: [otlp] + processors: [batch] + exporters: [prometheus] + traces: + receivers: [otlp] + processors: [batch] + exporters: [debug] + logs: + receivers: [otlp] + processors: [batch] + exporters: [debug] +``` + +**What this config does:** +- **Receivers:** Accepts OTLP data via gRPC (4317) and HTTP (4318) +- **Processors:** Batches data before exporting (reduces overhead) +- **Exporters:** + - Metrics go to a Prometheus-compatible endpoint on port 8889 (Prometheus scrapes this) + - Traces and logs go to debug output (console) -- in production you would send these to Jaeger or Tempo + +Add the collector to your `docker-compose.yml`: +```yaml + otel-collector: + image: otel/opentelemetry-collector-contrib:latest + container_name: otel-collector + ports: + - "4317:4317" # OTLP gRPC + - "4318:4318" # OTLP HTTP + - "8889:8889" # Prometheus exporter + volumes: + - ./otel-collector/otel-collector-config.yml:/etc/otelcol-contrib/config.yaml + restart: unless-stopped +``` + +Add the OTEL Collector as a Prometheus scrape target in `prometheus.yml`: +```yaml + - job_name: "otel-collector" + static_configs: + - targets: ["otel-collector:8889"] +``` + +Restart everything: +```bash +docker compose up -d +``` + +Verify the collector is running: +```bash +docker logs otel-collector 2>&1 | tail -5 +``` + +Check Prometheus Targets -- you should now see `otel-collector` as UP. + +--- + +### Task 3: Send Test Traces to the Collector +Send a sample OTLP trace using curl: + +```bash +curl -X POST http://localhost:4318/v1/traces \ + -H "Content-Type: application/json" \ + -d '{ + "resourceSpans": [{ + "resource": { + "attributes": [{ + "key": "service.name", + "value": { "stringValue": "my-test-service" } + }] + }, + "scopeSpans": [{ + "spans": [{ + "traceId": "5b8efff798038103d269b633813fc60c", + "spanId": "eee19b7ec3c1b174", + "name": "test-span", + "kind": 1, + "startTimeUnixNano": "1544712660000000000", + "endTimeUnixNano": "1544712661000000000", + "attributes": [{ + "key": "http.method", + "value": { "stringValue": "GET" } + }, + { + "key": "http.status_code", + "value": { "intValue": "200" } + }] + }] + }] + }] + }' +``` + +Check the collector debug output to see the trace: +```bash +docker logs otel-collector 2>&1 | grep -A 10 "test-span" +``` + +You should see the span details printed to the console. In a production setup, you would send these to a trace backend like Jaeger or Grafana Tempo for storage and visualization. 
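+
+Hand-writing OTLP JSON is only useful for learning the wire format. Real applications use an OTEL SDK that builds and ships spans for you. As a rough sketch only (not part of the task; the package names and script are assumptions -- verify against the OpenTelemetry docs for your version), the Python equivalent of the curl call above might look like this:
+```python
+# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http   (assumed package names)
+from opentelemetry import trace
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+
+# Ship spans to the collector's OTLP HTTP receiver (port 4318).
+provider = TracerProvider(resource=Resource.create({"service.name": "my-test-service"}))
+provider.add_span_processor(
+    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
+)
+trace.set_tracer_provider(provider)
+
+tracer = trace.get_tracer(__name__)
+
+# The SDK generates the trace ID, span ID, and timestamps automatically.
+with tracer.start_as_current_span("test-span") as span:
+    span.set_attribute("http.method", "GET")
+    span.set_attribute("http.status_code", 200)
+
+provider.shutdown()  # flush the batch processor before the script exits
+```
+Run it, then check `docker logs otel-collector` the same way -- the span should show up in the debug output just like the curl-generated one.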
+ +**Send OTLP metrics too:** +```bash +curl -X POST http://localhost:4318/v1/metrics \ + -H "Content-Type: application/json" \ + -d '{ + "resourceMetrics": [{ + "resource": { + "attributes": [{ + "key": "service.name", + "value": { "stringValue": "my-test-service" } + }] + }, + "scopeMetrics": [{ + "metrics": [{ + "name": "test_requests_total", + "sum": { + "dataPoints": [{ + "asInt": "42", + "startTimeUnixNano": "1544712660000000000", + "timeUnixNano": "1544712661000000000" + }], + "aggregationTemporality": 2, + "isMonotonic": true + } + }] + }] + }] + }' +``` + +Now query it in Prometheus: +```promql +test_requests_total +``` + +The metric traveled: your curl command -> OTEL Collector (OTLP receiver) -> Prometheus exporter -> Prometheus scraped it. This is how OTEL bridges different telemetry formats. + +--- + +### Task 4: Set Up Prometheus Alerting Rules +Alerts notify you when something is wrong. Prometheus evaluates alerting rules and fires alerts when conditions are met. + +Create an alerting rules file `alert-rules.yml`: +```yaml +groups: + - name: system-alerts + rules: + - alert: HighCPUUsage + expr: 100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80 + for: 2m + labels: + severity: warning + annotations: + summary: "High CPU usage detected" + description: "CPU usage has been above 80% for more than 2 minutes. Current value: {{ $value }}%" + + - alert: HighMemoryUsage + expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 85 + for: 2m + labels: + severity: warning + annotations: + summary: "High memory usage detected" + description: "Memory usage is above 85%. Current value: {{ $value }}%" + + - alert: ContainerDown + expr: absent(container_last_seen{name="notes-app"}) + for: 1m + labels: + severity: critical + annotations: + summary: "Container is down" + description: "The notes-app container has not been seen for over 1 minute" + + - alert: TargetDown + expr: up == 0 + for: 1m + labels: + severity: critical + annotations: + summary: "Scrape target is down" + description: "{{ $labels.job }} target {{ $labels.instance }} is unreachable" + + - alert: HighDiskUsage + expr: (1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 > 90 + for: 5m + labels: + severity: critical + annotations: + summary: "Disk space running low" + description: "Root filesystem usage is above 90%. 
Current value: {{ $value }}%" +``` + +**What each alert does:** +- `expr` -- the PromQL condition that triggers the alert +- `for` -- how long the condition must be true before firing (avoids flapping) +- `labels` -- metadata for routing (severity: warning vs critical) +- `annotations` -- human-readable description + +Update `prometheus.yml` to load the rules: +```yaml +global: + scrape_interval: 15s + evaluation_interval: 15s + +rule_files: + - /etc/prometheus/alert-rules.yml + +scrape_configs: + - job_name: "prometheus" + static_configs: + - targets: ["localhost:9090"] + + - job_name: "node-exporter" + static_configs: + - targets: ["node-exporter:9100"] + + - job_name: "cadvisor" + static_configs: + - targets: ["cadvisor:8080"] + + - job_name: "otel-collector" + static_configs: + - targets: ["otel-collector:8889"] +``` + +Mount the rules file in `docker-compose.yml` under the Prometheus service: +```yaml + prometheus: + image: prom/prometheus:latest + container_name: prometheus + ports: + - "9090:9090" + volumes: + - ./prometheus.yml:/etc/prometheus/prometheus.yml + - ./alert-rules.yml:/etc/prometheus/alert-rules.yml + - prometheus_data:/prometheus + command: + - '--config.file=/etc/prometheus/prometheus.yml' + restart: unless-stopped +``` + +Restart Prometheus: +```bash +docker compose up -d prometheus +``` + +Check the rules in the Prometheus UI: go to Status > Rules. You should see all five alert rules listed. + +Go to Alerts -- they should be in `inactive` state (green). If any condition is true, the alert moves to `pending`, then `firing` after the `for` duration. + +**Test it:** Stop the notes-app container and watch the `TargetDown` alert fire: +```bash +docker compose stop notes-app +``` + +Wait 1-2 minutes, then check Alerts in the Prometheus UI. Start it back up when done: +```bash +docker compose start notes-app +``` + +--- + +### Task 5: Set Up Grafana Alerts +Grafana can also evaluate alerts and send notifications to Slack, email, PagerDuty, and more. + +1. **Create a contact point:** + - Go to Alerting > Contact points > Add contact point + - Name: "DevOps Team" + - Integration: Choose email (or Slack webhook if you have one) + - For email: just enter your email address + - Save + +2. **Create an alert rule in Grafana:** + - Go to Alerting > Alert rules > New alert rule + - Name: "High Container Memory" + - Query: `container_memory_usage_bytes{name="notes-app"} / 1024 / 1024` + - Condition: IS ABOVE 100 (fire if container uses more than 100MB) + - Evaluation: every 1m, for 2m + - Add label: severity = warning + - Link to the "DevOps Team" contact point + - Save + +3. **Create a notification policy:** + - Go to Alerting > Notification policies + - Set the default contact point to "DevOps Team" + - Add a nested policy: match label `severity=critical` -> route to a different contact point (or the same one with different settings) + +4. **View alert state:** + - Go to Alerting > Alert rules + - You should see your rule in Normal, Pending, or Firing state + +**Document:** What is the difference between Prometheus alerts and Grafana alerts? When would you use each? + +--- + +### Task 6: Review the Full Stack Architecture +Your observability stack now covers all three pillars. 
Map out what you have built: + +``` + METRICS PIPELINE +[Node Exporter] -----> [Prometheus] -----> [Grafana Dashboards] +[cAdvisor] ----------> [Prometheus] -----> [Grafana Dashboards] +[OTEL Collector:8889]> [Prometheus] -----> [Grafana Dashboards] + -----> [Alert Rules -> Notifications] + + LOGS PIPELINE +[Docker Containers] -> [Promtail] -> [Loki] -> [Grafana Explore/Dashboards] + + TRACES PIPELINE +[curl/App OTLP] -----> [OTEL Collector] -> [Debug Output / Future: Jaeger/Tempo] +``` + +**Services running:** + +| Service | Port | Purpose | +|---------|------|---------| +| Prometheus | 9090 | Metrics storage and querying | +| Node Exporter | 9100 | Host system metrics | +| cAdvisor | 8080 | Container metrics | +| Grafana | 3000 | Visualization and alerting | +| Loki | 3100 | Log storage | +| Promtail | 9080 | Log collection agent | +| OTEL Collector | 4317/4318/8889 | Telemetry collection | +| Notes App | 8000 | Sample application | + +Verify all services are running: +```bash +docker compose ps +``` + +All 8 containers should be healthy and running. + +--- + +## Hints +- The OTEL Collector contrib image (`otel/opentelemetry-collector-contrib`) includes more receivers and exporters than the core image -- always use contrib for learning +- Prometheus alerts without Alertmanager will show in the UI but will not send notifications -- Grafana alerting is simpler for getting started with notifications +- `for: 2m` in alert rules prevents alerts from firing on brief spikes -- this is called the pending period +- `absent()` in PromQL fires when a time series disappears entirely -- useful for detecting dead containers +- OTLP JSON format is verbose -- in production, applications use OTEL SDKs (Python, Go, Java) that handle serialization automatically +- The debug exporter prints to the collector's stdout -- use `docker logs otel-collector` to see trace output +- Reference repo: https://github.com/LondheShubham153/observability-for-devops -- check `otel-collector/` for the collector config + +--- + +## Documentation +Create `day-76-otel-alerting.md` with: +- OpenTelemetry architecture: receivers, processors, exporters +- Your `otel-collector-config.yml` with explanations +- Screenshot of a trace appearing in the collector debug logs +- Your `alert-rules.yml` with explanations for each alert +- Screenshot of Prometheus Alerts page showing alert states +- Screenshot of Grafana Alerting showing your custom alert rule +- The full architecture diagram with all three pillars + +--- + +## Submission +1. Add `day-76-otel-alerting.md` to `2026/day-76/` +2. Commit and push to your fork + +--- + +## Learn in Public +Share on LinkedIn: "Added OpenTelemetry and alerting to the observability stack today. Sent OTLP traces and metrics through the OTEL Collector, set up Prometheus alerting rules for CPU, memory, disk, and container health, and configured Grafana notifications. All three pillars of observability -- metrics, logs, and traces -- are now wired up." + +`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham` + +Happy Learning! +**TrainWithShubham** diff --git a/2026/day-77/README.md b/2026/day-77/README.md new file mode 100644 index 0000000000..94157362fa --- /dev/null +++ b/2026/day-77/README.md @@ -0,0 +1,345 @@ +# Day 77 -- Observability Project: Full Stack with Docker Compose + +## Task +Four days of building -- Prometheus, Node Exporter, cAdvisor, Grafana, Loki, Promtail, OpenTelemetry Collector, and alerting. Today you put it all together using a production-ready reference architecture. 
+ +You will clone the observability-for-devops reference repo, spin up the complete 8-service stack in one command, validate every data flow end to end, build a unified dashboard, and document the entire setup as if you were handing it off to a teammate. + +--- + +## Expected Output +- The full observability stack running from the reference repository +- All Prometheus targets UP and healthy +- Grafana showing metrics dashboards and log panels from a single interface +- Traces flowing through the OTEL Collector +- A unified "Production Overview" dashboard in Grafana +- A markdown file: `day-77-observability-project.md` + +--- + +## Challenge Tasks + +### Task 1: Clone and Launch the Reference Stack +Clone the reference repository that contains the complete observability setup: + +```bash +git clone https://github.com/LondheShubham153/observability-for-devops.git +cd observability-for-devops +``` + +Examine the project structure: +```bash +tree -I 'node_modules|build|staticfiles|__pycache__' +``` + +``` +observability-for-devops/ + docker-compose.yml # 8 services orchestrated together + prometheus.yml # Prometheus scrape configuration + alert-rules.yml # (you will add this) + grafana/ + provisioning/ + datasources/datasources.yml # Auto-provisioned: Prometheus + Loki + dashboards/dashboards.yml # Dashboard provisioning config + loki/ + loki-config.yml # Loki storage and schema config + promtail/ + promtail-config.yml # Docker log collection config + otel-collector/ + otel-collector-config.yml # OTLP receivers, processors, exporters + notes-app/ # Sample Django + React application +``` + +Launch the entire stack: +```bash +docker compose up -d +``` + +Wait for all containers to start: +```bash +docker compose ps +``` + +All 8 services should show as running: + +| Service | Port | Check | +|---------|------|-------| +| Prometheus | 9090 | `http://localhost:9090` | +| Node Exporter | 9100 | `curl http://localhost:9100/metrics \| head -5` | +| cAdvisor | 8080 | `http://localhost:8080` | +| Grafana | 3000 | `http://localhost:3000` (admin/admin) | +| Loki | 3100 | `curl http://localhost:3100/ready` | +| Promtail | 9080 | Internal only | +| OTEL Collector | 4317/4318 | `docker logs otel-collector` | +| Notes App | 8000 | `http://localhost:8000` | + +--- + +### Task 2: Validate the Metrics Pipeline +Confirm Prometheus is scraping all targets: + +1. Open `http://localhost:9090/targets` +2. Verify all 4 scrape jobs are UP: + - `prometheus` (self-monitoring) + - `node-exporter` (host metrics) + - `docker` / `cadvisor` (container metrics) + - `otel-collector` (OTLP metrics) + +Run these validation queries: +```promql +# All targets are healthy +up + +# Host CPU usage +100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) + +# Memory usage +(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 + +# Container CPU per container +rate(container_cpu_usage_seconds_total{name!=""}[5m]) * 100 + +# Top 3 memory-hungry containers +topk(3, container_memory_usage_bytes{name!=""}) +``` + +Compare the `prometheus.yml` from the reference repo with the one you built over days 73-76. Note the scrape jobs and intervals. + +--- + +### Task 3: Validate the Logs Pipeline +Generate traffic so there are logs to see: + +```bash +for i in $(seq 1 50); do + curl -s http://localhost:8000 > /dev/null + curl -s http://localhost:8000/api/ > /dev/null +done +``` + +Open Grafana (`http://localhost:3000`) and go to Explore: + +1. Select Loki as the datasource +2. 
Run these LogQL queries: + +```logql +# All container logs +{job="docker"} + +# Only notes-app logs +{container_name="notes-app"} + +# Errors across all containers +{job="docker"} |= "error" + +# HTTP request logs from the app +{container_name="notes-app"} |= "GET" + +# Rate of log lines per container +sum by (container_name) (rate({job="docker"}[5m])) +``` + +Check Promtail's targets to see which log files it is watching: +```bash +curl -s http://localhost:9080/targets | head -30 +``` + +Compare `promtail/promtail-config.yml` from the reference repo with yours from Day 75. + +--- + +### Task 4: Validate the Traces Pipeline +Send OTLP traces to the collector: + +```bash +curl -X POST http://localhost:4318/v1/traces \ + -H "Content-Type: application/json" \ + -d '{ + "resourceSpans": [{ + "resource": { + "attributes": [{ + "key": "service.name", + "value": { "stringValue": "notes-app" } + }] + }, + "scopeSpans": [{ + "spans": [{ + "traceId": "aaaabbbbccccdddd1111222233334444", + "spanId": "1111222233334444", + "name": "GET /api/notes", + "kind": 2, + "startTimeUnixNano": "1700000000000000000", + "endTimeUnixNano": "1700000000150000000", + "attributes": [{ + "key": "http.method", + "value": { "stringValue": "GET" } + }, + { + "key": "http.route", + "value": { "stringValue": "/api/notes" } + }, + { + "key": "http.status_code", + "value": { "intValue": "200" } + }], + "status": { "code": 1 } + }, + { + "traceId": "aaaabbbbccccdddd1111222233334444", + "spanId": "5555666677778888", + "parentSpanId": "1111222233334444", + "name": "SELECT notes FROM database", + "kind": 3, + "startTimeUnixNano": "1700000000020000000", + "endTimeUnixNano": "1700000000120000000", + "attributes": [{ + "key": "db.system", + "value": { "stringValue": "sqlite" } + }, + { + "key": "db.statement", + "value": { "stringValue": "SELECT * FROM notes" } + }] + }] + }] + }] + }' +``` + +This simulates a two-span trace: an HTTP request that calls a database query. + +Check the debug output: +```bash +docker logs otel-collector 2>&1 | grep -A 20 "GET /api/notes" +``` + +You should see both spans with their attributes, the parent-child relationship, and timing data. + +Compare `otel-collector/otel-collector-config.yml` from the reference repo with yours from Day 76. + +--- + +### Task 5: Build a Unified "Production Overview" Dashboard +Create a single Grafana dashboard that gives a complete picture of your system. + +Go to Dashboards > New Dashboard. 
Add these panels: + +**Row 1 -- System Health (Node Exporter + Prometheus):** + +| Panel | Type | Query | +|-------|------|-------| +| CPU Usage | Gauge | `100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)` | +| Memory Usage | Gauge | `(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100` | +| Disk Usage | Gauge | `(1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100` | +| Targets Up | Stat | `sum(up)` / `count(up)` | + +**Row 2 -- Container Metrics (cAdvisor):** + +| Panel | Type | Query | +|-------|------|-------| +| Container CPU | Time series | `rate(container_cpu_usage_seconds_total{name!=""}[5m]) * 100` (legend: `{{name}}`) | +| Container Memory | Bar chart | `container_memory_usage_bytes{name!=""} / 1024 / 1024` (legend: `{{name}}`) | +| Container Count | Stat | `count(container_last_seen{name!=""})` | + +**Row 3 -- Application Logs (Loki):** + +| Panel | Type | Query (Loki datasource) | +|-------|------|-------| +| App Logs | Logs | `{container_name="notes-app"}` | +| Error Rate | Time series | `sum(rate({job="docker"} \|= "error" [5m]))` | +| Log Volume | Time series | `sum by (container_name) (rate({job="docker"}[5m]))` | + +**Row 4 -- Service Overview:** + +| Panel | Type | Query | +|-------|------|-------| +| Prometheus Scrape Duration | Time series | `prometheus_target_interval_length_seconds{quantile="0.99"}` | +| OTEL Metrics Received | Stat | `otelcol_receiver_accepted_metric_points` (if available) | + +Save the dashboard as "Production Overview -- Observability Stack". + +Set the dashboard time range to "Last 30 minutes" and enable auto-refresh (every 10s). + +--- + +### Task 6: Compare Your Stack with the Reference and Document +Now compare what you built over days 73-76 with the reference repository. + +| Component | Your Version | Reference Repo | Differences | +|-----------|-------------|----------------|-------------| +| `prometheus.yml` | Day 73-74 | Root directory | Compare scrape jobs | +| `loki-config.yml` | Day 75 | `loki/` directory | Compare storage config | +| `promtail-config.yml` | Day 75 | `promtail/` directory | Compare scrape configs | +| `otel-collector-config.yml` | Day 76 | `otel-collector/` directory | Compare pipelines | +| `datasources.yml` | Day 74 | `grafana/provisioning/` | Compare provisioned sources | +| `docker-compose.yml` | Days 73-76 | Root directory | Compare all 8 services | + +**Reflect and document:** + +1. Map each observability concept to the day you learned it: + +| Day | What You Built | +|-----|---------------| +| 73 | Prometheus, PromQL, metrics fundamentals | +| 74 | Node Exporter, cAdvisor, Grafana dashboards | +| 75 | Loki, Promtail, LogQL, log-metric correlation | +| 76 | OTEL Collector, traces, alerting rules | +| 77 | Full stack integration, unified dashboard | + +2. What would you add for production? + - Alertmanager for routing alerts to Slack/PagerDuty + - Grafana Tempo for trace storage (replacing debug exporter) + - HTTPS/TLS for all endpoints + - Authentication on Grafana and Prometheus + - Log retention policies and storage limits + - High availability (multiple Prometheus/Loki replicas) + +3. How does this stack compare to managed solutions like Datadog, New Relic, or AWS CloudWatch? + +**Clean up when done:** +```bash +docker compose down -v +``` + +The `-v` flag removes named volumes (Prometheus data, Grafana data, Loki data). Only use this if you are done exploring. 
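+
+Before tearing anything down (or after bringing the stack back up later), a quick smoke test beats clicking through eight UIs. A minimal sketch, assuming the default ports used in this block and each component's usual health endpoint (verify the exact paths against your versions):
+```bash
+#!/usr/bin/env bash
+# Prints an HTTP status code for every service in the stack.
+check() {
+  local name="$1" url="$2"
+  local code
+  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
+  printf "%-14s %-42s HTTP %s\n" "$name" "$url" "$code"
+}
+
+check "Prometheus"    "http://localhost:9090/-/healthy"
+check "Node Exporter" "http://localhost:9100/metrics"
+check "cAdvisor"      "http://localhost:8080/healthz"
+check "Grafana"       "http://localhost:3000/api/health"
+check "Loki"          "http://localhost:3100/ready"
+check "Promtail"      "http://localhost:9080/ready"
+check "OTEL exporter" "http://localhost:8889/metrics"
+check "Notes App"     "http://localhost:8000"
+```
+Every line should come back as `HTTP 200` once the stack has finished starting (Loki can briefly return 503 right after boot).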
+
+---
+
+## Hints
+- If a service fails to start, check logs: `docker compose logs <service-name>`
+- The reference repo uses a shared `monitoring` network -- all services can communicate by container name
+- `restart: unless-stopped` ensures containers come back after a Docker daemon restart
+- Grafana dashboard JSON can be exported (Share > Export) and saved as code for dashboard-as-code workflows
+- If Grafana shows "No data" for Loki panels, make sure you generated traffic first (`curl` the notes app) and check the time range
+- The notes-app is a Django REST API -- browse `http://localhost:8000/api/` for the API endpoints
+- Reference repo: https://github.com/LondheShubham153/observability-for-devops
+
+---
+
+## Documentation
+Create `day-77-observability-project.md` with:
+- Architecture diagram showing all 8 services and their data flows (metrics, logs, traces)
+- Screenshot of Prometheus Targets with all jobs UP
+- Screenshot of Grafana Explore showing logs from Loki
+- Screenshot of your "Production Overview" dashboard
+- Screenshot of OTEL trace in collector debug output
+- Comparison table: your configs vs reference repo configs
+- What you would add for production readiness
+- Key takeaways from the 5-day observability block
+- All config files: `docker-compose.yml`, `prometheus.yml`, `loki-config.yml`, `promtail-config.yml`, `otel-collector-config.yml`
+
+---
+
+## Submission
+1. Add `day-77-observability-project.md` to `2026/day-77/`
+2. Commit and push to your fork
+
+---
+
+## Learn in Public
+Share on LinkedIn: "Completed the observability block -- 5 days from zero to a full production-style monitoring stack. Prometheus for metrics, Grafana for visualization, Loki and Promtail for logs, OpenTelemetry Collector for traces, Node Exporter and cAdvisor for infrastructure monitoring, plus alerting rules that fire when things go wrong. All running in Docker Compose, all wired into a single unified dashboard. This is what production observability looks like."
+
+`#90DaysOfDevOps` `#DevOpsKaJosh` `#TrainWithShubham`
+
+Happy Learning!
+**TrainWithShubham**