This project has been created as part of the 42 curriculum by jslusark.
A system administration project that focuses on Docker orchestration and infrastructure management, completed as part of 42 School's core curriculum and known as "Inception".
- Description
- Instructions
- Virtual Machines vs Docker
- Secrets vs Environment Variables
- Docker Network vs Host Network
- Docker Volumes vs Bind Mounts
- Resources
This project builds a multi-service infrastructure using Docker inside a Virtual Machine. It includes a web server, an application, and a database, all running in isolated containers. The goal is to create a secure and efficient environment for hosting a web application while ensuring that the infrastructure can be easily reproduced and maintained. It follows the LEMP stack architecture pattern (Linux, NGINX, MySQL/MariaDB, PHP), which is widely used in open-source web development:
- Linux: each service container is built on a minimal Linux distribution, which provides the underlying operating system environment required to run NGINX, WordPress, and MariaDB.
- NGINX: a web server and reverse proxy that serves our WordPress website. It acts as the main entry point of the infrastructure, receiving incoming HTTPS requests over TLS (v1.2 or v1.3) to encrypt communications and ensure that all traffic between the client and the server is secure.
- WordPress: a content management system (CMS) written in PHP that allows users to create and manage websites through a web interface.
- MariaDB: an open-source relational database management system that stores information accessed through SQL queries. It is responsible for persistently storing all application data used by WordPress (user accounts, posts, comments, etc.).

Each service is built from its own dedicated Dockerfile inside `srcs/requirements/`, ensuring full control over every layer of the stack. All services are orchestrated through Docker Compose, communicate through a custom Docker network (with only port 443 exposed to the host, using TLS encryption), and use Docker volumes to ensure that data persists even when containers are restarted or recreated.
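The orchestration described above can be sketched as a `docker-compose.yml`. This is a minimal illustration of the structure, not the project's exact file — service, network, and volume names are assumptions:

```yaml
services:
  nginx:
    build: requirements/nginx
    ports:
      - "443:443"            # only HTTPS is published to the host
    networks: [inception]
    volumes:
      - wordpress_data:/var/www/html
    depends_on: [wordpress]

  wordpress:
    build: requirements/wordpress
    networks: [inception]
    volumes:
      - wordpress_data:/var/www/html
    depends_on: [mariadb]

  mariadb:
    build: requirements/mariadb
    networks: [inception]
    volumes:
      - mariadb_data:/var/lib/mysql

networks:
  inception:
    driver: bridge           # private network; containers resolve each other by service name

volumes:
  wordpress_data:            # named volumes managed by Docker, survive container recreation
  mariadb_data:
```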
Before proceeding, make sure you have `make`, Docker, and Docker Compose installed on your machine.
- Clone the repository

```shell
git clone https://github.com/JSlusark/inception
cd inception
```

- Set up secrets

Create a `secrets/` folder and place one file per secret.

```shell
mkdir -p secrets
echo "superroot" > secrets/db_root_password.txt
echo "wp_pass" > secrets/db_password.txt
echo "admin_pass" > secrets/wp_admin_password.txt
echo "user_pass" > secrets/wp_user_password.txt
```

- Create `.env` in the `srcs/` directory
```shell
cat > srcs/.env << 'EOF'
# MariaDB configuration (non-sensitive)
MYSQL_DATABASE=xxxx # Database name for WordPress
MYSQL_USER=xxxx # Non-root database user for WordPress

# WordPress configuration
WP_TITLE=xxxx # WordPress site title
DOMAIN_NAME=loginName.42.fr # Domain name used by WordPress and NGINX (must match your /etc/hosts entry)
WP_ADMIN_USER=xxxx # WordPress admin username
WP_ADMIN_EMAIL=xxxx # WordPress admin email
WP_USER=xxxx # Non-admin WordPress username
WP_USER_EMAIL=xxxx # Non-admin WordPress user email
EOF
```

- Configure your domain name to point to your local IP address
Edit `/etc/hosts` and map DOMAIN_NAME to your machine/VM IP. Example:

```shell
sudo nano /etc/hosts
```

Add a line like:

```
127.0.0.1 loginName.42.fr
```

If you are accessing a VM from your host, use the VM IP instead of 127.0.0.1.
- Build and run

```shell
make
```

(A single `make` command builds the images and starts all services.)
- Access your WordPress site from the VM or forwarded host:

https://loginName.42.fr

Accept the self-signed TLS certificate warning in the browser.
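The warning appears because the certificate is self-signed rather than issued by a trusted authority. Such a certificate is typically generated inside the NGINX container at build time; a minimal sketch using `openssl` (file names and subject fields here are illustrative, not the project's exact values):

```shell
# Create a self-signed certificate and its private key in one step.
# -nodes: leave the private key unencrypted so the container can read it unattended
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout server.key -out server.crt \
    -subj "/C=IT/L=Rome/O=42/CN=loginName.42.fr"

# Inspect the subject and validity period of the generated certificate
openssl x509 -in server.crt -noout -subject -dates
```

The CN (Common Name) should match DOMAIN_NAME so the browser warning is limited to the "untrusted issuer" message rather than a hostname mismatch.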
More resources are linked under their specific topics throughout this README, but here are some general resources to get started with Docker and containerization:
- Docker documentation
- Docker overview (Docker Docs)
- Docker learning basics (Docker)
- Learn docker in 2 hours
- What is docker and how does it work?
- Nginx documentation
- Nginx Tutorial
- Understanding the Nginx Configuration
- Understanding digital certificates
- MariaDB documentation
- How to write a docker compose
- FastCGI Process Manager (FPM)
- Wordpress CLI commands
AI was used in this project as a brainstorming and facilitation tool: to organise the project workflow, discuss concepts and challenge decisions, and to organise my thoughts and Notion notes into a more structured and readable format. Additionally, all information received from AI was cross-checked against more reliable sources such as official documentation and tutorials from professional developers.
Docker is a software platform that packages software into standardized units called containers.
Containers are isolated environments that run a specific application and include everything needed to run it (libraries, system tools, code, and runtime).
Containers can also be bundled together into multi-container applications using Docker Compose, which allows developers to define and manage complex applications with ease.
Virtual machines are an emulation of a computer system that provides the same exact functionality of a physical computer. A virtual machine runs its own operating system, applications, and resources, which are isolated from the host machine and other virtual machines.
Docker and virtual machines are both technologies that allow for the creation of isolated environments for running applications, but they do so in completely different ways, each suited to different use cases:
- Virtual machines: each VM runs a complete guest operating system with its own kernel. To make this possible, a hypervisor sits between the host machine and the virtual machines, allocating CPU, memory, storage, and network resources to each VM. Because every VM includes a full operating system, it behaves like an independent computer and can run any OS supported by the hypervisor.
Advantages: As Virtual Machines run on their own kernel, they provide strong isolation between applications and the host system, making them suitable for security-sensitive workloads. They also allow for running different operating systems on the same host machine, which is beneficial for testing and development environments.
Disadvantages: since virtual machines virtualize a full operating system, they are resource-intensive and consume more CPU, memory, and storage than containers. They also take longer to boot.
Use cases: Because of their advantages, Virtual Machines are ideal for testing multiple operating systems on the same hardware, legacy applications that require a specific operating system, infrastructure virtualization and security-sensitive workloads that require stronger isolation.
- Containers: containers share the host machine’s kernel while isolating applications and their dependencies inside lightweight environments. This means containers do not include a full operating system; they only package the application and everything it needs to run.
Advantages: As they do not include a full operating system, they are lightweight by design, allowing for more efficient use of system resources and faster startup times compared to virtual machines.
Disadvantages: because Docker containers all share the kernel of the host machine, they are limited in terms of operating system flexibility and provide weaker isolation than virtual machines.
Use cases: Their advantages make them ideal for scalable microservice architectures and modern deployments where applications must run and be maintained consistently across different systems.
Often both technologies are used together, where containers run inside virtual machines to combine the strong isolation and infrastructure control of VMs with the lightweight deployment and scalability of containers (for example Kubernetes nodes on cloud VMs, Docker on AWS EC2, or containers on Azure VMs).
This project uses Docker and Docker Compose to build a WordPress infrastructure composed of a web server, an application, and a database. The stack is reproducible and can be deployed locally on any machine that has Docker.
When we are building applications, we often need to manage configuration values and sensitive data such as passwords, API keys, and database credentials. Docker provides two mechanisms for handling this information:
- Environment variables: a simple way to pass configuration values to containers at runtime. Values are typically stored in an `.env` file and injected into the container when it starts. Since the file is not encrypted, environment variables are typically used for non-sensitive information that does not require strict access control, such as domain names, database names, or application settings.
- Docker secrets: a more secure way to manage sensitive data. Each secret is stored in its own file (e.g. `secrets/db_password.txt`) and mounted into the container, accessible only by the services that have been granted permission to use it (in Docker Swarm, secrets are additionally encrypted at rest and in transit). This makes them ideal for storing sensitive information such as database passwords, API keys, and other credentials.
In this project, we use Docker secrets (stored in `secrets/*.txt`) for sensitive passwords and a `.env` file (in `srcs/`) for non-sensitive configuration like domain names, usernames, and emails. This keeps passwords separate from general configuration and prevents them from appearing in environment variable listings.
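In a Compose file, secrets are declared at the top level and then granted to individual services. A sketch of what this might look like here (secret names and the relative path are assumptions based on the layout above):

```yaml
# Top level of srcs/docker-compose.yml; file paths are relative to the compose file.
secrets:
  db_root_password:
    file: ../secrets/db_root_password.txt
  db_password:
    file: ../secrets/db_password.txt

services:
  mariadb:
    build: requirements/mariadb
    secrets:                 # only services listed here can read each secret
      - db_root_password
      - db_password
    # Inside the container, each secret is readable as a file at
    # /run/secrets/<name>, e.g. /run/secrets/db_password
```

Startup scripts then read the password from `/run/secrets/...` instead of an environment variable, so it never shows up in `docker inspect` or `env` output.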
The difference between Docker networks and the host network lies in how containers connect to the system network and how isolated they are from the host.
- Docker network: an isolated virtual network where containers can communicate with each other while remaining separated from the host system. Each container receives its own internal IP address and communicates with other containers through Docker's built-in DNS service. To access container services from outside, specific ports must be exposed or mapped to the host (e.g., `443:443`). This model enhances security and service separation, as containers are isolated from the host and can only interact with each other through defined network rules.
- Host network: with host networking, there is no isolation between the container and the host, since the container shares the host's IP address and ports. The container can directly access the host’s network interfaces and services, and any ports it listens on are directly accessible on the host, making this mode less secure and less flexible than using a Docker network.
In this project, we use a dedicated Docker network so containers can communicate over a private, isolated network while keeping internal services unreachable from the host. This lets us control which ports are exposed publicly and prevents conflicts between containers and services already running on the host.
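In Compose terms, this setup could look like the following sketch (service and network names are illustrative):

```yaml
services:
  nginx:
    ports:
      - "443:443"          # the only port published to the host
    networks:
      - inception
  wordpress:
    networks:
      - inception          # reachable by other containers as hostname "wordpress"
  mariadb:
    networks:
      - inception          # never published to the host

networks:
  inception:
    driver: bridge         # gives each container its own IP and a DNS name
```

Because only NGINX maps a port, WordPress and MariaDB stay reachable exclusively over the private network.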
Docker Bind Mounts vs Volumes: What's the Difference?
- Bind mount: links a specific host directory directly into the container, allowing the container to read and write files in that directory. Since the container has direct access to the host file system, this can lead to portability and security issues (UID/GID mismatches, file permissions, accidental modification of host files, etc.).
- Docker volumes: persistent data stores for containers, created and managed by Docker. Volumes are stored within a directory on the Docker host; when you mount a volume into a container, that directory is what gets mounted. This is similar to the way bind mounts work, except that volumes are managed by Docker and are isolated from the host machine.
In this project, we use Docker named volumes to persist data. They avoid host permission issues, reduce the risk of accidentally modifying database files from the host, and allow storage to be managed directly by Docker, unlike bind mounts, which directly expose host directories and can lead to portability issues.
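The two approaches differ by a single line in the Compose file. A sketch of the contrast for the database service (names and paths are illustrative):

```yaml
services:
  mariadb:
    volumes:
      - mariadb_data:/var/lib/mysql      # named volume: created and managed by Docker
      # - ./data/mariadb:/var/lib/mysql  # bind mount alternative: exposes a host directory

volumes:
  mariadb_data:                          # declaring it here lets Docker create and track it
```

With the named volume, `docker volume ls` and `docker volume inspect mariadb_data` show where Docker stores the data, and the database files survive `docker compose down` followed by a rebuild.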



