This README contains my personal implementation log (“exam build diary”).
It was written while building the solution to keep milestones, decisions, and commands reproducible.
- 1) Project scaffold + API container baseline
- 2) Authentication test (containerized) - GET /permissions
- 3) Authorization test (containerized) - verify access to /v1/sentiment vs /v2/sentiment
- 4) Content test (containerized) - verify sentiment
- 5) Conclusion
- Create a git-tracked project folder for the exam hand-in with the required structure
- Validate that the provided API image can be started as the api service via docker-compose and reached on `/status` and `/docs`.
```text
docker-fastapi-tests/
├── docs/
├── IMPLEMENTATION.md
├── logs/
├── shared/
├── tests/
│   ├── _shared/
│   ├── authentication/
│   ├── authorization/
│   └── content/
├── .gitignore
├── docker-compose.yml
├── Makefile
├── README.md
└── setup.sh
```
- FYI: We use a prebuilt image for the API container, so we don't build the API image ourselves - we just run it (the test containers that will be implemented later will use Dockerfiles, of course)
- As per the requirements, the API is to be made available on port 8000 of the host machine.
- The test containers don't need port publishing (see `ports: ...`) for the api container at all - they can reach the API over the internal Docker network via the service name (http://api:8000). But having `ports` defined is convenient for manual checks like `curl http://localhost:8000/status` or opening `/docs` in the browser.
In docker-compose.yml:
```yaml
services:
  api:
    image: datascientest/fastapi:1.0.0
    container_name: api
    ports:
      - "8000:8000"
    networks:
      - sentiment_net
    volumes:
      # shared artifacts (test containers will write api_test.log here)
      - ./shared:/shared

networks:
  sentiment_net:
    driver: bridge
```

This way ...

- the API gets a stable service name (`api`) for Compose DNS (later tests can use `http://api:8000/...`).
- `./shared` will be used later for the single merged `api_test.log` produced by the test runs. A shared bind mount makes it easy for multiple test containers to append to the same log file deterministically.
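As a rough illustration of that append pattern, a tiny helper like the one below is all a test container needs. This is a hypothetical sketch (the real shared helpers live under tests/_shared/ and may look different):

```python
# Hypothetical helper: append a test report to the shared log file, so that every
# test container writes into the same /shared/api_test.log via the bind mount.
def append_report(report: str, log_path: str = "/shared/api_test.log") -> None:
    with open(log_path, "a") as f:  # "a" = append, never truncate
        f.write(report + "\n")
```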
Pull the image:
```bash
docker image pull datascientest/fastapi:1.0.0
```

Now we could run a container based on the pulled image to play with the API like this:

```bash
docker container run -p 8000:8000 datascientest/fastapi:1.0.0
```

But thanks to our docker-compose config we can start the API via Compose instead:

```bash
# Create and start all services defined in our Compose file
# (right now just the api)
# -p: project name
# -d: detached - run containers in background and free the terminal
docker compose -p docker-exam up -d

# Inspect process status to list all running containers on the host
# related to the current project / docker-compose file
docker compose -p docker-exam ps
```

Smoke tests from the host against the API:
- expected: `/status` returns `1` (== healthy API) and `/docs` (the FastAPI docs) is reachable in the browser:
```bash
# API up?
# Call the /status endpoint and print the
# response body (expected: 1)
# -s = "silent": hide noise / just response body
curl -s "http://localhost:8000/status"; echo

# FastAPI's docs UI reachable?
# Calls /docs but prints only the HTTP status code
# (expected: 200).
# - -s = silent (no progress meter)
# - -o /dev/null = throw away the response body
# - -w "%{http_code}\n" = "write-out": after the request
#   finishes, print the HTTP status code + newline
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:8000/docs"
```

View logs (optional):

```bash
docker compose -p docker-exam logs -f api
```

Stop/cleanup:

```bash
docker compose -p docker-exam down
```

We can also check whether we can reach the API endpoints and the FastAPI docs UI via the browser:
- http://localhost:8000/status returns 1 if the API is running
- http://localhost:8000/permissions returns a user's permissions (expected at this point of course: {"detail":"Authentication failed"})
- http://localhost:8000/v1/sentiment returns the sentiment analysis using an old model (expected: {"detail":"Authentication failed"})
- http://localhost:8000/v2/sentiment returns the sentiment analysis using a new model (expected: {"detail":"Authentication failed"})
- http://localhost:8000/docs renders the FastAPI docs UI showing documentation for the mentioned API endpoints
- docker-compose.yml starts the provided API image
- API reachable on http://localhost:8000/status and http://localhost:8000/docs etc. ...
- As per the requirements, all test scenarios are to be performed via separate containers - i.e. one dedicated container per test scenario
- If an environment variable `LOG` is set to `1` on a test run, the test output should be written to a log file named `api_test.log`.
- To create the tests, a starter template (Python) is provided
Run a dedicated container that tests authentication logic via the api-route /permissions:
- `alice:wonderland` -> user exists -> expected HTTP 200
- `bob:builder` -> user exists -> expected HTTP 200
- `clementine:mandarine` -> nope -> expected HTTP 403
Create tests/authentication/test_authentication.py
(implementation see there)
```bash
# Usage:
API_ADDRESS=localhost API_PORT=8000 LOG=1 LOG_PATH=./shared/api_test.log \
python3 tests/authentication/test_authentication.py
```

What the test_authentication.py script does (and why):
- Authentication Tests: It validates the API authentication behavior by calling GET /permissions with 3 sets of known credentials and checking the expected HTTP status codes.
- Readiness Check / Polling: It includes a readiness check (polling GET /status) to ensure the API is up before testing (because `depends_on` in docker-compose ensures just order - not readiness).
- Shared Logging: If `LOG=1`, it appends the report to a shared `LOG_PATH` (default: /shared/api_test.log) so multiple test containers can share one log file later.
- Exit Codes: Exits with `0` on success, `1` on failure (important so CI/CD-style pipelines can fail fast).
- Compose DNS: Calls the API over the docker compose internal DNS name `api:8000` (no host IP needed).
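For reference, here is a minimal sketch of what such a script can look like. It is a simplified illustration, not the hand-in version: helper names, the report format, and the assumption that /permissions takes username/password as query parameters are mine; the actual implementation lives in tests/authentication/test_authentication.py.

```python
"""Minimal sketch of an authentication test (simplified; see the repo file)."""
import os
import sys
import time

import requests

API_ADDRESS = os.environ.get("API_ADDRESS", "api")
API_PORT = os.environ.get("API_PORT", "8000")
LOG = os.environ.get("LOG", "0")
LOG_PATH = os.environ.get("LOG_PATH", "/shared/api_test.log")
TIMEOUT = float(os.environ.get("HTTP_TIMEOUT", "5"))
BASE_URL = f"http://{API_ADDRESS}:{API_PORT}"

# (username, password, expected HTTP status) for GET /permissions
CASES = [
    ("alice", "wonderland", 200),
    ("bob", "builder", 200),
    ("clementine", "mandarine", 403),
]


def wait_for_api(retries: int = 30, delay: float = 1.0) -> bool:
    """Poll /status until the API reports readiness (body == "1")."""
    for _ in range(retries):
        try:
            r = requests.get(f"{BASE_URL}/status", timeout=TIMEOUT)
            if r.text.strip().strip('"') == "1":
                return True
        except requests.RequestException:
            pass
        time.sleep(delay)
    return False


def main() -> int:
    if not wait_for_api():
        print("API did not become ready in time")
        return 1

    lines, failed = [], False
    for username, password, expected in CASES:
        r = requests.get(
            f"{BASE_URL}/permissions",
            params={"username": username, "password": password},
            timeout=TIMEOUT,
        )
        ok = r.status_code == expected
        failed = failed or not ok
        lines.append(
            f"[authentication] {username}: expected {expected}, "
            f"got {r.status_code} => {'SUCCESS' if ok else 'FAILURE'}"
        )

    report = "\n".join(lines)
    print(report)
    if LOG == "1":
        with open(LOG_PATH, "a") as f:  # append so all suites share one log
            f.write(report + "\n")
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```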
Create tests/authentication/Dockerfile:
```dockerfile
# Use a minimal python base image
FROM python:3.12-slim
WORKDIR /app

# Install requests - i.e. only what we need for HTTP calls -
# and don't keep pip's download/cache dir on disk
RUN pip install --no-cache-dir requests

# Copy the entire `tests/` package tree (test modules + shared helpers)
# into the image, so `python -m tests...` can import `tests._shared.*`
# and run the suite via module path.
COPY tests /app/tests

# Default command: run the test module on container start
# (so `docker compose up` triggers the test automatically)
# FYI: Environment vars needed for the script execution are
# set by docker-compose
# The tests are run as a Python module (`-m`), so `tests.*`
# imports (e.g. `tests._shared...`) work because `tests/`
# is treated as a package tree.
CMD ["python3", "-m", "tests.authentication.test_authentication"]
```

Now we need to update docker-compose.yml to add auth_test:
```yaml
services:
  api:
    image: datascientest/fastapi:1.0.0
    container_name: api
    ports:
      - "8000:8000"  # host:container (so you can curl localhost:8000 for manual checks)
    networks:
      - sentiment_net
    volumes:
      - ./shared:/shared

  auth_test:
    # Run as host user so /shared/api_test.log is created/appended with correct ownership
    # (bind mount: ./shared -> /shared). Prevents root-owned log files (which could cause
    # issues when resetting these files on test runs). Vars are exported by setup.sh.
    user: "${HOST_UID}:${HOST_GID}"
    build:
      # IMPORTANT: context is repo root so we can
      # `COPY tests /app/tests` (full package tree)
      context: .
      dockerfile: ./tests/authentication/Dockerfile
    container_name: auth_test
    depends_on:
      - api
    networks:
      - sentiment_net
    environment:
      # LOG=1 => append to shared log file (exam requirement)
      - LOG=1
      # These are optional overrides; defaults in the script are already correct
      - API_ADDRESS=api
      - API_PORT=8000
      - LOG_PATH=/shared/api_test.log
      - HTTP_TIMEOUT=5
    volumes:
      - ./shared:/shared

networks:
  sentiment_net:
    driver: bridge
```

To ensure the test log file is shared, both services mount ./shared:/shared - all tests can append into one shared file /shared/api_test.log.
We still keep ports: "8000:8000" on the API, even if it's not required for inter-container communication - since it makes manual debugging fast (curl http://localhost:8000/status and /docs).
We add a small setup.sh script that acts as a pipeline runner for this exam.
What it does (in order):
- Resets to a clean state (stops previous compose runs, removes stray containers, frees host port 8000 if needed)
- Starts the compose stack (API + test container(s))
- Waits for the `auth_test` container to finish (so the run is deterministic)
- Prints the `auth_test` logs to the terminal (quick verification)
- Copies the aggregated log from `./shared/api_test.log` to `./log.txt` (exam requirement)
- Stops the stack again (avoids conflicts on rerun)
FYI: the script calls Makefile targets to keep the runner readable and DRY (details live in Makefile).
Create/update setup.sh (simplified excerpt — see implemented file for the full documented version):
```bash
#!/usr/bin/env bash
set -euo pipefail

# Clean start (idempotent)
make reset

# Start stack (detached) + show status
make start-project
make ps

# Wait for test completion + print logs
make wait-auth
make logs-auth

# Create submission snapshot log.txt from shared aggregate log (exam requirement)
if [ -f "./shared/api_test.log" ]; then
  cp ./shared/api_test.log ./log.txt
fi

# Shutdown to keep reruns conflict-free
make stop-all
```

Make it executable:

```bash
chmod +x setup.sh
```

Clean previous logs (optional but recommended):

```bash
rm -f ./shared/api_test.log ./log.txt
```

Run via script:

```bash
./setup.sh
```

Exam requirement:
- “Authorization” test suite in a separate container.
- Validate that the authorization rules work:
  - `bob` has access to v1 only
  - `alice` has access to v1 and v2
- For each user, call:
  - `GET /v1/sentiment`
  - `GET /v2/sentiment`
- params: `username`, `password`, `sentence`

Expected outcomes:
- alice: `/v1/sentiment` => 200, `/v2/sentiment` => 200
- bob: `/v1/sentiment` => 200, `/v2/sentiment` => 403
Create tests/authorization/test_authorization.py
(implementation: see file)
```bash
# Usage (host-run dev):
API_ADDRESS=localhost API_PORT=8000 LOG=1 LOG_PATH=./shared/api_test.log \
python3 -m tests.authorization.test_authorization
```

What the test_authorization.py script does (and why):
- Authorization Tests: It validates API authorization by calling the sentiment endpoints and checking expected HTTP status codes for each case:
  - `alice` can access `/v1/sentiment` and `/v2/sentiment` → 200
  - `bob` can access `/v1/sentiment` → 200
  - `bob` must be blocked on `/v2/sentiment` → 403
- Readiness Check / Polling: It polls `GET /status` until it returns `"1"` before executing tests (because `depends_on` controls startup order, not readiness).
- Shared Logging: If `LOG=1`, it appends the suite report to `LOG_PATH` (default: `/shared/api_test.log`) so multiple test containers can write into one shared log file later.
- Exit Codes for Pipelines: Exits with `0` only if all cases pass, otherwise `1` (so CI/CD-style pipelines can fail fast).
- Compose DNS by default: Uses Docker Compose internal DNS by default (`api:8000`). For host-run dev, override with `API_ADDRESS=localhost`.
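The core of this suite differs from the authentication suite only in its case table and request parameters (it also passes a sentence). A stripped-down, hypothetical sketch of that loop, with readiness polling and shared logging omitted for brevity (see the repo file for the full version):

```python
# Hypothetical, simplified core loop of an authorization test.
import os

import requests

BASE_URL = f"http://{os.environ.get('API_ADDRESS', 'api')}:{os.environ.get('API_PORT', '8000')}"
TIMEOUT = float(os.environ.get("HTTP_TIMEOUT", "5"))

# (username, password, API version, expected HTTP status)
CASES = [
    ("alice", "wonderland", "v1", 200),
    ("alice", "wonderland", "v2", 200),
    ("bob", "builder", "v1", 200),
    ("bob", "builder", "v2", 403),
]

failed = False
for username, password, version, expected in CASES:
    r = requests.get(
        f"{BASE_URL}/{version}/sentiment",
        params={"username": username, "password": password, "sentence": "life is beautiful"},
        timeout=TIMEOUT,
    )
    ok = r.status_code == expected
    failed = failed or not ok
    print(f"[authorization] {username} {version}/sentiment: expected {expected}, got {r.status_code}")

raise SystemExit(1 if failed else 0)
```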
Like for the authentication tests, we build a separate test container for the authorization suite as well.
- `python:3.12-slim` provides a minimal Python runtime
- `pip install --no-cache-dir requests` installs the only dependency
- `--no-cache-dir` tells pip not to store wheel/download caches in the image layer (smaller image)
- `CMD ["python", "..."]` runs the script when the container starts
- in most images, `python` points to Python 3 (equivalent to `python3`)
Create tests/authorization/Dockerfile:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir requests
COPY tests /app/tests
CMD ["python3", "-m", "tests.authorization.test_authorization"]
```

We add a second test service/container:
- It builds from `tests/authorization/Dockerfile`
- It uses the same internal API address: `API_ADDRESS=api`, `API_PORT=8000`
- It writes into the same shared log file via the shared volume (`./shared:/shared`)
- `depends_on` ensures the API container starts before the test container starts (readiness is handled by polling in the script)
Add:
```yaml
  authz_test:
    user: "${HOST_UID}:${HOST_GID}"
    build:
      context: .
      dockerfile: ./tests/authorization/Dockerfile
    container_name: authz_test
    environment:
      - API_ADDRESS=api
      - API_PORT=8000
      - LOG=1
      - LOG_PATH=/shared/api_test.log
      - HTTP_TIMEOUT=5
    depends_on:
      - api
    networks:
      - sentiment_net
    volumes:
      - ./shared:/shared
```

Add:

```makefile
wait-authz:
	@echo "# [make wait-authz] Wait until authz_test finishes"
	@$(COMPOSE) wait authz_test >/dev/null 2>&1 || true

logs-authz:
	@echo "# [make logs-authz] Print authz_test logs (tail=200)"
	@$(COMPOSE) logs --no-color --tail=200 authz_test || true
```

After the auth test, run the authorization test the same way:
```bash
make wait-authz
make logs-authz
```

You can run the script locally against the running API (no containerization) to iterate faster:

```bash
API_ADDRESS=localhost API_PORT=8000 LOG=1 LOG_PATH=./shared/api_test.log \
python3 tests/authorization/test_authorization.py
```

This test verifies the actual model behavior (not just access control):
- It uses the alice account (has access to v1 and v2).
- It sends two known sentences to both endpoints:
  - `life is beautiful` → expected positive sentiment (score > 0)
  - `that sucks` → expected negative sentiment (score < 0)
- It checks the sign of the returned score (positive/negative), not only HTTP status codes.
- It prints a readable report to stdout, and if `LOG=1` it appends to the shared log file.
Create tests/content/test_content.py
(implementation: see file in repo)
Use module-run convention (recommended, consistent with your package setup):
```bash
API_ADDRESS=localhost API_PORT=8000 LOG=1 LOG_PATH=./shared/api_test.log \
python3 -m tests.content.test_content
```

- Content Tests: Calls `GET /v1/sentiment` and `GET /v2/sentiment` using alice credentials + sentences, then asserts that the score sign matches expectations (positive vs negative).
- Readiness Check / Polling: Polls `GET /status` until it returns `"1"` before running checks (because docker-compose `depends_on` ensures start order, not readiness).
- Shared Logging: If `LOG=1`, appends output to `LOG_PATH` (default: `/shared/api_test.log`) so multiple test containers can aggregate into one log later.
- CI-friendly Exit Codes: Exits with `0` if all checks pass, otherwise `1` (so a pipeline can fail fast).
- Compose DNS by default: By default it targets the API via Docker Compose internal DNS `api:8000` (no host IP needed). For host-run dev, set `API_ADDRESS=localhost`.
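As a rough illustration of the score-sign check, the core of the content suite could look like the sketch below. The "score" response key is an assumption derived from the expectations above; the actual field name should be taken from the API docs / starter template. Readiness polling and shared logging are omitted (see the repo file).

```python
# Hypothetical, simplified core of a content test (score-sign check only).
import os

import requests

BASE_URL = f"http://{os.environ.get('API_ADDRESS', 'api')}:{os.environ.get('API_PORT', '8000')}"
TIMEOUT = float(os.environ.get("HTTP_TIMEOUT", "5"))
AUTH = {"username": "alice", "password": "wonderland"}  # alice can use v1 and v2

# (sentence, expected sign of the sentiment score)
CASES = [
    ("life is beautiful", "positive"),
    ("that sucks", "negative"),
]

failed = False
for version in ("v1", "v2"):
    for sentence, expected in CASES:
        r = requests.get(
            f"{BASE_URL}/{version}/sentiment",
            params={**AUTH, "sentence": sentence},
            timeout=TIMEOUT,
        )
        score = r.json().get("score", 0)  # assumed response key
        ok = score > 0 if expected == "positive" else score < 0
        failed = failed or not ok
        print(f"[content] {version} '{sentence}': score={score} => {'SUCCESS' if ok else 'FAILURE'}")

raise SystemExit(1 if failed else 0)
```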
We build a dedicated container image for the content test, just like for the other test suites.
Create tests/content/Dockerfile:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir requests
COPY tests /app/tests
CMD ["python3", "-m", "tests.content.test_content"]
```

Add a third test container (content_test) to the compose pipeline.
It shares:
- the same network (`sentiment_net`) so it can reach the API via the DNS name `api`
- the shared volume (`./shared:/shared`) so it can append to `/shared/api_test.log`
- the same environment variable conventions (`LOG`, `API_ADDRESS`, `API_PORT`, `LOG_PATH`, `HTTP_TIMEOUT`)
Add this service:
```yaml
  content_test:
    user: "${HOST_UID}:${HOST_GID}"
    build:
      context: .
      dockerfile: ./tests/content/Dockerfile
    container_name: content_test
    depends_on:
      - api
    networks:
      - sentiment_net
    environment:
      - LOG=1
      - API_ADDRESS=api
      - API_PORT=8000
      - LOG_PATH=/shared/api_test.log
      - HTTP_TIMEOUT=5
    volumes:
      - ./shared:/shared
```

Add two Make targets (mirrors auth/authz):
```makefile
wait-content:
	# [make wait-content] Wait until content_test finishes
	@docker compose -p $(COMPOSE_PROJECT) wait content_test >/dev/null 2>&1 || true

logs-content:
	# [make logs-content] Print content_test logs (tail=200)
	@docker compose -p $(COMPOSE_PROJECT) logs --no-color --tail=200 content_test || true
```

Then call them in setup.sh after auth/authz:

```bash
# Run + wait for content tests + show logs
make wait-content
make logs-content
```

When we run ./setup.sh ...
- docker compose will start the `api` service + all three test containers
- upon start, the test containers automatically execute their associated test module
- each test will append logging info into the shared `/shared/api_test.log`
- and the script will snapshot it to `./log.txt` for submission.
Back to project overview: README.md