Development and Local Builds

This document covers maintainer-oriented local rebuild workflows. The default runtime path uses compose/base.yml together with a scenario file such as compose/pbx1.yml or compose/pbx2.yml, and published, versioned images remain the recommended path for normal lab usage.
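
For reference, the default runtime invocation expands to something like this (the same base-plus-scenario file combination shown later under Return To Published Images):

docker compose --project-directory . -p dvrtc-pbx1 -f compose/base.yml -f compose/pbx1.yml up -d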

When To Use This

Use the development override when you need to:

  • rebuild one or more service images from build/
  • test changes to Dockerfiles or embedded service configuration
  • work on the testing or attacker toolchain locally

Platform Constraints

All DVRTC images must be built for linux/amd64. Every runtime service in the scenario compose files sets platform: linux/amd64, so Compose enforces the correct platform automatically when the base and scenario files are used together. No DOCKER_DEFAULT_PLATFORM env var is needed.
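
To confirm the platform pins without starting anything, render the merged configuration and filter for the platform keys:

docker compose --project-directory . -f compose/base.yml -f compose/pbx1.yml config | grep "platform:"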

The testing and attacker services are behind the testing Compose profile. When building either image, include --profile testing so Compose applies the profile-gated service definition.
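
To see exactly which services the profile gates, compare the merged service list with and without --profile testing:

docker compose --project-directory . -f compose/base.yml -f compose/pbx1.yml config --services
docker compose --project-directory . -f compose/base.yml -f compose/pbx1.yml --profile testing config --services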

The repo-root VERSION file is the release source of truth for published runtime images. Local rebuilds should match the current release tag unless you are intentionally testing a version bump in progress.

Use ./scripts/dev-compose.sh for maintainer rebuilds instead of manually composing compose/base.yml, compose/<scenario>.yml, compose/dev.yml, and compose/dev.<scenario>.yml. The wrapper defaults DVRTC_VERSION from VERSION and VCS_REF from git so local rebuild metadata stays aligned with the current repo state.
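
The authoritative logic lives in scripts/dev-compose.sh; as a rough sketch, the defaulting it performs is equivalent to something like this (the exact variable handling is an assumption):

# Roughly the defaulting the wrapper performs before delegating to docker compose;
# see scripts/dev-compose.sh for the real logic.
DVRTC_VERSION="${DVRTC_VERSION:-$(cat VERSION)}"
VCS_REF="${VCS_REF:-$(git rev-parse --short HEAD)}"
export DVRTC_VERSION VCS_REF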

Published runtime tags are not mutable release scratch space. Once a release image has been pushed for a given VERSION, do not repush changed image contents under that same tag. If a repo-owned runtime image changes and you want to publish it, bump VERSION and publish the full release image set so the runtime compose file remains internally consistent.

Release Identity

DVRTC uses the repo-root VERSION file as the canonical release identifier for published runtime images. Git tags and any release notes should follow that identifier rather than defining a separate version stream.

Use the same version string in all release surfaces:

  • VERSION defines the image tag and stack release identifier
  • each published VERSION should have a matching annotated git tag (see the sketch below)
  • any release notes or hosted release entries should be created from that matching tag
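
A minimal sketch of the tagging step, assuming a v-prefixed tag name (the actual naming convention is whatever the repo's existing tags use):

version="$(cat VERSION)"
git tag -a "v${version}" -m "DVRTC v${version}"
git push origin "v${version}"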

Local Rebuild Workflow

Rebuild the normal service set and bring each scenario up:

./scripts/dev-compose.sh build
./scripts/dev-compose.sh --scenario pbx1 up -d
./scripts/dev-compose.sh --scenario pbx2 build
./scripts/dev-compose.sh --scenario pbx2 up -d

Rebuild a single service:

./scripts/dev-compose.sh build kamailio
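
To sanity-check that a rebuild picked up release metadata rather than dev defaults, list local images filtered by the current VERSION (how the image repositories are named depends on the compose files):

docker image ls | grep "$(cat VERSION)"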

Rebuild the profile-gated test images:

./scripts/dev-compose.sh --profile testing build testing attacker

If you intentionally want non-release metadata in the rebuilt images, override DVRTC_VERSION explicitly:

DVRTC_VERSION=dev ./scripts/dev-compose.sh build nginx

Avoid running raw multi-file docker compose ... build ... commands without setting DVRTC_VERSION. That path falls back to dev metadata while still tagging the rebuilt image with the runtime image name, which can make later scenario startups look like published release containers even though /__version reports dev.
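
A quick way to catch that mismatch is to query the version endpoint after startup; the address below is a placeholder, so substitute whichever host port the scenario actually publishes:

curl -s http://127.0.0.1:8080/__version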

Verification

After rebuilding, validate the environment and bring the stack up:

./scripts/validate_env.sh
./scripts/dev-compose.sh --scenario pbx1 up -d
./scripts/compose.sh --scenario pbx1 ps
./scripts/compose.sh --scenario pbx1 logs [service]
./scripts/testing-smoke.sh
./scripts/testing-run-all.sh

Run validate_env.sh as a pre-flight check before bringing the stack up to catch missing variables, certificates, or addressing issues early.

For maintainer-side verification, prefer the dedicated ./scripts/testing-smoke.sh, ./scripts/testing-run-all.sh, and ./scripts/attacker-run-all.sh wrappers over raw docker compose ... run commands. They keep the selected scenario explicit. If you use ./scripts/dev-compose.sh ... run testing ... or ... run attacker ... after local rebuilds, keep --scenario explicit there as well; the wrapper passes that scenario through to the runner container.

If the change affects startup, networking, or service behavior, do a fresh stack cycle and inspect logs before and after the test run. For broader validation, use the checks documented in the README and TESTING.md.

For release consistency checks, run the image-reference validator after updating the stack version and before publishing exported artifacts.
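
Concretely, that validator is the validate_image_refs.sh script from the release flow described below:

./scripts/validate_image_refs.sh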

For maintainers building release-tagged images outside the normal dev override flow, use ./scripts/build-release-images.sh and ./scripts/build-latest-stubs.sh. The expected release flow is: bump VERSION, update runtime image references, run ./scripts/build-release-images.sh --push, run ./scripts/validate_image_refs.sh, then create the matching git tag for that release after validation.
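
Put together, the release flow sketches out as follows (the step comments paraphrase the flow above):

# 1. Bump VERSION and update the runtime image references in the compose files.
# 2. Build and push the release-tagged images.
./scripts/build-release-images.sh --push
# 3. Check that the compose files reference the pushed tags consistently.
./scripts/validate_image_refs.sh
# 4. After validation, create the matching annotated git tag (see Release Identity above).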

Return To Published Images

If you want to leave the local rebuild path and return to the published runtime images pinned in the scenario compose files, pull the published tags and force-recreate the affected services:

docker compose --project-directory . -p dvrtc-pbx1 -f compose/base.yml -f compose/pbx1.yml pull nginx
docker compose --project-directory . -p dvrtc-pbx1 -f compose/base.yml -f compose/pbx1.yml up -d --force-recreate nginx

Use the same pattern for any other locally rebuilt service that should go back to the published image.
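
If several services were rebuilt locally, drop the service argument entirely; pull and up then operate on every service in the merged files:

docker compose --project-directory . -p dvrtc-pbx1 -f compose/base.yml -f compose/pbx1.yml pull
docker compose --project-directory . -p dvrtc-pbx1 -f compose/base.yml -f compose/pbx1.yml up -d --force-recreate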

Related Documentation