Our server for receiving telemetry data from the NetLogo desktop application. The main instance of this runs at https://telemetry.netlogo.org.
This is our second attempt at said functionality; the first used Matomo. This version was intended as a stopgap after abandoning Matomo, and we are likely to replace it with something else later.
This system was originally designed to receive the data over HTTP/2, but ultimately shifted to focus on HTTP/1.1. HTTP/2 uses vastly less bandwidth, but its session management is substantially more difficult.
- Install Postgres
  - This will serve as the underlying database for storing the uploaded data
  - Version 16.10 is confirmed to work, but in @TheBizzle's experience, Postgres is pretty flexible on version number.
- Run `psql -f ./sql/schema.sql`
  - Initializes the database schema
  - Alternative forms, depending on your configuration:
    - `psql -f ./sql/schema.sql -U postgres`
    - `sudo -u postgres psql -f ./sql/schema.sql`
- Install NVM
  - Allows you to manage multiple Node.js versions, and to install specific versions on demand
- Run `nvm install 24.13.1`
  - Gets you the proper version of Node.js for this project
- Run `npm install`
  - Installs the dependencies for this project
- Run `npm run lint`
  - Verifies the code's basic correctness and style
- Initialize the file `.env` like so:

  ```
  PORT=<the port number that you want to run on; defaults to `3030`>
  POSTGRES_USERNAME=<your Postgres username>
  POSTGRES_PASSWORD=<your Postgres password>
  PG_HOST_NAME=<the domain name where your Postgres instance can be found; defaults to `localhost`>
  PG_DB_NAME=nl_telemetry2
  CERT_PATH=<path to your SSL cert; defaults to `cert.pem`>
  KEY_PATH=<path to your SSL key; defaults to `key.pem`>
  ```

- Run `npm run start`
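Since the app defaults to reading `cert.pem` and `key.pem`, you will need an SSL certificate before `npm run start` will serve anything. For local development, one way (a sketch, not an endorsement for production use) is to generate a self-signed certificate at those default paths:

```sh
# Generate a throwaway self-signed cert/key pair for local testing only.
# Writes key.pem and cert.pem into the current directory, matching the
# CERT_PATH / KEY_PATH defaults from .env.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=localhost"
```

Clients will need to be told to trust (or ignore) the self-signed cert, e.g. `curl -k` when testing by hand.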
This command can be used to launch a totally fresh instance of the application via Docker Compose:

```sh
DOCKER_BUILDKIT=1 \
docker build -t telemetry2:latest . && \
docker compose down --volumes && \
docker compose up
```

To publish the Docker images from this repository for use on the cluster, navigate to this GitHub Actions page and click "Run workflow". After choosing the branch/tag that you want to publish, confirming your selection, and waiting less than a minute, the image should become available here.
You shouldn't need to do this, since it's already been done, but before the app can be used on the server, a few "secret" values need to be defined in Kubernetes:
- `pg-pretalx-credentials` (for the root DB user)
  - `PGUSER`
  - `PGPASSWORD`
- `telemetry2-db-credentials` (for the DB account that will be created for this app)
  - `PGUSER`
  - `PGPASSWORD`
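For reference, a manifest for one of these secrets might look roughly like the following sketch (the secret name and keys come from the list above; the values are placeholders you would fill in, or supply via `kubectl create secret generic ... --from-literal=...` instead):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: telemetry2-db-credentials
type: Opaque
stringData:
  PGUSER: <username for the app's DB account>
  PGPASSWORD: <password for the app's DB account>
```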
This is also already done on the real server, but note that setting up a fresh instance of this requires applying the schema from `sql/schema.sql`. The file for it is `kubernetes/00-initialize-db.yaml`. Note that it assumes a Postgres DB name of `postgres-pretalx`, and the presence of the secrets mentioned in the previous step.
With the container up in the registry, we now need to tell the server to load it. Critically, this is specified in `kubernetes/01-deployment.yaml`, at the path `spec.template.spec.containers.image`. The Git commit SHA needs to be updated there to match whichever commit was just uploaded.
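That SHA update can be done by hand, or with a small helper like the sketch below. The `sed` pattern (and the example registry path used to demonstrate it) are assumptions; check the actual `image:` value in `kubernetes/01-deployment.yaml` before relying on it.

```sh
# bump_image: rewrite the SHA portion of an `image:` line in a manifest.
# Handles both tag (`:sha`) and digest (`@sha`) forms. A sketch only --
# verify the pattern matches your real manifest before use.
bump_image() {
  sha="$1"
  manifest="$2"
  sed -i -E "s|(image: [^:@]+[:@]).*|\1${sha}|" "$manifest"
}
```

Usage would look like `bump_image "$(git rev-parse HEAD)" kubernetes/01-deployment.yaml`, followed by a `kubectl apply -f` of the updated file.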
The files in `kubernetes/` are exact replicas of what is used for running this application on the cluster. For the most part, you should be able to just copy them to your own cluster and run them with `kubectl apply -f <filename>` in order to get the same behavior. (Though, this also requires ownership of the specified domain, the configuration of various secrets in the Kubernetes configuration, the availability of particular ports, and that the database is accessible at a very particular domain name.)
However, if those circumstances are all met and you `kubectl apply` those three files, you should then have a fully functioning instance of the application running on your Kubernetes cluster.