
Fizzadar and others added 9 commits December 19, 2025 10:57
For servers that verify the OTKs are indeed signed by the device key.
…in Docker (matrix-org#825)

(doesn't fix anything specifically, just aligning to what we should be doing)

It looks like this regressed in matrix-org#389 (previously, we were using `HostnameRunningComplement` here)
…n_server_restart` take too long to run for our assumptions to be correct (matrix-org#829)
…t` to account for slow servers (de-flake) (matrix-org#830)

Follow-up to matrix-org#829

Part of element-hq/synapse#18537

### What does this PR do?

Fix `TestDelayedEvents/delayed_state_events_are_kept_on_server_restart` to account for slow servers (de-flake).

Previously, this test naively used a single delayed event with a 10-second delay. But because we're stopping and starting servers here, it can take up to `deployment.GetConfig().SpawnHSTimeout` (defaults to 30 seconds) for the server to start up again, so by the time the server is back up, the delayed event may have already been sent. That invalidates the assertions below, which expect some delayed events to still be pending and then see one of them be sent after the server is back up.

We could account for this by setting the delayed event delay to be longer than `deployment.GetConfig().SpawnHSTimeout`, but that would make the test suite take longer to run in all cases, even for homeservers that restart quickly, because we would always have to wait out that large delay.

We instead account for this by scheduling many delayed events at short intervals (10 seconds each, since that's what the test naively used before). Then, whenever the server comes back up, we simply poll until the number of pending delayed events decrements by one, confirming that one of them was sent after the restart.
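
For illustration, here is a minimal sketch of that approach in Go. This is not the actual Complement test code: the homeserver URL, access token, room ID, event type, and the MSC4140 endpoint paths and response shape below are assumptions made for the sake of the example.

```go
// Minimal sketch (not the real test): schedule several delayed state events
// at staggered 10-second delays, restart the homeserver, then poll the
// MSC4140 management endpoint until the pending count drops by one, i.e. a
// delayed event was sent after the server came back up.
// Endpoint paths, query parameter, and response shape are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"time"
)

const (
	hsURL       = "http://localhost:8008" // assumed homeserver URL
	accessToken = "syt_example_token"     // assumed access token
	roomID      = "!room:hs1"             // assumed room ID
)

// scheduleDelayedStateEvent PUTs a state event with an MSC4140 delay (in ms).
func scheduleDelayedStateEvent(delayMS int, stateKey string) error {
	url := fmt.Sprintf(
		"%s/_matrix/client/v3/rooms/%s/state/com.example.test/%s?org.matrix.msc4140.delay=%d",
		hsURL, roomID, stateKey, delayMS,
	)
	req, err := http.NewRequest("PUT", url, strings.NewReader(`{"value":"pending"}`))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+accessToken)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

// pendingDelayedEvents returns how many delayed events are still scheduled.
func pendingDelayedEvents() (int, error) {
	req, err := http.NewRequest("GET",
		hsURL+"/_matrix/client/unstable/org.matrix.msc4140/delayed_events", nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "Bearer "+accessToken)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var body struct {
		DelayedEvents []json.RawMessage `json:"delayed_events"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return 0, err
	}
	return len(body.DelayedEvents), nil
}

func main() {
	// Schedule many delayed events at 10-second intervals so that, however
	// long the restart takes (up to SpawnHSTimeout), some are still pending.
	const numEvents = 10
	for i := 0; i < numEvents; i++ {
		if err := scheduleDelayedStateEvent((i+1)*10_000, fmt.Sprintf("key_%d", i)); err != nil {
			panic(err)
		}
	}

	// ... stop and restart the homeserver here ...

	// Record the pending count once the server is back, then poll until it
	// drops by one, proving an event was sent after the restart.
	baseline, err := pendingDelayedEvents()
	if err != nil {
		panic(err)
	}
	for {
		n, err := pendingDelayedEvents()
		if err == nil && n < baseline {
			fmt.Printf("a delayed event was sent after the restart; %d still pending\n", n)
			return
		}
		time.Sleep(time.Second)
	}
}
```

The 10-second spacing keeps the test fast for homeservers that restart quickly while still guaranteeing that some events remain pending for servers that take the full `SpawnHSTimeout` to come back.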


### Experiencing the flaky failure

As experienced when running this test against the worker-based Synapse setup we use alongside the Synapse Pro Rust apps: element-hq/synapse-rust-apps#344 (comment). We probably hit this heavily in the private project because GitHub runners for private repositories are less than half as powerful as those for public projects, and a single container with a share of only 2 CPU cores is just not powerful enough to run all 16 workers effectively.

<details>
<summary>For reference, the CI runners provided by GitHub for private projects are less than half as powerful as those for public projects.</summary>

> #### Standard GitHub-hosted runners for public repositories
>
> Virtual machine | Processor (CPU) | Memory (RAM) | Storage (SSD) | Architecture | Workflow label
> --- | --- | --- | --- | --- | ---
> Linux | 4 | 16 GB | 14 GB | x64 | ubuntu-latest, ubuntu-24.04, ubuntu-22.04
>
> *-- [Standard GitHub-hosted runners for public repositories](https://docs.github.com/en/actions/reference/runners/github-hosted-runners#standard-github-hosted-runners-for-public-repositories)*

---

> #### Standard GitHub-hosted runners for private repositories
>
> Virtual Machine | Processor (CPU) | Memory (RAM) | Storage (SSD) | Architecture | Workflow label
> --- | --- | --- | --- | --- | ---
> Linux | 2 | 7 GB | 14 GB | x64 | ubuntu-latest, ubuntu-24.04, ubuntu-22.04
>
> *-- [Standard GitHub-hosted runners for private repositories](https://docs.github.com/en/actions/reference/runners/github-hosted-runners#standard-github-hosted-runners-for-public-repositories)*

</details>


For the same slow-server reasons, we're also seeing this as an occasional [flake](element-hq/synapse#18537) with `(workers, postgres)` in the public Synapse CI.


### Reproduction instructions

I can easily reproduce this problem if I use matrix-org#827 to limit the number of CPUs available to the homeserver containers: `COMPLEMENT_CONTAINER_CPUS=0.5`
…`COMPLEMENT_CONTAINER_CPU_CORES`, `COMPLEMENT_CONTAINER_MEMORY`) (matrix-org#827)

This is useful to mimic a resource-constrained environment, like a CI environment.
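
As a rough sketch of how such limits could be applied (this is not the actual matrix-org#827 implementation; the env var value formats, parsing, and Docker field choices are assumptions), the values can be translated into Docker resource limits on the homeserver container's `HostConfig`:

```go
// Sketch only: maps hypothetical Complement env var values onto Docker
// container resource limits. Not the actual matrix-org#827 implementation.
package main

import (
	"os"
	"strconv"

	"github.com/docker/docker/api/types/container"
	units "github.com/docker/go-units"
)

// resourcesFromEnv builds Docker resource limits from the environment.
func resourcesFromEnv() container.Resources {
	res := container.Resources{}

	// e.g. COMPLEMENT_CONTAINER_CPU_CORES=0.5 limits the container to half a core.
	if cpus := os.Getenv("COMPLEMENT_CONTAINER_CPU_CORES"); cpus != "" {
		if f, err := strconv.ParseFloat(cpus, 64); err == nil {
			// Docker expresses CPU limits in units of 1e-9 CPUs.
			res.NanoCPUs = int64(f * 1e9)
		}
	}

	// e.g. COMPLEMENT_CONTAINER_MEMORY=1g limits the container to 1 GiB of RAM
	// (assuming a go-units style size string).
	if mem := os.Getenv("COMPLEMENT_CONTAINER_MEMORY"); mem != "" {
		if bytes, err := units.RAMInBytes(mem); err == nil {
			res.Memory = bytes
		}
	}

	return res
}

func main() {
	// The limits would be attached to the container's HostConfig at creation time.
	hostConfig := container.HostConfig{Resources: resourcesFromEnv()}
	_ = hostConfig
}
```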

This stems from some [consistent flaky tests](element-hq/synapse-rust-apps#344 (comment)) I'm seeing when running the Complement test suite with some GitHub runners in a private project.

For reference, the CI runners provided by GitHub for private projects are less than half as powerful as those for public projects (see the runner spec tables quoted above).

I'm now able to reproduce the same failures locally when I constrain the CPU to less than a single core: `COMPLEMENT_CONTAINER_CPU_CORES=0.5`