fix(GHO-131): pin vultr provider to v2.29.0 #370

Merged

noahwhite merged 3 commits into develop from feature/GHO-131-pin-vultr-2.30.1 on Apr 22, 2026

Conversation

Owner

@noahwhite noahwhite commented Apr 21, 2026

Summary

Vultr provider v2.31.0 (released 2026-04-21 17:57 UTC) crashes tofu plan with Plugin did not respond on ReadResource for vultr_block_storage.this — see failed run 24750775114. The same regression also reproduces on v2.30.1 and v2.30.0. Pin to v2.29.0 — the highest clean version — until upstream ships a fix. Closes Renovate PR #368.

Linear: GHO-131.

Changes

  • Bump vultr/vultr required_providers constraint from = 2.28.1 to = 2.29.0 (see the sketch after this list) in:
    • opentofu/envs/dev/main.tofu
    • opentofu/modules/vultr/block_storage/main.tofu
    • opentofu/modules/vultr/firewall/main.tofu
    • opentofu/modules/vultr/instance/main.tofu
  • Regenerate opentofu/envs/dev/.terraform.lock.hcl with 2.29.0 hashes
  • Add renovate.json packageRule capping vultr/vultr at <=2.29.0 so Renovate stops re-proposing broken post-2.29.0 bumps
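
For reference, the pinned constraint in each module's required_providers block looks roughly like this (an illustrative sketch, not the exact file contents from this PR):

```hcl
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      # Exact pin; >= 2.30.0 panics on ReadResource for vultr_block_storage.
      version = "= 2.29.0"
    }
  }
}
```

The lock file is then regenerated from opentofu/envs/dev with tofu init -upgrade, which writes the 2.29.0 hashes into .terraform.lock.hcl. The Renovate cap is roughly the following packageRule (illustrative; assumes the repo's existing renovate.json layout and the terraform-provider datasource):

```json
{
  "packageRules": [
    {
      "matchDatasources": ["terraform-provider"],
      "matchPackageNames": ["vultr/vultr"],
      "allowedVersions": "<=2.29.0"
    }
  ]
}
```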

Regression boundary

| Version | Status |
| ------- | ------ |
| v2.28.1 | last previously known good (pre-PR baseline) |
| v2.29.0 | clean — new pin |
| v2.30.0 | ReadResource panic on vultr_block_storage |
| v2.30.1 | same panic |
| v2.31.0 | same panic (release also includes "Merge organization/IAM from beta branch") |

Context

The null_resource attach workaround for vultr_block_storage was removed in GHO-109 / PR #318 when the provider was bumped to v2.28.1 after upstream bug #660 was fixed. That workaround does not come back with this change — the current module continues to rely on native attached_to_instance support, which works on v2.29.0. The post-v2.29.0 regression is on ReadResource (refresh path), not on attach.
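
For context, the native attachment path the module relies on looks roughly like this (a sketch with illustrative names, not the module's actual code):

```hcl
resource "vultr_block_storage" "this" {
  region  = var.region
  size_gb = var.size_gb
  label   = var.label

  # Native attach support replaced the old null_resource workaround once
  # upstream bug #660 was fixed; the >= 2.30.0 regression hits the refresh
  # (ReadResource) of this resource, not the attach itself.
  attached_to_instance = var.instance_id
}
```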

Known side effect: firewall rule recreation

The plan shows all vultr_firewall_rule resources as -/+ destroy and then create replacement. The provider changed how the source attribute is persisted to state somewhere between v2.28.1 and v2.29.0 (state has source="X", new provider returns source=null, tofu plans replacement). The module already uses subnet/subnet_size, so config is correct — this is a one-time state-migration artifact.

Accepted for this PR. Apply will briefly gap the origin firewall (~seconds); Cloudflare in front absorbs most of that window. Clean state afterward.
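
For reference, the rule shape the module already uses looks roughly like this (illustrative values, not the module's actual code; attribute names as documented by the provider):

```hcl
resource "vultr_firewall_rule" "https" {
  firewall_group_id = vultr_firewall_group.this.id
  protocol          = "tcp"
  ip_type           = "v4"
  subnet            = "0.0.0.0"
  subnet_size       = 0
  port              = "443"
  notes             = "allow HTTPS from anywhere"
  # No `source` in config; the -/+ replacement comes from the provider
  # returning null for the `source` value previously persisted in state.
}
```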

Test plan

  • pr-tofu-plan-develop CI passes (no Plugin did not respond crash)
  • pr-tofu-fmt-check CI passes
  • Plan shows only the expected provider bump + firewall rule recreation, no other drift
  • Merge triggers deploy-dev.yml; apply succeeds
  • Post-deploy health check passes on ghost-dev-01
  • Renovate PR #368 ("chore(deps): update terraform vultr to v2.31.0") closed

v2.31.0 (released 2026-04-21) crashes tofu plan with "Plugin did not
respond" on ReadResource for vultr_block_storage. See run 24750775114.
Pin to v2.30.1 (released 2026-03-04, stable for 7 weeks) until
upstream ships a fix.

- Bump required_providers from "= 2.28.1" to "= 2.30.1" in envs/dev
  and the three vultr modules (block_storage, firewall, instance)
- Regenerate .terraform.lock.hcl with 2.30.1 hashes
- Add renovate.json packageRule capping vultr/vultr at <=2.30.1 so
  Renovate stops re-proposing the broken bump

Intermediate versions v2.29.0/v2.29.1/v2.30.0 were skipped; we can
take them in a follow-up bump once 2.30.1 is confirmed clean.

linear Bot commented Apr 21, 2026

v2.30.1 also crashes on vultr_block_storage ReadResource — same
Plugin did not respond panic as v2.31.0 on plan refresh. Step back
to v2.30.0 (2026-02-17, stable for 2+ weeks before 2.30.1 broke it).
Renovate cap lowered to <=2.30.0 accordingly.
@noahwhite
Owner Author

Stepping pin back to v2.30.0 — v2.30.1 also crashes on vultr_block_storage.this ReadResource with the same Plugin did not respond panic.

Pushed commit 1567db2. Renovate cap updated to <=2.30.0 as well.

Separate concern surfaced in the latest plan output: vultr_firewall_rule resources show -/+ destroy and then create replacement because the source attribute is going to null — the provider renamed/removed source and now expects subnet/subnet_size (which our module already uses). This is a schema migration artifact that would occur on any upgrade away from 2.28.1 and will need to be handled separately — likely by tofu state rm + re-import, or by accepting the one-time recreation of firewall rules. Flagging for follow-up; not in scope for this PR.

v2.30.0 also crashes on vultr_block_storage ReadResource. Trying
v2.29.0 next; fall back to v2.28.1 (last known good) if this also
fails.
@noahwhite noahwhite changed the title feat(GHO-131): pin vultr provider to v2.30.1 fix(GHO-131): pin vultr provider to v2.29.0 Apr 22, 2026
@noahwhite
Owner Author

Plan green on v2.29.0 — no ReadResource crash. Final pin landed at 2.29.0.

Regression boundary: vultr_block_storage ReadResource panic was introduced in v2.30.0 and persists through v2.30.1 and v2.31.0. v2.29.0 is the highest clean version.

Still open — firewall rule replacements. Plan shows all vultr_firewall_rule resources being destroyed and recreated because the provider changed how the source attribute is stored in state (- source = "X" -> null # forces replacement). Our module uses subnet/subnet_size so the config is correct; this is a one-time state artifact.

Options:

  1. Accept the recreation in this PR — ~seconds of firewall-rule absence during apply. Cloudflare in front will serve edge-cached responses; origin connections will briefly fail. Simplest path.
  2. State surgery follow-up: tofu state rm + tofu import for each rule to bypass the replacement (see the sketch below). Cleaner, but 20+ rules to script.

Recommend option 1 unless you want zero-downtime on origin. Let me know.
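
If we go with option 2 later, the per-rule surgery would look roughly like this (a sketch only; the resource address and the vultr_firewall_rule import ID format are placeholders to confirm against the provider docs before scripting all 20+ rules):

```sh
# Drop the stale state entry, then re-import so the new provider writes
# fresh state and no -/+ replacement is planned for the rule.
tofu state rm 'module.firewall.vultr_firewall_rule.this["https"]'
tofu import 'module.firewall.vultr_firewall_rule.this["https"]' <FIREWALL_GROUP_ID>:<RULE_ID>
```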

@noahwhite noahwhite merged commit b1e08bb into develop Apr 22, 2026
6 checks passed