
feat: Add energy equation solver with Boussinesq buoyancy coupling#176

Merged
shaia merged 20 commits into master from feat/energy-equation on Apr 4, 2026
Conversation

shaia (Owner) commented Mar 8, 2026

Implement temperature advection-diffusion (ROADMAP 2.1) with explicit Euler time integration, heat source callbacks, and Boussinesq buoyancy body force. Integrated into all three CPU solvers (explicit Euler, projection, RK2) with thermal diffusion CFL constraint. Disabled by default (alpha=0) for full backward compatibility.

Validated with de Vahl Davis differentially heated cavity (Ra=1000).

Copilot AI (Contributor) left a comment

Pull request overview

This PR implements the energy equation solver for temperature advection-diffusion with explicit Euler time integration and Boussinesq buoyancy coupling, fulfilling ROADMAP item 2.1. It adds a new energy_solver module and integrates it into the three existing scalar CPU Navier-Stokes solvers. When thermal diffusivity alpha = 0 (the default), the energy equation is entirely skipped for full backward compatibility. The implementation is validated with a differentially heated cavity benchmark at Ra = 1000.

Changes:

  • New energy_solver module (energy_solver.h, energy_solver.c) providing energy_step_explicit and energy_compute_buoyancy
  • Extended ns_solver_params_t with energy equation parameters (alpha, beta, T_ref, gravity, heat_source_func) and added max_temperature to solver stats
  • Integrated energy equation into all three scalar CPU solvers (explicit Euler, projection, RK2) with a thermal diffusion CFL constraint, plus two new test suites

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 4 comments.

Summary per file:

  • lib/include/cfd/solvers/energy_solver.h - New public API header for energy solver functions
  • lib/include/cfd/solvers/navier_stokes_solver.h - Added energy params to ns_solver_params_t, max_temperature to stats
  • lib/src/solvers/energy/cpu/energy_solver.c - Core explicit Euler temperature advection-diffusion implementation
  • lib/src/solvers/navier_stokes/cpu/solver_explicit_euler.c - Added buoyancy coupling, thermal CFL, and energy step; ns_solver_params_default updated
  • lib/src/solvers/navier_stokes/cpu/solver_projection.c - Added buoyancy coupling and post-correction energy step
  • lib/src/solvers/navier_stokes/cpu/solver_rk2.c - Added buoyancy coupling and post-RK2 energy step
  • lib/CMakeLists.txt - Added energy solver source and header to the build
  • CMakeLists.txt - Added energy solver and natural convection test executables
  • tests/solvers/energy/test_energy_solver.c - Unit tests: pure diffusion, pure advection, disabled path, buoyancy, backward compatibility
  • tests/validation/test_natural_convection.c - Differentially heated cavity benchmark test at Ra = 1000


shaia added 4 commits March 8, 2026 07:19
  • The energy equation was refactored to use energy_step_explicit(), but the old t_new temporary array was left behind — allocated, copied, and freed without ever being written with computed values.

  • The heat source callback received a hardcoded 0.0 for the time argument, making time-varying heat sources silently incorrect. Add a time parameter to energy_step_explicit and pass iter*dt from all three NS solvers, matching how compute_source_terms already handles momentum sources.

  • The max_temperature field was added to ns_solver_stats_t but never written, always reporting 0.0. Now computed from field->T in all solver wrappers (solver_registry.c and the AVX2 RK2 wrapper) alongside max_velocity/max_pressure.

  • A new test exercises the heat source callback path with a uniform Q=1.0 source on a quiescent flow. It verifies the interior temperature rises by Q*N*dt, confirming the callback is invoked and the returned value is applied.
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 6 comments.



shaia added 3 commits April 3, 2026 18:44
  • max_temperature used fabs, which misreports when T contains negative perturbations. Switch to a plain max to match the documented semantics. Also merge the temperature scan into the existing velocity/pressure reduction loop to avoid a second sequential pass over the array.

  • Use DIS_NX, DIS_NY, and DIS_TOTAL for clarity and maintainability, matching the pattern used by other tests in this file.

  • The OMP and SIMD backends don't call energy_step_explicit or energy_compute_buoyancy, so alpha>0 or beta!=0 would silently produce wrong results. Now return CFD_ERROR_UNSUPPORTED with a clear message. Also document the limitation on the params struct.
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 8 comments.



shaia added 5 commits April 3, 2026 21:23
  • The energy solver uses grid->dz[0] as a constant for all z-derivatives. Reject non-uniform dz grids early with CFD_ERROR_UNSUPPORTED rather than silently computing incorrect z-derivatives on stretched grids.

  • Consistent with other solvers, set a structured error message via cfd_set_error when returning CFD_ERROR_DIVERGED. Also remove the now-unused stdio.h include.

  • GPU Euler and projection solvers don't implement energy coupling. Return CFD_ERROR_UNSUPPORTED when alpha>0 or beta!=0 to prevent silently ignoring temperature advection/diffusion and buoyancy.

  • max_t=0.0 incorrectly reports 0 when all temperatures are negative (e.g. perturbation fields). Initialize from the first element so the OMP max-reduction produces the correct result for any temperature range.

  • energy_step_explicit allocated and freed a full T_new buffer on every call, adding heap pressure in long simulations. Add an internal workspace-aware variant that accepts a caller-owned buffer. The three scalar CPU solvers now allocate T_energy_ws once before the iteration loop and pass it through, eliminating per-step malloc/free overhead. The public API is unchanged — it wraps the workspace variant with NULL.
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 13 out of 13 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (1)

lib/src/solvers/navier_stokes/cpu/solver_explicit_euler.c:550

  • apply_boundary_conditions() applies periodic BCs to temperature as well, but this step only preserves/restores velocity boundaries. With alpha>0 (or even just beta!=0 using a fixed temperature field), this will overwrite any caller-set thermal wall BCs (e.g., Dirichlet/Neumann) and affect subsequent steps unless the caller re-applies T BCs every iteration. Consider saving/restoring T boundary values alongside velocities (e.g., add a boundary-copy helper for scalar fields) or otherwise avoid clobbering caller-set T boundaries.
        /* Save caller BCs, apply periodic BCs, restore caller BCs.
         * Use _3d helper for 6-face copy when nz > 1. */
        copy_boundary_velocities_3d(u_new, v_new, w_new,
                                    field->u, field->v, field->w, nx, ny, nz);
        apply_boundary_conditions(field, grid);
        copy_boundary_velocities_3d(field->u, field->v, field->w,
                                    u_new, v_new, w_new, nx, ny, nz);


shaia added 3 commits April 4, 2026 11:02
  • grid->dz is allocated with nz-1 elements (cell spacings), but the validation loop iterated k < nz, reading one past the end. Limit it to k < nz - 1.

  • The generic periodic BC application overwrites caller-specified thermal boundary values. Save and restore field->T around the call in the explicit Euler and RK2 solvers, reusing the energy workspace buffer.

  • The implementation computes -beta*(T-T_ref)*g as a per-unit-mass acceleration term, not a force. Update the header doc to match.
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.



shaia added 4 commits April 4, 2026 11:33
  • The T_energy_ws buffer was only allocated when alpha>0, so buoyancy-only runs (beta!=0, alpha==0) had no buffer to save and restore temperature across apply_boundary_conditions. Extend the allocation condition to cover both alpha>0 and beta!=0 in all three scalar CPU solvers.

  • Validate field, grid, params, field->T, grid->dx/dy, and minimum grid size before any dereference. This prevents crashes when the public API is called with NULL arguments or degenerate grids.

  • The central-difference stencil assumes uniform grid spacing but was using per-cell dx[i]/dy[j] values that are only correct on uniform grids. Reject non-uniform dx/dy early (like dz) and switch the inner loop to precomputed inv_2dx/inv_2dy/inv_dx2/inv_dy2 constants.

  • The TSan CI crashed with a SEGV during process exit because BC_BACKEND_AUTO selected OMP boundary conditions for a scalar Poisson solver, spawning libomp threads whose cleanup conflicted with TSan's shadow memory tracking. Two fixes:
    - Propagate CFD_ENABLE_OPENMP to library consumers (tests) via an INTERFACE compile definition so compile-time guards work correctly
    - Dispatch Poisson solver BCs through the solver's own backend instead of the global AUTO, preventing backend mismatch
shaia merged commit 9021241 into master on Apr 4, 2026. 18 checks passed.