
GrIS-ocean-coupled#5

Merged
mvertens merged 426 commits into NorESMhub:main from
mpetrini-norce:gris_oceanforcing_coupled
May 4, 2026
Conversation

@mpetrini-norce
Collaborator

CISM source code including new ocean-GrIS coupling features that work in stand-alone, DOCN, or fully coupled mode. Also includes some changes to suppress elevation-class changes in DLND mode (useful for spinup/sensitivity tests). This CISM version should be used together with the CISM-wrapper branch 'mpetrini-norce/wrap_ocean_forcing_coupled'.

gunterl and others added 30 commits April 18, 2022 20:30
This commit contains additional changes to add flexibility for future
Python releases. In particular, the option parser (optparse, deprecated since version 3.2)
is replaced by the argument parser (argparse).
In the ISMIP-HOM test case, plotISMIP_HOM.py had to be slightly modified:
a. The "reduce" function now needs to be imported from the "functools" module.
b. The "map" function now returns an iterator as opposed to a list. While this
had little effect elsewhere in this code, it broke the "read" function defined in this
script, so the map call had to be replaced by a list comprehension on line 222. In addition,
it seems that the map function in Python 2 could skip empty lines when reading through
a text file. That is no longer the case in Python 3, so I had to add a check on line 219
to avoid an error.
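The two Python 3 changes described above can be sketched as follows (a minimal illustration with hypothetical option names and data, not the actual plotISMIP_HOM.py code):

```python
# Sketch of the optparse -> argparse migration and the map() change
# described above; names and data are illustrative only.
import argparse

# argparse replaces the deprecated optparse module:
parser = argparse.ArgumentParser(description="Plot ISMIP-HOM results")
parser.add_argument("-e", "--experiment", default="a", help="experiment label")
args = parser.parse_args([])  # empty list: parse with defaults

# In Python 3, map() returns a lazy iterator, not a list, so code that
# indexed the result must materialize it, e.g. via a list comprehension.
# Empty lines also need an explicit skip (Python 2's map-based reading
# appeared to tolerate them).
lines = ["1.0 2.0", "", "3.0 4.0"]
values = []
for line in lines:
    if not line.strip():      # explicitly skip empty lines
        continue
    values.append([float(x) for x in line.split()])  # replaces map(float, ...)
```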
Needed for Intel 2023.0.0 on derecho.
The Intel compiler has been taking a long time to build glide_io.F90.
Some testing showed that this is because 'use glide_types' and three other use statements
were repeated in each of a large number of accessor subroutines.
For some reason, this hasn't been an issue with the GNU compiler.

I modified ../utils/generate_ncvars.py and ../libglimmer/ncdf_template.in
so that these 'use' statements appear only at the top of the glide_io module.

With this change, the Intel build time on derecho with 8 cores ('make -j 8')
decreases from about 4:40 to 1:10, a welcome improvement.
Removed redundant 'use' statements that slowed the Intel build
Cleaning up some comments, e.g., changing 'Glimmer' to 'CISM'
Until now, the exponent n in the Glen flow law has been set in glimmer_physcon.F90
as an integer parameter, gn = 3.

With this commit, n is renamed n_glen (a more greppable name) for use in Glissade.
It is declared in glimmer_physcon.F90 as real(dp) with default value 3.0d0.

Since n_glen is no longer a parameter, it can now be set in the config file like other
physical 'constants' (e.g., rhoi and rhoo) that are not truly constant, but can
take different values in different models or benchmarking experiments.

To avoid changing answers in old Glide code, I retained the integer parameter gn = 3 in glimmer_physcon.F90.
This parameter is now used only in the Glide dycore (e.g., glide_velo.F90).

In Glissade, I replaced gn with n_glen in several places:
(1) In subroutine glissade_interior_dissipation_sia, the dissipation factor includes n_glen.
    Note: n_glen does not appear explicitly in the 1st-order dissipation, which is proportional to tau_eff^2/efvs.
(2) In glissade_velo_sia, n_glen appears in the vertical integral for the velocity.
(3) In glissade_velo_higher, flwafact = flwa**(-1./n_glen) where flwa = A.
(4) In glissade_velo_higher, the exponent p_effstr = (1.d0 - n_glen)/n_glen is used
    to compute effective_viscosity for BP, DIVA, or L1L2.
(5) In glissade_velo_higher, subroutine compute_3d_velocity_l1l2 has a factor proportional to
    tau**((n_glen-1.)/2.) in the vertical integral.

For (1) and (2), n_glen was previously assumed to be an integer.  Switching it to real(dp)
is answer-changing at the machine roundoff level for runs using the local SIA solver
in glissade_velo_sia.F90.

For (3), (4) and (5), n_glen replaces the equivalent expression real(gn,dp).
For this reason, answers are BFB when using the BP, DIVA or L1L2 solver.
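The exponent expressions in items (3) and (4) can be illustrated with a small numeric sketch (plain Python with hypothetical values; the actual code is Fortran in glissade_velo_higher.F90):

```python
# Illustrative sketch of the Glen-exponent expressions quoted in items
# (3) and (4) above; the flwa value is hypothetical.
n_glen = 3.0                          # now real-valued and configurable
flwa = 1.0e-16                        # flow factor A (Pa^-n yr^-1), hypothetical
flwafact = flwa ** (-1.0 / n_glen)    # item (3): flwafact = flwa**(-1./n_glen)
p_effstr = (1.0 - n_glen) / n_glen    # item (4): effective-viscosity exponent

# With the default n_glen = 3.0, p_effstr = -2/3, identical to the old
# integer-based expression real(gn,dp), which is why BP/DIVA/L1L2
# answers are BFB.
```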

In glissade_basal_traction, n_glen appeared in the expression for beta in the Coulomb sliding option,
HO_BABC_COULOMB_FRICTION.  Here, I replaced n_glen with powerlaw_m (also = 3.0d0 by default)
to be consistent with the expressions for beta in the Schoof and Tsai laws.

In glimmer_scales, Glen's n appears in the expressions for the scaling parameters vis0 and vis_scale.
Here, I defined a local integer parameter gn = 3 so that the scales are unchanged.

It is now possible for the user to specify arbitrary values of n_glen in tests such as the slab test.

Another minor change: I added some logic to the subroutine that computes L1L2 velocities.
For which_ho_efvs = 2 = HO_EFVS_NONLINEAR, the effective viscosity (efvs) is computed from
the effective strain rate using an equation from Perego et al. (2012).
But for option 0 (efvs = constant) and option 1 (efvs = multiple of flow factor),
this strain rate equation in the code does not apply. For these options, I added an
alternative that computes velocity in terms of the strain-rate-independent efvs.
This allows us to use L1L2 for problems with constant efvs (e.g., the slab test).
The code was calling subroutine init_isostasy with isostasy = 0 = ISOSTASY_NONE.
This subroutine is now called only if isostasy = 1 = ISOSTASY_COMPUTE.
I modified glissade.F90 to abort cleanly with a call to glide_finalise after an advective CFL error.
This is done only when the user does *not* specify adaptive subcycling.
The clean abort allows the new slabStability script to keep going, launching a new run
with a shorter timestep.

In subroutine glissade_flow_factor of glissade_therm.F90, I removed the FLWA_INPUT option
(option 3 of whichflwa).
This option is redundant with option 0, FLWA_CONST_FLWA, in which the user can set default_flwa
in the parameters section of the config file, and otherwise CISM uses default_flwa = 1.0e-16 Pa^-n yr^-1.
We probably should rename default_flwa to constant_flwa, but leaving it for now to avoid
breaking config files in test cases.

Note: This option was added by Matt Hoffman in 2014.  I am unaware of test cases that
use this option (most have flow_law = 0 or 2), but if there are any, we will need to fix them
by switching to whichflwa = 0.

In subroutine glissade_therm_driver of glissade_therm.F90, I increased the threshold
for column energy conservation errors from 1.0d-8 to 1.0d-7 W/m^2.
For very small timesteps of ~1.0e-6 yr, the smaller threshold can be exceeded because of roundoff errors.

In subroutine glissade_check_cfl of glissade_transport.F90, I modified the fatal abort
for large CFL violations (advective CFL > 10).
Now, CISM aborts for CFL > 10 only when adaptive_cfl_threshold > 0, i.e. transport subcycling is enabled.
This prevents excessive subcycling for egregious CFL violations.
If adaptive_cfl_threshold = 0. (the default), i.e. transport subcycling is not enabled,
then the code exits cleanly (with a call to glide_finalise) in glissade.F90 when advective CFL > 1.

I added a TODO note to set the default value of geot (the geothermal heat flux) to 0 instead of 0.05 W/m^2.
With the default value, simple prognostic tests like the dome are not mass-conserving.
Not changing just yet because answers will change for the dome test.
This commit modifies the run and plot scripts for the Dukowicz slab test case,
as described in Section 5 of this paper:

   J.K. Dukowicz, 2012. Reformulating the full-Stokes ice sheet model for a
   more efficient computational solution. The Cryosphere, 6, 21-34,
   https://doi.org/10.5194/tc-6-21-2012.

The test case consists of an ice slab of uniform thickness moving down an
inclined plane by a combination of sliding and shearing.
Analytic Stokes and first-order velocity solutions exist for all values of Glen's exponent n >= 1.
The solutions for n = 1 are derived in Dukowicz (2012), and solutions for n > 1
are derived in an unpublished manuscript by Dukowicz (2013).
These solutions can be compared to those simulated by CISM.

The original scripts, runSlab.py and plotSlab.py, were written by Matt Hoffman
with support for n = 1.  They came with warnings that the test is not supported.

The test is now supported, and the scripts include some new features:
* The user may specify any value of n >= 1 (not necessarily an integer).
  The tests assume which_ho_efvs = 2 (nonlinear viscosity) with flow_law = 0 (constant).
* Physics parameters are no longer hard-coded.  The user can enter the ice thickness,
  beta, viscosity coefficient (mu_n), and slope angle (theta) on the command line.
* The user can specify time parameters dt (the dynamic time step) and nt (number of steps).
  The previous version did not support transient runs.
* The user can specify a small thickness perturbation dh, which is added to the initial
  uniform thickness via random sampling from a Gaussian distribution.
  The perturbation will grow or decay, depending on the solver stability for given dx and dt.
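The Gaussian thickness perturbation described in the last bullet can be sketched as follows (a minimal stdlib-only illustration; the grid size, thickness, and amplitude are hypothetical, and the actual runSlab.py may differ):

```python
# Minimal sketch of adding a random Gaussian perturbation dh to a
# uniform initial slab thickness (assumed form, not the runSlab.py code).
import random

random.seed(0)                 # fixed seed for reproducibility
nx, ny = 30, 30                # grid dimensions, hypothetical
h0 = 1000.0                    # uniform slab thickness (m), hypothetical
dh = 0.1                       # perturbation standard deviation (m)

thk = [[h0 + random.gauss(0.0, dh) for _ in range(nx)] for _ in range(ny)]
```

Whether the standard deviation of such a perturbation grows or decays over subsequent time steps is what the stability tests then measure.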

For n = 1, the viscosity coefficient mu_1 has a default value of 1.e6 Pa yr in the relation
mu = mu_1 * eps^((1-n)/n), where eps is the effective strain rate.

For n > 1, the user can specify a coefficient mu_n; otherwise the run script computes mu_n
such that the basal and surface speeds are nearly the same as for an n = 1 case with
mu_1 = 1.e6 Pa yr and the same values of thickness, beta, and theta.

(Note: There is a subtle difference between the Dukowicz and CISM definitions of the
effective strain rate; the Dukowicz value is twice as large. Later, it might be helpful
to make the Dukowicz convention consistent with CISM.)

I modified the plotting script, plotSlab.py, to look in the config and output files
for physics parameters that are no longer hardwired.
I slightly modified the analytic formulas to give the correct solution for non-integer n.

This script creates two plots.  The first plot shows excellent agreement between higher-order
CISM solutions and the analytic solution for small values of the slope angle theta.
For steep slopes, the answers diverge as expected.

For the second plot, the extent of the y-axis is wrong. This remains to be fixed.

I also added a new script, stabilitySlab.py, to carry out stability tests as described in:

     Robinson, A., D. Goldberg, and W. H. Lipscomb, A comparison of the performance
     of depth-integrated ice-dynamics solvers, to be submitted to The Cryosphere.

The idea is that for a given set of physics parameters and stress-balance approximation
(DIVA, L1L2, etc.), the script launches multiple CISM runs at a range of grid resolutions.
At each grid resolution, the script determines the maximum stable time step.
A run is deemed stable when the standard deviation of an initial small thickness perturbation
is reduced over the course of 100 time steps.  A run is unstable if the standard deviation
increases or if the model aborts (usually with a CFL violation).
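The stability criterion described above can be sketched as follows (assumed logic; the actual bookkeeping in stabilitySlab.py may differ):

```python
# Sketch of the stability criterion used to classify a run: stable if
# the std dev of the thickness perturbation decreased over the run,
# unstable if it grew (or if the model aborted, handled elsewhere).
import statistics

def run_is_stable(thk_start, thk_end):
    """Return True if the perturbation's standard deviation decreased."""
    return statistics.pstdev(thk_end) < statistics.pstdev(thk_start)

# Hypothetical example: a decaying perturbation is judged stable.
start = [1000.0 + p for p in (0.5, -0.4, 0.3, -0.2)]
end   = [1000.0 + p for p in (0.05, -0.04, 0.03, -0.02)]
print(run_is_stable(start, end))  # True
```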

I have run the stability script for several solvers (DIVA, L1L2, SIA, SSA) for each of
two physical cases: one with thick shearing ice and one with thin sliding ice.
Each suite of experiments runs in a few minutes on 4 Macbook cores for solvers other than BP.
As expected, DIVA and SSA are much more stable than L1L2 and SIA.

This and the previous commit correspond to the CISM code and scripts used for
the initial submission by Robinson et al. (2021).
Rewrote the slab README file to describe the new command line options for runSlab.py,
and the new script stabilitySlab.py.
This commit introduces new algorithms for computing the 3D velocity field
in the L1L2 solver.  Also, I fixed a bug so that the L1L2 solver works
correctly for n_glen = 1.

The goal was to improve the stability of the L1L2 solver in slab tests.
At fine grid resolution (< 200 m), the solver is unstable even for very small dt,
instead of following the stability limits of an SIA solver (as is the case
for Yelmo).

The default method (method 1) for the 3D velocity computes most quantities on the
staggered grid and is unchanged.  There are two new methods:
* Method 2 follows the Yelmo algorithm as closely as possible, with some
  quantities on cell edges and some on vertices.  Unlike Yelmo, there is
  necessarily an averaging of uvel and vvel from edges to corners at the end,
  since CISM assumes B-grid velocities.
* Method 3 moves all intermediate calculations to cell edges, with a final
  averaging to vertices at the end.

To support the new methods, there is a new subroutine, glissade_average_to_edges,
in module glissade_grid_operators.

Both new methods give results similar to method 1. Neither improves stability
at very high resolution.  This suggests that the stability of Yelmo might
result from the overall C-grid velocity scheme rather than details of
the 3D velocity computation.

For each method, there is now an option to exclude the membrane stresses
in the tau_xz and tau_yz terms that appear in the vertical integration factor.
When these stresses are excluded, the stability curve for L1L2 solver is close
to that of an SIA solver, like Yelmo.
This commit includes the following changes:
* I fixed some typos in the README file for the slab case.
* In runSlab.py, I added the option to use the local SIA solver (contained
  in module glissade_velo_sia.F90).
* I added some code which can apply an initial sinusoidal perturbation
  of wavelength 2*dx, instead of a random Gaussian perturbation.
  This is useful for getting reproducible results.
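The sinusoidal perturbation of wavelength 2*dx mentioned in the last bullet can be sketched as follows (assumed form, not the actual runSlab.py code; a 2*dx wavelength means the perturbation's sign alternates at every grid cell):

```python
# Minimal 1D sketch of a sinusoidal perturbation of wavelength 2*dx,
# sampled at cell centers x = (i + 0.5)*dx; values are hypothetical.
import math

nx = 8
dx = 100.0                     # grid spacing (m), hypothetical
h0, dh = 1000.0, 0.1           # base thickness and amplitude (m), hypothetical

# sin(2*pi*x / (2*dx)) at x = (i + 0.5)*dx reduces to sin(pi*(i + 0.5)),
# which alternates between +1 and -1 from cell to cell.
thk = [h0 + dh * math.sin(math.pi * (i + 0.5)) for i in range(nx)]
```

Unlike the random Gaussian perturbation, this deterministic pattern gives reproducible results from run to run.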
This commit adds a hybrid velocity solver as described by Robinson et al. (TC, 2021).
The solver first computes SIA velocities using the local SIA solver
(which_ho_approx = -1) with zero basal slip, then computes SSA velocities
using the Glissade higher-order SSA solver (which_ho_approx = 1), and
finally adds the two solutions.

The new logic is in module glissade_velo.  To use the hybrid solver,
set which_ho_approx = HO_APPROX_HYBRID = 5 in the config file.

For the slab test, the hybrid solver has the expected stability properties,
aligned with SSA at coarse resolution and SIA at fine resolution.
Answers are as expected for a dome test.
This commit streamlines the 3D velocity subroutine for L1L2 and makes
a small but important change in the algorithm.

The change is to evaluate the membrane stresses in tau_xz and tau_yz
at layer midpoints instead of lower layer boundaries, consistent
with where the SIA terms are evaluated.
I found that a small vertical offset can disrupt the balance between these
two terms and make the L1L2 solver unstable for the slab problem at resolutions
finer than ~200 m.  With the fix, the L1L2 stability curve is parallel
to the SIA stability curve at fine resolution, with L1L2 being slightly
more stable than SIA.

I also removed the alternate L1L2 discretization methods that were added
in a recent commit.  The alternate strategies evaluated some terms
at edges instead of vertices, and did not change the results significantly.
With this commit, all terms are evaluated at either cell centers or vertices.

This is the code version used for the slab tests shown for CISM
in Section 3 of the paper by Robinson, Goldberg & Lipscomb (2021, TC).
For these tests, I temporarily changed the energy conservation criterion
in glissade_therm.F90 to avoid false non-conservation alarms when
running at very fine resolution.  See the comments in that module.

This commit is answer-changing for L1L2, in a good way.
This commit includes a number of changes to update from python2 to python3,
following the examples from other test cases.

Also updated some comments, e.g. with the final citation for Robinson et al. (2022).

I verified that the run, plot, and stability scripts work on Cheyenne in a python3 environment.
whlipscomb and others added 12 commits November 5, 2025 18:51
This commit sets most of the verbose variables (used for debugging) to false in preparation for
bringing the code to the main branch. By default, CISM writes some general information at initialization.
Then at runtime, CISM writes one message per timestep and reports whether the velocity solver has converged.

I also fixed a scale issue with the fill values for usfc_obs and vsfc_obs.
At some point, we might want a more graceful treatment of fill values in the input data.
Lipscomb/smb cleanup.v2
This branch began with some cleanup and changes to CISM's surface mass balance options, and then absorbed some separate work going on at the same time.
@mvertens

@mpetrini-norce - thanks for this! Once we have a PR to the CISM_Wrapper, I'll fire off testing, and if that works I'll accept both PRs.

@mpetrini-norce
Collaborator Author

@mpetrini-norce - thanks for this! Once we have a PR to the CISM_Wrapper, I'll fire off testing, and if that works I'll accept both PRs.

Great, thanks! Please let me know if you need any help or info.

@hgoelzer
Collaborator

Thanks, this is what we need.
I do not fully understand why the commit history goes back 3 years. But if this is what we need to get up-to-date with our own dev and closer to ESCOMP, then go ahead.

@mvdebolskiy
Collaborator

@mpetrini-norce this can be checked out in stand-alone mode if you clone CISM-wrapper first, run ./bin/git-fleximod update, and then check out this branch in source_cism after adding your remote. See NorESMhub/CISM-wrapper#16

@mpetrini-norce
Collaborator Author

@mpetrini-norce this can be checked out in stand-alone mode if you clone CISM-wrapper first, run ./bin/git-fleximod update, and then check out this branch in source_cism after adding your remote. See NorESMhub/CISM-wrapper#16

Hi Matvey,
so you're suggesting that this merge is not needed anymore?

@mvdebolskiy
Collaborator

@mpetrini-norce I've only made the wrapper PR that has to go together with this PR. It's based on https://github.com/mpetrini-norce/CISM-wrapper/tree/wrap_ocean_forcing_coupled

@mvertens mvertens removed the request for review from mvdebolskiy May 4, 2026 08:31
@mvertens mvertens merged commit 4a59bba into NorESMhub:main May 4, 2026
@mvertens

mvertens commented May 4, 2026

@mpetrini-norce - I merged this and then realized that this was a PR to branch main in our fork rather than noresm. We should be doing the PR to noresm to distinguish our code from that in ESCOMP and that should be our default branch.
I'll open a new PR with that change.

8 participants