Pace ships runnable examples, not a library. Every PR that adds or
modifies an example under python/, ansible/, or terraform/ must
include a populated Test Report in the PR body so reviewers can see
the example actually worked end-to-end against the target NetApp storage
system.
This document defines what to capture. The PR template (`.github/PULL_REQUEST_TEMPLATE.md`) ships the section ready to fill in.
Two layers, with clear ownership:
| Layer | Who runs it | What it checks |
|---|---|---|
| Static checks | CI (automatic) | Lint, format, syntax, secret scan, YAML parse |
| End-to-end run | Contributor | The example actually does what it claims, on a cluster |
Static checks are non-negotiable and gate the PR (see CONTRIBUTING.md). End-to-end evidence is contributor-supplied in the PR body. A reviewer will not approve a PR that lacks the Test Report.
Any of the following is acceptable. Just declare which one you used and
the exact ONTAP version (the output of `cluster show -fields version` or the
`/cluster` REST endpoint).
| Environment | Notes |
|---|---|
| ONTAP Simulator | Free; great for most provisioning examples |
| ONTAP Select | Software-defined; good for multi-node features |
| Real ONTAP cluster | 9.8 or newer (matches the API floor used by every example) |
| Cloud Volumes ONTAP | Acceptable; mention the cloud and instance type |
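Whichever environment you use, the version string comes from the same place. As a minimal sketch, this is one way to pull it out of a GET `/api/cluster` REST response body - the payload below is illustrative, not a captured run:

```python
# Sketch: extracting the ONTAP version string from a GET /api/cluster
# response body so it can be quoted in the Test Report. The "version"
# object (generation/major/minor/full) follows ONTAP's REST schema;
# the sample payload is made up for illustration.

def format_ontap_version(cluster_payload: dict) -> str:
    """Return a human-readable version from a /api/cluster response body."""
    v = cluster_payload["version"]
    # "full" carries the complete release banner; fall back to the
    # numeric triple when it is absent.
    return v.get("full") or f"{v['generation']}.{v['major']}.{v['minor']}"

# Illustrative response fragment (not captured from a real cluster):
sample = {"version": {"generation": 9, "major": 12, "minor": 1,
                      "full": "NetApp Release 9.12.1P4"}}

print(format_ontap_version(sample))  # NetApp Release 9.12.1P4
```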
If your change can only be exercised on hardware not available in the simulator (e.g., a feature requiring physical disks of a certain type), say so explicitly in the *Cannot run on a cluster?* section of the report so a reviewer with the right environment can run it.
Pick the style(s) your PR touches and capture the matching evidence.
**Python scripts**

- **First run** - execute the script with realistic args:

  ```text
  python <script>.py --svm <svm> --volume <vol> ...
  ```

  Capture stdout (10-50 lines is usually enough) and the exit code.
- **Re-run safety** - run the same command again. Either:
  - the script is idempotent and exits 0 with no destructive change, or
  - the script detects the existing state and reports it, or
  - if the script is intentionally not re-runnable, document that in the report so reviewers know what to expect.
- **Cleanup** - if the script created resources, show the commands used to remove them (often a separate teardown script or REST `DELETE` calls). One-shot read-only scripts can skip this.
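The re-run-safety expectation boils down to a check-then-create pattern. A minimal sketch, with a plain dict standing in for the cluster and made-up volume names - not code from any example in the repo:

```python
# Sketch of the "re-run safety" behavior the report should demonstrate:
# check for the existing resource before creating it, so a second run
# reports the state instead of failing. The `state` dict is a stand-in
# for the storage system; names and sizes are hypothetical.

def ensure_volume(state: dict, name: str, size_gb: int) -> str:
    """Create the volume if absent; report the existing state otherwise."""
    if name in state:
        return f"volume {name} already exists ({state[name]} GB), nothing to do"
    state[name] = size_gb
    return f"created volume {name} ({size_gb} GB)"

cluster = {}
print(ensure_volume(cluster, "vol_demo", 100))  # first run: creates
print(ensure_volume(cluster, "vol_demo", 100))  # re-run: reports, no change
```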
**Ansible playbooks**

- **First run** - run the playbook:

  ```text
  ansible-playbook -i inventory/hosts.yml <playbook>.yml
  ```

  Capture the PLAY RECAP line - `ok=`, `changed=`, `failed=`.
- **Idempotency** - re-run the same command immediately. The recap must report `changed=0`. This is the canonical Ansible idempotency proof. If the recap shows `changed > 0` on the second run, the playbook is not idempotent and should be fixed before review.
- **Teardown** - for provisioning playbooks, run the corresponding `state: absent` task or a teardown playbook and capture its recap. Read-only fact-gathering playbooks can skip this.
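If you want to check the recap mechanically rather than by eye, a small sketch that parses the counters out of a recap line (the host name and counts below are made up):

```python
import re

# Sketch: checking a second-run PLAY RECAP line for the changed=0
# idempotency proof. The ok=/changed=/failed= counters are standard
# Ansible recap output; the sample line is illustrative.

RECAP_RE = re.compile(r"ok=(\d+)\s+changed=(\d+)\s+.*failed=(\d+)")

def is_idempotent(recap_line: str) -> bool:
    """True when the recap reports changed=0 and failed=0."""
    m = RECAP_RE.search(recap_line)
    if m is None:
        raise ValueError("not a PLAY RECAP host line")
    ok, changed, failed = map(int, m.groups())
    return changed == 0 and failed == 0

line = "cluster-1.example.com : ok=7 changed=0 unreachable=0 failed=0 skipped=1"
print(is_idempotent(line))  # True
```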
**Terraform modules**

The full lifecycle is the test. Capture each step:

1. `terraform init`
2. `terraform plan` - should show the resources to be created.
3. `terraform apply` - capture the `Apply complete!` summary line.
4. `terraform plan` again - must report `No changes` (drift / idempotency proof).
5. `terraform destroy` - capture the `Destroy complete!` line.
6. `terraform plan` once more - should again report `No changes` (clean teardown proof).

For data-source-only modules (no resources), steps 1-3 plus the `Apply complete!` line are enough.
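The evidence chain amounts to checking each command's summary line. A sketch of that checklist - the summary strings are the ones Terraform prints, but the outputs below are abbreviated examples, not captured runs:

```python
# Sketch: the full-lifecycle evidence chain as a set of substring checks
# on each command's output. Sample outputs below are abbreviated and
# illustrative, not from a real module.

def lifecycle_ok(plan_out: str, apply_out: str, replan_out: str,
                 destroy_out: str, final_plan_out: str) -> bool:
    """True when all five pieces of evidence hold together."""
    return ("Plan:" in plan_out                     # resources to be created
            and "Apply complete!" in apply_out      # apply summary line
            and "No changes" in replan_out          # drift / idempotency proof
            and "Destroy complete!" in destroy_out  # teardown summary line
            and "No changes" in final_plan_out)     # clean-teardown proof

print(lifecycle_ok(
    "Plan: 2 to add, 0 to change, 0 to destroy.",
    "Apply complete! Resources: 2 added, 0 changed, 0 destroyed.",
    "No changes. Your infrastructure matches the configuration.",
    "Destroy complete! Resources: 2 destroyed.",
    "No changes. Your infrastructure matches the configuration."))  # True
```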
- **Length** - paste 10-50 lines per command. Truncate the middle with `# ... <N> lines elided ...` if needed.
- **Redaction** - replace cluster hostnames, serial numbers, license keys, IP addresses, and SVM names if they are sensitive. Use placeholders like `cluster-1.example.com`, `10.0.0.1`, `svm0`. Do not redact ONTAP version, return codes, counts, or error messages - those are the parts reviewers need.
- **Format** - wrap output in fenced code blocks with the `text` language tag for readability.
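The length and redaction rules are easy to script before pasting. A minimal sketch - the elision marker matches the format above, but the redaction regex is deliberately narrow (IPv4 addresses only) and would need extending for hostnames and serial numbers:

```python
import re

# Sketch: helpers for preparing captured output for a Test Report.
# truncate_middle keeps the head and tail of long output with the
# suggested elision marker; redact_ips swaps IPv4 addresses for the
# placeholder the guidelines suggest. Both are starting points only.

def truncate_middle(text: str, keep: int = 20) -> str:
    """Keep the first/last `keep` lines, eliding the middle with a marker."""
    lines = text.splitlines()
    if len(lines) <= 2 * keep:
        return text
    elided = len(lines) - 2 * keep
    return "\n".join(lines[:keep]
                     + [f"# ... {elided} lines elided ..."]
                     + lines[-keep:])

def redact_ips(text: str) -> str:
    """Replace IPv4 addresses with a placeholder address."""
    return re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "10.0.0.1", text)

print(redact_ips("mgmt LIF is 192.168.4.20, data LIF is 192.168.4.21"))
# mgmt LIF is 10.0.0.1, data LIF is 10.0.0.1
```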
If you genuinely cannot run the example end-to-end (no access to a suitable cluster, feature gated to hardware you do not have, etc.):
- Say so explicitly in the report.
- Describe the output you would expect.
- Tag the PR with the `needs-test-run` label so a maintainer with access can run it before merge.
This is an escape hatch, not the default path. PRs without end-to-end evidence and without a credible reason will be sent back.
A workflow checks the PR body for a populated Test Report and, when it
looks empty, applies a `needs-test-report` label and posts a sticky
comment pointing here. The check is informational - it does not fail
the build - but reviewers will not approve a PR carrying that label.