
Add PQC TLS audit tests for CNV endpoints #4162

Open
OhadRevah wants to merge 3 commits into RedHatQE:main from OhadRevah:add-pqc-tls-audit-cnv-endpoints

Conversation


OhadRevah (Contributor) commented Mar 16, 2026

Part of: https://redhat.atlassian.net/browse/CNV-74453

Add tests to verify post-quantum cryptography (PQC) TLS support across CNV service endpoints. Tests check node OpenSSL PQC group availability, classical fallback behavior when PQC groups are offered, and handshake rejection when only PQC groups are used.

New utility functions added for PQC-specific openssl s_client command composition, TLS group inspection, and Peer Temp Key parsing.
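
For illustration, here is a minimal sketch of how such a probe could be composed and parsed. The names (compose_openssl_pqc_command, parse_peer_temp_key, the PQC_GROUPS values) are hypothetical stand-ins, not the exact utilities this PR adds:

import re

# Hybrid PQC key-exchange groups as recent OpenSSL builds name them (assumed available on the node).
PQC_GROUPS = ["X25519MLKEM768", "SecP256r1MLKEM768"]


def compose_openssl_pqc_command(host: str, port: int, groups: list[str]) -> str:
    # -groups restricts the key-exchange groups offered in the ClientHello;
    # the leading echo closes stdin so s_client exits right after the handshake.
    return f"echo | openssl s_client -connect {host}:{port} -groups {':'.join(groups)} 2>/dev/null"


def parse_peer_temp_key(s_client_output: str) -> str | None:
    # On TLS 1.3 connections, s_client reports the negotiated group,
    # e.g. "Peer Temp Key: X25519MLKEM768, ...".
    match = re.search(r"Peer Temp Key:\s*([^,\n]+)", s_client_output)
    return match.group(1).strip() if match else None

Classical fallback (a non-PQC group negotiated when PQC groups are offered alongside classical ones) versus outright handshake rejection (when only PQC groups are allowed) can then be distinguished from the parsed group name and the command's exit status.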

assisted by: claude code claude-opus-4-6
Signed-off-by: Ohad <orevah@redhat.com>

Short description:
More details:
What this PR does / why we need it:
Which issue(s) this PR fixes:
Special notes for reviewer:
jira-ticket:

https://redhat.atlassian.net/browse/CNV-81812

Summary by CodeRabbit

  • Tests
    • Added comprehensive TLS crypto-policy and PQC audit tests for virtualization endpoints, including node capability checks, per-service PQC/TLS probing, tiered tests, and selective skips for known exceptions.
    • Added tests validating that a Modern TLS profile propagates to services (enforces TLS1.3, blocks TLS1.2) and temporary network-policy handling for console-plugin reachability.
  • Chores
    • Expanded test fixtures and constants to support feature-gating, AAQ, TLS profile switching, worker-side probing, and PQC detection.


coderabbitai Bot commented Mar 16, 2026

📝 Walkthrough

Adds PQC probing and Modern TLS profile tests for CNV services: constants (AAQ, PQC/TLS constants), utilities for OpenSSL-based TLS/PQC probes, pytest fixtures for environment and per-service data collection, and two test modules validating PQC negotiation and TLS profile propagation.

Changes

PQC and Modern TLS Testing

  • Constants and Contracts (tests/install_upgrade_operators/crypto_policy/constants.py): Adds AAQ to the managed CRs, includes AAQ in the expected TLS policy mappings, and defines TLS_MODERN_PROFILE, TLS version constants, virt-template deployment names, PQC group identifiers, and OpenSSL indicators.
  • TLS and PQC Utilities (tests/install_upgrade_operators/crypto_policy/utils.py): Filters observed crypto-policy keys before diffing; waits for cluster operator/HCO stabilization after the apiserver TLS patch; adds check_service_accepts_tls_version, get_node_available_tls_groups, compose_openssl_pqc_command, and get_services_pqc_status (see the sketch after this list).
  • Fixture Infrastructure (tests/install_upgrade_operators/crypto_policy/conftest.py): Adds fixtures for worker node selection, Template feature-gate enablement (waiting for virt-template deployments), CNV service discovery including virt-template services, AAQ enablement, modern_tls_profile_applied with stabilization waits and revert, node TLS group discovery, a worker_exec helper, a temporary NetworkPolicy for the console-plugin TLS port, and pqc_status_by_service for per-service PQC status collection.
  • PQC Validation Tests (tests/install_upgrade_operators/crypto_policy/test_pqc_tls_audit.py): New tests for node-level PQC group support and per-service PQC negotiation validation, with FIPS-aware assertions and conditional skips for known issues.
  • TLS Profile Propagation Test (tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py): New test verifying that a Modern apiserver TLS profile causes CNV services to reject TLS 1.2 and accept TLS 1.3, using per-service probes.
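
Below is a rough, self-contained sketch of the per-service version probing referenced in the utilities entry above. It uses subprocess directly rather than the repository's pod-exec helper, and the success heuristic (checking for a failed-handshake marker in s_client output) is an assumption, not the exact implementation:

import subprocess


def check_service_accepts_tls_version(host: str, port: int, tls_flag: str) -> bool:
    # tls_flag forces a single protocol version, e.g. "-tls1_2" or "-tls1_3".
    result = subprocess.run(
        f"echo | openssl s_client -connect {host}:{port} {tls_flag} 2>&1",
        shell=True,
        capture_output=True,
        text=True,
    )
    # A rejected version surfaces as a failed handshake ("Cipher is (NONE)") and a nonzero exit code.
    return result.returncode == 0 and "Cipher is (NONE)" not in result.stdout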

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 63.16%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (4 passed)

  • Title check (✅ Passed): the PR title clearly and specifically describes the main change (adding PQC TLS audit tests for CNV endpoints) and aligns with the actual changeset.
  • Description check (✅ Passed): the PR description gives substantive details about what is being added (PQC TLS support tests), why it matters (verifying crypto support across CNV endpoints), and links to tracking issues, though the template sections are not strictly followed.
  • Linked Issues check (✅ Passed): skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): skipped because no linked issues were found for this pull request.


@openshift-virtualization-qe-bot-3

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: A tracking issue is created for this PR and will be closed when the PR is merged or closed
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified
  • Labels: Enabled categories: branch, can-be-merged, cherry-pick, has-conflicts, hold, needs-rebase, size, verified, wip

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status
  • /reprocess - Trigger complete PR workflow reprocessing (useful if webhook failed or configuration changed)
  • /regenerate-welcome - Regenerate this welcome message

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest verify-bugs-are-open - verify-bugs-are-open
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. LGTM Count: Minimum 2 /lgtm from reviewers
  3. Status Checks: All required status checks must pass
  4. No Blockers: No wip, hold, has-conflicts labels and PR must be mergeable (no conflicts)
  5. Verified: PR must be marked as verified

📊 Review Process

Approvers and Reviewers

Approvers:

  • dshchedr
  • myakove
  • rnetser
  • vsibirsk

Reviewers:

  • OhadRevah
  • RoniKishner
  • albarker-rh
  • dshchedr
  • hmeir
  • rlobillo
  • rnetser
  • vsibirsk

Available Labels
  • hold
  • verified
  • wip
  • lgtm
  • approve

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is automatically removed on each new commit
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

@openshift-virtualization-qe-bot-3

/retest all

Auto-triggered: Files in this PR were modified by merged PR #4147.

Overlapping files

tests/conftest.py


codecov Bot commented Mar 24, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.67%. Comparing base (b4ad2e0) to head (0a93d5c).
⚠️ Report is 134 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4162      +/-   ##
==========================================
+ Coverage   98.63%   98.67%   +0.03%     
==========================================
  Files          25       25              
  Lines        2420     2486      +66     
==========================================
+ Hits         2387     2453      +66     
  Misses         33       33              
Flag Coverage Δ
utilities 98.67% <100.00%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.


@openshift-virtualization-qe-bot-3

/retest all

Auto-triggered: Files in this PR were modified by merged PR #4267.

Overlapping files

utilities/constants.py

@openshift-virtualization-qe-bot-3

/retest all

Auto-triggered: Files in this PR were modified by merged PR #4365.

Overlapping files

tests/conftest.py

Signed-off-by: Ohad <orevah@redhat.com>
@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27667

coderabbitai Bot previously approved these changes May 7, 2026
@openshift-virtualization-qe-bot-3

Clean rebase detected — no code changes compared to previous head (b332955).
The following labels were preserved: commented-OhadRevah, commented-coderabbitai[bot].

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27669

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27670

coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
tests/install_upgrade_operators/crypto_policy/utils.py (1)

271-280: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

HIGH: Stabilize the apply phase before yielding from this context manager.

Right now callers enter the with update_apiserver_crypto_policy(...) body immediately after patching the APIServer, while the waits here run only during teardown. That leaves updated_api_server_crypto_policy probing mid-rollout, and modern_tls_profile_applied only works because it duplicates the apply-side waits outside the context manager.

♻️ Proposed fix
 @contextmanager
 def update_apiserver_crypto_policy(
     admin_client,
     hco_namespace,
     apiserver,
     tls_spec,
 ):
     with ResourceEditor(
         patches={apiserver: {"spec": {TLS_SECURITY_PROFILE: tls_spec}}},
     ):
+        wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN)
+        wait_for_hco_conditions(
+            admin_client=admin_client,
+            hco_namespace=hco_namespace,
+            list_dependent_crs_to_check=MANAGED_CRS_LIST,
+        )
         yield
     wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN)
     wait_for_hco_conditions(
         admin_client=admin_client,
         hco_namespace=hco_namespace,
         list_dependent_crs_to_check=MANAGED_CRS_LIST,
     )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/install_upgrade_operators/crypto_policy/utils.py` around lines 271 -
280, The context manager currently yields immediately after applying the patch
which lets callers run during the rollout; move the cluster stabilization and
HCO condition waits to run before yielding so the apply phase completes first:
after using ResourceEditor with patches={apiserver: {"spec":
{TLS_SECURITY_PROFILE: tls_spec}}}, call
wait_for_cluster_operator_stabilize(admin_client=admin_client,
wait_timeout=TIMEOUT_40MIN) and
wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace,
list_dependent_crs_to_check=MANAGED_CRS_LIST) before the yield so callers
operate against a stabilized API server rollout.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tests/install_upgrade_operators/crypto_policy/constants.py`:
- Line 4: The MANAGED_CRS_LIST constant is missing the AAQ CR that you imported
and added expectations for; update the MANAGED_CRS_LIST definition to include
the AAQ symbol so the AAQ crypto policy tests are iterated and exercised (locate
the MANAGED_CRS_LIST in the same constants.py and append/include AAQ alongside
the other CR classes referenced there).

In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 300-305: The current except block in the TLS check catches
ExecOnPodError from ExecCommandOnPod.exec and returns False, which incorrectly
maps an infrastructure failure to a protocol rejection; instead, change the
handler in the function using ExecCommandOnPod(utility_pods=utility_pods,
node=node).exec to re-raise the ExecOnPodError (or wrap it with additional
context mentioning service.instance.metadata.name and tls_version) rather than
returning False or logging a warning, so callers can distinguish pod-exec
failures from true TLS rejection results.

---

Duplicate comments:
In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 271-280: The context manager currently yields immediately after
applying the patch which lets callers run during the rollout; move the cluster
stabilization and HCO condition waits to run before yielding so the apply phase
completes first: after using ResourceEditor with patches={apiserver: {"spec":
{TLS_SECURITY_PROFILE: tls_spec}}}, call
wait_for_cluster_operator_stabilize(admin_client=admin_client,
wait_timeout=TIMEOUT_40MIN) and
wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace,
list_dependent_crs_to_check=MANAGED_CRS_LIST) before the yield so callers
operate against a stabilized API server rollout.


📥 Commits

Reviewing files that changed from the base of the PR and between ee8511b and 2fc27aa.

📒 Files selected for processing (5)
  • tests/install_upgrade_operators/crypto_policy/conftest.py
  • tests/install_upgrade_operators/crypto_policy/constants.py
  • tests/install_upgrade_operators/crypto_policy/test_pqc_tls_audit.py
  • tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py
  • tests/install_upgrade_operators/crypto_policy/utils.py

@OhadRevah

/build-and-push-container

@openshift-virtualization-qe-bot-4

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published

coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (2)
tests/install_upgrade_operators/crypto_policy/utils.py (1)

300-306: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

HIGH: Don’t map pod-exec infrastructure failures to TLS rejection (False).

False should mean “TLS rejected.” ExecOnPodError means probe execution failed, which is a different failure class and should fail fast so callers can distinguish infra issues from protocol behavior.

Suggested fix
-    except ExecOnPodError:
+    except ExecOnPodError as error:
         service_name = service.instance.metadata.name
-        LOGGER.warning(f"Service {service_name} is unreachable during TLS {tls_version} check, treating as rejected")
-        return False
+        error_message = (
+            f"Failed TLS {tls_version} probe for service {service_name}: {error}"
+        )
+        LOGGER.error(error_message)
+        raise RuntimeError(error_message) from error

As per coding guidelines: "Never silently swallow exceptions. At minimum, log the error before continuing." and "Do not use defensive programming—fail-fast and don't hide bugs with fake defaults."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/install_upgrade_operators/crypto_policy/utils.py` around lines 300 -
306, The current try/except around ExecCommandOnPod.exec maps ExecOnPodError to
a False TLS result, conflating infra failures with protocol rejection; change
the handler in the block that calls ExecCommandOnPod.exec so that on
ExecOnPodError you log the error (include service.instance.metadata.name and
tls_version) and then re-raise the exception (or raise a wrapped exception)
instead of returning False, leaving the final "return tls_version in output"
path to represent actual TLS rejection; ensure you reference ExecCommandOnPod,
ExecOnPodError, LOGGER, service.instance.metadata.name, tls_version and output
when making the change.
tests/install_upgrade_operators/crypto_policy/constants.py (1)

13-14: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

HIGH: AAQ is in expected-policy assertions but still excluded from the default managed CR iteration.

CRYPTO_POLICY_EXPECTED_DICT now includes AAQ, but MANAGED_CRS_LIST still omits it and a separate MANAGED_CRS_LIST_WITH_AAQ was introduced. Any code path still iterating MANAGED_CRS_LIST can silently skip AAQ policy validation/reconcile checks.

Suggested minimal fix
-MANAGED_CRS_LIST = [KubeVirt, CDI, NetworkAddonsConfig, SSP]
-MANAGED_CRS_LIST_WITH_AAQ = [*MANAGED_CRS_LIST, AAQ]
+MANAGED_CRS_LIST = [KubeVirt, CDI, NetworkAddonsConfig, SSP, AAQ]

Also applies to: 68-92

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/install_upgrade_operators/crypto_policy/constants.py` around lines 13 -
14, MANAGED_CRS_LIST currently omits AAQ while CRYPTO_POLICY_EXPECTED_DICT
includes it, causing any logic iterating MANAGED_CRS_LIST to skip AAQ; fix by
adding AAQ to the canonical list (remove the special-case
MANAGED_CRS_LIST_WITH_AAQ or make that the single source) so all iterations use
the same set; update references to use MANAGED_CRS_LIST (or rename to
MANAGED_CRS) and remove duplicated list definitions to ensure AAQ is included
everywhere (check usage sites that iterate MANAGED_CRS_LIST,
MANAGED_CRS_LIST_WITH_AAQ, and CRYPTO_POLICY_EXPECTED_DICT).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tests/install_upgrade_operators/crypto_policy/conftest.py`:
- Around line 146-160: The fixtures enabled_template_feature_gate, enabled_aaq,
and console_plugin_test_network_policy are currently session-scoped and modify
cluster-wide state; change their pytest fixture scope to a narrower scope
(remove scope="session" or set scope="function" / default) so each test gets its
own setup/teardown, and ensure their existing yield-based teardown (e.g., the
update_hco_annotations context manager in enabled_template_feature_gate and
equivalent cleanup code in enabled_aaq and console_plugin_test_network_policy)
runs per test to revert cluster annotations/NetworkPolicy and wait for
deployments (Deployment.wait_for_replicas) after teardown.
- Around line 141-142: The helper worker_node currently just returns workers[0],
which yields a generic IndexError when the list is empty; replace that implicit
failure with an explicit assertion in worker_node that workers is non-empty
(e.g., assert workers, "no worker nodes available for test"), so CI failures are
fast and actionable and the contract is clear before attempting to index into
workers.

In
`@tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py`:
- Around line 16-19: The test method
TestTlsProfilePropagation.test_modern_profile_propagates_to_cnv_services is
missing a required pytest tier marker; add the appropriate marker (e.g.,
@pytest.mark.tier1 or @pytest.mark.tier2/@pytest.mark.tier3 as per test
criticality) directly above the method so the decorators include
@pytest.mark.polarion(...), @pytest.mark.tierX, and
@pytest.mark.usefixtures(...); ensure you choose the correct tier label
consistent with the repository's test classification.

---

Duplicate comments:
In `@tests/install_upgrade_operators/crypto_policy/constants.py`:
- Around line 13-14: MANAGED_CRS_LIST currently omits AAQ while
CRYPTO_POLICY_EXPECTED_DICT includes it, causing any logic iterating
MANAGED_CRS_LIST to skip AAQ; fix by adding AAQ to the canonical list (remove
the special-case MANAGED_CRS_LIST_WITH_AAQ or make that the single source) so
all iterations use the same set; update references to use MANAGED_CRS_LIST (or
rename to MANAGED_CRS) and remove duplicated list definitions to ensure AAQ is
included everywhere (check usage sites that iterate MANAGED_CRS_LIST,
MANAGED_CRS_LIST_WITH_AAQ, and CRYPTO_POLICY_EXPECTED_DICT).

In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 300-306: The current try/except around ExecCommandOnPod.exec maps
ExecOnPodError to a False TLS result, conflating infra failures with protocol
rejection; change the handler in the block that calls ExecCommandOnPod.exec so
that on ExecOnPodError you log the error (include service.instance.metadata.name
and tls_version) and then re-raise the exception (or raise a wrapped exception)
instead of returning False, leaving the final "return tls_version in output"
path to represent actual TLS rejection; ensure you reference ExecCommandOnPod,
ExecOnPodError, LOGGER, service.instance.metadata.name, tls_version and output
when making the change.


📥 Commits

Reviewing files that changed from the base of the PR and between 2fc27aa and 13e8d35.

📒 Files selected for processing (4)
  • tests/install_upgrade_operators/crypto_policy/conftest.py
  • tests/install_upgrade_operators/crypto_policy/constants.py
  • tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py
  • tests/install_upgrade_operators/crypto_policy/utils.py

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27673

@OhadRevah

/build-and-push-container

@openshift-virtualization-qe-bot-3

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published

@OhadRevah

/build-and-push-container

coderabbitai[bot]
coderabbitai Bot previously approved these changes May 7, 2026
@OhadRevah

/retest build-container

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27679

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27680

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27681

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27682

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27683

@openshift-virtualization-qe-bot-5

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published

Signed-off-by: Ohad <orevah@redhat.com>
coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/install_upgrade_operators/crypto_policy/conftest.py (1)

99-114: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

HIGH: updated_api_server_crypto_policy should wait for stabilization before yielding.

The fixture wraps update_apiserver_crypto_policy(), which applies the patch via ResourceEditor and yields, then only stabilizes during teardown (lines 275–280 in utils.py). Tests using this fixture probe cluster state while operators are still reconciling, causing flaky assertions on crypto policy propagation.

Compare to modern_tls_profile_applied() (lines 188–203 in conftest.py), which explicitly waits for operator and HCO stabilization before yielding. The pattern should be consistent.

Add the same waits to updated_api_server_crypto_policy() before the yield:

with update_apiserver_crypto_policy(...):
    wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN)
    wait_for_hco_conditions(
        admin_client=admin_client,
        hco_namespace=hco_namespace,
        list_dependent_crs_to_check=MANAGED_CRS_LIST,
    )
    yield

This ensures tests always see fully propagated state.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/install_upgrade_operators/crypto_policy/conftest.py` around lines 99 -
114, The fixture updated_api_server_crypto_policy applies
update_apiserver_crypto_policy but yields before waiting for reconciliation;
modify it so that inside the with update_apiserver_crypto_policy(...) block,
before yielding, call
wait_for_cluster_operator_stabilize(admin_client=admin_client,
wait_timeout=TIMEOUT_40MIN) and then
wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace,
list_dependent_crs_to_check=MANAGED_CRS_LIST) to ensure operator and HCO
stabilization occurs prior to yield.
♻️ Duplicate comments (1)
tests/install_upgrade_operators/crypto_policy/conftest.py (1)

146-175: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

HIGH: Session-scoped fixture chain still leaks cluster state across modules.

enabled_template_feature_gate, enabled_aaq, and console_plugin_test_network_policy all mutate shared HCO or namespace state, but they still live for the full pytest session. That means later modules can inherit modified feature gates / AAQ / NetworkPolicy state, and cnv_services_with_template plus pqc_status_by_service then cache results from that mutated state for the rest of the run. Narrow this chain to module so setup/teardown happens at the module boundary instead of end-of-suite.

♻️ Proposed fix
-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def enabled_template_feature_gate(admin_client, hco_namespace, hyperconverged_resource_scope_session):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def cnv_services_with_template(enabled_template_feature_gate, hco_namespace, admin_client):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def enabled_aaq(admin_client, hco_namespace, hyperconverged_resource_scope_session):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def console_plugin_test_network_policy(hco_namespace, admin_client):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def pqc_status_by_service(enabled_aaq, worker_exec, cnv_services_with_template, console_plugin_test_network_policy):

As per coding guidelines: "Fixture scope rules: use scope='module' for expensive setup in a test module ... Never use broader scope if fixture modifies state or creates per-test resources."

Also applies to: 178-185, 219-262

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/install_upgrade_operators/crypto_policy/conftest.py` around lines 146 -
175, The fixtures that mutate cluster/namespace state should not be
session-scoped; change the scope argument from "session" to "module" for
enabled_template_feature_gate and cnv_services_with_template (and likewise for
enabled_aaq, console_plugin_test_network_policy, and pqc_status_by_service
referenced in the comment) so setup/teardown occur at module boundaries instead
of the entire session; locate these fixtures by name in the file and update
their pytest.fixture(scope=...) decorator to scope="module" while keeping their
existing behavior (enter/exit context managers and yields) so teardown still
runs after each module.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 320-323: The current parsing of openssl TLS groups is wrong
because ExecCommandOnPod(...).exec(... ) output is newline-delimited but the
code uses split(":"), so replace the colon-split with a newline-aware split
(e.g., use splitlines() or split("\n")) and keep the existing strip() and filter
to remove empty entries; update the function that returns
node_available_tls_groups (the list comprehension that currently uses
split(":")) to use output.splitlines() (or output.strip().splitlines()) so each
group name like "SecP256r1MLKEM768" becomes its own list element.

---

Outside diff comments:
In `@tests/install_upgrade_operators/crypto_policy/conftest.py`:
- Around line 99-114: The fixture updated_api_server_crypto_policy applies
update_apiserver_crypto_policy but yields before waiting for reconciliation;
modify it so that inside the with update_apiserver_crypto_policy(...) block,
before yielding, call
wait_for_cluster_operator_stabilize(admin_client=admin_client,
wait_timeout=TIMEOUT_40MIN) and then
wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace,
list_dependent_crs_to_check=MANAGED_CRS_LIST) to ensure operator and HCO
stabilization occurs prior to yield.

---

Duplicate comments:
In `@tests/install_upgrade_operators/crypto_policy/conftest.py`:
- Around line 146-175: The fixtures that mutate cluster/namespace state should
not be session-scoped; change the scope argument from "session" to "module" for
enabled_template_feature_gate and cnv_services_with_template (and likewise for
enabled_aaq, console_plugin_test_network_policy, and pqc_status_by_service
referenced in the comment) so setup/teardown occur at module boundaries instead
of the entire session; locate these fixtures by name in the file and update
their pytest.fixture(scope=...) decorator to scope="module" while keeping their
existing behavior (enter/exit context managers and yields) so teardown still
runs after each module.
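
Concretely, the newline-aware parsing that the TLS-groups comment above asks for could look like this (a sketch; the one-group-per-line output format is taken from that comment):

def get_node_available_tls_groups(raw_output: str) -> list[str]:
    # `openssl list -tls-groups` prints one group per line, e.g. "SecP256r1MLKEM768",
    # so split on newlines rather than on ":".
    return [group.strip() for group in raw_output.splitlines() if group.strip()]


# Example with captured output from a worker node:
sample = "secp256r1\nX25519\nSecP256r1MLKEM768\nX25519MLKEM768\n"
assert get_node_available_tls_groups(sample) == ["secp256r1", "X25519", "SecP256r1MLKEM768", "X25519MLKEM768"]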


📥 Commits

Reviewing files that changed from the base of the PR and between 13e8d35 and 2a9efb9.

📒 Files selected for processing (5)
  • tests/install_upgrade_operators/crypto_policy/conftest.py
  • tests/install_upgrade_operators/crypto_policy/constants.py
  • tests/install_upgrade_operators/crypto_policy/test_pqc_tls_audit.py
  • tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py
  • tests/install_upgrade_operators/crypto_policy/utils.py

@OhadRevah

/build-and-push-container

@openshift-virtualization-qe-bot-2

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/27688


@pytest.fixture(scope="session")
def worker_node(workers):
    assert workers, "No worker nodes available for crypto policy tests"
Contributor

I think it's safe to assume that the cluster has worker nodes

client=admin_client,
)
deployment.wait_for_replicas()
yield
Contributor

what about cleanup?

@pytest.fixture(scope="session")
def enabled_template_feature_gate(admin_client, hco_namespace, hyperconverged_resource_scope_session):
    """Enables the Template feature gate via HCO annotation and waits for virt-template deployments."""
    with update_hco_annotations(
Contributor

The idea is to expose all hidden endpoints?

    if service.instance.spec.clusterIP not in (None, "", "None")
]
assert services_list, f"No services found in {hco_namespace.name}"
service_names = [svc.instance.metadata.name for svc in services_list]
Contributor

service.name?

services_list = [
    service
    for service in Service.get(namespace=hco_namespace.name, client=admin_client)
    if service.instance.spec.clusterIP not in (None, "", "None")
Contributor

Can it be "None"?

"""
for service_name, accepted in pqc_status_by_service.items():
with subtests.test(msg=service_name):
if service_name == HYPERCONVERGED_CLUSTER_CLI_DOWNLOAD:
Contributor

why continue and not xfail?

if jira_id := SERVICES_WITH_OPEN_BUGS.get(service_name):
    if is_jira_open(jira_id=jira_id):
        LOGGER.info(f"Skipping {service_name} — known bug: {jira_id}")
        continue
Contributor

ditto



@pytest.fixture()
def modern_tls_profile_applied(admin_client, hco_namespace, api_server, enabled_aaq):
Contributor

Do we want to update the API server, or the HCO?

"kubevirt-migration-prometheus": "CNV-83219",
}

pytestmark = pytest.mark.tier2
Contributor

it's done automatically

)
from tests.install_upgrade_operators.crypto_policy.utils import check_service_accepts_tls_version

pytestmark = pytest.mark.tier2
Contributor

ditto



@pytest.fixture(scope="session")
def worker_node(workers):
Collaborator

we have a fixture worker_node1; does it work for you?

return services_list


@pytest.fixture(scope="session")
Collaborator

Here and in many other fixtures, I don't think the "session" scope is correct.

Comment on lines +214 to +216
@pytest.fixture(scope="session")
def worker_exec(workers_utility_pods, worker_node):
return ExecCommandOnPod(utility_pods=workers_utility_pods, node=worker_node)
Collaborator

I don't understand why you need this fixture.
Use the exec function directly where you need it; this fixture does not make sense.
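
For example, something like this inline at the call site, reusing the helper and fixture names quoted above (a sketch, not the exact call signature):

# Build the pod-exec helper where the probe actually runs, instead of through a fixture.
pod_exec = ExecCommandOnPod(utility_pods=workers_utility_pods, node=worker_node)
output = pod_exec.exec(command="openssl version")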

Comment on lines +252 to +254
accepted = [name for name, status in results.items() if status is True]
rejected = [name for name, status in results.items() if status is False]
unreachable = [name for name, status in results.items() if status is None]
Collaborator

You are iterating over the results three times.

Rewrite the code to use a single for loop; see the sketch below.
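
For instance, a single pass over the same results mapping:

accepted, rejected, unreachable = [], [], []
for name, status in results.items():
    if status is True:
        accepted.append(name)
    elif status is False:
        rejected.append(name)
    else:  # status is None means the probe never reached the service
        unreachable.append(name)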

)


class TestPqcCnvEndpoints:
Collaborator

There is only one test under this class; why do you need it?

pytestmark = pytest.mark.tier2


class TestTlsProfilePropagation:
Collaborator

Same here, only one test under the class.
