Add PQC TLS audit tests for CNV endpoints #4162
OhadRevah wants to merge 3 commits into RedHatQE:main from
Conversation
📝 Walkthrough
Adds PQC probing and Modern TLS profile tests for CNV services: constants (AAQ, PQC/TLS constants), utilities for OpenSSL-based TLS/PQC probes, pytest fixtures for environment and per-service data collection, and two test modules validating PQC negotiation and TLS profile propagation.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Report bugs in Issues

Welcome! 🎉 This pull request will be automatically processed with the following features:

🔄 Automatic Actions

📋 Available Commands
- PR Status Management
- Review & Approval
- Testing & Validation
- Container Operations
- Cherry-pick Operations
- Label Management

✅ Merge Requirements
This PR will be automatically approved when the following conditions are met:

📊 Review Process
Approvers and Reviewers
- Approvers:
- Reviewers:

Available Labels

💡 Tips

For more information, please refer to the project documentation or contact the maintainers.
Force-pushed from d85cc94 to 6273011
/retest all
Auto-triggered: files in this PR were modified by merged PR #4147. Overlapping files: tests/conftest.py
Force-pushed from 42dccbe to 1a28d6d
Force-pushed from 1a28d6d to 1e2130c
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #4162      +/-   ##
==========================================
+ Coverage   98.63%   98.67%   +0.03%
==========================================
  Files          25       25
  Lines        2420     2486      +66
==========================================
+ Hits         2387     2453      +66
  Misses         33       33
```

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
Force-pushed from 1e2130c to 3dc1309
/retest all
Auto-triggered: files in this PR were modified by merged PR #4267. Overlapping files: utilities/constants.py

/retest all
Auto-triggered: files in this PR were modified by merged PR #4365. Overlapping files: tests/conftest.py
Signed-off-by: Ohad <orevah@redhat.com>
D/S test

Clean rebase detected — no code changes compared to previous head (

D/S test

D/S test
Actionable comments posted: 2
♻️ Duplicate comments (1)
tests/install_upgrade_operators/crypto_policy/utils.py (1)
271-280: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
HIGH: Stabilize the apply phase before yielding from this context manager.

Right now callers enter the `with update_apiserver_crypto_policy(...)` body immediately after patching the `APIServer`, while the waits here run only during teardown. That leaves `updated_api_server_crypto_policy` probing mid-rollout, and `modern_tls_profile_applied` only works because it duplicates the apply-side waits outside the context manager.

♻️ Proposed fix

```diff
 @contextmanager
 def update_apiserver_crypto_policy(
     admin_client,
     hco_namespace,
     apiserver,
     tls_spec,
 ):
     with ResourceEditor(
         patches={apiserver: {"spec": {TLS_SECURITY_PROFILE: tls_spec}}},
     ):
+        wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN)
+        wait_for_hco_conditions(
+            admin_client=admin_client,
+            hco_namespace=hco_namespace,
+            list_dependent_crs_to_check=MANAGED_CRS_LIST,
+        )
         yield
         wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN)
         wait_for_hco_conditions(
             admin_client=admin_client,
             hco_namespace=hco_namespace,
             list_dependent_crs_to_check=MANAGED_CRS_LIST,
         )
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/install_upgrade_operators/crypto_policy/utils.py` around lines 271 - 280, The context manager currently yields immediately after applying the patch which lets callers run during the rollout; move the cluster stabilization and HCO condition waits to run before yielding so the apply phase completes first: after using ResourceEditor with patches={apiserver: {"spec": {TLS_SECURITY_PROFILE: tls_spec}}}, call wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN) and wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace, list_dependent_crs_to_check=MANAGED_CRS_LIST) before the yield so callers operate against a stabilized API server rollout.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tests/install_upgrade_operators/crypto_policy/constants.py`:
- Line 4: The MANAGED_CRS_LIST constant is missing the AAQ CR that you imported
and added expectations for; update the MANAGED_CRS_LIST definition to include
the AAQ symbol so the AAQ crypto policy tests are iterated and exercised (locate
the MANAGED_CRS_LIST in the same constants.py and append/include AAQ alongside
the other CR classes referenced there).
In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 300-305: The current except block in the TLS check catches
ExecOnPodError from ExecCommandOnPod.exec and returns False, which incorrectly
maps an infrastructure failure to a protocol rejection; instead, change the
handler in the function using ExecCommandOnPod(utility_pods=utility_pods,
node=node).exec to re-raise the ExecOnPodError (or wrap it with additional
context mentioning service.instance.metadata.name and tls_version) rather than
returning False or logging a warning, so callers can distinguish pod-exec
failures from true TLS rejection results.
---
Duplicate comments:
In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 271-280: The context manager currently yields immediately after
applying the patch which lets callers run during the rollout; move the cluster
stabilization and HCO condition waits to run before yielding so the apply phase
completes first: after using ResourceEditor with patches={apiserver: {"spec":
{TLS_SECURITY_PROFILE: tls_spec}}}, call
wait_for_cluster_operator_stabilize(admin_client=admin_client,
wait_timeout=TIMEOUT_40MIN) and
wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace,
list_dependent_crs_to_check=MANAGED_CRS_LIST) before the yield so callers
operate against a stabilized API server rollout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 38f777fc-93df-4f1e-a225-72027e949b5d
📒 Files selected for processing (5)
- tests/install_upgrade_operators/crypto_policy/conftest.py
- tests/install_upgrade_operators/crypto_policy/constants.py
- tests/install_upgrade_operators/crypto_policy/test_pqc_tls_audit.py
- tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py
- tests/install_upgrade_operators/crypto_policy/utils.py
/build-and-push-container

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published
Actionable comments posted: 3
♻️ Duplicate comments (2)
tests/install_upgrade_operators/crypto_policy/utils.py (1)
300-306: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
HIGH: Don't map pod-exec infrastructure failures to TLS rejection (`False`).

`False` should mean "TLS rejected." `ExecOnPodError` means probe execution failed, which is a different failure class and should fail fast so callers can distinguish infra issues from protocol behavior.

Suggested fix

```diff
-    except ExecOnPodError:
+    except ExecOnPodError as error:
         service_name = service.instance.metadata.name
-        LOGGER.warning(f"Service {service_name} is unreachable during TLS {tls_version} check, treating as rejected")
-        return False
+        error_message = (
+            f"Failed TLS {tls_version} probe for service {service_name}: {error}"
+        )
+        LOGGER.error(error_message)
+        raise RuntimeError(error_message) from error
```

As per coding guidelines: "Never silently swallow exceptions. At minimum, log the error before continuing." and "Do not use defensive programming—fail-fast and don't hide bugs with fake defaults."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/install_upgrade_operators/crypto_policy/utils.py` around lines 300 - 306, The current try/except around ExecCommandOnPod.exec maps ExecOnPodError to a False TLS result, conflating infra failures with protocol rejection; change the handler in the block that calls ExecCommandOnPod.exec so that on ExecOnPodError you log the error (include service.instance.metadata.name and tls_version) and then re-raise the exception (or raise a wrapped exception) instead of returning False, leaving the final "return tls_version in output" path to represent actual TLS rejection; ensure you reference ExecCommandOnPod, ExecOnPodError, LOGGER, service.instance.metadata.name, tls_version and output when making the change.
tests/install_upgrade_operators/crypto_policy/constants.py (1)
13-14: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
HIGH: AAQ is in expected-policy assertions but still excluded from the default managed CR iteration.

`CRYPTO_POLICY_EXPECTED_DICT` now includes `AAQ`, but `MANAGED_CRS_LIST` still omits it and a separate `MANAGED_CRS_LIST_WITH_AAQ` was introduced. Any code path still iterating `MANAGED_CRS_LIST` can silently skip AAQ policy validation/reconcile checks.

Suggested minimal fix

```diff
-MANAGED_CRS_LIST = [KubeVirt, CDI, NetworkAddonsConfig, SSP]
-MANAGED_CRS_LIST_WITH_AAQ = [*MANAGED_CRS_LIST, AAQ]
+MANAGED_CRS_LIST = [KubeVirt, CDI, NetworkAddonsConfig, SSP, AAQ]
```

Also applies to: 68-92
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/install_upgrade_operators/crypto_policy/constants.py` around lines 13 - 14, MANAGED_CRS_LIST currently omits AAQ while CRYPTO_POLICY_EXPECTED_DICT includes it, causing any logic iterating MANAGED_CRS_LIST to skip AAQ; fix by adding AAQ to the canonical list (remove the special-case MANAGED_CRS_LIST_WITH_AAQ or make that the single source) so all iterations use the same set; update references to use MANAGED_CRS_LIST (or rename to MANAGED_CRS) and remove duplicated list definitions to ensure AAQ is included everywhere (check usage sites that iterate MANAGED_CRS_LIST, MANAGED_CRS_LIST_WITH_AAQ, and CRYPTO_POLICY_EXPECTED_DICT).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tests/install_upgrade_operators/crypto_policy/conftest.py`:
- Around line 146-160: The fixtures enabled_template_feature_gate, enabled_aaq,
and console_plugin_test_network_policy are currently session-scoped and modify
cluster-wide state; change their pytest fixture scope to a narrower scope
(remove scope="session" or set scope="function" / default) so each test gets its
own setup/teardown, and ensure their existing yield-based teardown (e.g., the
update_hco_annotations context manager in enabled_template_feature_gate and
equivalent cleanup code in enabled_aaq and console_plugin_test_network_policy)
runs per test to revert cluster annotations/NetworkPolicy and wait for
deployments (Deployment.wait_for_replicas) after teardown.
- Around line 141-142: The helper worker_node currently just returns workers[0],
which yields a generic IndexError when the list is empty; replace that implicit
failure with an explicit assertion in worker_node that workers is non-empty
(e.g., assert workers, "no worker nodes available for test"), so CI failures are
fast and actionable and the contract is clear before attempting to index into
workers.
In
`@tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py`:
- Around line 16-19: The test method
TestTlsProfilePropagation.test_modern_profile_propagates_to_cnv_services is
missing a required pytest tier marker; add the appropriate marker (e.g.,
`@pytest.mark.tier1`, `@pytest.mark.tier2`, or `@pytest.mark.tier3` as per test
criticality) directly above the method so the decorators include
`@pytest.mark.polarion`(...), `@pytest.mark.tierX`, and
`@pytest.mark.usefixtures`(...); ensure you choose the correct tier label
consistent with the repository's test classification.
---
Duplicate comments:
In `@tests/install_upgrade_operators/crypto_policy/constants.py`:
- Around line 13-14: MANAGED_CRS_LIST currently omits AAQ while
CRYPTO_POLICY_EXPECTED_DICT includes it, causing any logic iterating
MANAGED_CRS_LIST to skip AAQ; fix by adding AAQ to the canonical list (remove
the special-case MANAGED_CRS_LIST_WITH_AAQ or make that the single source) so
all iterations use the same set; update references to use MANAGED_CRS_LIST (or
rename to MANAGED_CRS) and remove duplicated list definitions to ensure AAQ is
included everywhere (check usage sites that iterate MANAGED_CRS_LIST,
MANAGED_CRS_LIST_WITH_AAQ, and CRYPTO_POLICY_EXPECTED_DICT).
In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 300-306: The current try/except around ExecCommandOnPod.exec maps
ExecOnPodError to a False TLS result, conflating infra failures with protocol
rejection; change the handler in the block that calls ExecCommandOnPod.exec so
that on ExecOnPodError you log the error (include service.instance.metadata.name
and tls_version) and then re-raise the exception (or raise a wrapped exception)
instead of returning False, leaving the final "return tls_version in output"
path to represent actual TLS rejection; ensure you reference ExecCommandOnPod,
ExecOnPodError, LOGGER, service.instance.metadata.name, tls_version and output
when making the change.
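For illustration, a minimal sketch of the marker stack the tier-marker comment above asks for; the polarion ID is a placeholder, and the fixture name is the one discussed elsewhere in this review:

```python
import pytest


class TestTlsProfilePropagation:
    @pytest.mark.polarion("CNV-00000")  # placeholder ID, not the real one
    @pytest.mark.tier2  # tier chosen per the repository's test classification
    @pytest.mark.usefixtures("modern_tls_profile_applied")
    def test_modern_profile_propagates_to_cnv_services(self):
        ...
```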
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 6d09082a-70f7-4cb8-b2d9-d9a153297c9d
📒 Files selected for processing (4)
- tests/install_upgrade_operators/crypto_policy/conftest.py
- tests/install_upgrade_operators/crypto_policy/constants.py
- tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py
- tests/install_upgrade_operators/crypto_policy/utils.py
D/S test

/build-and-push-container

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published

/build-and-push-container

/retest build-container

D/S test

D/S test

D/S test

D/S test

D/S test

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published
Signed-off-by: Ohad <orevah@redhat.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/install_upgrade_operators/crypto_policy/conftest.py (1)
99-114: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
HIGH: `updated_api_server_crypto_policy` should wait for stabilization before yielding.

The fixture wraps `update_apiserver_crypto_policy()`, which applies the patch via `ResourceEditor` and yields, then only stabilizes during teardown (lines 275–280 in utils.py). Tests using this fixture probe cluster state while operators are still reconciling, causing flaky assertions on crypto policy propagation.

Compare to `modern_tls_profile_applied()` (lines 188–203 in conftest.py), which explicitly waits for operator and HCO stabilization before yielding. The pattern should be consistent.

Add the same waits to `updated_api_server_crypto_policy()` before the yield:

```python
with update_apiserver_crypto_policy(...):
    wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN)
    wait_for_hco_conditions(
        admin_client=admin_client,
        hco_namespace=hco_namespace,
        list_dependent_crs_to_check=MANAGED_CRS_LIST,
    )
    yield
```

This ensures tests always see fully propagated state.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/install_upgrade_operators/crypto_policy/conftest.py` around lines 99 - 114, The fixture updated_api_server_crypto_policy applies update_apiserver_crypto_policy but yields before waiting for reconciliation; modify it so that inside the with update_apiserver_crypto_policy(...) block, before yielding, call wait_for_cluster_operator_stabilize(admin_client=admin_client, wait_timeout=TIMEOUT_40MIN) and then wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace, list_dependent_crs_to_check=MANAGED_CRS_LIST) to ensure operator and HCO stabilization occurs prior to yield.
♻️ Duplicate comments (1)
tests/install_upgrade_operators/crypto_policy/conftest.py (1)
146-175: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
HIGH: Session-scoped fixture chain still leaks cluster state across modules.

`enabled_template_feature_gate`, `enabled_aaq`, and `console_plugin_test_network_policy` all mutate shared HCO or namespace state, but they still live for the full pytest session. That means later modules can inherit modified feature gates / AAQ / NetworkPolicy state, and `cnv_services_with_template` plus `pqc_status_by_service` then cache results from that mutated state for the rest of the run. Narrow this chain to `module` so setup/teardown happens at the module boundary instead of end-of-suite.

♻️ Proposed fix

```diff
-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def enabled_template_feature_gate(admin_client, hco_namespace, hyperconverged_resource_scope_session):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def cnv_services_with_template(enabled_template_feature_gate, hco_namespace, admin_client):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def enabled_aaq(admin_client, hco_namespace, hyperconverged_resource_scope_session):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def console_plugin_test_network_policy(hco_namespace, admin_client):

-@pytest.fixture(scope="session")
+@pytest.fixture(scope="module")
 def pqc_status_by_service(enabled_aaq, worker_exec, cnv_services_with_template, console_plugin_test_network_policy):
```

As per coding guidelines: "Fixture scope rules: use `scope='module'` for expensive setup in a test module ... Never use broader scope if fixture modifies state or creates per-test resources."

Also applies to: 178-185, 219-262
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/install_upgrade_operators/crypto_policy/conftest.py` around lines 146 - 175, The fixtures that mutate cluster/namespace state should not be session-scoped; change the scope argument from "session" to "module" for enabled_template_feature_gate and cnv_services_with_template (and likewise for enabled_aaq, console_plugin_test_network_policy, and pqc_status_by_service referenced in the comment) so setup/teardown occur at module boundaries instead of the entire session; locate these fixtures by name in the file and update their pytest.fixture(scope=...) decorator to scope="module" while keeping their existing behavior (enter/exit context managers and yields) so teardown still runs after each module.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tests/install_upgrade_operators/crypto_policy/utils.py`:
- Around line 320-323: The current parsing of openssl TLS groups is wrong
because ExecCommandOnPod(...).exec(... ) output is newline-delimited but the
code uses split(":"), so replace the colon-split with a newline-aware split
(e.g., use splitlines() or split("\n")) and keep the existing strip() and filter
to remove empty entries; update the function that returns
node_available_tls_groups (the list comprehension that currently uses
split(":")) to use output.splitlines() (or output.strip().splitlines()) so each
group name like "SecP256r1MLKEM768" becomes its own list element.
---
Outside diff comments:
In `@tests/install_upgrade_operators/crypto_policy/conftest.py`:
- Around line 99-114: The fixture updated_api_server_crypto_policy applies
update_apiserver_crypto_policy but yields before waiting for reconciliation;
modify it so that inside the with update_apiserver_crypto_policy(...) block,
before yielding, call
wait_for_cluster_operator_stabilize(admin_client=admin_client,
wait_timeout=TIMEOUT_40MIN) and then
wait_for_hco_conditions(admin_client=admin_client, hco_namespace=hco_namespace,
list_dependent_crs_to_check=MANAGED_CRS_LIST) to ensure operator and HCO
stabilization occurs prior to yield.
---
Duplicate comments:
In `@tests/install_upgrade_operators/crypto_policy/conftest.py`:
- Around line 146-175: The fixtures that mutate cluster/namespace state should
not be session-scoped; change the scope argument from "session" to "module" for
enabled_template_feature_gate and cnv_services_with_template (and likewise for
enabled_aaq, console_plugin_test_network_policy, and pqc_status_by_service
referenced in the comment) so setup/teardown occur at module boundaries instead
of the entire session; locate these fixtures by name in the file and update
their pytest.fixture(scope=...) decorator to scope="module" while keeping their
existing behavior (enter/exit context managers and yields) so teardown still
runs after each module.
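For illustration, a minimal sketch of the newline-aware parsing the first inline comment above asks for; the first sample group name appears in that comment, the second is made up:

```python
# openssl prints one supported group per line, so split on newlines, not on ":".
output = "SecP256r1MLKEM768\nX25519MLKEM768\n"  # second group name is illustrative
node_available_tls_groups = [line.strip() for line in output.splitlines() if line.strip()]
assert node_available_tls_groups == ["SecP256r1MLKEM768", "X25519MLKEM768"]
```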
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8905b02b-61c9-4aff-8001-fd2136bfa97c
📒 Files selected for processing (5)
- tests/install_upgrade_operators/crypto_policy/conftest.py
- tests/install_upgrade_operators/crypto_policy/constants.py
- tests/install_upgrade_operators/crypto_policy/test_pqc_tls_audit.py
- tests/install_upgrade_operators/crypto_policy/test_tls_profile_propagation.py
- tests/install_upgrade_operators/crypto_policy/utils.py
/build-and-push-container

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4162 published

D/S test
```python
@pytest.fixture(scope="session")
def worker_node(workers):
    assert workers, "No worker nodes available for crypto policy tests"
```
I think it's safe to assume that the cluster has a worker node.
```python
        client=admin_client,
    )
    deployment.wait_for_replicas()
    yield
```
```python
@pytest.fixture(scope="session")
def enabled_template_feature_gate(admin_client, hco_namespace, hyperconverged_resource_scope_session):
    """Enables the Template feature gate via HCO annotation and waits for virt-template deployments."""
    with update_hco_annotations(
```
Is the idea to expose all hidden endpoints?
```python
        if service.instance.spec.clusterIP not in (None, "", "None")
    ]
    assert services_list, f"No services found in {hco_namespace.name}"
    service_names = [svc.instance.metadata.name for svc in services_list]
```
```python
    services_list = [
        service
        for service in Service.get(namespace=hco_namespace.name, client=admin_client)
        if service.instance.spec.clusterIP not in (None, "", "None")
```
| """ | ||
| for service_name, accepted in pqc_status_by_service.items(): | ||
| with subtests.test(msg=service_name): | ||
| if service_name == HYPERCONVERGED_CLUSTER_CLI_DOWNLOAD: |
```python
            if jira_id := SERVICES_WITH_OPEN_BUGS.get(service_name):
                if is_jira_open(jira_id=jira_id):
                    LOGGER.info(f"Skipping {service_name} — known bug: {jira_id}")
                    continue
```
```python
@pytest.fixture()
def modern_tls_profile_applied(admin_client, hco_namespace, api_server, enabled_aaq):
```
Do we want to update the API server, or HCO?
| "kubevirt-migration-prometheus": "CNV-83219", | ||
| } | ||
|
|
||
| pytestmark = pytest.mark.tier2 |
```python
)
from tests.install_upgrade_operators.crypto_policy.utils import check_service_accepts_tls_version
```

```python
pytestmark = pytest.mark.tier2
```
```python
@pytest.fixture(scope="session")
def worker_node(workers):
```
We have a fixture `worker_node1`; does it work for you?
```python
    return services_list
```

```python
@pytest.fixture(scope="session")
```
Here and in many other fixtures, I don't think the "session" scope is correct.
```python
@pytest.fixture(scope="session")
def worker_exec(workers_utility_pods, worker_node):
    return ExecCommandOnPod(utility_pods=workers_utility_pods, node=worker_node)
```
I don't understand why you need this fixture. Use the exec function directly where you need it; this fixture does not make sense.
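For illustration, a sketch of what this suggestion can look like, building the executor at the call site; `ExecCommandOnPod` is the repository helper from the snippet above (import omitted), and the `command` keyword and probe string are assumptions:

```python
def node_tls_groups(workers_utility_pods, worker_node):
    # Build the pod executor inline where it is needed, no dedicated fixture.
    pod_exec = ExecCommandOnPod(utility_pods=workers_utility_pods, node=worker_node)
    return pod_exec.exec(command="openssl list -tls-groups")  # hypothetical probe command
```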
```python
accepted = [name for name, status in results.items() if status is True]
rejected = [name for name, status in results.items() if status is False]
unreachable = [name for name, status in results.items() if status is None]
```
You are looping over the results three times. Rewrite the code to use a single for loop, as in the sketch below.
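A minimal sketch of the single-pass version, assuming `results` maps service names to `True`/`False`/`None` as in the snippet above:

```python
accepted, rejected, unreachable = [], [], []
for name, status in results.items():
    if status is True:
        accepted.append(name)
    elif status is False:
        rejected.append(name)
    else:  # None means the probe never completed
        unreachable.append(name)
```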
```python
class TestPqcCnvEndpoints:
```
Only one test under the class; why do you need it?
```python
pytestmark = pytest.mark.tier2
```
```python
class TestTlsProfilePropagation:
```
Same here, only one test under the class.
Part of: https://redhat.atlassian.net/browse/CNV-74453
Add tests to verify post-quantum cryptography (PQC) TLS support across CNV service endpoints. Tests check node OpenSSL PQC group availability, classical fallback behavior when PQC groups are offered, and handshake rejection when only PQC groups are used.
New utility functions added for PQC-specific openssl s_client command composition, TLS group inspection, and Peer Temp Key parsing.
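For context, a rough sketch of what such a PQC-only probe can look like; this is not the PR's implementation, the host/port are placeholders, and the group name is the one that appears in the review snippets:

```python
import subprocess


def handshake_with_pqc_only(host: str, port: int) -> bool:
    """Attempt a TLS 1.3 handshake offering only a PQC hybrid group (illustrative)."""
    result = subprocess.run(
        [
            "openssl", "s_client",
            "-connect", f"{host}:{port}",
            "-tls1_3",
            "-groups", "SecP256r1MLKEM768",  # offer only the PQC hybrid group
        ],
        input=b"",  # close stdin so s_client exits after the handshake attempt
        capture_output=True,
        timeout=30,
    )
    # On success, recent openssl builds report the negotiated key exchange in a
    # "Peer Temp Key" line, which the new utilities reportedly parse.
    return b"Peer Temp Key" in result.stdout
```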
assisted by: claude code claude-opus-4-6
Signed-off-by: Ohad <orevah@redhat.com>
jira-ticket: https://redhat.atlassian.net/browse/CNV-81812