Conversation
|
@tthvo: This pull request references CORS-4072, which is a valid Jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv4-primary-techpreview openshift/machine-api-provider-aws#177 openshift/machine-api-provider-aws#171 |
|
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv6-primary-techpreview openshift/machine-api-provider-aws#177 openshift/machine-api-provider-aws#171 |
|
/payload-job periodic-ci-openshift-openshift-tests-private-release-4.22-amd64-nightly-aws-usgov-ipi-private-ep-fips-f7 |
|
@tthvo: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/70bcb810-1773-11f1-831b-7034f4f09cdc-0 |
|
/label platform/aws |
|
/test golint |
|
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv4-primary-techpreview openshift/machine-api-provider-aws#177 |
|
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv6-primary-techpreview openshift/machine-api-provider-aws#177 |
patrickdillon
left a comment
This LGTM, but I had a few questions, particularly about why we need the node port changes.
The code changes for that and SSH look good, but it's unfortunate we need to do that. I suppose, particularly for SSH, it does not make sense to use Security Group Roles? i.e. we instruct users to add the role to the installer host
I'm less sure about node port... if we do need it, seems like Security Group Role might be ok there?
|
Also, I don't really get how install is completing in the test with the MAPI PR: https://prow.ci.openshift.org/view/gs/test-platform-results/logs/multi-pr-openshift-installer-10353-openshift-machine-api-provider-aws-177-e2e-aws-ovn-dualstack-ipv4-primary-techpreview/2029358234887786496 I thought we were blocked by ingress on this. I know previously you were using the node port hack to get around the ingress issue; and there are still nodeport changes in this PR, but I thought those were just security group rules. Seems like I'm missing something |
The security group already has rules to allow node ports (30000-32767) to … However, we also need to allow traffic from the load balancers created by CCM for … |
Right, unfortunately, the roles are specifically for cluster components (i.e. nodes, load balancers), so we can't really add one to the installer host. Also, a role corresponds to a security group, which means the installer host may be allowed to access more ports (for example, 30000-32767) than intended. |
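The over-grant concern above can be sketched in Go. This is illustrative only: `portRange` and `grantsBeyond` are hypothetical helpers, not installer code.

```go
package main

import "fmt"

// portRange models a from/to port span in a security group ingress rule.
type portRange struct{ from, to int32 }

// grantsBeyond reports whether any rule in a role-backed security group
// allows ports outside the caller's intended range. It illustrates why
// attaching a cluster role's security group to the installer host can
// grant more access than intended.
func grantsBeyond(rules []portRange, intended portRange) bool {
	for _, r := range rules {
		if r.from < intended.from || r.to > intended.to {
			return true
		}
	}
	return false
}

func main() {
	// A cluster role's security group carries the node-port rule
	// (30000-32767), far more than the single SSH port the installer
	// host actually needs.
	roleRules := []portRange{{22, 22}, {30000, 32767}}
	fmt.Println(grantsBeyond(roleRules, portRange{22, 22}))
}
```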
Right, we are still blocked by ingress and CCM as of today. However, in dualstack IPv4 primary, the following happens:
This means IPv4 traffic will flow to routers just fine even though IPv6 is not supported. To the cluster operators, the setup is the same as a regular singlestack IPv4 cluster, since they only care whether the router is reachable (regardless of IP family). Thus, the install completed. For dualstack IPv6 primary, the … That being said, it's a "false positive" that the install succeeded :D |
I see! Thinking about this more, as I mentioned above, this should actually be handled by the ingress component... Technically we should not have to deal with this. In the case that we actually do need to do something like this, I do wonder if we could pre-provision, or use a deterministic name for, a security group, so that ingress creates the LB in a security group we have pre-created rules for. |
Ahh yess, the installer can pre-create an additional security group with only the allowed node port ingress rule (30000-32767) and attach it to cluster machines, similar to the AdditionalSecurityGroupIDs feature. The only issue is that 30000-32767 is still open to all IPs if … |
|
Regarding ingress services/nodeport, I realize now I was assuming that AWS would have the same behavior as cloud-provider-gcp, but it does not. So sorry for confusing things. In GCP, the cloud provider will handle opening all the ports for an ingress service, but that does not happen in … |
Right, I was thinking we would leave … That said, the current approach looks good and ingress isn't even sorted out, so maybe that is just something we want to consider once ingress is in place, as it might help simplify our code a bit. |
|
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/infrastructure/aws/clusterapi/aws.go (1)
564-603: ⚠️ Potential issue | 🟠 Major
`isSSHRuleGone` may report success prematurely for private installs. The function checks for hardcoded `0.0.0.0/0` and `::/0` CIDRs, but `removeSSHRule` identifies bootstrap SSH rules by their description (`BootstrapSSHDescription`). For private installs, bootstrap SSH rules use VPC/subnet CIDRs (from `sshRuleCidrs`), not `0.0.0.0/0`. This means `isSSHRuleGone` will return `true` before the actual bootstrap SSH rules are deleted. Consider matching by `BootstrapSSHDescription` (consistent with `removeSSHRule`) instead of hardcoded CIDRs.
🛠️ Proposed fix to match rules by description

```diff
 func isSSHRuleGone(ctx context.Context, client *ec2.Client, sgID string) (bool, error) {
 	// ... existing SG fetch code ...
 	sg := sgs[0]
 	for _, rule := range sg.IpPermissions {
 		if ptr.Deref(rule.ToPort, 0) != 22 {
 			continue
 		}
-		// Check IPv4 rules
+		// Check IPv4 rules for bootstrap SSH by description
 		for _, source := range rule.IpRanges {
-			if source.CidrIp != nil && *source.CidrIp == "0.0.0.0/0" {
-				ruleDesc := ptr.Deref(source.Description, "[no description]")
-				logrus.Debugf("Found ingress rule %s with source cidr %s. Still waiting for deletion...", ruleDesc, *source.CidrIp)
+			if strings.EqualFold(ptr.Deref(source.Description, ""), awsmanifest.BootstrapSSHDescription) {
+				logrus.Debugf("Found bootstrap SSH ingress rule with IPv4 CIDR %s. Still waiting for deletion...", ptr.Deref(source.CidrIp, ""))
 				return false, nil
 			}
 		}
-		// Check IPv6 rules
+		// Check IPv6 rules for bootstrap SSH by description
 		for _, source := range rule.Ipv6Ranges {
-			if source.CidrIpv6 != nil && *source.CidrIpv6 == "::/0" {
-				ruleDesc := ptr.Deref(source.Description, "[no description]")
-				logrus.Debugf("Found ingress rule %s with source cidr %s. Still waiting for deletion...", ruleDesc, *source.CidrIpv6)
+			if strings.EqualFold(ptr.Deref(source.Description, ""), awsmanifest.BootstrapSSHDescription) {
+				logrus.Debugf("Found bootstrap SSH ingress rule with IPv6 CIDR %s. Still waiting for deletion...", ptr.Deref(source.CidrIpv6, ""))
 				return false, nil
 			}
 		}
 	}
 	return true, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/infrastructure/aws/clusterapi/aws.go` around lines 564 - 603, isSSHRuleGone currently looks for hardcoded CIDRs (0.0.0.0/0 and ::/0) which misses bootstrap SSH rules that are identified by description in removeSSHRule; update isSSHRuleGone to detect any SSH ingress rules (ToPort == 22) that have the BootstrapSSHDescription (use the same BootstrapSSHDescription symbol) in their Description fields for both IpRanges and Ipv6Ranges (and any other permission fields you use for bootstrap rules), and treat presence of those described rules as "not gone" (return false), otherwise return true; keep the existing error handling and logging (use ptr.Deref on Description like existing code) so behavior matches removeSSHRule's identification logic.
♻️ Duplicate comments (1)
pkg/infrastructure/clusterapi/ignition.go (1)
328-378: ⚠️ Potential issue | 🟠 Major
Silent success when `clusterConfigDataFile` is not found may mask issues. If `clusterConfigDataFile` (`/opt/openshift/manifests/cluster-config.yaml`) is not present in `config.Storage.Files`, the function returns `nil` without an error. Given that `updateMachineNetworks` is called specifically to update machine networks for dual-stack, a missing cluster-config file is likely an unexpected state that should fail the install rather than silently succeed.
🛠️ Proposed fix to fail when cluster-config is not found

```diff
 func updateMachineNetworks(in IgnitionInput, config *igntypes.Config, machineNetworks []types.MachineNetworkEntry) error {
+	found := false
 	for i, fileData := range config.Storage.Files {
 		if fileData.Path != clusterConfigDataFile {
 			continue
 		}
+		found = true
 		contents := strings.Split(*config.Storage.Files[i].Contents.Source, ",")
 		// ... existing logic ...
 		encoded := fmt.Sprintf("%s%s", header, base64.StdEncoding.EncodeToString(configCMContents))
 		// replace the contents with the edited information
 		config.Storage.Files[i].Contents.Source = &encoded
-		break
+		return nil
 	}
-	return nil
+	if !found {
+		return fmt.Errorf("failed to find %s in bootstrap ignition", clusterConfigDataFile)
+	}
+	return nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/infrastructure/clusterapi/ignition.go` around lines 328 - 378, The function updateMachineNetworks currently iterates config.Storage.Files for clusterConfigDataFile and silently returns nil if not found; change it to return an explicit error when the loop finishes without locating clusterConfigDataFile so the caller knows the expected cluster-config manifest was missing. Locate updateMachineNetworks and the loop that checks fileData.Path != clusterConfigDataFile, and after the loop (or by tracking a found boolean) return a descriptive fmt.Errorf mentioning clusterConfigDataFile when it was not found instead of nil; keep all existing decoding/marshalling behavior unchanged when the file is present.
🧹 Nitpick comments (1)
pkg/asset/manifests/capiutils/helpers.go (1)
57-77: Consider renaming the loop variable to avoid shadowing the `ipnet` package. The loop variable `ipnet` shadows the imported package name. While this works correctly here (the package isn't accessed inside the loop), it can cause confusion during maintenance.
♻️ Suggested refactor

```diff
 // GetIPv4CIDRs returns only IPNets of IPv4 family.
 func GetIPv4CIDRs(cidrs []ipnet.IPNet) []ipnet.IPNet {
 	var ipv4Nets []ipnet.IPNet
-	for _, ipnet := range cidrs {
-		if netutils.IsIPv4CIDR(&ipnet.IPNet) {
-			ipv4Nets = append(ipv4Nets, ipnet)
+	for _, cidr := range cidrs {
+		if netutils.IsIPv4CIDR(&cidr.IPNet) {
+			ipv4Nets = append(ipv4Nets, cidr)
 		}
 	}
 	return ipv4Nets
 }

 // GetIPv6CIDRs returns only IPNets of IPv6 family.
 func GetIPv6CIDRs(cidrs []ipnet.IPNet) []ipnet.IPNet {
 	var ipv6Nets []ipnet.IPNet
-	for _, ipnet := range cidrs {
-		if netutils.IsIPv6CIDR(&ipnet.IPNet) {
-			ipv6Nets = append(ipv6Nets, ipnet)
+	for _, cidr := range cidrs {
+		if netutils.IsIPv6CIDR(&cidr.IPNet) {
+			ipv6Nets = append(ipv6Nets, cidr)
 		}
 	}
 	return ipv6Nets
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/manifests/capiutils/helpers.go` around lines 57 - 77, The loop variable name `ipnet` shadows the imported package `ipnet`, causing confusion; in both GetIPv4CIDRs and GetIPv6CIDRs, rename the loop variable (e.g., use `n`, `netItem`, or `cidr`) and update references inside the loop (e.g., netutils.IsIPv4CIDR(&netItem.IPNet) and ipv4Nets = append(ipv4Nets, netItem)) so the code no longer shadows the package while preserving use of the ipnet.IPNet type and behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@pkg/infrastructure/aws/clusterapi/aws.go`:
- Around line 564-603: isSSHRuleGone currently looks for hardcoded CIDRs
(0.0.0.0/0 and ::/0) which misses bootstrap SSH rules that are identified by
description in removeSSHRule; update isSSHRuleGone to detect any SSH ingress
rules (ToPort == 22) that have the BootstrapSSHDescription (use the same
BootstrapSSHDescription symbol) in their Description fields for both IpRanges
and Ipv6Ranges (and any other permission fields you use for bootstrap rules),
and treat presence of those described rules as "not gone" (return false),
otherwise return true; keep the existing error handling and logging (use
ptr.Deref on Description like existing code) so behavior matches removeSSHRule's
identification logic.
---
Duplicate comments:
In `@pkg/infrastructure/clusterapi/ignition.go`:
- Around line 328-378: The function updateMachineNetworks currently iterates
config.Storage.Files for clusterConfigDataFile and silently returns nil if not
found; change it to return an explicit error when the loop finishes without
locating clusterConfigDataFile so the caller knows the expected cluster-config
manifest was missing. Locate updateMachineNetworks and the loop that checks
fileData.Path != clusterConfigDataFile, and after the loop (or by tracking a
found boolean) return a descriptive fmt.Errorf mentioning clusterConfigDataFile
when it was not found instead of nil; keep all existing decoding/marshalling
behavior unchanged when the file is present.
---
Nitpick comments:
In `@pkg/asset/manifests/capiutils/helpers.go`:
- Around line 57-77: The loop variable name `ipnet` shadows the imported package
`ipnet`, causing confusion; in both GetIPv4CIDRs and GetIPv6CIDRs, rename the
loop variable (e.g., use `n`, `netItem`, or `cidr`) and update references inside
the loop (e.g., netutils.IsIPv4CIDR(&netItem.IPNet) and ipv4Nets =
append(ipv4Nets, netItem)) so the code no longer shadows the package while
preserving use of the ipnet.IPNet type and behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bd03415a-daaa-4907-8723-6655733350bb
📒 Files selected for processing (8)
- pkg/asset/manifests/aws/cluster.go
- pkg/asset/manifests/capiutils/helpers.go
- pkg/infrastructure/aws/clusterapi/aws.go
- pkg/infrastructure/aws/clusterapi/ignition.go
- pkg/infrastructure/azure/ignition.go
- pkg/infrastructure/clusterapi/ignition.go
- pkg/infrastructure/clusterapi/types.go
- pkg/infrastructure/gcp/clusterapi/ignition.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/infrastructure/azure/ignition.go
|
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv6-primary-techpreview openshift/machine-api-provider-aws#177
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv4-primary-techpreview openshift/machine-api-provider-aws#177 |
|
Phew, I think all CodeRabbit reviews are now addressed and the PR is rebased on top of the latest … In case anyone misses it due to the long bot review, the latest comments/discussions were posted here: #10353 (comment) |
|
/test e2e-aws-ovn e2e-aws-ovn-edge-zones e2e-aws-ovn-shared-vpc-edge-zones |
Note: Defining a non-nil (but empty) vpc.ipv6 block will configure CAPA to provision the network infrastructure with IPv6 capabilities.
The machines need configuring to:
- Use resource-name hostname type in order to have A and AAAA records for the hostname.
- Assign a primary IPv6 address in order to register with the IPv6 target group when dualstack is IPv6 primary.
- Enable the IPv6 IMDS HTTP endpoint.
The VPC IPv6 CIDR is not available at install time (when the installer provisions the VPC). Thus, we need to add it in the node port rules later on and ensure the rules are available.
The etcd operator requires IPv6 machine networks in dualstack networking. Since we may not know the VPC IPv6 CIDR at install time, we need to add it later once the infrastructure is ready.
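The "add it later" step can be sketched as follows. This is a hypothetical stdlib-only helper that treats machine networks as plain CIDR strings; the installer's real logic (`updateMachineNetworks`) operates on ignition and manifest types instead.

```go
package main

import (
	"fmt"
	"net/netip"
)

// appendIPv6MachineNetwork adds the VPC's IPv6 CIDR (discovered only after
// the infrastructure is provisioned) to the machine networks, rejecting
// non-IPv6 input and skipping duplicates.
func appendIPv6MachineNetwork(machineNetworks []string, vpcIPv6CIDR string) ([]string, error) {
	p, err := netip.ParsePrefix(vpcIPv6CIDR)
	if err != nil {
		return nil, fmt.Errorf("invalid VPC IPv6 CIDR %q: %w", vpcIPv6CIDR, err)
	}
	if !p.Addr().Is6() {
		return nil, fmt.Errorf("CIDR %q is not IPv6", vpcIPv6CIDR)
	}
	for _, n := range machineNetworks {
		if n == p.String() {
			return machineNetworks, nil // already present
		}
	}
	return append(machineNetworks, p.String()), nil
}

func main() {
	// Example VPC IPv6 CIDR; the real value is only known post-provisioning.
	nets, err := appendIPv6MachineNetwork([]string{"10.0.0.0/16"}, "2600:1f14:abc:1200::/56")
	if err != nil {
		panic(err)
	}
	fmt.Println(nets)
}
```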
|
Actionable comments posted: 1
♻️ Duplicate comments (1)
pkg/infrastructure/aws/clusterapi/aws.go (1)
584-599: ⚠️ Potential issue | 🟠 Major
Match bootstrap SSH rules by description, not by `0.0.0.0/0`/`::/0`. `removeSSHRule` removes entries by `awsmanifest.BootstrapSSHDescription`, but this verifier only waits on world-open CIDRs. In private or BYO-subnet installs the bootstrap SSH rule can be scoped to cluster/VPC CIDRs instead, so this can return `true` before the bootstrap SSH rule has actually been removed. Check both `IpRanges` and `Ipv6Ranges` by description to keep the verifier aligned with the removal logic.
Proposed fix

```diff
 for _, rule := range sg.IpPermissions {
 	if ptr.Deref(rule.ToPort, 0) != 22 {
 		continue
 	}
 	// Check IPv4 rules
 	for _, source := range rule.IpRanges {
-		if source.CidrIp != nil && *source.CidrIp == "0.0.0.0/0" {
-			ruleDesc := ptr.Deref(source.Description, "[no description]")
-			logrus.Debugf("Found ingress rule %s with source cidr %s. Still waiting for deletion...", ruleDesc, *source.CidrIp)
+		if strings.EqualFold(ptr.Deref(source.Description, ""), awsmanifest.BootstrapSSHDescription) {
+			logrus.Debugf("Found bootstrap SSH ingress rule with source cidr %s. Still waiting for deletion...", ptr.Deref(source.CidrIp, ""))
 			return false, nil
 		}
 	}
 	// Check IPv6 rules
 	for _, source := range rule.Ipv6Ranges {
-		if source.CidrIpv6 != nil && *source.CidrIpv6 == "::/0" {
-			ruleDesc := ptr.Deref(source.Description, "[no description]")
-			logrus.Debugf("Found ingress rule %s with source cidr %s. Still waiting for deletion...", ruleDesc, *source.CidrIpv6)
+		if strings.EqualFold(ptr.Deref(source.Description, ""), awsmanifest.BootstrapSSHDescription) {
+			logrus.Debugf("Found bootstrap SSH ingress rule with source cidr %s. Still waiting for deletion...", ptr.Deref(source.CidrIpv6, ""))
 			return false, nil
 		}
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/infrastructure/aws/clusterapi/aws.go` around lines 584 - 599, The verifier currently only checks for world-open CIDRs when scanning rule.IpRanges and rule.Ipv6Ranges, which can miss bootstrap SSH rules that are scoped to non-world CIDRs; update the checks in the loops that iterate rule.IpRanges and rule.Ipv6Ranges to look for source.Description matching awsmanifest.BootstrapSSHDescription (use ptr.Deref or nil-safe compare) and return false if such a description is found (so the verifier aligns with removeSSHRule which deletes by awsmanifest.BootstrapSSHDescription); preserve or remove the CIDR checks as desired, but ensure the description-based check is present for both IpRanges and Ipv6Ranges.
🧹 Nitpick comments (1)
pkg/asset/manifests/capiutils/helpers.go (1)
58-76: Minor: Variable shadows imported package name. The loop variable `ipnet` shadows the imported `github.com/openshift/installer/pkg/ipnet` package. While this doesn't cause functional issues since the package isn't used inside the loop body, renaming to `cidr` would improve clarity.
♻️ Suggested improvement

```diff
 func GetIPv4CIDRs(cidrs []ipnet.IPNet) []ipnet.IPNet {
 	var ipv4Nets []ipnet.IPNet
-	for _, ipnet := range cidrs {
-		if netutils.IsIPv4CIDR(&ipnet.IPNet) {
-			ipv4Nets = append(ipv4Nets, ipnet)
+	for _, cidr := range cidrs {
+		if netutils.IsIPv4CIDR(&cidr.IPNet) {
+			ipv4Nets = append(ipv4Nets, cidr)
 		}
 	}
 	return ipv4Nets
 }

 func GetIPv6CIDRs(cidrs []ipnet.IPNet) []ipnet.IPNet {
 	var ipv6Nets []ipnet.IPNet
-	for _, ipnet := range cidrs {
-		if netutils.IsIPv6CIDR(&ipnet.IPNet) {
-			ipv6Nets = append(ipv6Nets, ipnet)
+	for _, cidr := range cidrs {
+		if netutils.IsIPv6CIDR(&cidr.IPNet) {
+			ipv6Nets = append(ipv6Nets, cidr)
 		}
 	}
 	return ipv6Nets
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/manifests/capiutils/helpers.go` around lines 58 - 76, The loop variable name `ipnet` in GetIPv4CIDRs and GetIPv6CIDRs shadows the imported ipnet package; rename the loop variable (e.g., to `cidr`) in both functions and update all uses inside the loops (references to `ipnet.IPNet` -> `cidr.IPNet`) to avoid shadowing and improve clarity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/infrastructure/clusterapi/ignition.go`:
- Around line 328-377: The updateMachineNetworks function currently returns nil
when no file with path clusterConfigDataFile is found in config.Storage.Files,
causing silent success; modify updateMachineNetworks to detect when the loop
does not find clusterConfigDataFile (e.g., set a found flag or return after
handling) and return a descriptive error if the file is missing so callers know
the ignition lacked the expected cluster-config file; reference the loop over
config.Storage.Files, the clusterConfigDataFile constant and the break after
updating config.Storage.Files[i].Contents.Source to locate where to add this
check.
---
Duplicate comments:
In `@pkg/infrastructure/aws/clusterapi/aws.go`:
- Around line 584-599: The verifier currently only checks for world-open CIDRs
when scanning rule.IpRanges and rule.Ipv6Ranges, which can miss bootstrap SSH
rules that are scoped to non-world CIDRs; update the checks in the loops that
iterate rule.IpRanges and rule.Ipv6Ranges to look for source.Description
matching awsmanifest.BootstrapSSHDescription (use ptr.Deref or nil-safe compare)
and return false if such a description is found (so the verifier aligns with
removeSSHRule which deletes by awsmanifest.BootstrapSSHDescription); preserve or
remove the CIDR checks as desired, but ensure the description-based check is
present for both IpRanges and Ipv6Ranges.
---
Nitpick comments:
In `@pkg/asset/manifests/capiutils/helpers.go`:
- Around line 58-76: The loop variable name `ipnet` in GetIPv4CIDRs and
GetIPv6CIDRs shadows the imported ipnet package; rename the loop variable (e.g.,
to `cidr`) in both functions and update all uses inside the loops (references to
`ipnet.IPNet` -> `cidr.IPNet`) to avoid shadowing and improve clarity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 31d01973-45b0-409a-b123-1fcf7db806c4
📒 Files selected for processing (13)
- pkg/asset/installconfig/aws/route53.go
- pkg/asset/machines/aws/awsmachines.go
- pkg/asset/machines/clusterapi.go
- pkg/asset/manifests/aws/cluster.go
- pkg/asset/manifests/aws/zones.go
- pkg/asset/manifests/aws/zones_test.go
- pkg/asset/manifests/capiutils/helpers.go
- pkg/infrastructure/aws/clusterapi/aws.go
- pkg/infrastructure/aws/clusterapi/ignition.go
- pkg/infrastructure/azure/ignition.go
- pkg/infrastructure/clusterapi/ignition.go
- pkg/infrastructure/clusterapi/types.go
- pkg/infrastructure/gcp/clusterapi/ignition.go
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/infrastructure/azure/ignition.go
- pkg/asset/machines/clusterapi.go
|
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv6-primary-techpreview openshift/machine-api-provider-aws#177
/testwith openshift/installer/main/e2e-aws-ovn-dualstack-ipv4-primary-techpreview openshift/machine-api-provider-aws#177 |
|
/skip |
|
/retest-required |
|
/test e2e-aws-ovn-shared-vpc-edge-zones e2e-aws-byo-subnet-role-security-groups e2e-aws-ovn-edge-zones |
|
@tthvo: The following tests failed, say …
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
|
/lgtm |
|
/verified by e2e-aws-ovn-dualstack (see multi-pr tests) |
|
@patrickdillon: This PR has been marked as verified by …
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
/tide refresh |
Description
This PR adds dualstack (IPv4 + IPv6) networking support for OCP on the AWS platform. In this configuration, the network infrastructure (i.e. VPC, subnets, load balancers, etc.) and pod/service internal networking will be dualstack.
There are 2 dualstack variants: dualstack IPv4 primary and dualstack IPv6 primary. See below for the install-config snippet for each case.
This addresses: CORS-4074, CORS-4076, CORS-4077, CORS-4167, CORS-4168, and CORS-4243
Setup Instructions
Use the below `install-config` snippet to configure networking and the AWS platform.
Note: `machineNetwork` does not contain an IPv6 CIDR, as it is unknown at install time (i.e. it will be patched later when the infra is ready). The cluster network and service network must be provided by the user with valid ULA IPv6 CIDRs.
Dualstack IPv4 Primary:
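The snippets themselves appear collapsed in this export. Purely as an illustration of the note above, an IPv4-primary networking section might look like the following; every CIDR value here is an assumption for illustration, not the PR's actual snippet:

```yaml
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14    # IPv4 listed first => IPv4 primary
    hostPrefix: 23
  - cidr: fd01::/48        # user-provided ULA IPv6 CIDR
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112             # user-provided ULA IPv6 CIDR
  machineNetwork:
  - cidr: 10.0.0.0/16      # no IPv6 entry; patched once infra is ready
```

For the IPv6-primary variant, the IPv6 entries would be listed first.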
Dualstack IPv6 Primary:
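The ULA requirement mentioned in the note above can be validated with a small check. This is a stdlib-only sketch, not the installer's actual validation code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// isULA reports whether a CIDR falls inside the IPv6 unique local address
// range fc00::/7 (RFC 4193); in practice the fd00::/8 half is what users
// assign for cluster and service networks.
func isULA(cidr string) bool {
	p, err := netip.ParsePrefix(cidr)
	if err != nil || !p.Addr().Is6() {
		return false
	}
	return netip.MustParsePrefix("fc00::/7").Contains(p.Addr())
}

func main() {
	fmt.Println(isULA("fd02::/112"))    // a typical ULA service-network CIDR
	fmt.Println(isULA("2001:db8::/32")) // globally-scoped, not ULA
}
```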
Summary by CodeRabbit
New Features
Refactor
Tests