Enable HTTPS for exposed endpoints Proposal#666
Conversation
Signed-off-by: mayuka-c <Mayuka.C@ibm.com>
Just one concern here - Caddy becomes a shared point for TLS, certs, and routing, which makes it a single point of failure. If it goes down or is misconfigured, we could lose access to everything even if the services themselves are fine.
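To make the concern concrete, a minimal sketch of what such a shared proxy config might look like (hostnames and upstream service names here are illustrative, not from this PR) - one Caddyfile terminates TLS and routes for every exposed endpoint, so a single mistake in it affects all of them:

```caddyfile
# Hypothetical Caddyfile: one proxy terminates TLS and routes for all services.
# Caddy's automatic HTTPS provisions certs for each site block, so TLS, certs,
# and routing all converge on this single config.
ai-services.example.com {
    reverse_proxy backend-api:8080
}
catalog.example.com {
    reverse_proxy catalog-svc:9090
}
```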
Yeah, the SPOF is there since we run Caddy as a single pod. But a misconfiguration could equally affect any of the other services, since everything runs as a single pod :) Even if it goes down, we have restartPolicy set to always restart the pod. In the worst case, since the underlying data is present on the LPAR, it can be redeployed too. Please feel free to propose other approaches you have in mind and we can evaluate the pros and cons. Also, do consider all the scenarios so that everything is covered, and how easy it will be to integrate, since customers will bring in their own catalog and it needs to be seamless for them to integrate with AI-Services. Also, what exactly is this sidecar proxy container and what does it do?
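For reference, the restart behaviour mentioned above can be sketched as a Kubernetes-style pod spec (pod name and image tag are assumptions for illustration, not taken from this PR):

```yaml
# Hypothetical pod spec: restartPolicy Always means the kubelet/Podman
# restarts the proxy container whenever it exits, without a redeploy.
apiVersion: v1
kind: Pod
metadata:
  name: caddy-proxy
spec:
  restartPolicy: Always
  containers:
    - name: caddy
      image: caddy:2
      ports:
        - containerPort: 443
```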
We need to check whether they support multi-instance Caddy; the same problem exists with NGINX or any global-proxy-style solution. Customers using Podman should consider migrating to OpenShift if they require a more reliable and scalable architecture.
Instead of a blanket restart, we should explore how to recover only the proxy rather than reinstalling the entire product.
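One way to get proxy-only recovery on Podman hosts is a systemd unit managing just the Caddy container (Podman can emit units of this shape via `podman generate systemd`); the unit and container name below are a sketch under that assumption:

```ini
# Hypothetical unit: a proxy crash triggers only a container restart,
# not a product redeploy. Assumes a container named "caddy" already exists.
[Unit]
Description=Caddy reverse proxy container
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/podman start -a caddy
ExecStop=/usr/bin/podman stop -t 10 caddy

[Install]
WantedBy=multi-user.target
```

With this in place, `systemctl restart caddy-container.service` recovers the proxy alone while the backing services keep running.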