Open
Labels: bug (Something isn't working), deploy (specific to this repository... does not imply product specific issues)
Description
Describe the bug
After deploying via the deploy repo instructions, both mcm-apiserver pods go into CrashLoopBackOff, multicluster-mongodb-0 is stuck in Init:1/2, and etcd-cluster-8mvj8pznpx is stuck in Init:0/1, so the hub never becomes fully ready.
To Reproduce
- Follow the readme at https://github.com/open-cluster-management/deploy#prepare-to-deploy-open-cluster-management-instance-only-do-once
- I think the root cause is that mongodb fails to start, which in turn causes the mcm apiserver to fail to start.
- I was using snapshot 2.0.0-SNAPSHOT-2020-06-23-14-20-27
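The suspicion above (mongodb blocking the apiserver) can be checked by pulling just the Warning events out of `oc describe pod` output. A minimal sketch; `warning_events` is a hypothetical helper, and in practice you would pipe in `oc describe pod -n open-cluster-management multicluster-mongodb-0`:

```shell
# Hypothetical helper: keep only event lines whose first field is "Warning".
warning_events() {
  awk '$1 == "Warning"'
}

# Sample input inlined for illustration (shape of `oc describe pod` events):
warning_events <<'EOF'
Type Reason Age From Message
Normal Scheduled <unknown> default-scheduler Successfully assigned pod
Warning FailedMount 13m kubelet secret "multicloud-ca-cert" not found
EOF
```

Against this cluster it surfaces the FailedMount warnings shown in the describe output below.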
Expected behavior
All pods in the open-cluster-management namespace should reach the Running (or Completed) state.
Screenshots
[root@ocp43-dev-inf prereqs]# oc get pods -n open-cluster-management
NAME READY STATUS RESTARTS AGE
acm-controller-9dd999dcc-c42jg 1/1 Running 0 14m
acm-proxyserver-7c45654bf6-bg9jg 1/1 Running 0 14m
application-chart-ae36f-applicationui-55f468c4f8-6hcnt 1/1 Running 0 13m
cert-manager-e61c1-6cf697d5df-rhhcc 1/1 Running 0 14m
cert-manager-webhook-03d4d-cainjector-8d76f6646-t5crm 1/1 Running 0 13m
cert-manager-webhook-74bdc8455d-jn6qf 1/1 Running 0 13m
cluster-manager-64964fdf4f-bmhld 1/1 Running 0 15m
cluster-manager-64964fdf4f-vfckd 1/1 Running 0 15m
cluster-manager-64964fdf4f-xwk9v 1/1 Running 0 15m
configmap-watcher-4b90e-66c867f4f5-7rj8z 1/1 Running 0 13m
console-chart-65124-consoleapi-56ff5dbdfb-5cslf 1/1 Running 0 10m
console-chart-65124-consoleui-7b4fd788c6-4zrqr 1/1 Running 0 10m
console-header-55ffb7666d-pnnp7 1/1 Running 0 10m
etcd-cluster-8mvj8pznpx 0/1 Init:0/1 0 14m
etcd-operator-558567f79d-wsczb 3/3 Running 0 15m
grc-7079f-grcui-7d9c8dd454-d8qf5 1/1 Running 0 13m
grc-7079f-grcuiapi-74896974c4-6pvj6 1/1 Running 0 13m
grc-7079f-policy-propagator-67c7546d77-6v628 1/1 Running 0 13m
hive-operator-6bf77bd558-v5nqd 1/1 Running 0 15m
klusterlet-addon-controller-5f47d9f99-dd8pw 1/1 Running 0 13m
managedcluster-import-controller-69b69bf967-kjmw9 1/1 Running 0 13m
management-ingress-2511c-6c7dff479c-9tzqr 2/2 Running 0 12m
mcm-apiserver-564cb96f8d-2v4gc 0/1 CrashLoopBackOff 7 14m
mcm-apiserver-6f794b6df-ggm44 0/1 CrashLoopBackOff 6 12m
mcm-controller-8676c9b6db-gqkw2 1/1 Running 0 14m
mcm-webhook-98957b97f-7sdjw 1/1 Running 0 14m
multicluster-hub-custom-registry-64cdb758bc-7d9g7 1/1 Running 0 16m
multicluster-mongodb-0 0/1 Init:1/2 0 13m
multicluster-operators-application-68445cbf88-5rxjr 4/4 Running 0 15m
multicluster-operators-hub-subscription-84c69bb5bf-h766b 1/1 Running 0 15m
multicluster-operators-standalone-subscription-55cc9d964d-c9p7q 1/1 Running 0 15m
multiclusterhub-operator-7cf7b55cc7-kh2cg 1/1 Running 0 6m50s
multiclusterhub-repo-fdd98b94f-nwrvh 1/1 Running 0 14m
search-operator-5c9f65c7c9-td78r 1/1 Running 0 10m
search-prod-798d3-redisgraph-58858bdb48-t5drs 1/1 Running 0 10m
search-prod-798d3-search-aggregator-65c8cbcd4f-j4qpr 1/1 Running 0 10m
search-prod-798d3-search-api-6df9bd58cd-qwzkf 1/1 Running 0 10m
search-prod-798d3-search-collector-774667b9f-68qg8 1/1 Running 0 10m
topology-30155-topology-5447cdd666-sq2vw 1/1 Running 0 10m
topology-30155-topologyapi-5fc4c96466-h6m45 1/1 Running 0 10m

[root@ocp43-dev-inf prereqs]# oc get pods -n open-cluster-management | grep -v Running
NAME READY STATUS RESTARTS AGE
etcd-cluster-8mvj8pznpx 0/1 Init:0/1 0 14m
mcm-apiserver-564cb96f8d-2v4gc 0/1 CrashLoopBackOff 7 14m
mcm-apiserver-6f794b6df-ggm44 0/1 CrashLoopBackOff 6 12m
multicluster-mongodb-0 0/1 Init:1/2 0 13m

[root@ocp43-dev-inf prereqs]# oc describe pods -n open-cluster-management etcd-cluster-8mvj8pznpx mcm-apiserver-564cb96f8d-2v4gc mcm-apiserver-6f794b6df-ggm44 multicluster-mongodb-0
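The unhealthy pods above can be isolated with a small filter over `oc get pods` output. A minimal sketch; `filter_unhealthy` is a hypothetical helper with sample input inlined, and in practice you would pipe in `oc get pods -n open-cluster-management`:

```shell
# Hypothetical helper: skip the header row, print name and status of any pod
# whose STATUS column is not "Running".
filter_unhealthy() {
  awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
}

filter_unhealthy <<'EOF'
NAME READY STATUS RESTARTS AGE
acm-controller-9dd999dcc-c42jg 1/1 Running 0 14m
mcm-apiserver-564cb96f8d-2v4gc 0/1 CrashLoopBackOff 7 14m
multicluster-mongodb-0 0/1 Init:1/2 0 13m
EOF
```

Unlike `grep -v Running`, this also keeps pods that are Running but not Ready if you extend the condition to check the READY column.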
Name: etcd-cluster-8mvj8pznpx
Namespace: open-cluster-management
Priority: 0
Node: worker1.ocp43-dev.os.fyre.ibm.com/10.16.100.29
Start Time: Tue, 23 Jun 2020 07:54:33 -0700
Labels: app=etcd
etcd_cluster=etcd-cluster
etcd_node=etcd-cluster-8mvj8pznpx
Annotations: etcd.version: 3.2.13
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.254.8.26"
],
"dns": {},
"default-route": [
"10.254.8.1"
]
}]
openshift.io/scc: multicloud-scc
Status: Pending
IP: 10.254.8.26
IPs:
IP: 10.254.8.26
Controlled By: EtcdCluster/etcd-cluster
Init Containers:
check-dns:
Container ID: cri-o://0616c0d1d24a5d34578631732fbee767547736ec874f105423004c72b149d3c3
Image: busybox:1.28.0-glibc
Image ID: docker.io/library/busybox@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
TIMEOUT_READY=0
while ( ! nslookup etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc )
do
# If TIMEOUT_READY is 0 we should never time out and exit
TIMEOUT_READY=$(( TIMEOUT_READY-1 ))
if [ $TIMEOUT_READY -eq 0 ];
then
echo "Timed out waiting for DNS entry"
exit 1
fi
sleep 1
done
State: Running
Started: Tue, 23 Jun 2020 07:55:01 -0700
Ready: False
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
etcd:
Container ID:
Image: quay.io/coreos/etcd:v3.2.13
Image ID:
Ports: 2380/TCP, 2379/TCP
Host Ports: 0/TCP, 0/TCP
Command:
/usr/local/bin/etcd
--data-dir=/var/etcd/data
--name=etcd-cluster-8mvj8pznpx
--initial-advertise-peer-urls=http://etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc:2380
--listen-peer-urls=http://0.0.0.0:2380
--listen-client-urls=http://0.0.0.0:2379
--advertise-client-urls=http://etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc:2379
--initial-cluster=etcd-cluster-8mvj8pznpx=http://etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc:2380
--initial-cluster-state=new
--initial-cluster-token=2341bb1f-5975-444c-8aef-4c6f8ee82d83
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: exec [/bin/sh -ec ETCDCTL_API=3 etcdctl endpoint status] delay=10s timeout=10s period=60s #success=1 #failure=3
Readiness: exec [/bin/sh -ec ETCDCTL_API=3 etcdctl endpoint status] delay=1s timeout=5s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/var/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
etcd-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: etcd-cluster-8mvj8pznpx
ReadOnly: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Normal Scheduled <unknown> default-scheduler Successfully assigned open-cluster-management/etcd-cluster-8mvj8pznpx to worker1.ocp43-dev.os.fyre.ibm.com
Normal SuccessfulAttachVolume 15m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-b8bdd641-d4fa-437a-b0bd-2923c1234880"
Normal Pulling 14m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Pulling image "busybox:1.28.0-glibc"
Normal Pulled 14m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Successfully pulled image "busybox:1.28.0-glibc"
Normal Created 14m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Created container check-dns
Normal Started 14m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Started container check-dns
Name: mcm-apiserver-564cb96f8d-2v4gc
Namespace: open-cluster-management
Priority: 0
Node: worker1.ocp43-dev.os.fyre.ibm.com/10.16.100.29
Start Time: Tue, 23 Jun 2020 07:54:30 -0700
Labels: app=mcm-apiserver
pod-template-hash=564cb96f8d
Annotations: k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.254.8.32"
],
"dns": {},
"default-route": [
"10.254.8.1"
]
}]
openshift.io/scc: restricted
Status: Running
IP: 10.254.8.32
IPs:
IP: 10.254.8.32
Controlled By: ReplicaSet/mcm-apiserver-564cb96f8d
Containers:
mcm-apiserver:
Container ID: cri-o://ff500f77e94a41b9971e0a039ada8ede86bb4168f55044ef01b961f76c400f68
Image: quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
Image ID: quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
Port: <none>
Host Port: <none>
Args:
/mcm-apiserver
--mongo-database=mcm
--enable-admission-plugins=HCMUserIdentity,KlusterletCA,NamespaceLifecycle
--secure-port=6443
--tls-cert-file=/var/run/apiserver/tls.crt
--tls-private-key-file=/var/run/apiserver/tls.key
--klusterlet-cafile=/var/run/klusterlet/ca.crt
--klusterlet-certfile=/var/run/klusterlet/tls.crt
--klusterlet-keyfile=/var/run/klusterlet/tls.key
--http2-max-streams-per-connection=1000
--etcd-servers=http://etcd-cluster.open-cluster-management.svc.cluster.local:2379
--mongo-host=multicluster-mongodb
--mongo-replicaset=rs0
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Tue, 23 Jun 2020 08:04:50 -0700
Finished: Tue, 23 Jun 2020 08:05:11 -0700
Ready: False
Restart Count: 7
Limits:
memory: 2Gi
Requests:
cpu: 200m
memory: 256Mi
Liveness: http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
Environment:
MONGO_USERNAME: <set to the key 'user' in secret 'mongodb-admin'> Optional: false
MONGO_PASSWORD: <set to the key 'password' in secret 'mongodb-admin'> Optional: false
MONGO_SSLCA: /certs/mongodb-ca/tls.crt
MONGO_SSLCERT: /certs/mongodb-client/tls.crt
MONGO_SSLKEY: /certs/mongodb-client/tls.key
Mounts:
/certs/mongodb-ca from mongodb-ca-cert (rw)
/certs/mongodb-client from mongodb-client-cert (rw)
/var/run/apiserver from apiserver-certs (rw)
/var/run/klusterlet from klusterlet-certs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from acm-foundation-sa-token-jqbzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
apiserver-certs:
Type: Secret (a volume populated by a Secret)
SecretName: mcm-apiserver-self-signed-secrets
Optional: false
klusterlet-certs:
Type: Secret (a volume populated by a Secret)
SecretName: mcm-klusterlet-self-signed-secrets
Optional: false
mongodb-ca-cert:
Type: Secret (a volume populated by a Secret)
SecretName: multicloud-ca-cert
Optional: false
mongodb-client-cert:
Type: Secret (a volume populated by a Secret)
SecretName: multicluster-mongodb-client-cert
Optional: false
acm-foundation-sa-token-jqbzg:
Type: Secret (a volume populated by a Secret)
SecretName: acm-foundation-sa-token-jqbzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned open-cluster-management/mcm-apiserver-564cb96f8d-2v4gc to worker1.ocp43-dev.os.fyre.ibm.com
Warning FailedMount 14m (x8 over 15m) kubelet, worker1.ocp43-dev.os.fyre.ibm.com MountVolume.SetUp failed for volume "mongodb-ca-cert" : secret "multicloud-ca-cert" not found
Warning FailedMount 14m (x8 over 15m) kubelet, worker1.ocp43-dev.os.fyre.ibm.com MountVolume.SetUp failed for volume "mongodb-client-cert" : secret "multicluster-mongodb-client-cert" not found
Warning FailedMount 13m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Unable to attach or mount volumes: unmounted volumes=[mongodb-ca-cert mongodb-client-cert], unattached volumes=[acm-foundation-sa-token-jqbzg apiserver-certs klusterlet-certs mongodb-ca-cert mongodb-client-cert]: timed out waiting for the condition
Normal Pulling 12m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Pulling image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
Normal Pulled 12m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Successfully pulled image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
Normal Created 12m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Created container mcm-apiserver
Normal Started 12m kubelet, worker1.ocp43-dev.os.fyre.ibm.com Started container mcm-apiserver
Warning Unhealthy 12m (x2 over 12m) kubelet, worker1.ocp43-dev.os.fyre.ibm.com Liveness probe failed: Get https://10.254.8.32:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 12m (x2 over 12m) kubelet, worker1.ocp43-dev.os.fyre.ibm.com Readiness probe failed: Get https://10.254.8.32:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning BackOff 5m2s (x31 over 11m) kubelet, worker1.ocp43-dev.os.fyre.ibm.com Back-off restarting failed container
Name: mcm-apiserver-6f794b6df-ggm44
Namespace: open-cluster-management
Priority: 0
Node: worker2.ocp43-dev.os.fyre.ibm.com/10.16.100.30
Start Time: Tue, 23 Jun 2020 07:56:02 -0700
Labels: app=mcm-apiserver
certmanager.k8s.io/time-restarted=2020-6-23.1456
pod-template-hash=6f794b6df
Annotations: k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.254.0.38"
],
"dns": {},
"default-route": [
"10.254.0.1"
]
}]
openshift.io/scc: restricted
Status: Running
IP: 10.254.0.38
IPs:
IP: 10.254.0.38
Controlled By: ReplicaSet/mcm-apiserver-6f794b6df
Containers:
mcm-apiserver:
Container ID: cri-o://b4b4cc14454bd0b3b7d43ac339dabbdcc9000d4aa2644004b0febae4d58bce70
Image: quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
Image ID: quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
Port: <none>
Host Port: <none>
Args:
/mcm-apiserver
--mongo-database=mcm
--enable-admission-plugins=HCMUserIdentity,KlusterletCA,NamespaceLifecycle
--secure-port=6443
--tls-cert-file=/var/run/apiserver/tls.crt
--tls-private-key-file=/var/run/apiserver/tls.key
--klusterlet-cafile=/var/run/klusterlet/ca.crt
--klusterlet-certfile=/var/run/klusterlet/tls.crt
--klusterlet-keyfile=/var/run/klusterlet/tls.key
--http2-max-streams-per-connection=1000
--etcd-servers=http://etcd-cluster.open-cluster-management.svc.cluster.local:2379
--mongo-host=multicluster-mongodb
--mongo-replicaset=rs0
State: Running
Started: Tue, 23 Jun 2020 08:09:26 -0700
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Tue, 23 Jun 2020 08:09:14 -0700
Finished: Tue, 23 Jun 2020 08:09:24 -0700
Ready: False
Restart Count: 8
Limits:
memory: 2Gi
Requests:
cpu: 200m
memory: 256Mi
Liveness: http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
Environment:
MONGO_USERNAME: <set to the key 'user' in secret 'mongodb-admin'> Optional: false
MONGO_PASSWORD: <set to the key 'password' in secret 'mongodb-admin'> Optional: false
MONGO_SSLCA: /certs/mongodb-ca/tls.crt
MONGO_SSLCERT: /certs/mongodb-client/tls.crt
MONGO_SSLKEY: /certs/mongodb-client/tls.key
Mounts:
/certs/mongodb-ca from mongodb-ca-cert (rw)
/certs/mongodb-client from mongodb-client-cert (rw)
/var/run/apiserver from apiserver-certs (rw)
/var/run/klusterlet from klusterlet-certs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from acm-foundation-sa-token-jqbzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
apiserver-certs:
Type: Secret (a volume populated by a Secret)
SecretName: mcm-apiserver-self-signed-secrets
Optional: false
klusterlet-certs:
Type: Secret (a volume populated by a Secret)
SecretName: mcm-klusterlet-self-signed-secrets
Optional: false
mongodb-ca-cert:
Type: Secret (a volume populated by a Secret)
SecretName: multicloud-ca-cert
Optional: false
mongodb-client-cert:
Type: Secret (a volume populated by a Secret)
SecretName: multicluster-mongodb-client-cert
Optional: false
acm-foundation-sa-token-jqbzg:
Type: Secret (a volume populated by a Secret)
SecretName: acm-foundation-sa-token-jqbzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned open-cluster-management/mcm-apiserver-6f794b6df-ggm44 to worker2.ocp43-dev.os.fyre.ibm.com
Warning FailedMount 13m (x6 over 13m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com MountVolume.SetUp failed for volume "mongodb-client-cert" : secret "multicluster-mongodb-client-cert" not found
Warning Unhealthy 12m (x2 over 12m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Readiness probe failed: Get https://10.254.0.38:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal Killing 12m kubelet, worker2.ocp43-dev.os.fyre.ibm.com Container mcm-apiserver failed liveness probe, will be restarted
Normal Pulling 12m (x3 over 13m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Pulling image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
Normal Pulled 12m (x3 over 12m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Successfully pulled image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
Normal Created 12m (x3 over 12m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Created container mcm-apiserver
Normal Started 12m (x3 over 12m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Started container mcm-apiserver
Warning Unhealthy 11m (x4 over 12m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Liveness probe failed: Get https://10.254.0.38:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 8m29s (x2 over 8m39s) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Readiness probe failed: Get https://10.254.0.38:6443/healthz: dial tcp 10.254.0.38:6443: connect: connection refused
Warning BackOff 3m30s (x33 over 11m) kubelet, worker2.ocp43-dev.os.fyre.ibm.com Back-off restarting failed container
Name: multicluster-mongodb-0
Namespace: open-cluster-management
Priority: 0
Node: worker0.ocp43-dev.os.fyre.ibm.com/10.16.100.28
Start Time: Tue, 23 Jun 2020 07:55:22 -0700
Labels: app=multicluster-mongodb
controller-revision-hash=multicluster-mongodb-557c8b465f
release=multicluster-mongodb-62daa
statefulset.kubernetes.io/pod-name=multicluster-mongodb-0
Annotations: k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.254.12.37"
],
"dns": {},
"default-route": [
"10.254.12.1"
]
}]
openshift.io/scc: anyuid
Status: Pending
IP: 10.254.12.37
IPs:
IP: 10.254.12.37
Controlled By: StatefulSet/multicluster-mongodb
Init Containers:
install:
Container ID: cri-o://9e4d0cfc80e3cebdb12181b8499fff387559e54552d72f3c8b3368481eda1daa
Image: quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3
Image ID: quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3
Port: <none>
Host Port: <none>
Command:
/install/install.sh
Args:
--work-dir=/var/lib/mongodb/work-dir
--config-dir=/var/lib/mongodb/data/configdb
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 23 Jun 2020 07:56:41 -0700
Finished: Tue, 23 Jun 2020 07:56:41 -0700
Ready: True
Restart Count: 0
Limits:
memory: 5Gi
Requests:
memory: 2Gi
Environment: <none>
Mounts:
/ca-readonly from ca (rw)
/configdb-readonly from config (rw)
/install from install (rw)
/keydir-readonly from keydir (rw)
/tmp from tmp-mongodb (rw)
/var/lib/mongodb/data/configdb from configdir (rw)
/var/lib/mongodb/data/db from mongodbdir (rw,path="datadir")
/var/lib/mongodb/work-dir from mongodbdir (rw,path="workdir")
/var/run/secrets/kubernetes.io/serviceaccount from multicluster-mongodb-token-4g4x5 (ro)
bootstrap:
Container ID: cri-o://ac85c09ab3dc962da8a3e3b2abb263717d5fb787adf1a7b872bcad43e8d5fbd0
Image: quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866
Image ID: quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866
Port: <none>
Host Port: <none>
Command:
/var/lib/mongodb/work-dir/peer-finder
Args:
-on-start=/init/on-start.sh
-service=multicluster-mongodb
State: Running
Started: Tue, 23 Jun 2020 07:57:03 -0700
Ready: False
Restart Count: 0
Limits:
memory: 5Gi
Requests:
memory: 2Gi
Environment:
POD_NAMESPACE: open-cluster-management (v1:metadata.namespace)
REPLICA_SET: rs0
AUTH: true
ADMIN_USER: <set to the key 'user' in secret 'mongodb-admin'> Optional: false
ADMIN_PASSWORD: <set to the key 'password' in secret 'mongodb-admin'> Optional: false
NETWORK_IP_VERSION: ipv4
Mounts:
/init from init (rw)
/tmp from tmp-mongodb (rw)
/var/lib/mongodb/data/configdb from configdir (rw)
/var/lib/mongodb/data/db from mongodbdir (rw,path="datadir")
/var/lib/mongodb/work-dir from mongodbdir (rw,path="workdir")
/var/run/secrets/kubernetes.io/serviceaccount from multicluster-mongodb-token-4g4x5 (ro)
Containers:
multicluster-mongodb:
Container ID:
Image: quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
Command:
mongod
--config=/var/lib/mongodb/data/configdb/mongod.conf
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
memory: 5Gi
Requests:
memory: 2Gi
Liveness: exec [mongo --ssl --sslCAFile=/var/lib/mongodb/data/configdb/tls.crt --sslPEMKeyFile=/var/lib/mongodb/work-dir/mongo.pem --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [mongo --ssl --sslCAFile=/var/lib/mongodb/data/configdb/tls.crt --sslPEMKeyFile=/var/lib/mongodb/work-dir/mongo.pem --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
AUTH: true
ADMIN_USER: <set to the key 'user' in secret 'mongodb-admin'> Optional: false
ADMIN_PASSWORD: <set to the key 'password' in secret 'mongodb-admin'> Optional: false
Mounts:
/tmp from tmp-mongodb (rw)
/var/lib/mongodb/data/configdb from configdir (rw)
/var/lib/mongodb/data/db from mongodbdir (rw,path="datadir")
/var/lib/mongodb/work-dir from mongodbdir (rw,path="workdir")
/var/run/secrets/kubernetes.io/serviceaccount from multicluster-mongodb-token-4g4x5 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
mongodbdir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mongodbdir-multicluster-mongodb-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: multicluster-mongodb
Optional: false
init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: multicluster-mongodb-init
Optional: false
install:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: multicluster-mongodb-install
Optional: false
ca:
Type: Secret (a volume populated by a Secret)
SecretName: multicloud-ca-cert
Optional: false
keydir:
Type: Secret (a volume populated by a Secret)
SecretName: multicluster-mongodb-keyfile
Optional: false
configdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-mongodb:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-metrics:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
multicluster-mongodb-token-4g4x5:
Type: Secret (a volume populated by a Secret)
SecretName: multicluster-mongodb-token-4g4x5
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Normal Scheduled <unknown> default-scheduler Successfully assigned open-cluster-management/multicluster-mongodb-0 to worker0.ocp43-dev.os.fyre.ibm.com
Normal SuccessfulAttachVolume 14m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-891a979a-c1ce-4234-a016-98fde691c76f"
Warning FailedMount 13m (x7 over 14m) kubelet, worker0.ocp43-dev.os.fyre.ibm.com MountVolume.SetUp failed for volume "ca" : secret "multicloud-ca-cert" not found
Normal Pulling 13m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Pulling image "quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3"
Normal Pulled 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Successfully pulled image "quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3"
Normal Created 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Created container install
Normal Started 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Started container install
Normal Pulling 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Pulling image "quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866"
Normal Pulled 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Successfully pulled image "quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866"
Normal Created 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Created container bootstrap
Normal Started 12m kubelet, worker0.ocp43-dev.os.fyre.ibm.com Started container bootstrap

Desktop (please complete the following information):
- OS: Red Hat Enterprise Linux CoreOS 43.81 (see `oc get nodes -owide` below)
[root@ocp43-dev-inf deploy]# oc get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master0.ocp43-dev.os.fyre.ibm.com Ready master 10h v1.16.2 10.16.96.192 <none> Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa) 4.18.0-147.8.1.el8_1.x86_64 cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
worker0.ocp43-dev.os.fyre.ibm.com Ready worker 10h v1.16.2 10.16.100.28 <none> Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa) 4.18.0-147.8.1.el8_1.x86_64 cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
worker1.ocp43-dev.os.fyre.ibm.com Ready worker 10h v1.16.2 10.16.100.29 <none> Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa) 4.18.0-147.8.1.el8_1.x86_64 cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
worker2.ocp43-dev.os.fyre.ibm.com Ready worker 11h v1.16.2 10.16.100.30 <none> Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa) 4.18.0-147.8.1.el8_1.x86_64 cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8

- Browser [e.g. chrome, safari, firefox]
- Snapshot: 2.0.0-SNAPSHOT-2020-06-23-14-20-27
Additional context
The FailedMount events above show that the secrets "multicloud-ca-cert" and "multicluster-mongodb-client-cert" were not found when the multicluster-mongodb and mcm-apiserver pods tried to mount them, and etcd-cluster-8mvj8pznpx is still waiting in its check-dns init container, which matches the mongodb-first failure theory.
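A quick way to confirm which of the required secrets are absent is to diff the expected names against what the namespace actually contains. A minimal sketch; `missing_secrets` is a hypothetical helper, and in practice you would feed it `oc get secrets -n open-cluster-management -o name`:

```shell
# Hypothetical helper: stdin is a list of existing secrets ("secret/<name>",
# one per line); args are the required secret names. Prints each one missing.
missing_secrets() {
  existing=$(cat)
  for s in "$@"; do
    printf '%s\n' "$existing" | grep -qx "secret/$s" || echo "missing: $s"
  done
}

# Usage against the cluster (assumes the current kubeconfig context):
#   oc get secrets -n open-cluster-management -o name \
#     | missing_secrets multicloud-ca-cert multicluster-mongodb-client-cert
printf 'secret/mongodb-admin\nsecret/mcm-apiserver-self-signed-secrets\n' \
  | missing_secrets multicloud-ca-cert multicluster-mongodb-client-cert
```

If both names come back missing, the question becomes why cert-manager (which is Running above) never produced them.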