Not able to install a mysql cluster with the operator
I'm running a K3s cluster on a number of virtual Debian hosts (kernel 6.1.106-3):
Client Version: v1.29.9
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0+k3s1
It's running on 3 master and 3 worker nodes and uses MetalLB for load balancing.
I've installed the MySQL Operator with Helm chart version 2.2.1:
helm install my-mysql-operator mysql-operator/mysql-operator --namespace mysql-operator --create-namespace
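Before creating a cluster, it may be worth confirming the operator itself came up cleanly (a quick sanity check; I'm assuming the deployment created by the chart is named mysql-operator, which is the chart default):

```shell
# Did the operator deployment roll out, and is it logging anything odd?
kubectl get deployment -n mysql-operator mysql-operator
kubectl logs -n mysql-operator deploy/mysql-operator --tail=20
```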
I then created the cluster using the example given in the output of the helm install:
helm install my-mysql-innodbcluster mysql-operator/mysql-innodbcluster -n $NAMESPACE --create-namespace --version 2.2.1 --set credentials.root.password=">-0URS4F3P4SS" --set tls.useSelfSigned=true
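For reference, the same install can also be driven from a values file, which sidesteps shell-quoting surprises with passwords that start with special characters (a sketch; the keys mirror the --set flags above):

```shell
# Write the chart values to a file instead of passing them via --set
cat > cluster-values.yaml <<'EOF'
credentials:
  root:
    password: ">-0URS4F3P4SS"
tls:
  useSelfSigned: true
EOF
helm install my-mysql-innodbcluster mysql-operator/mysql-innodbcluster \
  -n "$NAMESPACE" --create-namespace --version 2.2.1 -f cluster-values.yaml
```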
The problem is that the cluster does not come up; instead, the operator complains about the sidecar.
kubectl get all -n $NAMESPACE
NAME READY STATUS RESTARTS AGE
pod/my-mysql-innodbcluster-0 0/2 Init:CrashLoopBackOff 9 (4m29s ago) 25m
pod/my-mysql-innodbcluster-1 0/2 Init:CrashLoopBackOff 9 (4m2s ago) 25m
pod/my-mysql-innodbcluster-2 0/2 Init:CrashLoopBackOff 9 (4m4s ago) 25m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-mysql-innodbcluster ClusterIP 10.43.103.55 <none> 3306/TCP,33060/TCP,6446/TCP,6448/TCP,6447/TCP,6449/TCP,6450/TCP,8443/TCP 25m
service/my-mysql-innodbcluster-instances ClusterIP None <none> 3306/TCP,33060/TCP,33061/TCP 25m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-mysql-innodbcluster-router 0/0 0 0 25m
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-mysql-innodbcluster-router-6bd9464bcd 0 0 0 25m
NAME READY AGE
statefulset.apps/my-mysql-innodbcluster 0/3 25m
After startup, kubectl describe -n $NAMESPACE pod/my-mysql-innodbcluster-0 lists these events:
Normal Scheduled 71s default-scheduler Successfully assigned your-namespace/my-mysql-innodbcluster-0 to spruce
Warning FailedScheduling 75s default-scheduler 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
Warning FailedScheduling 73s default-scheduler 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
Error Logging 74s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Normal SuccessfulAttachVolume 61s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-cf926941-fb3a-4ec0-aba7-0ea3627012f5"
Normal Created 59s kubelet Created container fixdatadir
Normal Pulled 59s kubelet Container image "container-registry.oracle.com/mysql/community-operator:9.0.1-2.2.1" already present on machine
Normal Started 59s kubelet Started container fixdatadir
Normal Pulled 58s kubelet Container image "container-registry.oracle.com/mysql/community-operator:9.0.1-2.2.1" already present on machine
Normal Created 58s kubelet Created container initconf
Normal Started 58s kubelet Started container initconf
Error Logging 44s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Normal Pulled 15s (x4 over 56s) kubelet Container image "container-registry.oracle.com/mysql/community-server:9.0.1" already present on machine
Normal Created 15s (x4 over 56s) kubelet Created container initmysql
Normal Started 15s (x4 over 56s) kubelet Started container initmysql
Error Logging 14s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Warning BackOff 13s (x6 over 54s) kubelet Back-off restarting failed container initmysql in pod my-mysql-innodbcluster-0_your-namespace(ee16423a-d87a-414b-9690-a294c0686b77)
After a while, the following events just keep repeating:
Error Logging 2m41s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Error Logging 2m10s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Error Logging 100s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Warning BackOff 88s (x197 over 26m) kubelet Back-off restarting failed container initmysql in pod my-mysql-innodbcluster-0_your-namespace(ee16423a-d87a-414b-9690-a294c0686b77)
Error Logging 70s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Error Logging 40s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
Error Logging 10s kopf Handler 'on_pod_create' failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured
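Given the earlier `pod has unbound immediate PersistentVolumeClaims` warning, I wonder whether the PVCs ever bound (on K3s the default local-path StorageClass would normally handle this). These are standard checks I can run; I'm assuming the data PVC follows the operator's usual datadir-&lt;cluster&gt;-&lt;ordinal&gt; naming:

```shell
# Are the data PVCs bound, and is there a default StorageClass?
kubectl get pvc -n "$NAMESPACE"
kubectl get storageclass
# Describe the first pod's PVC to see the provisioner's events
kubectl describe pvc -n "$NAMESPACE" datadir-my-mysql-innodbcluster-0
```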
Log output that might be relevant:
kubectl logs -n $NAMESPACE my-mysql-innodbcluster-0
Defaulted container "sidecar" out of: sidecar, mysql, fixdatadir (init), initconf (init), initmysql (init)
Error from server (BadRequest): container "sidecar" in pod "my-mysql-innodbcluster-0" is waiting to start: PodInitializing
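Since the container actually crashing is the initmysql init container rather than the sidecar, its logs are probably more telling than the defaulted "sidecar" container above:

```shell
# Logs from the crashing init container; --previous shows the last failed run
kubectl logs -n "$NAMESPACE" my-mysql-innodbcluster-0 -c initmysql --previous
```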
I would appreciate any information on what might be causing this issue.
Posted September 22, 2024 01:01AM