<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel>
        <title>MySQL Forums - MySQL &amp; Kubernetes</title>
        <description>Forum for MySQL &amp; Kubernetes discussions</description>
        <link>https://forums.mysql.com/list.php?149</link>
        <lastBuildDate>Wed, 15 Apr 2026 10:33:06 +0000</lastBuildDate>
        <generator>Phorum 5.2.23</generator>
        <item>
            <guid>https://forums.mysql.com/read.php?149,741748,741748#msg-741748</guid>
            <title>What are the three arguments expected by this script (build_deps.sh)? (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,741748,741748#msg-741748</link>
            <description><![CDATA[ I need to upgrade kopf to fix a bug, but the required version depends on a newer Python.<br />
I&#039;m not sure how Python is installed in your base image — is it provided via PYTHON_TARBALL in this script?<br />
Is this a custom-built Python package maintained internally?]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Fri, 10 Apr 2026 10:05:31 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,741372,741372#msg-741372</guid>
            <title>What&#039;s wrong with this configuration? (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,741372,741372#msg-741372</link>
            <description><![CDATA[ I defined the InnoDB Cluster configuration file as follows:<br />
<br />
<br />
    # <a href="https://medium.com/edstem/postgres-deployment-to-kubernetes-using-nodeport-626a93d8a919"  rel="nofollow">https://medium.com/edstem/postgres-deployment-to-kubernetes-using-nodeport-626a93d8a919</a><br />
    <br />
    apiVersion: storage.k8s.io/v1<br />
    kind: StorageClass<br />
    metadata:<br />
      name: mysql-sc<br />
      #namespace: default<br />
    provisioner: kubernetes.io/no-provisioner<br />
    volumeBindingMode: WaitForFirstConsumer<br />
    <br />
    ---<br />
    <br />
    # <a href="https://dev.mysql.com/doc/mysql-operator/en/mysql-operator-backups.html"  rel="nofollow">https://dev.mysql.com/doc/mysql-operator/en/mysql-operator-backups.html</a><br />
    <br />
    apiVersion: v1<br />
    kind: PersistentVolume<br />
    metadata:<br />
      name: innodb-volume<br />
      labels:<br />
        type: local<br />
        #app: mysql<br />
    spec:<br />
      storageClassName: mysql-sc<br />
      capacity:<br />
        storage: 40Gi<br />
      accessModes:<br />
        - ReadWriteOnce<br />
      hostPath:<br />
        path: /var/lib/data/mysql # <a href="https://stackoverflow.com/questions/62577494/mkdir-mnt-data-read-only-file-system-back-off-restarting-"  rel="nofollow">https://stackoverflow.com/questions/62577494/mkdir-mnt-data-read-only-file-system-back-off-restarting-</a><br />
    <br />
    ---<br />
    <br />
    apiVersion: v1<br />
    kind: PersistentVolumeClaim<br />
    metadata:<br />
      name: innodb-volume-claim<br />
      #labels:<br />
        #app: mysql<br />
    spec:<br />
      storageClassName: mysql-sc<br />
      accessModes:<br />
        - ReadWriteOnce<br />
      resources:<br />
        requests:<br />
          storage: 40Gi<br />
    <br />
    ---<br />
    <br />
    # <a href="https://dev.mysql.com/doc/mysql-operator/en/mysql-operator-innodbcluster-common.html"  rel="nofollow">https://dev.mysql.com/doc/mysql-operator/en/mysql-operator-innodbcluster-common.html</a><br />
    <br />
    apiVersion: mysql.oracle.com/v2<br />
    kind: InnoDBCluster<br />
    metadata:<br />
      name: mycluster<br />
    spec:<br />
      secretName: mypwds<br />
      tlsUseSelfSigned: true<br />
      instances: 3<br />
      version: 9.5.0<br />
      router:<br />
        instances: 1<br />
        version: 9.5.0<br />
      datadirVolumeClaimTemplate:<br />
        storageClassName: mysql-sc<br />
        accessModes:<br />
          - ReadWriteOnce<br />
        resources:<br />
          requests:<br />
            storage: 40Gi<br />
      initDB:<br />
        clone:<br />
          donorUrl: mycluster-0.mycluster-instances.another.svc.cluster.local:3306<br />
          rootUser: root<br />
          secretKeyRef:<br />
            name: mypwds<br />
      mycnf: |<br />
        [mysqld]<br />
        max_connections=162<br />
<br />
Applying this configuration:<br />
<br />
    (base) raphy@raohy:~/.talos/openmetadata/mysql$ kubectl apply -f mysqlcluster.yaml <br />
    storageclass.storage.k8s.io/mysql-sc unchanged<br />
    persistentvolume/innodb-volume created<br />
    persistentvolumeclaim/innodb-volume-claim created<br />
    innodbcluster.mysql.oracle.com/mycluster configured<br />
<br />
I get:<br />
<br />
    (base) raphy@raohy:~/.talos/openmetadata/mysql$ kubectl get pvc<br />
    NAME                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE<br />
    innodb-volume-claim   Pending                                      mysql-sc       &lt;unset&gt;                 11m<br />
<br />
    (base) raphy@raohy:~/.talos/openmetadata/mysql$ kubectl -n default get pods<br />
    NAME                                                      READY   STATUS      RESTARTS       AGE<br />
    omd-os-cluster-dashboards-5b9fbdfd45-rzhsn                1/1     Running     0              63m<br />
    omd-os-cluster-masters-0                                  1/1     Running     0              63m<br />
    omd-os-cluster-masters-1                                  1/1     Running     0              60m<br />
    omd-os-cluster-masters-2                                  1/1     Running     0              58m<br />
    omd-os-cluster-securityconfig-update-cj7bh                0/1     Completed   0              63m<br />
    opensearch-operator-controller-manager-7448949c9b-6kzlm   2/2     Running     47 (65m ago)   18h<br />
<br />
<br />
How can I get the InnoDB Cluster working?<br />
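<br />
For reference, this is how I have been inspecting the claim while debugging (a sketch assuming the kubernetes Python client and the names above; note that with volumeBindingMode: WaitForFirstConsumer a claim is expected to stay Pending until some pod actually consumes it):<br />
<br />
    # Sketch: inspect the PVC and its events (pip install kubernetes).<br />
    from kubernetes import client, config<br />
<br />
    config.load_kube_config()<br />
    v1 = client.CoreV1Api()<br />
<br />
    pvc = v1.read_namespaced_persistent_volume_claim(&quot;innodb-volume-claim&quot;, &quot;default&quot;)<br />
    print(pvc.status.phase)  # &quot;Pending&quot; until a consumer pod is scheduled<br />
<br />
    events = v1.list_namespaced_event(<br />
        &quot;default&quot;, field_selector=&quot;involvedObject.name=innodb-volume-claim&quot;)<br />
    for ev in events.items:<br />
        print(ev.reason, ev.message)]]></description>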
            <dc:creator>Raphy Stonehorse</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 11 Nov 2025 09:56:35 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,741328,741328#msg-741328</guid>
            <title>A lot of C++ errors when invoking get_cluster() (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,741328,741328#msg-741328</link>
            <description><![CDATA[ [2025-10-28 09:05:14,779] kopf.objects         [ERROR   ] status() failed at mabing1028-1-2.mabing1028-1-instances.default.svc.cluster.local:3306: error=LogicError: Uncaught exception: std::bad_alloc<br />
Traceback (most recent call last):<br />
  File &quot;/usr/lib/mysqlsh/python-packages/mysqloperator/controller/diagnose.py&quot;, line 147, in diagnose_instance<br />
    mstatus = cluster.status({&quot;extended&quot;: 1})<br />
RuntimeError: LogicError: Uncaught exception: std::bad_alloc<br />
<br />
[2025-10-28 09:05:14,785] kopf.objects         [INFO    ] ||||do_diagnose_cluster.5.1<br />
[2025-10-28 09:05:14,963] kopf.objects         [ERROR   ] get_cluster() error for mabing1028-1.mabing1028-instances.default.svc.cluster.local:3306: error=LogicError: Uncaught exception: std::bad_alloc<br />
Traceback (most recent call last):<br />
  File &quot;/usr/lib/mysqlsh/python-packages/mysqloperator/controller/diagnose.py&quot;, line 119, in diagnose_instance<br />
    cluster = dba.get_cluster()<br />
RuntimeError: LogicError: Uncaught exception: std::bad_alloc<br />
<br />
[2025-10-28 09:05:14,965] kopf.objects         [INFO    ] ||||do_diagnose_cluster.5.1<br />
[2025-10-28 09:05:15,022] kopf.objects         [ERROR   ] get_cluster() error for mabing1028-2-2.mabing1028-2-instances.default.svc.cluster.local:3306: error=LogicError: Uncaught exception: std::bad_alloc<br />
Traceback (most recent call last):<br />
  File &quot;/usr/lib/mysqlsh/python-packages/mysqloperator/controller/diagnose.py&quot;, line 119, in diagnose_instance<br />
    cluster = dba.get_cluster()<br />
RuntimeError: LogicError: Uncaught exception: std::bad_alloc<br />
<br />
[2025-10-28 09:05:15,026] kopf.objects         [INFO    ] ||||do_diagnose_cluster.5.1<br />
[2025-10-28 09:05:15,045] kopf.objects         [ERROR   ] get_cluster() error for mabing1028-1-0.mabing1028-1-instances.default.svc.cluster.local:3306: error=LogicError: Uncaught exception: std::bad_alloc<br />
Traceback (most recent call last):<br />
  File &quot;/usr/lib/mysqlsh/python-packages/mysqloperator/controller/diagnose.py&quot;, line 119, in diagnose_instance<br />
    cluster = dba.get_cluster()<br />
RuntimeError: LogicError: Uncaught exception: std::bad_alloc<br />
<br />
====<br />
I changed logger.info to logger.exception to get the stack traces shown above.<br />
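<br />
The change is essentially this (illustrative only, not the exact operator code; logger.exception logs at ERROR level and appends the current traceback, which logger.info does not):<br />
<br />
    import logging<br />
<br />
    logging.basicConfig(level=logging.INFO)<br />
    logger = logging.getLogger(&quot;kopf.objects&quot;)<br />
<br />
    def probe():<br />
        # Stand-in for cluster.status() / dba.get_cluster() failing in mysqlsh.<br />
        raise RuntimeError(&quot;LogicError: Uncaught exception: std::bad_alloc&quot;)<br />
<br />
    try:<br />
        probe()<br />
    except RuntimeError:<br />
        # Before: logger.info printed the message only.<br />
        logger.exception(&quot;get_cluster() error&quot;)  # message plus full traceback]]></description>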
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 28 Oct 2025 09:13:05 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,741327,741327#msg-741327</guid>
            <title>malloc_consolidate() error (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,741327,741327#msg-741327</link>
            <description><![CDATA[ [2025-10-28 06:49:47,734] CM_mabing1028-1      [INFO    ] Trying to connect to a member of cluster default/mabing1028-1<br />
malloc_consolidate(): invalid chunk size]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 28 Oct 2025 06:56:28 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,741000,741000#msg-741000</guid>
            <title>What is the cause of this error? (2 replies)</title>
            <link>https://forums.mysql.com/read.php?149,741000,741000#msg-741000</link>
            <description><![CDATA[ 2025-07-21 22:21:13: Info: mysqlsh   Ver 8.3.0 for Linux on aarch64 - for MySQL 8.3.0 (MySQL Community Server (GPL)) - build 13725054 - commit_id b871a975f78bf8d8cafe15536b5a6e1507c090c7 - product_id el8-arm-64bit rpm<br />
2025-07-21 22:21:13: Info: Using credential store helper: /usr/bin/mysql-secret-store-login-path<br />
2025-07-21 22:21:13: Info: Loading startup files...<br />
2025-07-21 22:21:13: Info: Loading plugins...<br />
[2025-07-21 22:21:16,160] root                 [INFO    ] Environment provided cluster domain: cluster.local<br />
[2025-07-21 22:21:16,163] kopf.activities.star [INFO    ] MySQL Operator/operator.py=2.1.2 timestamp=2024-01-02T06:45:11 kopf=1.35.4 uid=2<br />
[2025-07-21 22:21:16,171] kopf.activities.star [INFO    ] KUBERNETES_VERSION =1.31<br />
[2025-07-21 22:21:16,171] kopf.activities.star [INFO    ] OPERATOR_VERSION   =2.1.2<br />
[2025-07-21 22:21:16,171] kopf.activities.star [INFO    ] OPERATOR_EDITION   =community<br />
[2025-07-21 22:21:16,171] kopf.activities.star [INFO    ] OPERATOR_EDITIONS  =[&#039;community&#039;, &#039;enterprise&#039;]<br />
[2025-07-21 22:21:16,171] kopf.activities.star [INFO    ] SHELL_VERSION      =8.3.0<br />
[2025-07-21 22:21:16,172] kopf.activities.star [INFO    ] DEFAULT_VERSION_TAG=8.3.0<br />
[2025-07-21 22:21:16,172] kopf.activities.star [INFO    ] SIDECAR_VERSION_TAG=8.3.0-2.1.2<br />
[2025-07-21 22:21:16,172] kopf.activities.star [INFO    ] DEFAULT_IMAGE_REPOSITORY   =container-registry.oracle.com/mysql<br />
[2025-07-21 22:21:16,183] kopf.activities.star [INFO    ] Activity &#039;on_startup&#039; succeeded.<br />
[2025-07-21 22:21:16,184] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.<br />
[2025-07-21 22:21:16,186] kopf.activities.auth [INFO    ] Activity &#039;login_via_client&#039; succeeded.<br />
[2025-07-21 22:21:16,186] kopf._core.engines.a [INFO    ] Initial authentication has finished.<br />
[2025-07-21 22:21:16,254] kopf._core.reactor.o [ERROR   ] Peering observer for mysql-operator@none has failed: (&#039;storage is (re)initializing&#039;, {&#039;kind&#039;: &#039;Status&#039;, &#039;apiVersion&#039;: &#039;v1&#039;, &#039;metadata&#039;: {}, &#039;status&#039;: &#039;Failure&#039;, &#039;message&#039;: &#039;storage is (re)initializing&#039;, &#039;reason&#039;: &#039;TooManyRequests&#039;, &#039;details&#039;: {&#039;retryAfterSeconds&#039;: 1}, &#039;code&#039;: 429})<br />
Traceback (most recent call last):<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/errors.py&quot;, line 148, in check_response<br />
    response.raise_for_status()<br />
  File &quot;/usr/lib/mysqlsh/python-packages/aiohttp/client_reqrep.py&quot;, line 1005, in raise_for_status<br />
    raise ClientResponseError(<br />
aiohttp.client_exceptions.ClientResponseError: 429, message=&#039;Too Many Requests&#039;, url=URL(&#039;<a href="https://10.233.0.1:443/apis/zalando.org/v1/clusterkopfpeerings"  rel="nofollow">https://10.233.0.1:443/apis/zalando.org/v1/clusterkopfpeerings</a>&#039;)<br />
<br />
The above exception was the direct cause of the following exception:<br />
<br />
Traceback (most recent call last):<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/aiokits/aiotasks.py&quot;, line 108, in guard<br />
    await coro<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_core/reactor/queueing.py&quot;, line 175, in watcher<br />
    async for raw_event in stream:<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/watching.py&quot;, line 82, in infinite_watch<br />
    async for raw_event in stream:<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/watching.py&quot;, line 159, in continuous_watch<br />
    objs, resource_version = await fetching.list_objs(<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/fetching.py&quot;, line 28, in list_objs<br />
    rsp = await api.get(<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/api.py&quot;, line 111, in get<br />
    response = await request(<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/auth.py&quot;, line 45, in wrapper<br />
    return await fn(*args, **kwargs, context=context)<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/api.py&quot;, line 85, in request<br />
    await errors.check_response(response)  # but do not parse it!<br />
  File &quot;/usr/lib/mysqlsh/python-packages/kopf/_cogs/clients/errors.py&quot;, line 150, in check_response<br />
    raise cls(payload, status=response.status) from e<br />
kopf._cogs.clients.errors.APIClientError: (&#039;storage is (re)initializing&#039;, {&#039;kind&#039;: &#039;Status&#039;, &#039;apiVersion&#039;: &#039;v1&#039;, &#039;metadata&#039;: {}, &#039;status&#039;: &#039;Failure&#039;, &#039;message&#039;: &#039;storage is (re)initializing&#039;, &#039;reason&#039;: &#039;TooManyRequests&#039;, &#039;details&#039;: {&#039;retryAfterSeconds&#039;: 1}, &#039;code&#039;: 429})]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 22 Jul 2025 03:45:02 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,740032,740032#msg-740032</guid>
            <title>What is the version correspondence between the operator and MySQL? (4 replies)</title>
            <link>https://forums.mysql.com/read.php?149,740032,740032#msg-740032</link>
            <description><![CDATA[ For example, if I&#039;m using MySQL 8.0.31, should I use MySQL Operator for Kubernetes 8.0.31-2.0.7 (2022-10-11, General Availability) or should I use the latest Operator for Kubernetes 8.0.40-2.0.16 (2024-10-15, General Availability)?]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Mon, 30 Mar 2026 10:32:05 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,739689,739689#msg-739689</guid>
            <title>Will the operator automatically set the value of innodb-buffer-pool-size based on resources.request.memory or resources.limit.memory? (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,739689,739689#msg-739689</link>
            <description><![CDATA[ For example, when resources.request.memory is greater than 4G, innodb-buffer-pool-size would be resources.request.memory * 0.8; otherwise it would be half of resources.request.memory.<br />
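<br />
To make the rule I have in mind concrete, here is a sketch of my expectation (my own pseudocode, not operator code):<br />
<br />
    GIB = 1024 ** 3<br />
<br />
    def expected_buffer_pool_size(request_bytes: int) -&gt; int:<br />
        # Assumed rule: 80% of the memory request above 4G, else half of it.<br />
        if request_bytes &gt; 4 * GIB:<br />
            return int(request_bytes * 0.8)<br />
        return request_bytes // 2<br />
<br />
    print(expected_buffer_pool_size(8 * GIB))  # 6871947673 (about 6.4G)<br />
    print(expected_buffer_pool_size(2 * GIB))  # 1073741824 (1G)]]></description>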
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Wed, 04 Dec 2024 09:59:41 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,726591,726591#msg-726591</guid>
            <title>Recovery after node restart (1 reply)</title>
            <link>https://forums.mysql.com/read.php?149,726591,726591#msg-726591</link>
            <description><![CDATA[ Hi,<br />
<br />
I&#039;m running a K3S cluster on a number of virtual Debian 6.1.106-3 hosts:<br />
<br />
Client Version: v1.29.9<br />
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3<br />
Server Version: v1.29.0+k3s1<br />
<br />
It&#039;s running on 3 master and 3 worker nodes and is using MetalLB for load balancing.<br />
<br />
I&#039;ve created a mysql cluster with the helm chart:<br />
<br />
helm upgrade --install mysql-cluster mysql-operator/mysql-innodbcluster --namespace test --create-namespace -f mysql-cluster/mysql-cluster-values.yaml<br />
<br />
It seems like the router is not able to recover when the worker node is restarted.<br />
<br />
kubectl logs -n test pods/mysql-cluster-router-7988469f76-fcbff --all-containers <br />
[Entrypoint] MYSQL_CREATE_ROUTER_USER is 0, Router will reuse mysqlrouter account at runtime<br />
[Entrypoint] Succesfully contacted mysql server at mysql-cluster-instances.test.svc.cluster.local:3306. Checking for cluster state.<br />
[Entrypoint] Succesfully contacted mysql server at mysql-cluster-instances.test.svc.cluster.local. Trying to bootstrap reusing account &quot;mysqlrouter&quot;.<br />
Please enter MySQL password for mysqlrouter: <br />
Error: The provided server is currently not an ONLINE member of a InnoDB cluster.<br />
<br />
I was finally able to recover the cluster by restarting all worker nodes, after which I could perform a dba.rebootClusterFromCompleteOutage()<br />
<br />
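For reference, the recovery call, roughly, in mysqlsh Python mode (dba and shell are mysqlsh globals; the host name is illustrative):<br />
<br />
# Connect to one of the instances, then rebuild the cluster from its metadata.<br />
shell.connect(&quot;root@mysql-cluster-0.mysql-cluster-instances.test.svc.cluster.local:3306&quot;)<br />
cluster = dba.reboot_cluster_from_complete_outage()<br />
print(cluster.status())<br />
<br />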
I thought that by using a cluster and the operator I would have a system able to handle a single node failure.<br />
Should the operator not be able to survive and recover from a temporary failure of one of the cluster nodes?]]></description>
            <dc:creator>Annie Blomqvist</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Sat, 28 Sep 2024 16:18:11 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,726472,726472#msg-726472</guid>
            <title>Not able to install a mysql cluster with the operator (2 replies)</title>
            <link>https://forums.mysql.com/read.php?149,726472,726472#msg-726472</link>
            <description><![CDATA[ I&#039;m running a K3S cluster on a number of virtual Debian 6.1.106-3 hosts:<br />
<br />
Client Version: v1.29.9<br />
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3<br />
Server Version: v1.29.0+k3s1<br />
<br />
It&#039;s running on 3 master and 3 worker nodes and is using MetalLB for load balancing.<br />
<br />
I&#039;ve installed the mysql operator with helm chart version 2.2.1:<br />
<br />
  helm install my-mysql-operator mysql-operator/mysql-operator  --namespace mysql-operator --create-namespace<br />
<br />
I&#039;ve then started the cluster with the example given in the output of the helm install:<br />
<br />
  helm install my-mysql-innodbcluster mysql-operator/mysql-innodbcluster -n $NAMESPACE --create-namespace --version 2.2.1 --set credentials.root.password=&quot;&gt;-0URS4F3P4SS&quot; --set tls.useSelfSigned=true<br />
<br />
The problem is that the cluster does not come up; instead it complains about the sidecar.<br />
<br />
kubectl get all -n $NAMESPACE<br />
NAME                           READY   STATUS                  RESTARTS        AGE<br />
pod/my-mysql-innodbcluster-0   0/2     Init:CrashLoopBackOff   9 (4m29s ago)   25m<br />
pod/my-mysql-innodbcluster-1   0/2     Init:CrashLoopBackOff   9 (4m2s ago)    25m<br />
pod/my-mysql-innodbcluster-2   0/2     Init:CrashLoopBackOff   9 (4m4s ago)    25m<br />
<br />
NAME                                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                    AGE<br />
service/my-mysql-innodbcluster             ClusterIP   10.43.103.55   &lt;none&gt;        3306/TCP,33060/TCP,6446/TCP,6448/TCP,6447/TCP,6449/TCP,6450/TCP,8443/TCP   25m<br />
service/my-mysql-innodbcluster-instances   ClusterIP   None           &lt;none&gt;        3306/TCP,33060/TCP,33061/TCP                                               25m<br />
<br />
NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE<br />
deployment.apps/my-mysql-innodbcluster-router   0/0     0            0           25m<br />
<br />
NAME                                                       DESIRED   CURRENT   READY   AGE<br />
replicaset.apps/my-mysql-innodbcluster-router-6bd9464bcd   0         0         0       25m<br />
<br />
NAME                                      READY   AGE<br />
statefulset.apps/my-mysql-innodbcluster   0/3     25m<br />
<br />
<br />
After startup, kubectl describe -n $NAMESPACE pod/my-mysql-innodbcluster-0 lists these events:<br />
<br />
  Normal   Scheduled               71s                default-scheduler        Successfully assigned your-namespace/my-mysql-innodbcluster-0 to spruce<br />
  Warning  FailedScheduling        75s                default-scheduler        0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.<br />
  Warning  FailedScheduling        73s                default-scheduler        0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.<br />
  Error    Logging                 74s                kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Normal   SuccessfulAttachVolume  61s                attachdetach-controller  AttachVolume.Attach succeeded for volume &quot;pvc-cf926941-fb3a-4ec0-aba7-0ea3627012f5&quot;<br />
  Normal   Created                 59s                kubelet                  Created container fixdatadir<br />
  Normal   Pulled                  59s                kubelet                  Container image &quot;container-registry.oracle.com/mysql/community-operator:9.0.1-2.2.1&quot; already present on machine<br />
  Normal   Started                 59s                kubelet                  Started container fixdatadir<br />
  Normal   Pulled                  58s                kubelet                  Container image &quot;container-registry.oracle.com/mysql/community-operator:9.0.1-2.2.1&quot; already present on machine<br />
  Normal   Created                 58s                kubelet                  Created container initconf<br />
  Normal   Started                 58s                kubelet                  Started container initconf<br />
  Error    Logging                 44s                kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Normal   Pulled                  15s (x4 over 56s)  kubelet                  Container image &quot;container-registry.oracle.com/mysql/community-server:9.0.1&quot; already present on machine<br />
  Normal   Created                 15s (x4 over 56s)  kubelet                  Created container initmysql<br />
  Normal   Started                 15s (x4 over 56s)  kubelet                  Started container initmysql<br />
  Error    Logging                 14s                kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Warning  BackOff                 13s (x6 over 54s)  kubelet                  Back-off restarting failed container initmysql in pod my-mysql-innodbcluster-0_your-namespace(ee16423a-d87a-414b-9690-a294c0686b77)<br />
<br />
<br />
After a while the following gets repeated:<br />
<br />
  Error    Logging                 2m41s                kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Error    Logging                 2m10s                kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Error    Logging                 100s                 kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Warning  BackOff                 88s (x197 over 26m)  kubelet                  Back-off restarting failed container initmysql in pod my-mysql-innodbcluster-0_your-namespace(ee16423a-d87a-414b-9690-a294c0686b77)<br />
  Error    Logging                 70s                  kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Error    Logging                 40s                  kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
  Error    Logging                 10s                  kopf                     Handler &#039;on_pod_create&#039; failed temporarily: Sidecar of my-mysql-innodbcluster-0 is not yet configured<br />
<br />
<br />
Log output that might be relevant:<br />
<br />
kubectl logs -n $NAMESPACE my-mysql-innodbcluster-0<br />
Defaulted container &quot;sidecar&quot; out of: sidecar, mysql, fixdatadir (init), initconf (init), initmysql (init)<br />
Error from server (BadRequest): container &quot;sidecar&quot; in pod &quot;my-mysql-innodbcluster-0&quot; is waiting to start: PodInitializing<br />
<br />
<br />
I would appreciate any information on what might be causing this issue.]]></description>
            <dc:creator>Annie Blomqvist</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Sat, 28 Sep 2024 11:42:57 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,726195,726195#msg-726195</guid>
            <title>MySQL Operator Not Processing InnoDBCluster Resources in Kubernetes 1.30.1 (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,726195,726195#msg-726195</link>
            <description><![CDATA[ Hello everyone,<br />
<br />
I&#039;m experiencing an issue with deploying a MySQL InnoDBCluster using the Oracle MySQL Operator in a Kubernetes cluster. Despite the operator pod running and having the necessary permissions, it isn&#039;t processing the InnoDBCluster resource, and no pods or other resources are being created in the target namespace.<br />
<br />
Environment Details:<br />
<br />
Kubernetes Version:<br />
<br />
Client Version: v1.30.3<br />
Server Version: v1.30.1<br />
MySQL Operator Version: 9.0.1-2.2.1<br />
<br />
Namespace for Operator: mysql-operator<br />
<br />
Namespace for InnoDBCluster: tn-sql<br />
<br />
Issue Description:<br />
<br />
The MySQL Operator pod is running in the mysql-operator namespace without any restarts or errors.<br />
The InnoDBCluster CustomResourceDefinition (CRD) is installed and established.<br />
All necessary secrets are created and contain the required data.<br />
The operator&#039;s service account (mysql-operator-sa) has the necessary permissions, confirmed using kubectl auth can-i commands.<br />
The InnoDBCluster resource (tn-sql-cluster) is created in the tn-sql namespace.<br />
No pods, services, or other resources are being created by the operator in the tn-sql namespace.<br />
The operator logs show no activity beyond the initial startup; no errors or exceptions are logged.<br />
Steps Taken So Far:<br />
<br />
RBAC Configuration:<br />
<br />
Created a ClusterRole (mysql-operator-cluster-admin-role) with permissions for all necessary API groups and resources.<br />
Created a ClusterRoleBinding (mysql-operator-rolebinding) linking the ClusterRole to the operator&#039;s service account.<br />
Verified that the operator&#039;s service account has the necessary permissions using kubectl auth can-i commands.<br />
Operator Deployment:<br />
<br />
Deployed the operator using an Ansible playbook (see playbook below).<br />
Set the WATCH_NAMESPACE environment variable to watch all namespaces (&quot;&quot;).<br />
Increased the operator&#039;s logging level to DEBUG.<br />
Resource Definitions:<br />
<br />
Created all necessary secrets for the cluster admin, root, metrics, router, and backup users.<br />
Deployed the InnoDBCluster resource with the required specifications.<br />
Diagnostics:<br />
<br />
Checked operator logs for any errors or reconciliation activity.<br />
Verified that the InnoDBCluster resource exists but has no status updates.<br />
Confirmed there are no network policies or resource quotas interfering.<br />
Ensured that the CRD is established and uses the correct API version.<br />
Sanitized Ansible Playbook:<br />
<br />
yaml<br />
Copy code<br />
---<br />
- name: Deploy MySQL Operator and Create InnoDBCluster with Dynamic PVCs, Router, and Prometheus Metrics<br />
  hosts: localhost<br />
  connection: local<br />
  gather_facts: no<br />
  collections:<br />
    - kubernetes.core<br />
  vars:<br />
    mysql_namespace: tn-sql<br />
    storage_class: csi-sc-cinderplugin<br />
    mysql_root_password: &quot;YourRootPassword&quot;<br />
    mysql_metrics_user: &quot;metrics&quot;<br />
    mysql_metrics_password: &quot;YourMetricsPassword&quot;<br />
    mysql_router_user: &quot;mysqlrouter&quot;<br />
    mysql_router_password: &quot;YourRouterPassword&quot;<br />
    mysql_backup_user: &quot;mysqlbackup&quot;<br />
    mysql_backup_password: &quot;YourBackupPassword&quot;<br />
    mysql_instances: 3<br />
    mysql_routers: 1<br />
    metrics_enabled: true<br />
<br />
  tasks:<br />
    - name: Create Namespaces<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: v1<br />
          kind: Namespace<br />
          metadata:<br />
            name: &quot;{{ item }}&quot;<br />
      loop:<br />
        - &quot;{{ mysql_namespace }}&quot;<br />
        - mysql-operator<br />
<br />
    - name: Create Secrets for MySQL Users<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: v1<br />
          kind: Secret<br />
          metadata:<br />
            name: tn-sql-cluster-privsecrets<br />
            namespace: &quot;{{ mysql_namespace }}&quot;<br />
          type: Opaque<br />
          stringData:<br />
            clusterAdminUsername: &quot;admin&quot;<br />
            clusterAdminPassword: &quot;{{ mysql_root_password }}&quot;<br />
            rootUser: &quot;root&quot;<br />
            rootHost: &quot;%&quot;<br />
            rootPassword: &quot;{{ mysql_root_password }}&quot;<br />
            metricsUser: &quot;{{ mysql_metrics_user }}&quot;<br />
            metricsPassword: &quot;{{ mysql_metrics_password }}&quot;<br />
<br />
    - name: Create Secrets for Router and Backup Users<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: v1<br />
          kind: Secret<br />
          metadata:<br />
            name: tn-sql-cluster-router<br />
            namespace: &quot;{{ mysql_namespace }}&quot;<br />
          type: Opaque<br />
          stringData:<br />
            routerUsername: &quot;{{ mysql_router_user }}&quot;<br />
            routerPassword: &quot;{{ mysql_router_password }}&quot;<br />
<br />
    - name: Create Secret for Backup User<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: v1<br />
          kind: Secret<br />
          metadata:<br />
            name: tn-sql-cluster-backup<br />
            namespace: &quot;{{ mysql_namespace }}&quot;<br />
          type: Opaque<br />
          stringData:<br />
            backupUser: &quot;{{ mysql_backup_user }}&quot;<br />
            backupPassword: &quot;{{ mysql_backup_password }}&quot;<br />
<br />
    - name: Create Service Account for MySQL Operator<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: v1<br />
          kind: ServiceAccount<br />
          metadata:<br />
            name: mysql-operator-sa<br />
            namespace: mysql-operator<br />
<br />
    - name: Create ClusterRole with Admin Permissions<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: rbac.authorization.k8s.io/v1<br />
          kind: ClusterRole<br />
          metadata:<br />
            name: mysql-operator-cluster-admin-role<br />
          rules:<br />
            - apiGroups: [&quot;mysql.oracle.com&quot;]<br />
              resources:<br />
                - &quot;innodbclusters&quot;<br />
                - &quot;innodbclusters/status&quot;<br />
                - &quot;mysqlbackups&quot;<br />
                - &quot;mysqlbackups/status&quot;<br />
              verbs:<br />
                - &quot;*&quot;<br />
            - apiGroups: [&quot;apps&quot;]<br />
              resources:<br />
                - &quot;statefulsets&quot;<br />
                - &quot;deployments&quot;<br />
                - &quot;daemonsets&quot;<br />
                - &quot;replicasets&quot;<br />
              verbs:<br />
                - &quot;*&quot;<br />
            - apiGroups: [&quot;&quot;]<br />
              resources:<br />
                - &quot;pods&quot;<br />
                - &quot;services&quot;<br />
                - &quot;configmaps&quot;<br />
                - &quot;secrets&quot;<br />
                - &quot;persistentvolumeclaims&quot;<br />
              verbs:<br />
                - &quot;*&quot;<br />
            - apiGroups: [&quot;batch&quot;]<br />
              resources:<br />
                - &quot;jobs&quot;<br />
                - &quot;cronjobs&quot;<br />
              verbs:<br />
                - &quot;*&quot;<br />
            - apiGroups: [&quot;events.k8s.io&quot;]<br />
              resources:<br />
                - &quot;events&quot;<br />
              verbs:<br />
                - &quot;get&quot;<br />
                - &quot;list&quot;<br />
                - &quot;watch&quot;<br />
            - apiGroups: [&quot;apiextensions.k8s.io&quot;]<br />
              resources:<br />
                - &quot;customresourcedefinitions&quot;<br />
              verbs:<br />
                - &quot;get&quot;<br />
                - &quot;list&quot;<br />
                - &quot;watch&quot;<br />
            - apiGroups: [&quot;zalando.org&quot;]<br />
              resources:<br />
                - &quot;clusterkopfpeerings&quot;<br />
              verbs:<br />
                - &quot;*&quot;<br />
<br />
    - name: Create ClusterRoleBinding for MySQL Operator<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: rbac.authorization.k8s.io/v1<br />
          kind: ClusterRoleBinding<br />
          metadata:<br />
            name: mysql-operator-rolebinding<br />
          subjects:<br />
            - kind: ServiceAccount<br />
              name: mysql-operator-sa<br />
              namespace: mysql-operator<br />
          roleRef:<br />
            kind: ClusterRole<br />
            name: mysql-operator-cluster-admin-role<br />
            apiGroup: rbac.authorization.k8s.io<br />
<br />
    - name: Deploy MySQL Operator CRDs<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        src: &quot;<a href="https://raw.githubusercontent.com/mysql/mysql-operator/9.0.1-2.2.1/deploy/deploy-crds.yaml"  rel="nofollow">https://raw.githubusercontent.com/mysql/mysql-operator/9.0.1-2.2.1/deploy/deploy-crds.yaml</a>&quot;<br />
<br />
    - name: Deploy MySQL Operator<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: apps/v1<br />
          kind: Deployment<br />
          metadata:<br />
            name: mysql-operator<br />
            namespace: mysql-operator<br />
            labels:<br />
              name: mysql-operator<br />
          spec:<br />
            replicas: 1<br />
            selector:<br />
              matchLabels:<br />
                name: mysql-operator<br />
            template:<br />
              metadata:<br />
                labels:<br />
                  name: mysql-operator<br />
              spec:<br />
                serviceAccountName: mysql-operator-sa<br />
                containers:<br />
                  - name: mysql-operator<br />
                    image: container-registry.oracle.com/mysql/community-operator:9.0.1-2.2.1<br />
                    args:<br />
                      - mysqlsh<br />
                      - --log-level=DEBUG<br />
                      - --pym<br />
                      - mysqloperator<br />
                      - operator<br />
                    env:<br />
                      - name: MYSQL_CLUSTER_DOMAIN<br />
                        value: &quot;cluster.local&quot;<br />
                      - name: MYSQL_OPERATOR_K8S_CLUSTER_DOMAIN<br />
                        value: &quot;cluster.local&quot;<br />
                      - name: WATCH_NAMESPACE<br />
                        value: &quot;&quot;<br />
                    readinessProbe:<br />
                      exec:<br />
                        command:<br />
                          - cat<br />
                          - /tmp/mysql-operator-ready<br />
                      initialDelaySeconds: 5<br />
                      periodSeconds: 10<br />
<br />
    - name: Wait for MySQL Operator to be Ready<br />
      kubernetes.core.k8s_info:<br />
        kind: Deployment<br />
        namespace: mysql-operator<br />
        name: mysql-operator<br />
      register: operator_deployment<br />
      until: operator_deployment.resources[0].status.availableReplicas | default(0) | int &gt;= 1<br />
      retries: 20<br />
      delay: 15<br />
<br />
    - name: Deploy PersistentVolumeClaims for MySQL Instances<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: v1<br />
          kind: PersistentVolumeClaim<br />
          metadata:<br />
            name: tn-sql-cluster-pvc-{{ item }}<br />
            namespace: &quot;{{ mysql_namespace }}&quot;<br />
          spec:<br />
            accessModes: [&quot;ReadWriteOnce&quot;]<br />
            resources:<br />
              requests:<br />
                storage: 20Gi<br />
            storageClassName: &quot;{{ storage_class }}&quot;<br />
      loop: &quot;{{ range(0, mysql_instances) | list }}&quot;<br />
<br />
    - name: Deploy InnoDBCluster<br />
      kubernetes.core.k8s:<br />
        state: present<br />
        definition:<br />
          apiVersion: mysql.oracle.com/v2<br />
          kind: InnoDBCluster<br />
          metadata:<br />
            name: tn-sql-cluster<br />
            namespace: &quot;{{ mysql_namespace }}&quot;<br />
          spec:<br />
            secretName: tn-sql-cluster-privsecrets<br />
            tlsUseSelfSigned: true<br />
            instances: &quot;{{ mysql_instances }}&quot;<br />
            router:<br />
              instances: &quot;{{ mysql_routers }}&quot;<br />
            backup:<br />
              enabled: true<br />
            metricsExporter:<br />
              enabled: &quot;{{ metrics_enabled }}&quot;<br />
              user: &quot;{{ mysql_metrics_user }}&quot;<br />
              password: &quot;{{ mysql_metrics_password }}&quot;<br />
              serviceMonitor:<br />
                enabled: true<br />
<br />
    - name: Wait for InnoDBCluster to be Ready<br />
      kubernetes.core.k8s_info:<br />
        api_version: &quot;mysql.oracle.com/v2&quot;<br />
        kind: InnoDBCluster<br />
        namespace: &quot;{{ mysql_namespace }}&quot;<br />
        name: tn-sql-cluster<br />
      register: cluster_info<br />
      retries: 30<br />
      delay: 20<br />
      until: cluster_info.resources | length &gt; 0 and cluster_info.resources[0].get(&#039;status&#039;, {}).get(&#039;cluster&#039;, {}).get(&#039;status&#039;) == &quot;ONLINE&quot;<br />
<br />
Operator Logs:<br />
<br />
The operator logs show only the initial startup information and do not display any reconciliation activity or errors. Here&#039;s a snippet:<br />
<br />
css<br />
Copy code<br />
[2024-09-18 05:17:19,091] kopf.activities.star [INFO    ] Activity &#039;on_startup&#039; succeeded.<br />
[2024-09-18 05:17:19,092] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.<br />
[2024-09-18 05:17:19,094] kopf.activities.auth [INFO    ] Activity &#039;login_via_client&#039; succeeded.<br />
[2024-09-18 05:17:19,094] kopf._core.engines.a [INFO    ] Initial authentication has finished.<br />
InnoDBCluster Resource Status:<br />
<br />
yaml<br />
Copy code<br />
Name:         tn-sql-cluster<br />
Namespace:    tn-sql<br />
API Version:  mysql.oracle.com/v2<br />
Kind:         InnoDBCluster<br />
Metadata:<br />
  Creation Timestamp:  2024-09-18T05:33:41Z<br />
  Generation:          1<br />
  Resource Version:    26739894<br />
  UID:                 63a0bd37-e553-4100-a88f-61e7a41254d1<br />
Spec:<br />
  Base Server Id:      1000<br />
  Instances:           3<br />
  Router:<br />
    Instances:         1<br />
  Secret Name:         tn-sql-cluster-privsecrets<br />
  Tls Use Self Signed: true<br />
Events:                &lt;none&gt;<br />
Additional Diagnostic Outputs:<br />
<br />
CRD Version:<br />
<br />
arduino<br />
Copy code<br />
$ kubectl get crd innodbclusters.mysql.oracle.com -o jsonpath=&#039;{.spec.versions[*].name}&#039;<br />
v2<br />
Operator Pod Status:<br />
<br />
sql<br />
Copy code<br />
$ kubectl get pods -n mysql-operator -o wide<br />
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES<br />
mysql-operator-5878c5fdcc-f9dft   1/1     Running   0          11m   10.244.3.220   k8s-node-2        &lt;none&gt;           &lt;none&gt;<br />
Events in tn-sql Namespace:<br />
<br />
csharp<br />
Copy code<br />
$ kubectl get events -n tn-sql --sort-by=&#039;.metadata.creationTimestamp&#039;<br />
No events found.<br />
RBAC Verification:<br />
<br />
csharp<br />
Copy code<br />
$ kubectl auth can-i get innodbclusters.mysql.oracle.com --as=system:serviceaccount:mysql-operator:mysql-operator-sa -n tn-sql<br />
yes<br />
Network Policies and Resource Quotas:<br />
<br />
sql<br />
Copy code<br />
$ kubectl get networkpolicy -n tn-sql<br />
No resources found.<br />
<br />
$ kubectl get resourcequota -n tn-sql<br />
No resources found.<br />
Attempts to Resolve the Issue:<br />
<br />
Set WATCH_NAMESPACE to Watch All Namespaces:<br />
<br />
Confirmed that the WATCH_NAMESPACE environment variable is set to &quot;&quot; in the operator deployment.<br />
Increased Operator Logging Level:<br />
<br />
Set the logging level to DEBUG but still did not observe any reconciliation activity or errors in the logs.<br />
Verified RBAC Configurations:<br />
<br />
Ensured that the operator&#039;s service account has the necessary permissions.<br />
Tested with a Minimal InnoDBCluster:<br />
<br />
Deployed a minimal cluster configuration but experienced the same issue.<br />
Request for Assistance:<br />
<br />
I&#039;m seeking help to determine why the MySQL Operator isn&#039;t processing the InnoDBCluster resource. Any insights or suggestions would be greatly appreciated.<br />
<br />
Questions:<br />
<br />
Has anyone successfully deployed the Oracle MySQL Operator on Kubernetes version 1.30.x?<br />
Are there any known compatibility issues between MySQL Operator version 9.0.1-2.2.1 and Kubernetes 1.30.x?<br />
Could there be any hidden configurations or steps that I&#039;m missing?<br />
Thank you in advance for your help!]]></description>
            <dc:creator>Noel Ashford</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 19 Sep 2024 00:59:47 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,724828,724828#msg-724828</guid>
            <title>MySQL Operator - Backup S3 (CloudFlare) (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,724828,724828#msg-724828</link>
            <description><![CDATA[ Dear community, I am not sure how to use backup in combination with s3.<br />
The values.yaml file also did not hint much at how to use it:<br />
<br />
apiVersion: mysql.oracle.com/v2<br />
kind: InnoDBCluster<br />
metadata:<br />
  name: my-db<br />
  namespace: mysql-operator<br />
spec:<br />
  secretName: my-db-secrets<br />
  tlsUseSelfSigned: true<br />
  instances: 2<br />
  router:<br />
    instances: 1<br />
<br />
  backupProfiles:<br />
  - name: my-db-s3-backup-profile<br />
    dumpInstance:<br />
      dumpOptions:<br />
        excludeSchemas: [&quot;information_schema&quot;]<br />
      storage:<br />
        s3:<br />
          bucketName: my-db-backup<br />
          config: my-db-backup-secrets<br />
          prefix: mysql<br />
          endpoint: <a href="https://XXXX.eu.r2.cloudflarestorage.com"  rel="nofollow">https://XXXX.eu.r2.cloudflarestorage.com</a><br />
  backupSchedules:<br />
    - name: my-db-daily-s3-backup<br />
      schedule: &quot;27 12 * * *&quot; # daily at 12:27<br />
      backupProfileName: my-db-s3-backup-profile<br />
      deleteBackupData: false<br />
      enabled: true<br />
---<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: my-db-backup-secrets<br />
  namespace: mysql-operator<br />
type: Opaque<br />
stringData:<br />
  awsAccessKeyId: XXX<br />
  awsSecretAccessKey: XXX<br />
<br />
<br />
FYI: the target is to use S3 provided by Cloudflare<br />
<a href="https://www.cloudflare.com/en-gb/developer-platform/r2/"  rel="nofollow">https://www.cloudflare.com/en-gb/developer-platform/r2/</a><br />
<br />
FYI: the Helm chart values <a href="https://github.com/mysql/mysql-operator/blob/trunk/helm/mysql-innodbcluster/values.yaml"  rel="nofollow">https://github.com/mysql/mysql-operator/blob/trunk/helm/mysql-innodbcluster/values.yaml</a> seem to be missing a : at line 154 for endpoint?<br />
<br />
thank you for your help in advance!]]></description>
            <dc:creator>Stefan Walther</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Mon, 17 Jun 2024 12:43:00 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,724042,724042#msg-724042</guid>
            <title>Error when accessing a single storage volume from multiple MySQL pods (2 replies)</title>
            <link>https://forums.mysql.com/read.php?149,724042,724042#msg-724042</link>
            <description><![CDATA[ What happens is this: I need a MySQL deployment in which the replicas access the same storage as a shared datastore. When creating a MySQL Deployment with more than one replica, only one replica becomes healthy, because only one pod at a time can lock the &quot;ibdata1&quot; file. Checking the logs of the failing pod shows &quot;Unable to lock ./ibdata1 error: 11&quot;. Please give me an alternative way to access the same storage from two pods.<br />
<br />
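The error is file locking at work: the first mysqld takes an exclusive lock on ibdata1, and any second locker gets errno 11 (EAGAIN). A minimal demonstration of the same effect (a sketch using flock; InnoDB&#039;s actual locking differs in detail but fails the same way):<br />
<br />
import errno, fcntl<br />
<br />
first = open(&quot;/tmp/ibdata1-demo&quot;, &quot;w&quot;)   # stands in for pod-0&#039;s mysqld<br />
second = open(&quot;/tmp/ibdata1-demo&quot;, &quot;w&quot;)  # stands in for pod-1&#039;s mysqld<br />
<br />
fcntl.flock(first, fcntl.LOCK_EX | fcntl.LOCK_NB)   # the first locker wins<br />
try:<br />
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)<br />
except OSError as exc:<br />
    print(exc.errno == errno.EAGAIN)  # True: the &quot;error: 11&quot; from the pod log<br />
<br />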
Recreate the issue: Create PV and PVC<br />
<br />
apiVersion: v1<br />
kind: PersistentVolume<br />
metadata:<br />
  name: mysql-pv<br />
  labels:<br />
    type: local<br />
spec:<br />
  persistentVolumeReclaimPolicy: Retain<br />
  capacity:<br />
    storage: 1Gi<br />
  accessModes:<br />
  - ReadWriteOnce<br />
  hostPath:<br />
    path: &quot;/mnt/data&quot;<br />
---<br />
apiVersion: v1<br />
kind: PersistentVolumeClaim<br />
metadata:<br />
  name: mysql-pvc<br />
spec:<br />
  accessModes:<br />
  - ReadWriteOnce<br />
  resources:<br />
    requests:<br />
      storage: 1Gi<br />
Create the mysql deployment<br />
<br />
apiVersion: apps/v1<br />
kind: Deployment<br />
metadata:<br />
  name: mysql<br />
spec:<br />
  replicas: 2<br />
  selector:<br />
    matchLabels:<br />
      app: mysql<br />
  template:<br />
    metadata:<br />
      labels:<br />
        app: mysql<br />
    spec:<br />
      containers:<br />
      - image: mysql:latest<br />
        name: mysql<br />
        env:<br />
        - name: MYSQL_ROOT_PASSWORD<br />
          value: pwd<br />
        ports:<br />
        - containerPort: 3306<br />
        volumeMounts:<br />
        - name: mysql-storage<br />
          mountPath: /var/lib/mysql<br />
      volumes:<br />
      - name: mysql-storage<br />
        persistentVolumeClaim:<br />
          claimName: mysql-pvc<br />
After a few seconds, you can see both pods running (kubectl get pods).<br />
Then open a bash shell in the first pod: kubectl exec --stdin --tty mysql-pod-name-0 -- /bin/bash. Then start the mysql client in that shell: mysql -u root -ppwd. Now you can access MySQL; play with it and create a database for reference.<br />
<br />
Then exit from pod-0 and open a bash shell in the second pod: kubectl exec --stdin --tty mysql-pod-name-1 -- /bin/bash. Now try to open MySQL there with mysql -u root -ppwd. You will get an error like<br />
<br />
bash-4.4# mysql -u root -ppwd<br />
mysql: [Warning] Using a password on the command line interface can be insecure.<br />
ERROR 2002 (HY000): Can&#039;t connect to local MySQL server through socket &#039;/var/run/mysqld/mysqld.sock&#039; (2<br />
Alternatively, try deleting pod-0 (the working pod) with kubectl delete pod pod_name_0. You will then see a new pod (call it pod-2) get created. Repeat the same procedure to access mysql from both pods: accessing mysql from pod-1 now works with the already created db, while pod-2 exposes the same error we observed in pod-1 earlier.]]></description>
            <dc:creator>Sivakajan Sivaparan</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Fri, 31 May 2024 11:02:35 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,723951,723951#msg-723951</guid>
            <title>Too many connection errors while connecting to InnoDB Cluster running on EKS (3 replies)</title>
            <link>https://forums.mysql.com/read.php?149,723951,723951#msg-723951</link>
            <description><![CDATA[ Hi All,<br />
<br />
We have installed mysql-operator and mysql-innodbcluster on EKS using the instructions provided in the documentation:<br />
<br />
<a href="https://dev.mysql.com/doc/mysql-operator/en/mysql-operator-innodbcluster.html"  rel="nofollow">https://dev.mysql.com/doc/mysql-operator/en/mysql-operator-innodbcluster.html</a>.<br />
<br />
After the installation is successful, we are updating the mysql service to a LoadBalancer type and trying to access the cluster using the LB Name and port 3306.<br />
<br />
It worked fine initially. But after some time it started giving these errors:<br />
<br />
ERROR 1129 (HY000): Too many connection errors from xxxx:xxxx<br />
<br />
I tried increasing max_connections, flushing the hosts, etc. Nothing helped.<br />
I bounced the pods on the statefulset as well.<br />
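<br />
Roughly what I ran (a sketch with mysql-connector-python; host and credentials are placeholders):<br />
<br />
import mysql.connector<br />
<br />
conn = mysql.connector.connect(host=&quot;my-lb-hostname&quot;, port=3306,<br />
                               user=&quot;root&quot;, password=&quot;...&quot;)<br />
cur = conn.cursor()<br />
cur.execute(&quot;SET GLOBAL max_connections = 1000&quot;)<br />
cur.execute(&quot;FLUSH HOSTS&quot;)  # clears the blocked-host cache behind ERROR 1129<br />
conn.close()<br />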
<br />
Any thoughts on what could be going wrong ?]]></description>
            <dc:creator>Vyaghri Kasibhatla</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 02 May 2024 11:54:48 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,723763,723763#msg-723763</guid>
            <title>MySQL Router performance (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,723763,723763#msg-723763</link>
            <description><![CDATA[ Hi, I have a 3-node Kubernetes cluster, and inside it a MySQL InnoDB cluster deployed with the Kubernetes operator. When I run 3 MySQL servers with 3 routers and run my tests, performance is worse than with 1 MySQL server without the router. When I run the tests directly against one of the MySQL servers behind the router, the results are also better. I thought that because I have 3 servers the results would be better, but they aren&#039;t. For example, with 300 threads over 10 seconds I get 77510 read queries with the routers and 3 MySQL servers, versus 81000 in 10 seconds with one MySQL server and no router. (All the databases use default settings except max_connections, and I am using the latest version of MySQL and Router, 8.3.0.) Also, RAM and CPU on our nodes are not consumed beyond 60%.<br />
Thanks.]]></description>
            <dc:creator>netadmin netadmin</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 18 Apr 2024 19:27:44 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,723659,723659#msg-723659</guid>
            <title>deleteBackupData not work? (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,723659,723659#msg-723659</link>
            <description><![CDATA[ Is it that I&#039;m not using it correctly, or is there a bug? When I deleted the MBK resources, the backup files in Minio were not removed.<br />
<br />
The MySQLBackup (mbk) resource:<br />
########<br />
apiVersion: mysql.oracle.com/v2<br />
kind: MySQLBackup<br />
metadata:<br />
  name: a-cool-one-off-mgr0410<br />
spec:<br />
  clusterName: mgr-0409-newop<br />
  backupProfile:<br />
    name: s3-backup-profile<br />
    dumpInstance:<br />
      dumpOptions:<br />
      storage:<br />
        s3:<br />
          bucketName: username<br />
          prefix: mgr-backup<br />
          config: s3-secret<br />
          endpoint: <a href="http://xxxx"  rel="nofollow">http://xxxx</a><br />
<br />
  deleteBackupData: true]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Wed, 10 Apr 2024 09:59:14 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,723645,723645#msg-723645</guid>
            <title>MySQL Operator not self-healing innodb cluster (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,723645,723645#msg-723645</link>
            <description><![CDATA[ Hello, we&#039;ve been experimenting with the community edition of the MySQL Operator and deployed an InnoDB cluster. Everything went smoothly initially, but recently, two MySQL pods failed to start up, and the MySQL router couldn&#039;t connect to our MySQL instance. Consequently, our applications lost access to the database. <br />
Further investigation revealed incomplete data replication among the pods. <br />
<br />
Has anyone encountered a similar issue? Shouldn&#039;t the operator handle such operations/failures? <br />
<br />
Additionally, any suggestions on resolving this problem would be helpful. Thank you in advance.<br />
<br />
Snippet from our mysql-operator logs:<br />
<br />
[2024-04-09 08:31:19,327] kopf.objects         [INFO    ] mysql: all={&lt;MySQLPod mysql-1&gt;, &lt;MySQLPod mysql-2&gt;, &lt;MySQLPod mysql-0&gt;}  members={&lt;MySQLPod mysql-0&gt;, &lt;MySQLPod mysql-2&gt;, &lt;MySQLPod mysql-1&gt;}  online=set()  offline={&lt;MySQLPod mysql-0&gt;, &lt;MySQLPod mysql-2&gt;, &lt;MySQLPod mysql-1&gt;}  unsure=set()<br />
[2024-04-09 08:31:19,730] kopf.objects         [INFO    ] cluster probe: status=ClusterDiagStatus.OFFLINE online=[]<br />
[2024-04-09 08:31:19,932] kopf.objects         [INFO    ] Cluster cannot be restored because there are unreachable pods: retrying after 5 seconds<br />
[2024-04-09 08:31:21,994] kopf.objects         [WARNING ] Patching failed with inconsistencies: ((&#039;remove&#039;, (&#039;status&#039;, &#039;kopf&#039;), {&#039;dummy&#039;: &#039;2024-04-09T08:31:21.670928&#039;}, None),)<br />
[2024-04-09 08:31:22,113] kopf.objects         [INFO    ] Handler &#039;on_pod_event&#039; succeeded.<br />
[2024-04-09 08:31:23,402] kopf.objects         [WARNING ] Patching failed with inconsistencies: ((&#039;remove&#039;, (&#039;status&#039;, &#039;kopf&#039;), {&#039;dummy&#039;: &#039;2024-04-09T08:31:23.217513&#039;}, None),)<br />
[2024-04-09 08:31:23,523] kopf.objects         [INFO    ] Handler &#039;on_pod_event&#039; succeeded.<br />
[2024-04-09 08:31:23,642] kopf.objects         [ERROR   ] Handler &#039;on_pod_delete&#039; failed temporarily: mysql busy. lock_owner=mysql-0 owner_context=n/a lock_created_at=2024-04-09T08:31:22.539191<br />
[2024-04-09 08:31:23,805] kopf.objects         [WARNING ] Patching failed with inconsistencies: ((&#039;remove&#039;, (&#039;status&#039;, &#039;kopf&#039;), {&#039;progress&#039;: {&#039;on_pod_delete&#039;: {&#039;started&#039;: &#039;2024-02-18T09:29:44.888978&#039;, &#039;stopped&#039;: None, &#039;delayed&#039;: &#039;2024-04-09T08:31:33.643064&#039;, &#039;purpose&#039;: &#039;delete&#039;, &#039;retries&#039;: 276494, &#039;success&#039;: False, &#039;failure&#039;: False, &#039;message&#039;: &#039;mysql busy. lock_owner=mysql-0 owner_context=n/a lock_created_at=2024-04-09T08:31:22.539191&#039;, &#039;subrefs&#039;: None}}}, None),)<br />
[2024-04-09 08:31:23,923] kopf.objects         [INFO    ] Handler &#039;on_pod_event&#039; succeeded.<br />
[2024-04-09 08:31:25,026] kopf.objects         [INFO    ] mysql busy. lock_owner=mysql-0 owner_context=n/a lock_created_at=2024-04-09T08:31:22.539191: retrying after 10 seconds<br />
[2024-04-09 08:31:25,877] kopf.objects         [INFO    ] Could not connect to mysql-2.mysql-instances.infra.svc.cluster.local:3306: error=MySQL Error (2003): mysqlsh.connect_dba: Can&#039;t connect to MySQL server on &#039;mysql-2.mysql-instances.infra.svc.cluster.local:3306&#039; (113)<br />
[2024-04-09 08:31:26,033] kopf.objects         [INFO    ] mysql-2.mysql-instances.infra.svc.cluster.local:3306: pod.phase=Running  deleting=True<br />
[2024-04-09 08:31:26,035] kopf.objects         [INFO    ] diag instance mysql-2 --&gt; InstanceDiagStatus.OFFLINE quorum=None gtid_executed=None<br />
[2024-04-09 08:31:29,376] kopf.objects         [INFO    ] Could not connect to mysql-0.mysql-instances.infra.svc.cluster.local:3306: error=MySQL Error (2003): mysqlsh.connect_dba: Can&#039;t connect to MySQL server on &#039;mysql-0.mysql-instances.infra.svc.cluster.local:3306&#039; (113)<br />
[2024-04-09 08:31:29,619] kopf.objects         [INFO    ] mysql-0.mysql-instances.infra.svc.cluster.local:3306: pod.phase=Running  deleting=True<br />
[2024-04-09 08:31:29,621] kopf.objects         [INFO    ] diag instance mysql-0 --&gt; InstanceDiagStatus.OFFLINE quorum=None gtid_executed=None<br />
[2024-04-09 08:31:29,982] kopf.objects         [INFO    ] get_cluster() error for mysql-1.mysql-instances.infra.svc.cluster.local:3306: error=Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)<br />
[2024-04-09 08:31:29,986] kopf.objects         [INFO    ] diag instance mysql-1 --&gt; InstanceDiagStatus.OFFLINE quorum=None gtid_executed=09eb7e8d-821a-11ee-88eb-5a412442a166:1-21,<br />
0a387b50-821a-11ee-89ad-3a757c649278:1-10,<br />
476b478a-821a-11ee-9b2b-5a412442a166:1-1368775:2340682-2340727,<br />
476bbbd0-821a-11ee-9b2b-5a412442a166:1-44,<br />
9bf7ae20-af14-11ee-9d0f-9e5a8ec4b826:1-537496:1000478-1001192,<br />
9bf7ba05-af14-11ee-9d0f-9e5a8ec4b826:1-57<br />
[2024-04-09 08:31:29,988] kopf.objects         [INFO    ] mysql: all={&lt;MySQLPod mysql-2&gt;, &lt;MySQLPod mysql-0&gt;, &lt;MySQLPod mysql-1&gt;}  members={&lt;MySQLPod mysql-0&gt;, &lt;MySQLPod mysql-2&gt;, &lt;MySQLPod mysql-1&gt;}  online=set()  offline={&lt;MySQLPod mysql-0&gt;, &lt;MySQLPod mysql-2&gt;, &lt;MySQLPod mysql-1&gt;}  unsure=set()<br />
[2024-04-09 08:31:30,275] kopf.objects         [INFO    ] cluster probe: status=ClusterDiagStatus.OFFLINE online=[]<br />
[2024-04-09 08:31:30,277] kopf.objects         [INFO    ] ATTEMPTING CLUSTER REPAIR<br />
[2024-04-09 08:31:30,421] kopf.objects         [ERROR   ] Handler &#039;on_pod_delete&#039; failed temporarily: Cluster cannot be restored because there are unreachable pods<br />
[2024-04-09 08:31:30,688] kopf.objects         [WARNING ] Patching failed with inconsistencies: ((&#039;remove&#039;, (&#039;status&#039;, &#039;kopf&#039;), {&#039;progress&#039;: {&#039;on_pod_delete&#039;: {&#039;started&#039;: &#039;2024-03-08T09:21:31.041514&#039;, &#039;stopped&#039;: None, &#039;delayed&#039;: &#039;2024-04-09T08:31:35.421718&#039;, &#039;purpose&#039;: &#039;delete&#039;, &#039;retries&#039;: 255087, &#039;success&#039;: False, &#039;failure&#039;: False, &#039;message&#039;: &#039;Cluster cannot be restored because there are unreachable pods&#039;, &#039;subrefs&#039;: None}}}, None),)<br />
[2024-04-09 08:31:30,809] kopf.objects         [INFO    ] Handler &#039;on_pod_event&#039; succeeded.<br />
[2024-04-09 08:31:34,119] kopf.objects         [WARNING ] Patching failed with inconsistencies: ((&#039;remove&#039;, (&#039;status&#039;, &#039;kopf&#039;), {&#039;dummy&#039;: &#039;2024-04-09T08:31:33.644117&#039;}, None),)<br />
[2024-04-09 08:31:34,237] kopf.objects         [INFO    ] Handler &#039;on_pod_event&#039; succeeded.<br />
[2024-04-09 08:31:35,431] kopf.objects         [INFO    ] mysql busy. lock_owner=mysql-2 owner_context=n/a lock_created_at=2024-04-09T08:31:34.541457: retrying after 10 seconds<br />
[2024-04-09 08:31:35,458] kopf.objects         [INFO    ] get_cluster() error for mysql-1.mysql-instances.infra.svc.cluster.local:3306: error=Shell Error (51314): Dba.get_cluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)<br />
[2024-04-09 08:31:35,463] kopf.objects         [INFO    ] diag instance mysql-1 --&gt; InstanceDiagStatus.OFFLINE quorum=None gtid_executed=09eb7e8d-821a-11ee-88eb-5a412442a166:1-21,]]></description>
            <dc:creator>Keegan Bantom</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 09 Apr 2024 08:51:33 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,723281,723281#msg-723281</guid>
            <title>MySQL InnoDB backup using flush tables (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,723281,723281#msg-723281</link>
            <description><![CDATA[ Hi All<br />
<br />
I am trying to design a backup strategy that backs up the MySQL files using a storage snapshot. The server primarily has InnoDB engine files. The MySQL documentation says that FLUSH TABLES WITH READ LOCK will flush table contents to disk for InnoDB but keep the tables open, and that it blocks write queries on the InnoDB tables. I see the same point repeated in the replication setup documentation as well. Yet even though the same lock command is executed there, it recommends shutting down the server when InnoDB files are present.<br />
<br />
What could still be writing to the InnoDB files and making the data inconsistent, even when write queries are blocked?<br />
<br />
<br />
1. <a href="https://dev.mysql.com/doc/refman/8.0/en/flush.html"  rel="nofollow">https://dev.mysql.com/doc/refman/8.0/en/flush.html</a><br />
<br />
Snippet from above article<br />
<br />
&quot;The descriptions here that indicate tables are flushed by closing them apply differently for InnoDB, which flushes table contents to disk but leaves them open. This still permits table files to be copied while the tables are open, as long as other activity does not modify them. &quot;<br />
<br />
<br />
<br />
2. <a href="https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html"  rel="nofollow">https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html</a><br />
<br />
Snippet from above article<br />
<br />
If you are using InnoDB tables, and also to get the most consistent results with a raw data snapshot, shut down the source server during the process, as follows:<br />
<br />
Thanks<br />
Praveen]]></description>
            <dc:creator>Praveen Kumar B A</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 12 Mar 2024 14:10:51 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,723135,723135#msg-723135</guid>
            <title>MySQL InnoDB backup using flush tables (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,723135,723135#msg-723135</link>
            <description><![CDATA[ Hi All<br />
<br />
I am trying to design a backup strategy that backs up the MySQL files using a storage snapshot. The server primarily has InnoDB engine files. The MySQL documentation says that FLUSH TABLES WITH READ LOCK will flush table contents to disk for InnoDB but keep the tables open, and that it blocks write queries on the InnoDB tables. I see the same point repeated in the replication setup documentation as well. Yet even though the same lock command is executed there, it recommends shutting down the server when InnoDB files are present.<br />
<br />
What could still be writing to the InnoDB files and making the data inconsistent, even when write queries are blocked?<br />
<br />
<br />
1. <a href="https://dev.mysql.com/doc/refman/8.0/en/flush.html"  rel="nofollow">https://dev.mysql.com/doc/refman/8.0/en/flush.html</a><br />
<br />
Snippet from above article<br />
<br />
&quot;The descriptions here that indicate tables are flushed by closing them apply differently for InnoDB, which flushes table contents to disk but leaves them open. This still permits table files to be copied while the tables are open, as long as other activity does not modify them. &quot;<br />
<br />
<br />
<br />
2. <a href="https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html"  rel="nofollow">https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html</a><br />
<br />
Snippet from above article<br />
<br />
If you are using InnoDB tables, and also to get the most consistent results with a raw data snapshot, shut down the source server during the process, as follows:<br />
<br />
Thanks<br />
Guru]]></description>
            <dc:creator>GURU PRASHANTH THANAKODI</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Mon, 04 Mar 2024 14:34:57 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,722284,722284#msg-722284</guid>
            <title>MySQL enterprise Edition license (2 replies)</title>
            <link>https://forums.mysql.com/read.php?149,722284,722284#msg-722284</link>
            <description><![CDATA[ We’re thinking of virtualize a MySQL Enterprise database on VMware and we’d like to know the correct way to license it.<br />
<br />
There are four servers running the VMware environment, and one VM (with MySQL Enterprise) that can run on any of the servers, depending on the circumstances. Should we have to purchase four licenses (one for each server the VM can run in), or just one for the VM?]]></description>
            <dc:creator>Maqsood Pasha</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Mon, 19 Feb 2024 12:34:45 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,720080,720080#msg-720080</guid>
            <title>Unable to start metrics sidecar innodbcluster (1 reply)</title>
            <link>https://forums.mysql.com/read.php?149,720080,720080#msg-720080</link>
            <description><![CDATA[ I recently upgraded the mysql-operator from 8.0.33 to 8.3.0 to use the added metrics feature for prometheus monitoring. I want to deploy the sidecars in my existing innodbclusters in kubernetes. The operator is updated and the existing innodbclusters seem to deploy with their previous manifest. <br />
<br />
However, when applying the manifest with a metrics object, the metrics sidecar keeps crashing. I also tried this with a new test cluster. The applied manifest for this cluster:<br />
<br />
apiVersion: mysql.oracle.com/v2<br />
kind: InnoDBCluster<br />
metadata:<br />
  name: ictest<br />
  namespace: testing<br />
spec:<br />
  secretName: ictest-secret<br />
  tlsUseSelfSigned: true<br />
  datadirVolumeClaimTemplate:<br />
    accessModes:<br />
    - ReadWriteOnce<br />
    storageClassName: ceph-nvme-block<br />
  metrics:<br />
    enable: true<br />
    image: prom/mysqld-exporter<br />
---<br />
apiVersion: v1<br />
kind: Secret<br />
metadata:<br />
  name: ictest-secret<br />
  namespace: testing<br />
stringData:<br />
  rootUser: mysql<br />
  rootHost: &quot;%&quot;<br />
  rootPassword: pass<br />
<br />
<br />
Apart from that, the other containers seem functional and the cluster appears operational.<br />
<br />
Is something wrong with this InnoDBCluster configuration? Can someone share their experience with correctly deploying the sidecar?<br />
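<br />
Update: one thing I still want to rule out (an assumption on my part, not something I&#039;ve confirmed): &quot;prom/mysqld-exporter&quot; without a tag pulls :latest, and mysqld_exporter changed its command-line flags in v0.15.0, so a sidecar started with pre-0.15 arguments could crash-loop. A pinned-tag sketch:<br />
<br />
  metrics:<br />
    enable: true<br />
    image: prom/mysqld-exporter:v0.14.0  # pinned; the exact tag to use is my guess]]></description>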
            <dc:creator>Thomas de Gier</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 01 Feb 2024 08:46:06 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,713543,713543#msg-713543</guid>
            <title>How to setup Database version control for MySQL (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,713543,713543#msg-713543</link>
            <description><![CDATA[ Hi,<br />
 <br />
How do I set up database version control for MySQL? Please advise.<br />
<br />
I am using the WAMP server for PHP web application development.<br />
<br />
Thanks]]></description>
            <dc:creator>Shanthini G</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Mon, 08 Jan 2024 11:52:47 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710692,710692#msg-710692</guid>
            <title>Does this operator support mariadb? I saw skip_log_error in configuration (2 replies)</title>
            <link>https://forums.mysql.com/read.php?149,710692,710692#msg-710692</link>
            <description><![CDATA[ 00-basic.cnf: |<br />
    # Basic configuration.<br />
    # Do not edit.<br />
    [mysqld]<br />
    plugin_load_add=auth_socket.so<br />
    loose_auth_socket=FORCE_PLUS_PERMANENT<br />
    skip_log_error<br />
    log_error_verbosity=3<br />
<br />
#####<br />
skip_log_error is a MariaDB config item, I think.]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Wed, 22 Nov 2023 15:46:20 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710688,710688#msg-710688</guid>
            <title>Does the operator support multiple replicas? I found that the replica field is not exposed in values.yaml (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,710688,710688#msg-710688</link>
            <description><![CDATA[ The default value of replica is 1.]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Tue, 21 Nov 2023 02:15:20 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710672,710672#msg-710672</guid>
            <title>when I use kubectl get ic, the ROUTERS column is empty (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,710672,710672#msg-710672</link>
            <description><![CDATA[ is this correct? and I can see the router&#039;s pod.I think the &#039;ROUTERS&#039; column exists, should show sth, something should I fix?<br />
===<br />
k get ic<br />
NAME      STATUS   ONLINE   INSTANCES   ROUTERS   AGE<br />
mgr1116   ONLINE   1        1                     18h<br />
mgr1117   ONLINE   1        1                     6h50m<br />
<br />
===<br />
<br />
k get po |grep route<br />
mgr1116-router-7c67d57d77-x688h                 1/1     Running     0               18h<br />
mgr1117-router-57f579fcf8-jfg95                 1/1     Running     0               6h46m]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Fri, 17 Nov 2023 09:07:14 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710447,710447#msg-710447</guid>
            <title>Your operator cannot adapt to multiple architectures (1 reply)</title>
            <link>https://forums.mysql.com/read.php?149,710447,710447#msg-710447</link>
            <description><![CDATA[ There is a serious problem here. Your operator cannot adapt to multiple architectures because your image has a special flag - aarch64 under arm64. When I deploy it on an arm64 machine, I will still pull the amd64 image. , I don&#039;t know if you understand what I said.<br />
=====================================<br />
<br />
your images:<br />
<a href="https://container-registry.oracle.com/ords/f?p=113:4:115719165018774:::RP,4:P4_REPOSITORY,AI_REPOSITORY,P4_REPOSITORY_NAME,AI_REPOSITORY_NAME:1504,1504,MySQL%20Operator%20for%20Kubernetes,MySQL%20Operator%20for%20Kubernetes&amp;cs=3D2OkKt3XK5lfiJTde5Ah8ueejHm-VsPYuVJdSkPNB5oq0cqc1bTVw68FyKPrfLpa5dqcvQqfBz4bjU3o_YlLbA"  rel="nofollow">https://container-registry.oracle.com/ords/f?p=113:4:115719165018774:::RP,4:P4_REPOSITORY,AI_REPOSITORY,P4_REPOSITORY_NAME,AI_REPOSITORY_NAME:1504,1504,MySQL%20Operator%20for%20Kubernetes,MySQL%20Operator%20for%20Kubernetes&amp;cs=3D2OkKt3XK5lfiJTde5Ah8ueejHm-VsPYuVJdSkPNB5oq0cqc1bTVw68FyKPrfLpa5dqcvQqfBz4bjU3o_YlLbA</a>]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 09 Nov 2023 14:49:27 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710443,710443#msg-710443</guid>
            <title>when I set imageRepository and metrics field, the metics.image does not use the imageRepository (1 reply)</title>
            <link>https://forums.mysql.com/read.php?149,710443,710443#msg-710443</link>
            <description><![CDATA[ is this a feature or a bug?<br />
because others will follow imageRepository]]></description>
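<br />
To illustrate what I mean (registry name is just a placeholder):<br />
<br />
spec:<br />
  imageRepository: registry.example.com/mysql  # server and router images are pulled from here<br />
  metrics:<br />
    enable: true<br />
    image: mysqld-exporter  # pulled as-is, without the imageRepository prefix]]></description>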
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 09 Nov 2023 14:52:02 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710428,710428#msg-710428</guid>
            <title>I wish I could have more options when setting up scheduled backups (1 reply)</title>
            <link>https://forums.mysql.com/read.php?149,710428,710428#msg-710428</link>
            <description><![CDATA[ like spec.successfulJobsHistoryLimit and spec.failedJobsHistoryLimit fields]]></description>
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 09 Nov 2023 14:50:22 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710426,710426#msg-710426</guid>
            <title>When customizing services, I hope to have more configurable items (1 reply)</title>
            <link>https://forums.mysql.com/read.php?149,710426,710426#msg-710426</link>
            <description><![CDATA[ when use nodeport: hope the port can be specified<br />
<br />
when use loadbalancer: spec.loadBalancerIP can be set, spec.externalTrafficPolicy and spec.internalTrafficPolicy can be set.]]></description>
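<br />
Roughly what I am imagining -- all of these keys are wished-for, not current operator API, and the values are placeholders:<br />
<br />
spec:<br />
  service:<br />
    type: LoadBalancer<br />
    loadBalancerIP: 203.0.113.10<br />
    externalTrafficPolicy: Local<br />
    internalTrafficPolicy: Cluster<br />
    # or, for NodePort:<br />
    # type: NodePort<br />
    # nodePort: 30306]]></description>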
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 09 Nov 2023 14:45:19 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?149,710302,710302#msg-710302</guid>
<title>Does this operator support shrink/expand instance number? (no replies)</title>
            <link>https://forums.mysql.com/read.php?149,710302,710302#msg-710302</link>
            <description><![CDATA[ I shrink one ic&#039;s instance number from 3 to 1, it&#039;s status changed to ONLINE_PARTIAL.<br />
<br />
I expand one ic&#039;s instance number from 1 to 3, then shrink to 1, the ONLINE number is still three.<br />
<br />
NAME       STATUS           ONLINE   INSTANCES   ROUTERS   AGE<br />
ic1025-1   ONLINE_PARTIAL   1        1           1         18h<br />
ic1025-2   ONLINE           3        1           1         15h<br />
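<br />
For reference, I scale by editing spec.instances and re-applying (assuming that is the intended knob):<br />
<br />
apiVersion: mysql.oracle.com/v2<br />
kind: InnoDBCluster<br />
metadata:<br />
  name: ic1025-2<br />
spec:<br />
  instances: 1  # was 3; after re-applying, ONLINE still reports 3]]></description>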
            <dc:creator>Bing Ma</dc:creator>
            <category>MySQL &amp; Kubernetes</category>
            <pubDate>Thu, 26 Oct 2023 02:24:11 +0000</pubDate>
        </item>
    </channel>
</rss>
