
MySQL Operator Not Processing InnoDBCluster Resources in Kubernetes 1.30.1
Posted by: Noel Ashford
Date: September 18, 2024 06:59PM

Hello everyone,

I'm running into an issue deploying a MySQL InnoDBCluster with the Oracle MySQL Operator on Kubernetes. The operator pod is running and has the necessary permissions, yet it never processes the InnoDBCluster resource, and no pods or other resources are created in the target namespace.

Environment Details:

- Kubernetes client version: v1.30.3
- Kubernetes server version: v1.30.1
- MySQL Operator version: 9.0.1-2.2.1
- Operator namespace: mysql-operator
- InnoDBCluster namespace: tn-sql

Issue Description:

- The MySQL Operator pod is running in the mysql-operator namespace without any restarts or errors.
- The InnoDBCluster CustomResourceDefinition (CRD) is installed and established.
- All necessary secrets are created and contain the required data.
- The operator's service account (mysql-operator-sa) has the necessary permissions, confirmed using kubectl auth can-i.
- The InnoDBCluster resource (tn-sql-cluster) is created in the tn-sql namespace.
- No pods, services, or other resources are being created by the operator in the tn-sql namespace.
- The operator logs show no activity beyond the initial startup; no errors or exceptions are logged.

Steps Taken So Far:

RBAC Configuration:

- Created a ClusterRole (mysql-operator-cluster-admin-role) with permissions for all necessary API groups and resources.
- Created a ClusterRoleBinding (mysql-operator-rolebinding) linking the ClusterRole to the operator's service account.
- Verified that the operator's service account has the necessary permissions using kubectl auth can-i (representative checks just below).
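For example, each of the following returns "yes":

$ kubectl auth can-i create innodbclusters.mysql.oracle.com --as=system:serviceaccount:mysql-operator:mysql-operator-sa -n tn-sql
$ kubectl auth can-i create statefulsets --as=system:serviceaccount:mysql-operator:mysql-operator-sa -n tn-sql
$ kubectl auth can-i create pods --as=system:serviceaccount:mysql-operator:mysql-operator-sa -n tn-sql
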
Operator Deployment:

- Deployed the operator using an Ansible playbook (see the sanitized playbook below).
- Set the WATCH_NAMESPACE environment variable to "" so that all namespaces are watched.
- Increased the operator's logging level to DEBUG.
Resource Definitions:

- Created all necessary secrets for the cluster admin, root, metrics, router, and backup users.
- Deployed the InnoDBCluster resource with the required specifications (secret-key check just below).
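To rule out a malformed secret, I confirmed the main secret carries exactly the keys the playbook writes (listing requires jq):

$ kubectl get secret tn-sql-cluster-privsecrets -n tn-sql -o json | jq -r '.data | keys[]'
clusterAdminPassword
clusterAdminUsername
metricsPassword
metricsUser
rootHost
rootPassword
rootUser
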
Diagnostics:

- Checked the operator logs for any errors or reconciliation activity.
- Verified that the InnoDBCluster resource exists but has no status updates.
- Confirmed there are no network policies or resource quotas interfering.
- Ensured that the CRD is established and serves the expected API version (representative commands just below).
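These are roughly the commands behind those checks; the relevant outputs appear in the diagnostics section further down:

$ kubectl logs -n mysql-operator deploy/mysql-operator --tail=200
$ kubectl describe innodbcluster tn-sql-cluster -n tn-sql
$ kubectl get events -n tn-sql --sort-by='.metadata.creationTimestamp'
$ kubectl get networkpolicy,resourcequota -n tn-sql
$ kubectl get crd innodbclusters.mysql.oracle.com -o jsonpath='{.spec.versions[*].name}'
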
Sanitized Ansible Playbook:

---
- name: Deploy MySQL Operator and Create InnoDBCluster with Dynamic PVCs, Router, and Prometheus Metrics
  hosts: localhost
  connection: local
  gather_facts: no
  collections:
    - kubernetes.core
  vars:
    mysql_namespace: tn-sql
    storage_class: csi-sc-cinderplugin
    mysql_root_password: "YourRootPassword"
    mysql_metrics_user: "metrics"
    mysql_metrics_password: "YourMetricsPassword"
    mysql_router_user: "mysqlrouter"
    mysql_router_password: "YourRouterPassword"
    mysql_backup_user: "mysqlbackup"
    mysql_backup_password: "YourBackupPassword"
    mysql_instances: 3
    mysql_routers: 1
    metrics_enabled: true

  tasks:
    - name: Create Namespaces
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ item }}"
      loop:
        - "{{ mysql_namespace }}"
        - mysql-operator

    - name: Create Secrets for MySQL Users
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: tn-sql-cluster-privsecrets
            namespace: "{{ mysql_namespace }}"
          type: Opaque
          stringData:
            clusterAdminUsername: "admin"
            clusterAdminPassword: "{{ mysql_root_password }}"
            rootUser: "root"
            rootHost: "%"
            rootPassword: "{{ mysql_root_password }}"
            metricsUser: "{{ mysql_metrics_user }}"
            metricsPassword: "{{ mysql_metrics_password }}"

    - name: Create Secret for Router User
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: tn-sql-cluster-router
            namespace: "{{ mysql_namespace }}"
          type: Opaque
          stringData:
            routerUsername: "{{ mysql_router_user }}"
            routerPassword: "{{ mysql_router_password }}"

    - name: Create Secret for Backup User
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: tn-sql-cluster-backup
            namespace: "{{ mysql_namespace }}"
          type: Opaque
          stringData:
            backupUser: "{{ mysql_backup_user }}"
            backupPassword: "{{ mysql_backup_password }}"

    - name: Create Service Account for MySQL Operator
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: mysql-operator-sa
            namespace: mysql-operator

    - name: Create ClusterRole with Admin Permissions
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRole
          metadata:
            name: mysql-operator-cluster-admin-role
          rules:
            - apiGroups: ["mysql.oracle.com"]
              resources:
                - "innodbclusters"
                - "innodbclusters/status"
                - "mysqlbackups"
                - "mysqlbackups/status"
              verbs:
                - "*"
            - apiGroups: ["apps"]
              resources:
                - "statefulsets"
                - "deployments"
                - "daemonsets"
                - "replicasets"
              verbs:
                - "*"
            - apiGroups: [""]
              resources:
                - "pods"
                - "services"
                - "configmaps"
                - "secrets"
                - "persistentvolumeclaims"
              verbs:
                - "*"
            - apiGroups: ["batch"]
              resources:
                - "jobs"
                - "cronjobs"
              verbs:
                - "*"
            - apiGroups: ["events.k8s.io"]
              resources:
                - "events"
              verbs:
                - "get"
                - "list"
                - "watch"
            - apiGroups: ["apiextensions.k8s.io"]
              resources:
                - "customresourcedefinitions"
              verbs:
                - "get"
                - "list"
                - "watch"
            - apiGroups: ["zalando.org"]
              resources:
                - "clusterkopfpeerings"
              verbs:
                - "*"

    - name: Create ClusterRoleBinding for MySQL Operator
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRoleBinding
          metadata:
            name: mysql-operator-rolebinding
          subjects:
            - kind: ServiceAccount
              name: mysql-operator-sa
              namespace: mysql-operator
          roleRef:
            kind: ClusterRole
            name: mysql-operator-cluster-admin-role
            apiGroup: rbac.authorization.k8s.io

    - name: Deploy MySQL Operator CRDs
      kubernetes.core.k8s:
        state: present
        src: "https://raw.githubusercontent.com/mysql/mysql-operator/9.0.1-2.2.1/deploy/deploy-crds.yaml"

    - name: Deploy MySQL Operator
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: mysql-operator
            namespace: mysql-operator
            labels:
              name: mysql-operator
          spec:
            replicas: 1
            selector:
              matchLabels:
                name: mysql-operator
            template:
              metadata:
                labels:
                  name: mysql-operator
              spec:
                serviceAccountName: mysql-operator-sa
                containers:
                  - name: mysql-operator
                    image: container-registry.oracle.com/mysql/community-operator:9.0.1-2.2.1
                    args:
                      - mysqlsh
                      - --log-level=DEBUG
                      - --pym
                      - mysqloperator
                      - operator
                    env:
                      - name: MYSQL_CLUSTER_DOMAIN
                        value: "cluster.local"
                      - name: MYSQL_OPERATOR_K8S_CLUSTER_DOMAIN
                        value: "cluster.local"
                      - name: WATCH_NAMESPACE
                        value: ""
                    readinessProbe:
                      exec:
                        command:
                          - cat
                          - /tmp/mysql-operator-ready
                      initialDelaySeconds: 5
                      periodSeconds: 10

    - name: Wait for MySQL Operator to be Ready
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: mysql-operator
        name: mysql-operator
      register: operator_deployment
      until: operator_deployment.resources[0].status.availableReplicas | default(0) | int >= 1
      retries: 20
      delay: 15

    - name: Deploy PersistentVolumeClaims for MySQL Instances
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: "tn-sql-cluster-pvc-{{ item }}"
            namespace: "{{ mysql_namespace }}"
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 20Gi
            storageClassName: "{{ storage_class }}"
      loop: "{{ range(0, mysql_instances) | list }}"

    - name: Deploy InnoDBCluster
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: mysql.oracle.com/v2
          kind: InnoDBCluster
          metadata:
            name: tn-sql-cluster
            namespace: "{{ mysql_namespace }}"
          spec:
            secretName: tn-sql-cluster-privsecrets
            tlsUseSelfSigned: true
            instances: "{{ mysql_instances }}"
            router:
              instances: "{{ mysql_routers }}"
            backup:
              enabled: true
            metricsExporter:
              enabled: "{{ metrics_enabled }}"
              user: "{{ mysql_metrics_user }}"
              password: "{{ mysql_metrics_password }}"
              serviceMonitor:
                enabled: true

    - name: Wait for InnoDBCluster to be Ready
      kubernetes.core.k8s_info:
        api_version: "mysql.oracle.com/v2"
        kind: InnoDBCluster
        namespace: "{{ mysql_namespace }}"
        name: tn-sql-cluster
      register: cluster_info
      retries: 30
      delay: 20
      until: cluster_info.resources | length > 0 and cluster_info.resources[0].get('status', {}).get('cluster', {}).get('status') == "ONLINE"
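
For completeness, this is how I invoke the playbook (the file name is arbitrary); the kubernetes.core collection needs the Python Kubernetes client on the control node:

$ ansible-galaxy collection install kubernetes.core
$ pip install kubernetes
$ ansible-playbook deploy-innodbcluster.yml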

Operator Logs:

The operator logs show only the initial startup information and do not display any reconciliation activity or errors. Here's a snippet:

[2024-09-18 05:17:19,091] kopf.activities.star [INFO ] Activity 'on_startup' succeeded.
[2024-09-18 05:17:19,092] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2024-09-18 05:17:19,094] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2024-09-18 05:17:19,094] kopf._core.engines.a [INFO ] Initial authentication has finished.
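
Filtering the full DEBUG log for any sign of watch or handler activity turns up nothing beyond the startup lines above:

$ kubectl logs -n mysql-operator deploy/mysql-operator | grep -iE 'innodbcluster|watch|handler'
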
InnoDBCluster Resource Status:

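$ kubectl describe innodbcluster tn-sql-cluster -n tn-sql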
Name:         tn-sql-cluster
Namespace:    tn-sql
API Version:  mysql.oracle.com/v2
Kind:         InnoDBCluster
Metadata:
  Creation Timestamp:  2024-09-18T05:33:41Z
  Generation:          1
  Resource Version:    26739894
  UID:                 63a0bd37-e553-4100-a88f-61e7a41254d1
Spec:
  Base Server Id:       1000
  Instances:            3
  Router:
    Instances:  1
  Secret Name:          tn-sql-cluster-privsecrets
  Tls Use Self Signed:  true
Events:  <none>

Additional Diagnostic Outputs:

CRD Version:

$ kubectl get crd innodbclusters.mysql.oracle.com -o jsonpath='{.spec.versions[*].name}'
v2
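
To confirm the CRD is actually established and not just present, I also checked its Established condition:

$ kubectl get crd innodbclusters.mysql.oracle.com -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
True
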
Operator Pod Status:

$ kubectl get pods -n mysql-operator -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
mysql-operator-5878c5fdcc-f9dft   1/1     Running   0          11m   10.244.3.220   k8s-node-2   <none>           <none>
Events in tn-sql Namespace:

$ kubectl get events -n tn-sql --sort-by='.metadata.creationTimestamp'
No events found.
RBAC Verification:

$ kubectl auth can-i get innodbclusters.mysql.oracle.com --as=system:serviceaccount:mysql-operator:mysql-operator-sa -n tn-sql
yes
Network Policies and Resource Quotas:

$ kubectl get networkpolicy -n tn-sql
No resources found.

$ kubectl get resourcequota -n tn-sql
No resources found.
Attempts to Resolve the Issue:

Set WATCH_NAMESPACE to Watch All Namespaces:

- Confirmed that the WATCH_NAMESPACE environment variable is set to "" in the operator deployment (check shown below).
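Checked directly on the live Deployment; the value comes back as an empty string:

$ kubectl get deploy mysql-operator -n mysql-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="WATCH_NAMESPACE")].value}'
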
Increased Operator Logging Level:

- Set the logging level to DEBUG but still did not observe any reconciliation activity or errors in the logs.
Verified RBAC Configurations:

- Ensured that the operator's service account has the necessary permissions (see RBAC Verification above).
Tested with a Minimal InnoDBCluster:

- Deployed a minimal cluster configuration (shown just below) but experienced the same issue.
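The minimal test used essentially the documented bare-bones spec, reusing the existing secret (the name tn-sql-minimal is arbitrary):

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: tn-sql-minimal
  namespace: tn-sql
spec:
  secretName: tn-sql-cluster-privsecrets
  tlsUseSelfSigned: true
  instances: 1
  router:
    instances: 1
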
Request for Assistance:

I'm seeking help to determine why the MySQL Operator isn't processing the InnoDBCluster resource. Any insights or suggestions would be greatly appreciated.

Questions:

- Has anyone successfully deployed the Oracle MySQL Operator on Kubernetes 1.30.x?
- Are there any known compatibility issues between MySQL Operator 9.0.1-2.2.1 and Kubernetes 1.30.x?
- Is there a hidden configuration or step that I'm missing?

Thank you in advance for your help!
