
InnoDB Cluster creating Unsafe Accounts
Posted by: Adam Nelson
Date: June 23, 2017 04:51AM

I'm testing the viability of using an InnoDB Cluster as the backend for several applications. To that end, I'm building one from scratch to test the build process and the performance. For my setup I've got three barebones servers with nothing but CentOS 7 and MySQL (community edition and shell, downloaded directly from the MySQL repo) loaded on them. I've run the configuration checks and made all the recommended changes. I run the command to create the cluster and it succeeds on the first node (innodbNode1). I run addInstance for the second node and it stays in recovery for 15 minutes until it goes into "(MISSING)" state. I check the logs on the second node (innodbNode2) and this is the error in the log:
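For reference, the sequence I'm running looks roughly like this in mysqlsh (JS mode); the admin account name, cluster name and ports are placeholders for my actual values:

// run on each node beforehand to apply the recommended configuration
dba.checkInstanceConfiguration('clusteradmin@innodbNode1:3306')
dba.configureLocalInstance('clusteradmin@innodbNode1:3306')

// then, connected to innodbNode1:
var cluster = dba.createCluster('testCluster')         // succeeds
cluster.addInstance('clusteradmin@innodbNode2:3306')   // sits in RECOVERING, ends up (MISSING)
cluster.status()                                       // reports innodbNode2 as "(MISSING)"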

41 [ERROR] Slave SQL for channel 'group_replication_recovery': Query caused different errors on master and slave. Error on master: message (format)='Your password does not satisfy the current policy requirements' error code=1819; Error on slave:actual message='no error', error code=0. Default database:''. Query:'CREATE USER IF NOT EXISTS 'mysql_innodb_cluster_rp430167485'@'%' IDENTIFIED WITH 'mysql_native_password' AS '<secret>'', Error_code: 3002
41 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'innodbNode1-bin.000002' position 2058

Per the logs, the attempt to join the cluster occurs 10 times. For the first three attempts, this is the error that appears. It clearly states that the replication account being created for the new node (innodbNode2) has a password that does not meet the current policy requirements. I didn't change the password requirements when I installed MySQL ... I installed the RPM, changed the root password, and made the changes recommended by mysqlsh. This looks like a bug in the clustering process, since by default it should generate a password that passes even the strictest default validation level. After attempt 3 it appears to have randomly picked a password that does work, because the error message changes ... on attempts 4 through 10 the error becomes:
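In case it matters, this is how I've been checking the password policy on each node; I haven't touched these since the RPM install, so I'd expect the 5.7 defaults (validate_password_policy = MEDIUM, validate_password_length = 8):

-- run as root on each node
SHOW VARIABLES LIKE 'validate_password%';

-- a workaround I'm tempted to try (no idea if it's advisable): relax the
-- policy just long enough for addInstance to create its recovery account,
-- then put it back
SET GLOBAL validate_password_policy = LOW;
-- ... run cluster.addInstance() from mysqlsh ...
SET GLOBAL validate_password_policy = MEDIUM;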

59 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'mysql_innodb_cluster_rp430507194@innodbNode1:3306',replication started in log 'FIRST' at position 4
60 [ERROR] Slave SQL for channel 'group_replication_recovery': Error executing row event: 'Table 'mysql_innodb_cluster_metadata.clusters' doesn't exist', Error_code: 1146
60 [Warning] Slave: Table 'mysql_innodb_cluster_metadata.clusters' doesn't exist Error_code: 1146

Based on this error message, it appears that innodbNode2 got a row event from innodbNode1 to add/modify a row in the table "mysql_innodb_cluster_metadata.clusters". However ... that table doesn't exist on innodbNode2. It's as if innodbNode2 didn't start at the beginning of the binlog: it never created the table, it just tried to update it. That, of course, kills the replication process.
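For what it's worth, here's roughly what I ran to confirm that (the channel name is taken from the log lines above):

-- on innodbNode2; I'd expect the first query to come back empty if the
-- metadata schema never made it across
SHOW SCHEMAS LIKE 'mysql_innodb_cluster_metadata';
SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G

-- on innodbNode1, for comparison (the row event in the log implies the
-- schema does exist there):
SHOW TABLES FROM mysql_innodb_cluster_metadata;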

I would love to know where I've gone wrong in this process ... can anyone point me in a direction here?



