Re: Discrepancy between mysqlsh and performance_schema with 8.0.11
Hi Grant,
Thank you for testing MySQL InnoDB Cluster and Group Replication.
As you may have noticed, GR is part of InnoDB Cluster, but GR alone is not InnoDB Cluster... The Shell gives you an overview of the cluster, for example when you have:
MySQL mysql2:33060+ JS > cluster.status()
{
    "clusterName": "minneapolis",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql2:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)"
            },
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql2:3306"
}
As you can see, mysql1 is MISSING: the cluster still knows about it (metadata in mysql_innodb_cluster_metadata), but for GR the member is gone:
MySQL mysql2:33060+ SQL > select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 0b3162ac-4cf2-11e8-9251-08002718d305
   MEMBER_HOST: mysql3
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.11
*************************** 2. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: f0ab5929-4cf1-11e8-9a90-08002718d305
   MEMBER_HOST: mysql2
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.11
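If you want to compare with what the Shell's metadata itself contains, you can also query the metadata schema directly (just an illustration, the exact columns depend on the metadata schema version shipped with your Shell):

MySQL mysql2:33060+ SQL > select * from mysql_innodb_cluster_metadata.instances\G

There you should still see mysql1 listed, which is why cluster.status() reports it as (MISSING) while performance_schema does not show it at all.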
That said, in your case it seems to me that in your last test you added mysql2 (a copy of mysql3? or just the same GR configuration?). In that case mysql2 may be part of the group but was never registered in the metadata.
InnoDB Cluster (from the Shell) configures GR, but the reverse path does not work, or at least you cannot mix both approaches.
What you can do is create a cluster from the GR using the option **adoptFromGR**:
adoptFromGR: boolean value used to create the InnoDB cluster based on
existing replication group.
So in your case, drop the metadata (dba.dropMetadataSchema()) and re-create the cluster from GR, as sketched below.
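A minimal sketch of that, assuming you are connected to the current primary (mysql2) and the group itself is healthy:

MySQL mysql2:33060+ JS > dba.dropMetadataSchema()  // it asks for confirmation; {force: true} skips the prompt
MySQL mysql2:33060+ JS > var cluster = dba.createCluster('minneapolis', {adoptFromGR: true})
MySQL mysql2:33060+ JS > cluster.status()

After that, the members found in the group should be back in the metadata.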
And finally, your last message is related to the IP resolving to 127.0.0.1, so the instance is considered a sandbox instance... You can force it using the option localAddress (string value with the Group Replication local address to be used instead of the automatically generated one), check dba.help('createCluster').
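For example (only a sketch, assuming mysql1's real address is 192.168.87.2 as in the /etc/hosts example below, and port 33061 for the GR communication):

MySQL mysql1:33060+ JS > dba.createCluster('minneapolis', {localAddress: '192.168.87.2:33061'})

or, when adding an instance to an existing cluster:

MySQL mysql2:33060+ JS > cluster.addInstance('clusteradmin@mysql1:3306', {localAddress: '192.168.87.2:33061'})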
What I prefer to do is to modify /etc/hosts on each node, specify the address of each node and use their names. This is what I have on my machine, for example:
192.168.87.2 mysql1
192.168.87.3 mysql2
192.168.87.4 mysql3
I hope this helps you.
Regards,