<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel>
        <title>MySQL Forums - Replication</title>
        <description>Forum for MySQL Replication.</description>
        <link>https://forums.mysql.com/list.php?26</link>
        <lastBuildDate>Fri, 17 Apr 2026 21:23:38 +0000</lastBuildDate>
        <generator>Phorum 5.2.23</generator>
        <item>
            <guid>https://forums.mysql.com/read.php?26,741676,741676#msg-741676</guid>
            <title>MYSQL 8.4.8 CE - Group Replication - read_only ON (2 replies)</title>
            <link>https://forums.mysql.com/read.php?26,741676,741676#msg-741676</link>
            <description><![CDATA[ Hello,<br />
<br />
I have an issue with a 3-node Group Replication cluster on Oracle Linux 9.7 with MySQL CE 8.4.8.<br />
<br />
Every time I restart the mysqld service, read_only and super_read_only remain ON.<br />
<br />
If I check the cluster, all nodes are online and primary with no errors (it&#039;s multi-primary).<br />
<br />
What&#039;s the problem?<br />
<br />
I&#039;ve tried setting this in /etc/my.cnf:<br />
read_only = OFF<br />
super_read_only = OFF<br />
<br />
<br />
but no success.<br />
<br />
Once I restart the mysqld service, read_only comes back ON.<br />
<br />
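For reference, a quick way to confirm the state after each restart (a diagnostic sketch using the performance_schema tables):<br />
<br />
SELECT @@read_only, @@super_read_only;<br />
SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;<br />
<br />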
Thank you.]]></description>
            <dc:creator>Mircea Ispasoiu</dc:creator>
            <category>Replication</category>
            <pubDate>Mon, 23 Mar 2026 16:21:41 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,741286,741286#msg-741286</guid>
            <title>MySQL Replication - change replica iP (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,741286,741286#msg-741286</link>
            <description><![CDATA[ Hi,<br />
<br />
The database is MySQL version 8.4.6<br />
<br />
There is source and target database (the replica).<br />
<br />
The replica will be relocated to a different building and will use a new IP. The questions:<br />
<br />
a) What needs to be done before shutting down the replica database?<br />
b) What needs to be done after the restart of the replica database?<br />
<br />
On the source / production database and on the replica database.<br />
<br />
I have experience with Oracle/Data Guard but could use some tips. This will be a new database (for me) and a one-off thing, so additional queries for checking the relevant info/settings would be helpful too.<br />
<br />
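For context, the stop/start sequence I would expect on the replica around the relocation (a sketch, using 8.4 syntax; please correct me if wrong):<br />
<br />
-- before shutting down the replica:<br />
STOP REPLICA;<br />
-- relocate the host, change its IP, restart mysqld, then:<br />
START REPLICA;<br />
SHOW REPLICA STATUS\G<br />
<br />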
Is it true that this is the only thing needed to be done?<br />
<br />
-- Remove the old account (the host is part of the account name)<br />
DROP USER &#039;repl_user&#039;@&#039;old_ip&#039;;<br />
<br />
-- Re-create it for the new address (MySQL 8.x no longer accepts GRANT ... IDENTIFIED BY)<br />
CREATE USER &#039;repl_user&#039;@&#039;new_ip&#039; IDENTIFIED BY &#039;your_password&#039;;<br />
GRANT REPLICATION SLAVE ON *.* TO &#039;repl_user&#039;@&#039;new_ip&#039;;<br />
<br />
Also, since MySQL stores the IP/host a user may connect from as part of the account, is this something I need to take into account for a future &quot;swing&quot; (from source to replica), i.e. something that would also need to be updated then?<br />
<br />
Thanks.]]></description>
            <dc:creator>Azhar Mat Zin</dc:creator>
            <category>Replication</category>
            <pubDate>Fri, 10 Oct 2025 08:21:10 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,741188,741188#msg-741188</guid>
            <title>Setting up replication, no errors but no data replicates (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,741188,741188#msg-741188</link>
<description><![CDATA[ Trying to configure replication.<br />
We used the existing dev/test DB VM server to make a clone and create a DB replica server. I dropped the two target DBs that we want to replicate, then used mysqldump to dump the existing databases and recreate them on the replica_server.<br />
<br />
If I stop and start the replica, this is the only message related to replication that I see on the replica_server:<br />
2025-09-09T16:23:39.240573Z 25 [System] [MY-014001] [Repl] Replica receiver thread for channel &#039;&#039;: connected to source &#039;replication_user@152.x.x.x5:3310&#039; with server_uuid=5f89cf28-6f3c-11ed-83c5-00505683041f, server_id=1. Starting replication from file &#039;mysql_bin.000002&#039;, position &#039;4478572&#039;.<br />
<br />
##### relevant information from the master server<br />
mysql&gt; show master status\G<br />
*************************** 1. row ***************************<br />
             File: mysql_bin.000002<br />
         Position: 4938118<br />
     Binlog_Do_DB:<br />
 Binlog_Ignore_DB:<br />
Executed_Gtid_Set: 5f89cf28-6f3c-11ed-83c5-00505683041f:1-10430<br />
1 row in set (0.00 sec)<br />
<br />
mysql&gt;<br />
mysql&gt; show variables like &#039;server_id&#039;;<br />
+---------------+-------+<br />
| Variable_name | Value |<br />
+---------------+-------+<br />
| server_id     | 1     |<br />
+---------------+-------+<br />
1 row in set (0.00 sec)<br />
<br />
mysql&gt; show variables like &#039;log_bin&#039;;<br />
+---------------+-------+<br />
| Variable_name | Value |<br />
+---------------+-------+<br />
| log_bin       | ON    |<br />
+---------------+-------+<br />
1 row in set (0.00 sec)<br />
<br />
mysql&gt;<br />
mysql&gt; show variables like &#039;version&#039;;<br />
+---------------+--------+<br />
| Variable_name | Value  |<br />
+---------------+--------+<br />
| version       | 8.0.41 |<br />
+---------------+--------+<br />
1 row in set (0.00 sec)<br />
<br />
### relevant information from mysql.cnf<br />
# Enable GTID based replication<br />
server-id = 1<br />
log_bin = mysql_bin<br />
gtid_mode=ON<br />
enforce-gtid-consistency=ON<br />
<br />
<br />
##### relevant information from the replica server. I cut some of the blank <br />
##### variables, they have never come up in any google searches<br />
### The replica<br />
mysql&gt; show replica status\G<br />
*************************** 1. row ***************************<br />
             Replica_IO_State: Waiting for source to send event<br />
                  Source_Host: 152.x.x.x5<br />
                  Source_User: replication_user<br />
                  Source_Port: 3310<br />
                Connect_Retry: 60<br />
              Source_Log_File: mysql_bin.000002<br />
          Read_Source_Log_Pos: 4960447<br />
               Relay_Log_File: replicaserver-relay-bin.000011<br />
                Relay_Log_Pos: 482241<br />
        Relay_Source_Log_File: mysql_bin.000002<br />
           Replica_IO_Running: Yes<br />
          Replica_SQL_Running: Yes<br />
              Replicate_Do_DB: app1_devdb,app2_devdb<br />
                   Last_Errno: 0<br />
                   Last_Error:<br />
                 Skip_Counter: 0<br />
          Exec_Source_Log_Pos: 4960447<br />
              Relay_Log_Space: 4961088<br />
              Until_Condition: None<br />
        Seconds_Behind_Source: 0<br />
Source_SSL_Verify_Server_Cert: No<br />
                Last_IO_Errno: 0<br />
                Last_IO_Error:<br />
               Last_SQL_Errno: 0<br />
               Last_SQL_Error:<br />
  Replicate_Ignore_Server_Ids:<br />
             Source_Server_Id: 1<br />
                  Source_UUID: 5f89cf28-6f3c-11ed-83c5-00505683041f<br />
             Source_Info_File: mysql.slave_master_info<br />
                    SQL_Delay: 0<br />
          SQL_Remaining_Delay: NULL<br />
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates<br />
           Source_Retry_Count: 86400<br />
           Retrieved_Gtid_Set: 5f89cf28-6f3c-11ed-83c5-00505683041f:1688-10459<br />
            Executed_Gtid_Set: 01bddbdf-8a85-11f0-9d64-005056909086:1-8,<br />
5f89cf28-6f3c-11ed-83c5-00505683041f:1-3539156,<br />
b390a816-acc1-11ea-9f97-005056aca952:1-301238<br />
                Auto_Position: 0<br />
        Get_Source_public_key: 0<br />
1 row in set (0.01 sec)<br />
<br />
#### NOTES from above I don&#039;t know where these two came from<br />
#### 01bddbdf-8a85-11f0-9d64-005056909086<br />
#### b390a816-acc1-11ea-9f97-005056aca952<br />
<br />
mysql&gt; show variables like &#039;server_id&#039;;<br />
+---------------+-------+<br />
| Variable_name | Value |<br />
+---------------+-------+<br />
| server_id     | 2     |<br />
+---------------+-------+<br />
1 row in set (0.00 sec)<br />
<br />
mysql&gt; show variables like &#039;log_bin&#039;;<br />
+---------------+-------+<br />
| Variable_name | Value |<br />
+---------------+-------+<br />
| log_bin       | ON    |<br />
+---------------+-------+<br />
1 row in set (0.00 sec)<br />
<br />
mysql&gt; show variables like &#039;version&#039;;<br />
+---------------+--------+<br />
| Variable_name | Value  |<br />
+---------------+--------+<br />
| version       | 8.0.41 |<br />
+---------------+--------+<br />
1 row in set (0.00 sec)<br />
<br />
mysql&gt;<br />
<br />
### the relevant section mysql.cnf<br />
# relay log location<br />
relay_log = replicaserver-relay-bin<br />
relay_log_index = /mysql01/data/replicaserver-relay-bin.index<br />
<br />
# Enable GTID based replication<br />
server-id = 2<br />
gtid_mode=ON<br />
enforce-gtid-consistency=ON<br />
read-only=1<br />
<br />
# Only Replicate the following DBs.<br />
replicate_do_db = app1_devdb<br />
replicate_do_db = app2_devdb<br />
<br />
<br />
Additional info: after the initial attempt I reset both the master and the replica server, and I have also reset just the replica.<br />
After making changes on the master and seeing them not show up on the replica, I stopped the replica, used mysqldump to copy over the updated DBs, and ran CHANGE REPLICATION SOURCE TO, updating the source log file and source log position with the new information.<br />
<br />
I saw this suggested as a basic test of replication: create table test_replication (id INT); but the table does not replicate to the replica server.<br />
<br />
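To rule out a filter mismatch, the replication filters as actually applied can be listed on the replica (a diagnostic sketch using performance_schema):<br />
<br />
SELECT CHANNEL_NAME, FILTER_NAME, FILTER_RULE FROM performance_schema.replication_applier_filters;<br />
<br />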
From the Linux CLI I tested the replication user from the replica server to the master using:<br />
mysql -h 152.x.x.x5 -P 3310 -u replica_user -p   and I can successfully connect.]]></description>
            <dc:creator>David Horan</dc:creator>
            <category>Replication</category>
            <pubDate>Tue, 09 Sep 2025 19:37:11 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,740875,740875#msg-740875</guid>
            <title>Slave SQL for channel &#039;&#039;: Worker 1 failed executing transaction &#039;ANONYMOUS&#039; (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,740875,740875#msg-740875</link>
            <description><![CDATA[ Hi Expert, <br />
<br />
I would appreciate it if someone could assist me in fixing this issue. On the MySQL replication master database, one table has huge deletions every time, which creates replication inconsistency on the slave during configuration. How can I fix this?<br />
<br />
<br />
2025-06-23T10:50:21.550781Z 7 [ERROR] [MY-010584] [Repl] Slave SQL for channel &#039;&#039;: Worker 1 failed executing transaction &#039;ANONYMOUS&#039; at master log mysql-bin.000142, end_log_pos 557157769; Could not execute Delete_rows event on table salesforce.customer_details_log; Can&#039;t find record in &#039;customer_details_log&#039;, Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event&#039;s master log FIRST, end_log_pos 557157769, Error_code: MY-001032<br />
<br />
<br />
<br />
Thanks]]></description>
            <dc:creator>Sohail Jafferi</dc:creator>
            <category>Replication</category>
            <pubDate>Thu, 26 Jun 2025 05:50:39 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,740805,740805#msg-740805</guid>
            <title>MYSQL 5.7.44 HA cluster active/active problems (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,740805,740805#msg-740805</link>
            <description><![CDATA[ Hello,<br />
<br />
I have an issue and I hope to find a solution.<br />
<br />
I need to set up an active/active DB cluster, with 2 Linux server nodes that must write at the same time into a MySQL 5.7.44 database (approx. 1 TB in size).<br />
<br />
The storage engines in my database are MyISAM and InnoDB.<br />
The database doesn&#039;t have primary keys, and there is NO possibility of adding them.<br />
<br />
Is there any possibility of setting up a functional active/active HA system with these specifications?<br />
<br />
Everywhere I look it says this is impossible because of the missing primary keys.<br />
<br />
Thank you.]]></description>
            <dc:creator>Mircea Ispasoiu</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 11 Jun 2025 10:25:58 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,740597,740597#msg-740597</guid>
            <title>question about loose-group_replication_start_on_boot when all members reboot simultaneously (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,740597,740597#msg-740597</link>
<description><![CDATA[ Hello. I did some searching and couldn&#039;t find mention of this; apologies if it is covered somewhere else...<br />
<br />
all of this has taken place on 2 servers with matching system specs:<br />
Ubuntu 24.04.2 LTS<br />
kernel: 6.8.0-58-generic<br />
mysql  Ver 8.0.41-0ubuntu0.24.04.1 for Linux on x86_64 ((Ubuntu))<br />
<br />
I set up a multi-primary replication group with 2 members, using basic, mostly default boilerplate config options:<br />
<br />
[mysqld]<br />
## binlog_format = MIXED<br />
<br />
# General replication settings<br />
disabled_storage_engines=&quot;MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY&quot;<br />
gtid_mode = ON<br />
enforce_gtid_consistency = ON<br />
master_info_repository = TABLE<br />
relay_log_info_repository = TABLE<br />
binlog_checksum = NONE<br />
log_slave_updates = ON<br />
log_bin = binlog<br />
binlog_format = ROW<br />
transaction_write_set_extraction = XXHASH64<br />
loose-group_replication_bootstrap_group = OFF<br />
loose-group_replication_start_on_boot = ON<br />
loose-group_replication_ssl_mode = REQUIRED<br />
loose-group_replication_recovery_use_ssl = 1<br />
<br />
# Shared replication group configuration<br />
loose-group_replication_group_name = &quot;xxx&quot;<br />
loose-group_replication_ip_whitelist = &quot;xxx&quot;<br />
loose-group_replication_group_seeds = &quot;xxx&quot;<br />
<br />
# Single or Multi-primary mode? Uncomment these two lines<br />
# for multi-primary mode, where any host can accept writes<br />
loose-group_replication_single_primary_mode = OFF<br />
loose-group_replication_enforce_update_everywhere_checks = ON<br />
<br />
# Host specific replication configuration<br />
server_id = x<br />
bind-address = &quot;xxx&quot;<br />
report_host = &quot;xxx&quot;<br />
loose-group_replication_local_address = &quot;xxx&quot;<br />
<br />
The config parameter I&#039;m concerned with is &quot;loose-group_replication_start_on_boot = ON&quot;.<br />
<br />
If both servers reboot in too close proximity, and thus boot without another member being online, they come up with replication stopped; I believe this is to be expected. When rebooted one at a time, with enough time in between each reboot that the whole replication group doesn&#039;t go down at once, it seems to come back up OK. <br />
<br />
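For reference, the re-bootstrap commands I mean are roughly these (sketch):<br />
<br />
-- on one member only:<br />
SET GLOBAL group_replication_bootstrap_group = ON;<br />
START GROUP_REPLICATION;<br />
SET GLOBAL group_replication_bootstrap_group = OFF;<br />
-- on the other member(s):<br />
START GROUP_REPLICATION;<br />
<br />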
Has anyone found a way to get the replication group to start on its own in this scenario, without logging into the CLI and issuing the commands to re-bootstrap and start it? I&#039;m curious. Thanks.]]></description>
            <dc:creator>Fabian Santiago</dc:creator>
            <category>Replication</category>
            <pubDate>Sat, 19 Apr 2025 21:20:13 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,739950,739950#msg-739950</guid>
            <title>Any workaround for MYSQL 8.0 MGR rolling upgrade (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,739950,739950#msg-739950</link>
<description><![CDATA[ The problem: <br />
<br />
I am performing a rolling upgrade of a MySQL minor version, 8.0.35 to 8.0.36. <br />
<br />
The steps:<br />
1) Upgrade standby B to 8.0.36. <br />
2) Upgrade standby C to 8.0.36.<br />
3) Now I am trying to switch over from primary A to standby B using the function group_replication_set_as_primary.<br />
But I got this error: <br />
ERROR 3910 (HY000): The function &#039;group_replication_set_as_primary&#039; failed. Error processing configuration start message: The appointed primary member has a version that is greater than the one of some of the members in the group.<br />
<br />
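For reference, member versions during the rolling upgrade can be checked like this (a diagnostic sketch using performance_schema):<br />
<br />
SELECT MEMBER_HOST, MEMBER_ROLE, MEMBER_VERSION FROM performance_schema.replication_group_members;<br />
<br />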
So, is there any workaround for the rolling upgrade with MGR? <br />
<br />
<br />
<br />
Thanks<br />
Jason]]></description>
            <dc:creator>Jason Chen</dc:creator>
            <category>Replication</category>
            <pubDate>Tue, 31 Dec 2024 03:43:57 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,739681,739681#msg-739681</guid>
            <title>Last_SQL_Errno: 1062, Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s) (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,739681,739681#msg-739681</link>
            <description><![CDATA[ My mysql replication (master - slave) is failing with following error:<br />
<br />
Last_SQL_Errno: 1062<br />
<br />
Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction &#039;ANONYMOUS&#039; at source log mysql-bin.000115, end_log_pos 468117420. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.<br />
<br />
I checked the bin log file and I see following<br />
<br />
2024-12-03T21:48:34.088063Z 7 [ERROR] [MY-010584] [Repl] Replica SQL for channel &#039;&#039;: Worker 1 failed executing transaction &#039;ANONYMOUS&#039; at source log mysql-bin.000123, end_log_pos 515; Could not execute Write_rows event on table mydb.wp_options; Duplicate entry &#039;_transient_jetpack_update_remote_package_last_query&#039; for key &#039;wp_options.option_name&#039;, Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event&#039;s source log mysql-bin.000123, end_log_pos 515, Error_code: MY-001062<br />
<br />
2024-12-03T21:48:34.088873Z 6 [ERROR] [MY-010586] [Repl] Error running query, replica SQL thread aborted. Fix the problem, and restart the replica SQL thread with &quot;START REPLICA&quot;. We stopped at log &#039;mysql-bin.000123&#039; position 157<br />
<br />
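The table the first error message points at can also be queried directly for details (diagnostic sketch):<br />
<br />
SELECT LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP FROM performance_schema.replication_applier_status_by_worker;<br />
<br />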
The problematic database is a WordPress database.<br />
<br />
I ran RESET REPLICA and restarted, but I am still getting the same error.<br />
<br />
How do I fix it, and how do I avoid this happening in the future? I have many WordPress databases and will have more in the future.]]></description>
            <dc:creator>Jon Adams</dc:creator>
            <category>Replication</category>
            <pubDate>Tue, 03 Dec 2024 22:51:33 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,730506,730506#msg-730506</guid>
            <title>Plugin instructed the server to rollback the current transaction (1 reply)</title>
            <link>https://forums.mysql.com/read.php?26,730506,730506#msg-730506</link>
<description><![CDATA[ I have a 3-node MySQL cluster with Group Replication enabled, MySQL version 8.0.39-0ubuntu0.24.04.2, set up in multi-primary mode. However, my application connects to only one node and reads and writes from/to that single node; the other 2 nodes just replicate data and are ready to serve writes whenever I point my application at them.<br />
<br />
This cluster works most of the time, but when I apply a large load to it, a number of transactions fail with the message &quot;plugin instructed the server to rollback the current transaction&quot;. From reading about this, I understand that the issue can arise from conflicting writes on multiple nodes. However, in my current setup, writes go to ONLY one node, so there should not be a conflicting write from another node. When I change the setup to single-primary, this error disappears. How do I solve this issue?]]></description>
            <dc:creator>Krishnadas K P</dc:creator>
            <category>Replication</category>
            <pubDate>Thu, 29 May 2025 10:16:39 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,726514,726514#msg-726514</guid>
            <title>Resync mysql replica in a GTID setup gives error HA_ERR_FOUND_DUPP_KEY with AUTOPOSITION (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,726514,726514#msg-726514</link>
            <description><![CDATA[ Hello<br />
<br />
I have used a procedure based on the page <a href="http://web.archive.org/web/20230323121432/https://blog.pythian.com/mysql-streaming-xtrabackup-slave-recovery/"  rel="nofollow">http://web.archive.org/web/20230323121432/https://blog.pythian.com/mysql-streaming-xtrabackup-slave-recovery/</a> for years to sync a new replica or resync a broken replica without issue. It uses xtrabackup to save and send the MySQL data to the slave, apply the redo log, and restart the slave with MASTER_AUTO_POSITION. Simple and effective.<br />
xtrabackup is convenient for us due to the size of the databases we have to transfer; we invoke it as &quot;xtrabackup --backup --stream=xbstream --parallel=$((NB_PROC/2))&quot;.<br />
<br />
mysql: 5.7.42<br />
xtrabackup: 2.4.29<br />
<br />
Lately, when I start the slave, I get an error like this:<br />
==============================================<br />
                   Last_Errno: 1062<br />
                   Last_Error: Could not execute Write_rows event on table db_2.webhook_message_status; Duplicate entry &#039;3304591&#039; for key &#039;PRIMARY&#039;, Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event&#039;s master log mysql-bin.001615, end_log_pos 5021<br />
                 Skip_Counter: 0<br />
          Exec_Master_Log_Pos: 4692<br />
              Relay_Log_Space: 143514642<br />
...<br />
        Seconds_Behind_Master: NULL<br />
Master_SSL_Verify_Server_Cert: No<br />
                Last_IO_Errno: 0<br />
                Last_IO_Error: <br />
               Last_SQL_Errno: 1062<br />
               Last_SQL_Error: Could not execute Write_rows event on table db_2.webhook_message_status; Duplicate entry &#039;3304591&#039; for key &#039;PRIMARY&#039;, Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event&#039;s master log mysql-bin.001615, end_log_pos 5021<br />
  Replicate_Ignore_Server_Ids: <br />
             Master_Server_Id: 189087<br />
                  Master_UUID: 144c0164-3223-11ef-8319-74563c5c838d<br />
             Master_Info_File: mysql.slave_master_info<br />
==============================================<br />
<br />
Looking on slave I confirm the entry with id 3304591 is already there<br />
<br />
<br />
==============================================<br />
+---------+---------------------+---------+--------+-----------------------------+---------------------+<br />
| id      | message_external_id | site_id | status | context                     | created_at          |<br />
+---------+---------------------+---------+--------+-----------------------------+---------------------+<br />
| 3304591 | xxxxxxxxxxxxxxxxxxx |     483 | read   | {&quot;code&quot;:200,&quot;title&quot;:&quot;read&quot;} | 2024-09-19 07:52:00 |<br />
+---------+---------------------+---------+--------+-----------------------------+---------------------+<br />
==============================================<br />
<br />
So it seems the slave no longer knows how to position itself properly.<br />
<br />
As a workaround, I take the content of xtrabackup_binlog_info:<br />
<br />
==============================================<br />
mysql-bin.001615        73610932        144c0164-3223-11ef-8319-74563c5c838d:1-14071690<br />
==============================================<br />
<br />
and I did this<br />
<br />
==============================================<br />
mysql&gt; RESET MASTER;<br />
mysql&gt; SET GLOBAL GTID_PURGED=&quot;144c0164-3223-11ef-8319-74563c5c838d:1-14071690&quot;;<br />
mysql&gt; START SLAVE;<br />
==============================================<br />
<br />
and now it works<br />
==============================================<br />
             Master_Server_Id: 189087<br />
                  Master_UUID: 144c0164-3223-11ef-8319-74563c5c838d<br />
             Master_Info_File: mysql.slave_master_info<br />
                    SQL_Delay: 0<br />
          SQL_Remaining_Delay: NULL<br />
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates<br />
           Master_Retry_Count: 86400<br />
...<br />
           Retrieved_Gtid_Set: 144c0164-3223-11ef-8319-74563c5c838d:14047201-14248907<br />
            Executed_Gtid_Set: 144c0164-3223-11ef-8319-74563c5c838d:1-14248907<br />
                Auto_Position: 1<br />
==============================================<br />
<br />
As an alternative solution, I use pt-slave-restart to skip every error 1062, so after a while the slave is eventually in sync, but I don&#039;t feel confident about the replica&#039;s data integrity.<br />
<br />
Do you have any idea what could be the cause of this problem? It worked fine for the last 3 years without an issue, and we did not change the major version of MySQL or any tool involved.<br />
Is the &quot;workaround&quot; I&#039;m doing fine? Do I have all of the master&#039;s data on the slave?<br />
<br />
best]]></description>
            <dc:creator>Baptiste Mille-Mathias</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 25 Sep 2024 10:28:40 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,725655,725655#msg-725655</guid>
            <title>MySQL Cluster won&#039;t start on docker (3 replies)</title>
            <link>https://forums.mysql.com/read.php?26,725655,725655#msg-725655</link>
            <description><![CDATA[ Hello,<br />
I try to create a MySql InnoDB Cluster with containers.<br />
I have 2 VM with Ubuntu 22.04, In each I have a container with MySql 8.0.39<br />
I want to create a cluster with the 2 containers. But I failed to create.<br />
<br />
dba.createCluster(clusterName, {localAddress:&#039;mysqlClstr01:3306&#039;})<br />
A new InnoDB Cluster will be created on instance &#039;mysqlClstr01:3306&#039;.<br />
<br />
Disabling super_read_only mode on instance &#039;mysqlClstr01:3306&#039;.<br />
Validating instance configuration at mysqlClstr01:3306...<br />
<br />
This instance reports its own address as mysqlClstr01:3306<br />
<br />
Instance configuration is suitable.<br />
* Checking connectivity and SSL configuration...<br />
<br />
Creating InnoDB Cluster &#039;mysqlCluster&#039; on &#039;mysqlClstr01:3306&#039;...<br />
<br />
Adding Seed Instance...<br />
ERROR: Unable to start Group Replication for instance &#039;mysqlClstr01:3306&#039;.<br />
The MySQL error_log contains the following messages:<br />
  2024-08-23 14:40:57.147907 [System] [MY-013587] Plugin group_replication reported: &#039;Plugin &#039;group_replication&#039; is starting.&#039;<br />
  2024-08-23 14:40:57.148434 [System] [MY-011565] Plugin group_replication reported: &#039;Setting super_read_only=ON.&#039;<br />
  2024-08-23 14:40:57.149175 [Error] [MY-011735] Plugin group_replication reported: &#039;[GCS] There is no local IP address matching the one configured for the local node (mysqlClstr01:3306).&#039;<br />
  2024-08-23 14:40:57.149375 [Error] [MY-011674] Plugin group_replication reported: &#039;Unable to initialize the group communication engine&#039;<br />
  2024-08-23 14:40:57.149392 [Error] [MY-011637] Plugin group_replication reported: &#039;Error on group communication engine initialization&#039;<br />
Dba.createCluster: Group Replication failed to start: MySQL Error 3096 (HY000): mysqlClstr01:3306: The START GROUP_REPLICATION command failed as there was an error when initializing the group communication layer. (RuntimeError)<br />
<br />
I tried passing an argument in my docker-compose file to set the local address (group-replication-local-address), without success.<br />
<br />
My docker compose:<br />
volumes:<br />
  mysqlData:<br />
    name: mysqlData<br />
    external: true<br />
  mysqlLog:<br />
    name: mysqlLog<br />
    external: true<br />
services:<br />
  mysql-server-1:<br />
    env_file:<br />
      - mysql-server.env<br />
    image: mysql:8.0<br />
    container_name:  mysql-server-1<br />
    volumes:<br />
      - mysqlData:/var/lib/mysql<br />
      - mysqlLog:/var/log/mysql<br />
    network_mode: host<br />
    ports:<br />
      - &quot;3301:3306&quot;<br />
      - &quot;3306:3306&quot;<br />
      - &quot;33060:33060&quot;<br />
      - &quot;33061:33061&quot;<br />
      - &quot;33062:33062&quot;<br />
    command: [&quot;mysqld&quot;,&quot;--server_id=1&quot;,&quot;--binlog_checksum=NONE&quot;,&quot;--gtid_mode=ON&quot;,&quot;--enforce_gtid_consistency=ON&quot;,&quot;--log_bin&quot;,&quot;--log_replica_updates=ON&quot;,&quot;--user=mysql&quot;,&quot;--host_cache_size=0&quot;, &quot;--authentication_policy=mysql_native_password&quot;,&quot;--group-replication-local-address=&#039;mysqlClstr01:33061&#039;&quot;]<br />
    restart: always<br />
  mysql-shell:<br />
    env_file:<br />
      - mysql-shell.env<br />
    image: mysql-shell<br />
    container_name:  mysql-shell-1<br />
    volumes:<br />
        - ./scripts/:/scripts/<br />
    network_mode: host<br />
<br />
<br />
Could you please help me?]]></description>
            <dc:creator>Thierry LESIRE</dc:creator>
            <category>Replication</category>
            <pubDate>Fri, 06 Sep 2024 06:39:21 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,725119,725119#msg-725119</guid>
            <title>Replication ignore table issue (1 reply)</title>
            <link>https://forums.mysql.com/read.php?26,725119,725119#msg-725119</link>
            <description><![CDATA[ Hi all,<br />
I am having trouble with replication after a hardware upgrade. I initially used MySQL (years ago) and all worked fine; I then used MariaDB for a while without issues, but have now returned to MySQL, mainly because it is the default in the Linux distribution I am provided with on the new hardware. Replication worked there for a few weeks, but now it is stuck and I have no idea what to do.<br />
The setup is one master, one slave. The master is running fine and logging as required.<br />
The slave server runs fine as well; there are no issues with the MySQL instance other than replication. Slave_IO_Running says Yes, and the relay log files are up to date.<br />
The problem is that Slave_SQL_Running says No, and there is an error. If I do:<br />
select * from performance_schema.replication_applier_status_by_worker;<br />
which seems to be the best way to get a meaningful error message, I get:<br />
Worker 1 failed executing transaction &#039;ANONYMOUS&#039; at source log binlog.000021, end_log_pos 36111459; Could not execute Update_rows event on table nmrshiftdb.SESSIONS; Can&#039;t find record in &#039;SESSIONS&#039;, Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event&#039;s source log binlog.000021, end_log_pos 36111459<br />
There is other stuff, but this is the core message, I believe. SESSIONS is a table name in my database.<br />
If I take the binlog from master and look at it, the command at that position is a commit. The command before is an &quot;update SESSIONS&quot;, and before that there is begin.<br />
From this, I take it that the update is the problem. This statement works on the server, and the tables are identical on master and slave.<br />
The first issue is: this table should be ignored. If I do a &quot;show slave status&quot;, I get &quot;x.y,x.SESSIONS&quot; in &quot;Replicate_Ignore_Table&quot;, where x is my database name and y is another table. It seems the ignore table is ignored (haha). Is there any explanation for this? Something I can do? I am lost here. I tried leaving out the database name from ignore table and played with it, but nothing seems to work; SESSIONS is still executed, or attempted to be executed. Plus, it seems it worked initially; since the table is heavily used, I can&#039;t imagine that it was not used when the replication worked initially.<br />
Secondly (and this is more an observation), looking at the binlog, the update statement is not the original one: it has all columns in the update and in the where clause. Since the table is different on master and slave (that&#039;s why I use ignore), it makes sense that the replication &quot;can&#039;t find the record&quot;. I wasn&#039;t aware replication works like this; I thought it logs the original statement (which is &quot;update SESSIONS set x=... where y=...&quot;, whereas the logged statement has where x=...,y=...,z=...). I can see that if the tables were the same, the replicated statement would work, so this is not really the issue here.<br />
<br />
So in short, are there any ideas why my ignore table doesn&#039;t work? If you need more information, please ask.<br />
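(For reference, on MySQL 5.7/8.0 the same filter can also be set at runtime with CHANGE REPLICATION FILTER; this is only a sketch of the syntax, with the database name nmrshiftdb taken from the error above, not a diagnosis of why the file-based filter is being ignored:)<br />

```sql
-- Runtime equivalent of replicate-ignore-table; run on the replica.
-- The applier (SQL) thread must be stopped while changing filters.
STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_TABLE = (nmrshiftdb.SESSIONS);
START SLAVE SQL_THREAD;
```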
Thanks]]></description>
            <dc:creator>Stefan Kuhn</dc:creator>
            <category>Replication</category>
            <pubDate>Sun, 14 Jul 2024 22:53:30 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,725100,725100#msg-725100</guid>
            <title>Replication Startup and related performance parameters (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,725100,725100#msg-725100</link>
            <description><![CDATA[ We have been running on MySQL 5.1 for a long time. Recently, with an OS update, we are running on 8.0.32.<br />
<br />
It appears 5.1 was a little quicker, not by much, but a second or two faster at sending data from the active to the standby database.<br />
<br />
I would like to find a way to check settings to either confirm or deny this. I found SOURCE_HEARTBEAT_PERIOD, and it is unset in v8; I am guessing I need to set it so new data is checked for a little more quickly.<br />
<br />
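(In case it helps: the heartbeat only controls how often the source sends a keepalive when there are no new events; replication itself is push-based, so new events are sent as they are written to the binlog rather than polled for. A sketch of setting it on the replica, assuming the default channel and an example 5-second period:)<br />

```sql
-- On the replica (MySQL 8.0.23+ syntax):
STOP REPLICA;
CHANGE REPLICATION SOURCE TO SOURCE_HEARTBEAT_PERIOD = 5;
START REPLICA;
```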
I would appreciate any guidance anyone can give.]]></description>
            <dc:creator>John Maag</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 10 Jul 2024 13:33:51 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,724946,724946#msg-724946</guid>
            <title>Troubleshooting replication slowness (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,724946,724946#msg-724946</link>
            <description><![CDATA[ I am new to MySQL but an experienced Oracle DBA.<br />
<br />
We have ported from MySQL 5 to MySQL 8 and are seeing lags in replication: when we check status, it briefly reports as broken.<br />
<br />
I would appreciate anyone who can point me to some basic checks. It is not hardware performance: disk, network, cpu and memory all show basically nothing happening on both nodes.<br />
<br />
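(For anyone landing here, the usual basic checks in 8.0 are SHOW REPLICA STATUS and the performance_schema replication tables; a sketch only:)<br />

```sql
-- On the replica (SHOW SLAVE STATUS on pre-8.0.22 servers):
SHOW REPLICA STATUS\G   -- check Seconds_Behind_Source and Last_*_Error
SELECT * FROM performance_schema.replication_applier_status_by_worker\G
SELECT * FROM performance_schema.replication_connection_status\G
```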
FYI, replication worked seamlessly on v5.]]></description>
            <dc:creator>John Maag</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 26 Jun 2024 20:33:06 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,724623,724623#msg-724623</guid>
            <title>Master-Master MySQL DB Replication (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,724623,724623#msg-724623</link>
            <description><![CDATA[ Hello, I have two Zabbix nodes in a Pacemaker cluster, with resources for a virtual IP address, the Zabbix server and the front end. MySQL server is installed on both nodes, and master-master DB replication is established between the MySQL server components on the two Zabbix nodes. To establish the replication, I did some configuration in the MySQL server configuration file, ran the SQL query &quot;change master to&quot; providing the master_host, master_user, master_password, master_log_file and master_log_pos values, and started the slaves on both Zabbix nodes.<br />
<br />
The issue I am facing is that after a failover switch occurs, some deleted data reappears as fresh data with a new timestamp in the Zabbix MySQL DB tables (problem, history_text, events, etc.) on the Zabbix nodes.<br />
<br />
Any inputs to resolve this kind of issue would be much appreciated. Thanks.]]></description>
            <dc:creator>Rudresh SN</dc:creator>
            <category>Replication</category>
            <pubDate>Tue, 04 Jun 2024 06:43:37 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,723504,723504#msg-723504</guid>
            <title>Mysql 8.0 point in time recovery error - sequence number is inconsistent (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,723504,723504#msg-723504</link>
            <description><![CDATA[ Using relay logs to speed up point-in-time recovery, I run into an error.<br />
I’m testing this method to recover binlogs.<br />
<a href="https://lefred.be/content/howto-make-mysql-point-in-time-recovery-faster/"  rel="nofollow">https://lefred.be/content/howto-make-mysql-point-in-time-recovery-faster/</a><br />
<br />
But for each relay binlog being recovered I get one error. <br />
[ERROR] [MY-010411] [Repl] Transaction's sequence number is inconsistent with that of a preceding one: sequence_number (1) &lt;= previous sequence_number (1929667)<br />
The previous sequence number is different each time but the preceding sequence of (1) is always the same.<br />
I thought it might be that slave_preserve_commit_order was not enabled, but even with it enabled I still encounter that error.<br />
Note: binlog_order_commits is not set to ON on the master; does that also need to be set to prevent this issue?<br />
<br />
Given the replica and master status info below, how do I tell which sequence number in the binlog is causing the issue, so I can calculate the offset? Also, why is there a sequence number error at all? <br />
<br />
Here is information from replica status and master status.<br />
Master_Log_File: bin-log.006410<br />
Read_Master_Log_Pos: 708184140<br />
Relay_Log_File: bin-log.000002<br />
Relay_Log_Pos: 1073742162<br />
Exec_Master_Log_Pos: 1073742162<br />
<br />
Master status<br />
File: bin-log.006410<br />
Position: 708184140<br />
<br />
The start of the master binlog has this position info.<br />
# at 4<br />
#240326 10:46:13 server id 12345678 end_log_pos 126 CRC32 0x7e689634 Start: binlog v 4, server v 8.0.32-24 created 240326 10:46:13 at startup<br />
# Warning: this binlog is either in use or was not closed properly.<br />
ROLLBACK /*!*/;<br />
# at 126<br />
<br />
The end has this info.<br />
# at 708184051<br />
#240323 21:34:38 server id 12345678 end_log_pos 708184140 CRC32 0x72090bb8 Query thread_id=2369892 exec_time=221900 error_code=0<br />
SET TIMESTAMP=1711254878/*!*/;<br />
COMMIT/*!*/;<br />
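(For what it is worth, mysqlbinlog prints the last_committed/sequence_number pair on the comment line of each GTID or Anonymous_GTID event, so grepping its output lists the sequence numbers near a position. A sketch; the real command is shown as a comment, and the executable part only demonstrates the grep pattern against a sample line:)<br />

```shell
# Real usage against the relay log named in the status output above:
#   mysqlbinlog bin-log.000002 | grep -o 'sequence_number=[0-9]*'
# Demo of the pattern against one sample GTID-event comment line:
printf '#240326 10:46:13 ... last_committed=5 sequence_number=6\n' \
  | grep -o 'sequence_number=[0-9]*'
```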
Thanks for the help.]]></description>
            <dc:creator>Ron Tai</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 27 Mar 2024 15:15:03 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,723503,723503#msg-723503</guid>
            <title>mysql 8.0.32 mysqlbinlog point in time recovery very slow after upgrade (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,723503,723503#msg-723503</link>
            <description><![CDATA[ After upgrading from mysql 8.0.21 to mysql 8.0.32 and using mysqlbinlog to test point in time recovery it takes over 24 hours with 8.0.32, but with 8.0.21 it took less than 6 hours. I am recovering the same amount of binlogs and each binlog is 1 GB large. What might be causing such a large difference in time?<br />
I am using this method.<br />
mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p<br />
<br />
Thanks!]]></description>
            <dc:creator>Ron Tai</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 27 Mar 2024 15:01:35 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,723339,723339#msg-723339</guid>
            <title>Difficult to debug replication failure between MySQL 5.7 primary and MySQL 8 replica (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,723339,723339#msg-723339</link>
            <description><![CDATA[ We have a MySQL primary running on 5.7.40. We are in the process of testing the upgrade to 8.0.36 (this is on AWS RDS). We had 2 MySQL 8 replicas and 2 MySQL 5.7 replicas replicating from the primary. At almost the same time, both MySQL 8.0 replicas stopped replicating and complained about `HA_ERR_FOUND_DUPP_KEY`<br />
<br />
```<br />
 Replica SQL for channel &#039;&#039;: Worker 2 failed executing transaction &#039;87953f5d-7595-11ed-830d-02f4790d85ab:57805008598&#039; at source log mysql-bin-changelog.676514, end_log_pos 52858646; Could not execute Write_rows event on table ebdb.bike_issues; Duplicate entry &#039;177235118&#039; for key &#039;bike_issues.PRIMARY&#039;, Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event&#039;s source log mysql-bin-changelog.676514, end_log_pos 52858646, Error_code: MY-001062<br />
```<br />
<br />
Weirdly, though, the replicas complained about different key values, but the timestamps were nearly the same (a few ms apart). The MySQL 5.7 replicas were fine, so there were probably no hiccups on the primary side. Nothing shows up in the logs of the primary around this time either.<br />
<br />
The table it complained about is a very commonly written table, and we had these MySQL 8 replicas running for over a week without any replication issues. We do row-based, GTID-based replication (gtid_mode ON, enforce_gtid_consistency ON).<br />
<br />
I was able to resume replication by setting `slave_exec_mode` to `IDEMPOTENT` temporarily. When I check the error logs after the replication was in sync, I didn&#039;t see errors for the first replica&#039;s key in the second replica&#039;s error logs &amp; vice versa i.e. they both failed on different keys and did not overlap. Could this be some issue on the replication receiver part? Or possibly some bug due to version mismatch? Or some MySQL variable mismatch?<br />
<br />
How can I debug this further? What could have possibly caused this blip?]]></description>
            <dc:creator>Rahul Agrawal</dc:creator>
            <category>Replication</category>
            <pubDate>Sat, 16 Mar 2024 03:33:42 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,723324,723324#msg-723324</guid>
            <title>fail to run clone command on mysql8.0.31 (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,723324,723324#msg-723324</link>
            <description><![CDATA[ Hi, <br />
   I&#039;m trying to make clone work in my local test environment. <br />
   I have two mysql servers, one running as primary and another running as a slave. <br />
   1. First, I created an account named &#039;clone_user&#039; for clone: <br />
mysql&gt; show grants for clone_user;<br />
+-----------------------------------------------------------+<br />
| Grants for clone_user@%                                   |<br />
+-----------------------------------------------------------+<br />
| GRANT USAGE ON *.* TO `clone_user`@`%`                    |<br />
| GRANT BACKUP_ADMIN,CLONE_ADMIN ON *.* TO `clone_user`@`%` |<br />
+-----------------------------------------------------------+<br />
    2. While group_replication is running on the slave, I run the clone command. It fails: <br />
mysql&gt; clone instance from clone_user@10.50.10.249:3308 identified by &#039;***&#039;;<br />
ERROR 3875 (HY000): The clone operation cannot be executed when Group Replication is running.<br />
<br />
   3. So I stop group_replication and reset slave <br />
     mysql&gt; stop group_replication;<br />
     Query OK, 0 rows affected (4.08 sec)<br />
<br />
     mysql&gt; reset slave;<br />
     Query OK, 0 rows affected, 1 warning (0.03 sec)<br />
   4. I run clone again; it fails again because the slave is in super_read_only mode<br />
<br />
    mysql&gt; clone instance from clone_user@10.50.10.249:3308 identified by &#039;***&#039;;<br />
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement<br />
<br />
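(A possible next step, sketched only; super_read_only blocks the CLONE statement on the recipient, and cloning replaces the recipient&#039;s data, so be sure you understand the implications first:)<br />

```sql
-- On the recipient, with group_replication already stopped:
SET GLOBAL super_read_only = OFF;
CLONE INSTANCE FROM clone_user@10.50.10.249:3308 IDENTIFIED BY '***';
-- The recipient restarts itself automatically after a successful clone.
```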
   Please help if you have any idea. Thank you very much]]></description>
            <dc:creator>Liang Cheng</dc:creator>
            <category>Replication</category>
            <pubDate>Thu, 14 Mar 2024 11:49:06 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,723248,723248#msg-723248</guid>
            <title>Any workaround to execute one transaction on slave (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,723248,723248#msg-723248</link>
            <description><![CDATA[ Hi, All<br />
   We have a MySQL 8 InnoDB Cluster environment. For some unclear reason, the cluster hit some problems. We tried to restart all servers; however, the slaves could not be restarted. We checked all 3 servers: the clone plugin is installed on slave2, but not installed on slave1 or the primary db. We can install the clone plugin on the primary server; however, since group_replication failed to start on slave1 and it is in read-only mode, we could not install the clone plugin there.  <br />
  I think we may sometimes have other similar cases: a slave is missing one record, and we know that if we could insert that record we could bring the broken slave back into the cluster, but it is not easy to do this, at least on MySQL 8.<br />
  Please share your ideas if you know how to work around it. Thanks a lot.<br />
  <br />
   <br />
<br />
<br />
2024-03-07T03:34:55.348541Z 2 [System] [MY-011511] [Repl] Plugin group_replication reported: &#039;This server is working as secondary member with primary member address m5128:3308.&#039;<br />
2024-03-07T03:34:56.349399Z 0 [Warning] [MY-013470] [Repl] Plugin group_replication reported: &#039;This member will start distributed recovery using clone. It is due to no ONLINE member has the missing data for recovering in its binary logs.&#039;<br />
2024-03-07T03:34:57.353749Z 0 [System] [MY-013471] [Repl] Plugin group_replication reported: &#039;Distributed recovery will transfer data using: Cloning from a remote group donor.&#039;<br />
2024-03-07T03:34:57.354259Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: &#039;Group membership changed to m3127:3308, m5128:3308 on view 17097824372187244:2.&#039;<br />
2024-03-07T03:34:57.354703Z 32 [System] [MY-011566] [Repl] Plugin group_replication reported: &#039;Setting super_read_only=OFF.&#039;<br />
2024-03-07T03:34:57.381436Z 32 [ERROR] [MY-011569] [Repl] Plugin group_replication reported: &#039;Internal query: CLONE INSTANCE FROM &#039;mysql_innodb_cluster_1&#039;@&#039;m5128&#039;:3308 IDENTIFIED BY &#039;*****&#039; REQUIRE NO SSL; result in error. Error number: 3862&#039;]]></description>
            <dc:creator>Liang Cheng</dc:creator>
            <category>Replication</category>
            <pubDate>Sun, 10 Mar 2024 06:21:08 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,721089,721089#msg-721089</guid>
            <title>MySQL Master-Replica incremental backups (1 reply)</title>
            <link>https://forums.mysql.com/read.php?26,721089,721089#msg-721089</link>
            <description><![CDATA[ I have a client who had me create a Mysql 8.0.35 row-based (not gtid) replication setup.  They want the following:<br />
<br />
MySQL -<br />
15 minute transaction logs<br />
Nightly incrementals<br />
Weekly fulls<br />
<br />
Should I &quot;cp&quot; the replica binary logs or use mysqlbinlog (and with what syntax)?<br />
<br />
Currently, I am running mysqldump backups every hour on the Replica.<br />
<br />
date=$(date +%Y-%m-%d-%H.%M.%S)<br />
/usr/bin/mysql --vertical -e &quot;SHOW REPLICA STATUS&quot; \<br />
    | egrep &#039;Relay_Source_Log_File|Exec_Source_Log_Pos&#039; &gt; /mnt/sql_backups/all-databases_dumpall_02_replica_status_logfile_pos_${date}.txt<br />
/usr/bin/mysqladmin stop-replica<br />
/usr/bin/mysqldump --all-databases &gt; /mnt/sql_backups/all-databases_dumpall_02_${date}.sql<br />
/usr/bin/mysqladmin start-replica]]></description>
            <dc:creator>Bob Stoneman</dc:creator>
            <category>Replication</category>
            <pubDate>Wed, 28 Feb 2024 07:32:09 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,718181,718181#msg-718181</guid>
            <title>Failover/Switchover steps for Single Source Single Replica configuration (5 replies)</title>
            <link>https://forums.mysql.com/read.php?26,718181,718181#msg-718181</link>
            <description><![CDATA[ I am looking for Failover/Switchover steps for a Single Source Single Replica configuration.  Everything I have found is at minimum Single Source Two Replicas.<br />
<br />
I need to know how to Failover/Switchover when a Single Source fails/crashes and there is only a Single Replica to Failover/Switchover to :)]]></description>
            <dc:creator>Bob Stoneman</dc:creator>
            <category>Replication</category>
            <pubDate>Sun, 28 Jan 2024 19:33:41 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,715423,715423#msg-715423</guid>
            <title>Configure replication on the slave/replica server using the filename and position (1 reply)</title>
            <link>https://forums.mysql.com/read.php?26,715423,715423#msg-715423</link>
            <description><![CDATA[ I&#039;m running the following while setting up replication, and I get the following error every time:<br />
<br />
CHANGE REPLICATION SOURCE TO SOURCE_HOST=&#039;10.50.11.11&#039;,<br />
SOURCE_USER=&#039;cloud-repl&#039;, SOURCE_PASSWORD=&#039;*****************&#039;,<br />
SOURCE_LOG_FILE=&#039;mysql-bin.000051&#039;, SOURCE_LOG_POS=40474608;<br />
<br />
Replica Server error log:<br />
2024-01-20T18:48:36.203277Z 75 [Warning] [MY-013360] [Server] Plugin mysql_native_password reported: &#039;&#039;mysql_native_password&#039; is deprecated and will be removed in a future release. Please use caching_sha2_password instead&#039;<br />
2024-01-20T18:49:29.833413Z 76 [Warning] [MY-013360] [Server] Plugin mysql_native_password reported: &#039;&#039;mysql_native_password&#039; is deprecated and will be removed in a future release. Please use caching_sha2_password instead&#039;<br />
2024-01-20T18:50:49.893578Z 76 [ERROR] [MY-010717] [Repl] Error reading replica worker configuration<br />
2024-01-20T18:50:49.893647Z 76 [ERROR] [MY-010418] [Repl] Error creating applier metadata: Failed to initialize the worker info structure.<br />
<br />
<br />
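(One commonly suggested step for the &quot;Failed to initialize the worker info structure&quot; error is to clear stale applier metadata with RESET REPLICA before re-issuing the CHANGE statement. A sketch only; this discards the replica&#039;s current channel configuration and position, so only do it when you are about to set the coordinates anyway:)<br />

```sql
STOP REPLICA;
RESET REPLICA ALL;  -- clears relay logs and applier/worker metadata
CHANGE REPLICATION SOURCE TO SOURCE_HOST='10.50.11.11',
  SOURCE_USER='cloud-repl', SOURCE_PASSWORD='*****************',
  SOURCE_LOG_FILE='mysql-bin.000051', SOURCE_LOG_POS=40474608;
START REPLICA;
```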
I can&#039;t find anything on this anywhere.  Any guidance would be appreciated :)]]></description>
            <dc:creator>Bob Stoneman</dc:creator>
            <category>Replication</category>
            <pubDate>Sat, 20 Jan 2024 19:22:00 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,711921,711921#msg-711921</guid>
            <title>Removing master status (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,711921,711921#msg-711921</link>
            <description><![CDATA[ Not sure how it happened. I copied a KVM VPS while it was shut down; it has a MySQL 5.7 database I want to use to test an upgrade to 8.0 on FreeBSD 13.2. While it also has a jailed mysql process running on port 3307, set up as a slave to another master, the database running on the default port 3306 is a standalone database with no replication set up at all. <br />
<br />
After getting the copied QEMU disk booted with its own IP address and the jail disabled on boot, I find in phpMyAdmin that Replication shows both master and slave configurations. I went back and checked the source VPS to confirm it has no replication configuration at all. I did the following to stop and reset the slave...<br />
<br />
STOP SLAVE;<br />
RESET SLAVE;<br />
RESET SLAVE ALL;<br />
<br />
That removed the slave configuration, but the master remains...<br />
<br />
root@localhost [(none)]&gt; show master status;<br />
+---------------+----------+--------------+------------------+---------------------------------------------+<br />
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                           |<br />
+---------------+----------+--------------+------------------+---------------------------------------------+<br />
| binlog.000003 |      157 |              |                  | ca3f72cd-03be-11ed-97ff-5254006b63d5:1-9387 |<br />
+---------------+----------+--------------+------------------+---------------------------------------------+<br />
<br />
I found that this should work, but it does not...<br />
<br />
root@localhost [(none)]&gt; CHANGE MASTER TO MASTER_HOST=&#039;&#039;;<br />
ERROR 1210 (HY000): Incorrect arguments to SOURCE_HOST<br />
root@localhost [(none)]&gt; show master status;<br />
<br />
I am not sure how this replication got configured given this mysqld config. I tried commenting out relay-log, sync_binlog and sync_relay_log.<br />
<br />
[mysqld]<br />
user                            = mysql<br />
port                            = 3306<br />
socket                          = /tmp/mysql.sock<br />
bind-address                    = *<br />
basedir                         = /usr/local<br />
datadir                         = /www/db/mysql<br />
tmpdir                          = /www/db/mysql_tmpdir<br />
secure-file-priv                = /www/db/mysql_secure<br />
relay-log                       = /www/db/log/mysql-relay-bin.log<br />
log-output                      = FILE<br />
general-log                     = 1<br />
general-log-file                = /www/db/log/general.log<br />
relay-log-recovery              = 1<br />
slow-query-log                  = 1<br />
sync_binlog                     = 1<br />
sync_relay_log                  = 1<br />
binlog_cache_size               = 16M<br />
expire_logs_days                = 30<br />
default_password_lifetime       = 0<br />
enforce-gtid-consistency        = 1<br />
gtid-mode                       = OFF<br />
safe-user-create                = 1<br />
lower_case_table_names          = 1<br />
explicit-defaults-for-timestamp = 1<br />
myisam-recover-options          = BACKUP,FORCE<br />
open_files_limit                = 32768<br />
table_open_cache                = 16384<br />
table_definition_cache          = 8192<br />
net_retry_count                 = 16384<br />
key_buffer_size                 = 256M<br />
max_allowed_packet              = 64M<br />
query_cache_type                = 0<br />
query_cache_size                = 0<br />
long_query_time                 = 0.5<br />
innodb_buffer_pool_size         = 1G<br />
innodb_data_home_dir            = /www/db/mysql<br />
innodb_log_group_home_dir       = /www/db/mysql<br />
innodb_data_file_path           = ibdata1:128M:autoextend<br />
innodb_temp_data_file_path      = ibtmp1:128M:autoextend<br />
innodb_flush_method             = O_DIRECT<br />
innodb_log_file_size            = 256M<br />
innodb_log_buffer_size          = 16M<br />
innodb_write_io_threads         = 8<br />
innodb_read_io_threads          = 8<br />
innodb_autoinc_lock_mode        = 2<br />
skip-symbolic-links<br />
tls_version                     = TLSv1.2<br />
<br />
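(For context: in MySQL 8.0 the binary log is enabled by default, so every server reports something under SHOW MASTER STATUS even with no replication configured; it is not by itself a sign of a master setup. A sketch of the two usual options, assuming you do not need the binlog or GTID history on this test copy:)<br />

```sql
-- Option 1: clear the binlog and GTID history in place
-- (deletes ALL binary logs and resets Executed_Gtid_Set):
RESET MASTER;
-- Option 2: disable binary logging entirely by adding to [mysqld] in my.cnf:
--   disable-log-bin
-- After a restart, SHOW MASTER STATUS then returns an empty set.
```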
Any other pointers regarding the upgrade are much appreciated.]]></description>
            <dc:creator>Robert Fitzpatrick</dc:creator>
            <category>Replication</category>
            <pubDate>Fri, 01 Dec 2023 19:08:39 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,710721,710721#msg-710721</guid>
            <title>Replication fails with &quot;Got fatal error 1236 from source when reading data from binary log: &#039;Cannot replicate because the source purged required binary logs&quot; (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,710721,710721#msg-710721</link>
            <description><![CDATA[ I have 2 master instances (M1 and M2) with 1 replica each (R1 and R2) - all running MySQL 8.0.34 version with GTID enabled in all. All schema names are unique in M1 and M2 and there is no overlap. Also, R1 is configured to ignore replicating one schema (say myExcludedSchemaName).<br />
<br />
I wanted to copy a few tables from M2 to M1. So I executed the following mysqldump command on R2:<br />
<br />
sudo mysqldump -umyUserName -p myExcludedSchemaName myTableName1 myTableName2 myTableName3 myTableName4 --lock-tables=false &gt; partialDump.sql<br />
<br />
There was a warning when this command was executed:<br />
Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don&#039;t want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.<br />
<br />
However, the dump file got generated. So I tried to restore these tables in M1 using the following command:<br />
<br />
mysql -umyUserName -p myExcludedSchemaName &lt; partialDump.sql<br />
<br />
This failed with the following error:<br />
<br />
ERROR 3546 (HY000) at line 24: @@GLOBAL.GTID_PURGED cannot be changed: the added gtid set must not overlap with @@GLOBAL.GTID_EXECUTED<br />
<br />
So I regenerated the dump again by passing another flag: --set-gtid-purged=OFF<br />
<br />
There were no warnings when the dump was generated or restored in M1. However, the replica has stopped after this. The I/O thread has stopped with the following error:<br />
<br />
Got fatal error 1236 from source when reading data from binary log: &#039;Cannot replicate because the source purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new replica from backup. Consider increasing the source&#039;s binary log expiration period. The GTID sets and the missing purged transactions are too long to print in this message. For more information, please see the source&#039;s error log or the manual for GTID_SUBTRACT&#039;<br />
<br />
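(As the error message suggests, GTID_SUBTRACT can show exactly which transactions the replica still needs but the source has purged. A sketch; the two placeholder strings are the values read on the source and the replica respectively, left as placeholders here:)<br />

```sql
-- On the source:  SELECT @@GLOBAL.gtid_purged;
-- On the replica: SELECT @@GLOBAL.gtid_executed;
-- The transactions that are both missing and purged:
SELECT GTID_SUBTRACT('<source gtid_purged>', '<replica gtid_executed>')
  AS missing_purged;
```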
How can I resolve this error?]]></description>
            <dc:creator>Shobhana Sriram</dc:creator>
            <category>Replication</category>
            <pubDate>Fri, 24 Nov 2023 17:00:18 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,710662,710662#msg-710662</guid>
            <title>Transaction inconsistency in Group Replication with AFTER mode (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,710662,710662#msg-710662</link>
            <description><![CDATA[ Hi<br />
<br />
Can someone explain the following behavior? I expected that, since Paxos is used, such data inconsistency would not be possible. However, I can consistently reproduce this behavior. <br />
<br />
I have a 3-node MySQL Group Replication cluster with AFTER consistency.<br />
When a node is abruptly shut down or disconnected from the cluster, ongoing transactions on the primary are rolled back due to certification failure. However, the transactions are applied on the secondary nodes, even though no writes are done on the secondary nodes. This leads to data inconsistency: a client connected to the primary will see its connection terminated (if the primary goes down), but the data will actually be present in the cluster, since a secondary node applied the transaction.<br />
<br />
I verified this for the following scenarios:<br />
1. one of secondary nodes is shut down abruptly<br />
2. primary is shut down abruptly<br />
<br />
In both cases, the secondary nodes had more transactions than the primary. I verified this using gtid_executed. E.g.:<br />
Old: efa6f74a-73f0-11ee-8925-0a67e5184ce8:1,<br />
fa4e6db0-3475-46c9-8e9c-2fab646ed636:1-1584925:2492207-2511313 <br />
New: efa6f74a-73f0-11ee-8925-0a67e5184ce8:1,<br />
fa4e6db0-3475-46c9-8e9c-2fab646ed636:1-1584935:2492207-2511313 <br />
<br />
<br />
| group_replication_advertise_recovery_endpoints      | DEFAULT                                              |<br />
| group_replication_allow_local_lower_version_join    | OFF                                                  |<br />
| group_replication_auto_increment_increment          | 7                                                    |<br />
| group_replication_autorejoin_tries                  | 3                                                    |<br />
| group_replication_bootstrap_group                   | OFF                                                  |<br />
| group_replication_clone_threshold                   | 9223372036854775807                                  |<br />
| group_replication_communication_debug_options       | GCS_DEBUG_NONE                                       |<br />
| group_replication_communication_max_message_size    | 10485760                                             |<br />
| group_replication_components_stop_timeout           | 31536000                                             |<br />
| group_replication_compression_threshold             | 1000000                                              |<br />
| group_replication_consistency                       | AFTER                                             |<br />
| group_replication_enforce_update_everywhere_checks  | OFF                                                  |<br />
| group_replication_exit_state_action                 | READ_ONLY                                            |<br />
| group_replication_flow_control_applier_threshold    | 25000                                                |<br />
| group_replication_flow_control_certifier_threshold  | 25000                                                |<br />
| group_replication_flow_control_hold_percent         | 10                                                   |<br />
| group_replication_flow_control_max_quota            | 0                                                    |<br />
| group_replication_flow_control_member_quota_percent | 0                                                    |<br />
| group_replication_flow_control_min_quota            | 0                                                    |<br />
| group_replication_flow_control_min_recovery_quota   | 0                                                    |<br />
| group_replication_flow_control_mode                 | QUOTA                                                |<br />
| group_replication_flow_control_period               | 1                                                    |<br />
| group_replication_flow_control_release_percent      | 50                                                   |<br />
| group_replication_force_members                     |                                                      |<br />
| group_replication_group_name                        | fa4e6db0-3475-46c9-8e9c-2fab646ed636                 |<br />
| group_replication_group_seeds                       | 10.83.54.xx:33061,10.83.57.xx:33061,10.83.38.xx:33061 |<br />
| group_replication_gtid_assignment_block_size        | 1000000                                              |<br />
| group_replication_ip_allowlist                      | 10.0.0.0/8                                           |<br />
| group_replication_ip_whitelist                      | 10.0.0.0/8                                           |<br />
| group_replication_local_address                     | 10.83.54.xx:33061                                    |<br />
| group_replication_member_expel_timeout              | 5                                                    |<br />
| group_replication_member_weight                     | 70                                                   |<br />
| group_replication_message_cache_size                | 1073741824                                           |<br />
| group_replication_poll_spin_loops                   | 0                                                    |<br />
| group_replication_recovery_complete_at              | TRANSACTIONS_APPLIED                                 |<br />
| group_replication_recovery_compression_algorithms   | uncompressed                                         |<br />
| group_replication_recovery_get_public_key           | OFF                                                  |<br />
| group_replication_recovery_public_key_path          |                                                      |<br />
| group_replication_recovery_reconnect_interval       | 60                                                   |<br />
| group_replication_recovery_retry_count              | 10                                                   |<br />
| group_replication_recovery_ssl_ca                   |                                                      |<br />
| group_replication_recovery_ssl_capath               |                                                      |<br />
| group_replication_recovery_ssl_cert                 |                                                      |<br />
| group_replication_recovery_ssl_cipher               |                                                      |<br />
| group_replication_recovery_ssl_crl                  |                                                      |<br />
| group_replication_recovery_ssl_crlpath              |                                                      |<br />
| group_replication_recovery_ssl_key                  |                                                      |<br />
| group_replication_recovery_ssl_verify_server_cert   | OFF                                                  |<br />
| group_replication_recovery_tls_ciphersuites         |                                                      |<br />
| group_replication_recovery_tls_version              | TLSv1,TLSv1.1,TLSv1.2,TLSv1.3                        |<br />
| group_replication_recovery_use_ssl                  | OFF                                                  |<br />
| group_replication_recovery_zstd_compression_level   | 3                                                    |<br />
| group_replication_single_primary_mode               | ON                                                   |<br />
| group_replication_ssl_mode                          | DISABLED                                             |<br />
| group_replication_start_on_boot                     | OFF                                                  |<br />
| group_replication_tls_source                        | MYSQL_MAIN                                           |<br />
| group_replication_transaction_size_limit            | 150000000                                            |<br />
| group_replication_unreachable_majority_timeout      | 0                                                    |<br />
| innodb_replication_delay                            | 0                                                    |<br />
| replication_optimize_for_static_plugin_config       | OFF                                                  |<br />
| replication_sender_observe_commit_only              | OFF                                                  |<br />
<br />
How to repeat:<br />
1. Set up a 3-node Group Replication cluster.<br />
2. Set group_replication_consistency=AFTER on all nodes.<br />
3. Kill the mysqld process on the primary.<br />
4. Compare gtid_executed on the old primary and the new primary.]]></description>
            <dc:creator>Ankur Shukla</dc:creator>
            <category>Replication</category>
            <pubDate>Thu, 16 Nov 2023 09:54:42 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,710082,710082#msg-710082</guid>
            <title>Replication Error (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,710082,710082#msg-710082</link>
            <description><![CDATA[ Hello All, <br />
<br />
I ran into an issue while configuring replication on MySQL 8.0.33 Community Edition. <br />
The replica server&#039;s error log shows the entries below.<br />
<br />
2023-09-30T08:57:41.613111Z 23 [System] [MY-010597] [Repl] &#039;CHANGE REPLICATION SOURCE TO FOR CHANNEL &#039;&#039; executed&#039;. Previous state source_host=&#039;10.11.21.114&#039;, source_port= 17567, source_log_file=&#039;mysql-bin.000003&#039;, source_log_pos= 157, source_bind=&#039;&#039;. New state source_host=&#039;10.11.21.114&#039;, source_port= 17567, source_log_file=&#039;mysql-bin.000004&#039;, source_log_pos= 157, source_bind=&#039;&#039;.<br />
2023-09-30T08:58:04.915579Z 24 [Warning] [MY-010584] [Repl] Replica I/O for channel &#039;&#039;: Get source SERVER_ID failed with error: , Error_code: MY-001159<br />
2023-09-30T08:58:04.916362Z 24 [Warning] [MY-013120] [Repl] Replica I/O for channel &#039;&#039;: Source command COM_REGISTER_REPLICA failed: failed registering on source, reconnecting to try again, log &#039;mysql-bin.000004&#039; at position 157, Error_code: MY-013120<br />
2023-09-30T08:58:23.899753Z 24 [Warning] [MY-010584] [Repl] Replica I/O for channel &#039;&#039;: Get source clock failed with error: , Error_code: MY-001159<br />
<br />
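(For context: error 1159 / MY-001159 is ER_NET_READ_INTERRUPTED, a generic network read timeout, so a useful first check is whether the replica host can reach the source at all using the replication account. The user name below is a placeholder; host and port are taken from the log lines above.)<br />
<br />
mysql -h 10.11.21.114 -P 17567 -u repl_user -p -e "SELECT @@server_id, @@server_uuid"<br />
<br />
(If this hangs or times out, the problem is network connectivity or firewalling rather than the replication configuration itself.)<br />
<br />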
Can you help me check? <br />
<br />
Thanks, <br />
Piseth]]></description>
            <dc:creator>Piseth Ben</dc:creator>
            <category>Replication</category>
            <pubDate>Sat, 30 Sep 2023 09:28:35 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,710076,710076#msg-710076</guid>
            <title>Replication replay errors? (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,710076,710076#msg-710076</link>
            <description><![CDATA[ I have some weirdness I am struggling with.<br />
<br />
I have one slave database (8.0.32) replicating from multiple remote master databases at different sites.<br />
<br />
There is one specific remote master database (8.0.34) for which the slave has stopped and thrown errors twice in the last week.<br />
  <br />
However, these errors are really weird. They are &#039;duplicate entry&#039; errors on tables with auto_increment ids, for entries that were NEVER in the slave database to begin with. <br />
<br />
It&#039;s almost as if the slave database added a bunch of rows and then started replaying the logs, trying to re-insert those same rows.<br />
<br />
If I delete those rows on the slave database then I can restart the slave.<br />
<br />
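(The cleanup that gets it going again is along these lines; the table name and ids here are placeholders, the real ones come from the duplicate-entry error message.)<br />
<br />
DELETE FROM offending_table WHERE id IN (/* ids from the error */);<br />
START REPLICA;<br />
<br />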
Thoughts?<br />
<br />
ps.  Three times :-(]]></description>
            <dc:creator>Jason K</dc:creator>
            <category>Replication</category>
            <pubDate>Fri, 29 Sep 2023 19:47:45 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,709910,709910#msg-709910</guid>
            <title>[REPLICATION] : replica hangs on Applying batch of row changes (update) (1 reply)</title>
            <link>https://forums.mysql.com/read.php?26,709910,709910#msg-709910</link>
            <description><![CDATA[ Hi,<br />
My replica applier worker has been waiting indefinitely on this message:<br />
show processlist:<br />
11 | system user      |                 | TMP  | Query   | 4437 | Applying batch of row changes (update)      | NULL     <br />
<br />
The performance_schema.replication_applier_status_by_worker table indicates the last transaction applied.<br />
<br />
show replica status :<br />
Replica_IO_State: Waiting for source to send event<br />
Source_Host: ugdaspdt03<br />
Source_User: replication_user<br />
Source_Port: 3306<br />
Connect_Retry: 60<br />
Source_Log_File: mysql-bin.000005<br />
Read_Source_Log_Pos: 15980987<br />
Relay_Log_File: ugdaspdt04-relay-bin.000006<br />
Relay_Log_Pos: 35688101<br />
Relay_Source_Log_File: mysql-bin.000001<br />
Replica_IO_Running: Yes<br />
Replica_SQL_Running: Yes<br />
Replicate_Do_DB: btspdrp2,btspdch2,btspdrf1,TMP<br />
Replicate_Ignore_DB:<br />
Replicate_Do_Table:<br />
Replicate_Ignore_Table:<br />
Replicate_Wild_Do_Table:<br />
Replicate_Wild_Ignore_Table:<br />
Last_Errno: 0<br />
Last_Error:<br />
Skip_Counter: 0<br />
Exec_Source_Log_Pos: 35826685<br />
Relay_Log_Space: 4362053769<br />
Until_Condition: None<br />
Until_Log_File:<br />
Until_Log_Pos: 0<br />
Source_SSL_Allowed: No<br />
Source_SSL_CA_File:<br />
Source_SSL_CA_Path:<br />
Source_SSL_Cert:<br />
Source_SSL_Cipher:<br />
Source_SSL_Key:<br />
Seconds_Behind_Source: 4801<br />
Source_SSL_Verify_Server_Cert: No<br />
Last_IO_Errno: 0<br />
Last_IO_Error:<br />
Last_SQL_Errno: 0<br />
Last_SQL_Error:<br />
Replicate_Ignore_Server_Ids:<br />
Source_Server_Id: 1<br />
Source_UUID: 5696678d-b671-11ed-a9c1-00505697b35f<br />
Source_Info_File: mysql.slave_master_info<br />
SQL_Delay: 0<br />
SQL_Remaining_Delay: NULL<br />
Replica_SQL_Running_State: Waiting for dependent transaction to commit<br />
Source_Retry_Count: 86400<br />
Source_Bind:<br />
Last_IO_Error_Timestamp:<br />
Last_SQL_Error_Timestamp:<br />
Source_SSL_Crl:<br />
Source_SSL_Crlpath:<br />
Retrieved_Gtid_Set:<br />
Executed_Gtid_Set: 5696678d-b671-11ed-a9c1-00505697b35f:1-1226<br />
Auto_Position: 0<br />
<br />
<br />
I tried modifying my config to run only one applier worker, but the result is the same.<br />
<br />
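(For reference, on MySQL 8.0.26+ dropping to a single applier worker looks roughly like this; on older 8.0 builds the variable is slave_parallel_workers instead.)<br />
<br />
STOP REPLICA SQL_THREAD;<br />
SET GLOBAL replica_parallel_workers = 1;<br />
START REPLICA SQL_THREAD;<br />
<br />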
Has anyone faced this problem?<br />
Thanks]]></description>
            <dc:creator>christophe offroy</dc:creator>
            <category>Replication</category>
            <pubDate>Mon, 20 Nov 2023 11:43:57 +0000</pubDate>
        </item>
        <item>
            <guid>https://forums.mysql.com/read.php?26,709838,709838#msg-709838</guid>
            <title>MySQL Replication Observability (no replies)</title>
            <link>https://forums.mysql.com/read.php?26,709838,709838#msg-709838</link>
            <description><![CDATA[ MySQL Replication Observability<br />
- <a href="https://blogs.oracle.com/mysql/post/mysql-8-and-replication-observability"  rel="nofollow">https://blogs.oracle.com/mysql/post/mysql-8-and-replication-observability</a>]]></description>
            <dc:creator>Edwin Desouza</dc:creator>
            <category>Replication</category>
            <pubDate>Thu, 24 Aug 2023 15:55:21 +0000</pubDate>
        </item>
    </channel>
</rss>
