MySQL Cluster Manager 1.4.3 has been released
Posted by: Sreedhar S
Date: July 10, 2017 05:04AM
Dear MySQL Users,

MySQL Cluster Manager 1.4.3 has been released and can be downloaded from the My Oracle Support (MOS) website. It will also be available on Oracle Software Delivery Cloud at http://edelivery.oracle.com with the next monthly update.

MySQL Cluster Manager is an optional component of the MySQL Cluster Carrier Grade Edition, providing a command-line interface that automates common management tasks, including the following online operations:

- Configuring and starting MySQL Cluster
- Upgrades
- Adding and removing cluster nodes
- Adding and removing site hosts
- Configuration changes
- Backup and restore

MySQL Cluster Manager is a commercial extension to the MySQL family of products. More details can be found at http://www.mysql.com/products/cluster/mcm/

A brief summary of changes in MySQL Cluster Manager version 1.4.3 is listed below:

Changes in MySQL Cluster Manager 1.4.3 (2017-07-10)

This section documents all changes and bug fixes that have been applied in MySQL Cluster Manager 1.4.3 since the release of MySQL Cluster Manager version 1.4.2.

Functionality Added or Changed

* Agent: CPU usage during idle time for the mcmd agents has been significantly reduced. (Bug #26227736)

* Agent: A new error code, Error 7030, has been created for failed ndb_mgmd commands and mysqld queries. (Bug #26160968)

* Agent: Added support for the --skip-networking option for mysqld nodes, allowing mysqld nodes of a managed cluster to communicate with client applications using named pipes or shared memory on Windows platforms, and socket files on Unix-like platforms. Note, however, that communication between mcmd agents and mcm clients using named pipes, shared memory, or socket files remains unsupported. (Bug #25992390, Bug #25974499)

* Client: The start cluster --initial command now reinitializes the SQL nodes (if their data directories are empty) as well as the data nodes of an NDB Cluster. A new option, --skip-init, has been introduced for specifying a comma-separated list of the SQL nodes for which reinitialization is to be skipped. (Bug #25856285, Bug #85713)

* Client: Checksum verification has been added for all cluster reconfiguration plans created by the mcmd agents. Checksums for plans created locally are shared among all agents, and when the checksums do not match, the reconfiguration is aborted. This prevents agents from executing different plans. (Bug #23225839)

* Files have been removed from the MySQL Cluster Manager + NDB Cluster bundled package in order to reduce the package size significantly. (Bug #25916635)

Bugs Fixed

* Agent: When the list nextnodeid command was run against a cluster with the maximum number of nodes allowed, the mcmd agent quit unexpectedly. With this fix, the situation is properly handled. (Bug #26286531)

* Agent: For a cluster with NoOfReplicas=1, trying to stop a data node with the stop process command would cause the agent to quit unexpectedly. (Bug #26259780)

* Agent: When a data node was killed by an arbitrator in a situation of network partitioning, an mcmd agent failed to handle the exit report from the node and quit unexpectedly. This was due to mishandling of the node group information, which this fix corrects. (Bug #26192412)

* Agent: A cluster could not be started if a relative path had been used for the --manager-directory option to set the location of the agent repository. (Bug #26172299)

* Agent: When executing a user command, the mcmd agent could hang if the expected reply from another agent never arrived. This fix improves the timeout handling to avoid such hangs. (Bug #26168339)

* Agent: While running the import config command, the mcmd agents that were present during the earlier dry run for the import would become silent and then unavailable. This was due to hostname resolution issues, which have been addressed by this fix.
(Bug #26089906)

* Agent: A collect logs command sometimes failed midway with ERROR 1003 (Internal error: No clients connected). This was because the mcmd agent reset the copy completion marker prematurely; this fix stops that behavior. (Bug #26086958)

* Agent: When the mcmd agents' clocks went out of sync due to time drift on virtual machines running Windows operating systems and then came back into sync, communications among the agents failed. This fix prevents the problem by making the agents use a monotonic timer for their communication. (Bug #26084090)

* Agent: The dropping or recreating of a node group that took place when adding data nodes could sometimes fail with an assertion error ("Polled nodegroup info is inconsistent"). This fix relaxes the assertion, which allows the node group reconfiguration to be completed. (Bug #26051753) References: See also: Bug #20104357.

* Agent: During execution of a set command, if no mysqld node was available for querying cluster information, an mcmd agent timed out while waiting for the "prepared" message from another agent, even after the message had already been sent. This was because the two agents had inconsistent execution plans for the set command. This fix prevents the inconsistency. (Bug #26021616) References: This issue is a regression of: Bug #14230789, Bug #23148061.

* Agent: A backup cluster --waitcompleted command sometimes timed out right before the backup was completed when there were many tables to be backed up, because the logical backup of the tables' metadata took too long in that case. With this fix, the mcmd agent now receives progress reports on the logical backup, and the backup does not time out unless no more progress reports are received. (Bug #26000482)

* Agent: When a set command involved a restart of a cluster's data nodes but one of the data nodes had been stopped, the set command failed with a timeout.
With this fix, the set command is carried out successfully with a rolling restart of the data nodes. (Bug #25869325)

* Agent: If a mysqld node was configured with the --skip-name-resolve option, attempts by mcmd to connect to the mysqld node would fail with the error message Host '127.0.0.1' is not allowed to connect to this MySQL server. This was because the MySQL account used by mcmd had 127.0.0.1 as its host name, which is not allowed when the --skip-name-resolve option is used with the mysqld node. This fix changes the account host name to localhost. (Bug #25831764, Bug #85620)

* Agent: When a host and its mcmd agent were restarted, mcmd might fail to restart a management or mysqld node on the host, and the show status command continuously reported the node's status as unknown. (Bug #25822822)

* Agent: When an mcmd agent was in the process of shutting down, a user command issued at that point might cause the agent to quit unexpectedly. With this fix, an error message "Agent is shutting down" is returned, and the agent continues with its shutdown. (Bug #25055338)

* Agent: When a set command involved a restart of a cluster's data nodes but one of the data nodes was in the failed state, mcmd restarted the data node and then restarted it once more as part of a rolling restart, which was unnecessary. This fix eliminates the second restart. (Bug #23586651)

* Client: After an agent was started and a few commands had been executed from the mcm client, the show settings command started returning the wrong value for the --log-level option. (Bug #26189795)

* Client: Trying to set the mysqld node option --validate_password resulted in an error saying the parameter did not exist, even if the Password Validation Plugin (http://dev.mysql.com/doc/refman/5.6/en/validate-password-plugin.html) had already been installed on the mysqld node. This was due to errors in plugin activation for mysqld nodes, which have now been corrected.
(Bug #25797125)

On behalf of the Oracle MySQL RE Team
-Sreedhar S
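As a quick illustration of the new reinitialization behavior, the new --skip-init option might be used in an mcm client session as sketched below; the cluster name mycluster and the SQL node IDs 50 and 51 are hypothetical, and the exact argument format for --skip-init should be confirmed against the MySQL Cluster Manager 1.4.3 manual:

```
mcm> -- Reinitialize the data nodes and any SQL nodes whose data directories are empty
mcm> start cluster --initial mycluster;

mcm> -- Same, but skip reinitialization of the listed SQL nodes
mcm> start cluster --initial --skip-init=50,51 mycluster;
```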