MySQL Forums
Forum List  »  Announcements

MySQL Cluster 7.4.3 has been released
Posted by: Sreedhar S
Date: January 21, 2015 07:50PM

Dear MySQL Users,

MySQL Cluster 7.4.3 is a Release Candidate, a public milestone
release for MySQL Cluster 7.4.

MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:

  - In-Memory storage - Real-time performance (with optional
    checkpointing to disk)
  - Transparent Auto-Sharding - Read & write scalability
  - Active-Active/Multi-Master geographic replication
  - 99.999% High Availability with no single point of failure
    and on-line maintenance
  - NoSQL and SQL APIs (including C++, Java, http, Memcached
    and JavaScript/Node.js)

MySQL Cluster 7.4 makes significant advances in performance,
operational efficiency (such as enhanced reporting and faster restarts
and upgrades), and conflict detection and resolution for active-active
geographic replication between MySQL Clusters.

MySQL Cluster 7.4.3 can be downloaded from the "Development
Releases" tab at

  http://www.mysql.com/downloads/cluster/

where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.

The release notes are available from

  http://dev.mysql.com/doc/relnotes/mysql-cluster/7.4/en/index.html

MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.

As with any other pre-production release, caution should be taken when
installing it on production-level systems or systems with critical data.
More information on the Development Milestone Release process can be
found at

  http://dev.mysql.com/doc/mysql-development-cycle/en/development-milestone-releases.html

More details can be found at

  http://www.mysql.com/products/cluster/

Enjoy !

==============================================================================

Changes in MySQL Cluster NDB 7.4.3 (5.6.22-ndb-7.4.3)         2015.01.21

   MySQL Cluster NDB 7.4.3 is a new release of MySQL Cluster,
   based on MySQL Server 5.6 and including features under
   development for version 7.4 of the NDB storage engine, as
   well as fixing a number of recently discovered bugs in
   previous MySQL Cluster releases.

   Obtaining MySQL Cluster NDB 7.4.  MySQL Cluster NDB 7.4
   source code and binaries can be obtained from
   http://dev.mysql.com/downloads/cluster/.

   For an overview of changes made in MySQL Cluster NDB 7.4, see
   MySQL Cluster Development in MySQL Cluster NDB 7.4
   (http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-development-5-6-ndb-7-4.html).

   This release also incorporates all bugfixes and changes made
   in previous MySQL Cluster releases, as well as all bugfixes
   and feature changes which were added in mainline MySQL 5.6
   through MySQL 5.6.22 (see Changes in MySQL 5.6.22
   (2014-12-01)
   (http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-22.html)).

   Functionality Added or Changed

     * Additional logging is now performed of internal states
       occurring during system restarts such as waiting for node
       ID allocation and master takeover of global and local
       checkpoints. (Bug #74316, Bug #19795029)

     * Cluster API: Two new example programs, demonstrating
       reads and writes of CHAR, VARCHAR, and VARBINARY column
       values, have been added to storage/ndb/ndbapi-examples in
       the MySQL Cluster source tree. For more information about
       these programs, including source code listings, see NDB
        API Simple Array Example
        (http://dev.mysql.com/doc/ndbapi/en/ndbapi-examples-array-simple.html),
        and NDB API Simple Array Example Using Adapter
        (http://dev.mysql.com/doc/ndbapi/en/ndbapi-examples-array-adapter.html).

   Bugs Fixed

     * The global checkpoint commit and save protocols can be
       delayed by various causes, including slow disk I/O. The
        DIH master node monitors the progress of both of these
        protocols and enforces a maximum lag time; when a stall
        reaches this maximum, the node responsible for the lag is
        killed.
       This DIH master GCP monitor mechanism did not perform its
       task more than once per master node; that is, it failed
       to continue monitoring after detecting and handling a GCP
       stop. (Bug #20128256)
       References: See also Bug #19858151.

     * When running mysql_upgrade on a MySQL Cluster SQL node,
       the expected drop of the performance_schema database on
       this node was instead performed on all SQL nodes
       connected to the cluster. (Bug #20032861)

     * The warning shown when an ALTER TABLE ALGORITHM=INPLACE
       ... ADD COLUMN statement automatically changes a column's
       COLUMN_FORMAT from FIXED to DYNAMIC now includes the name
       of the column whose format was changed. (Bug #20009152,
       Bug #74795)
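       As an illustration (the table and column names here are
       hypothetical, not taken from the release notes), a
       statement such as the following now produces a warning
       that names the affected column:

       ```sql
       -- NDB requires DYNAMIC column format for in-place ADD COLUMN,
       -- so FIXED is changed to DYNAMIC with a warning; the warning
       -- now identifies column b as the one whose format changed.
       CREATE TABLE t1 (
           id INT NOT NULL PRIMARY KEY,
           a  INT COLUMN_FORMAT FIXED
       ) ENGINE=NDB;

       ALTER TABLE t1 ALGORITHM=INPLACE, ADD COLUMN b INT COLUMN_FORMAT FIXED;

       SHOW WARNINGS;
       ```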

     * The local checkpoint ScanFrag watchdog and the global
       checkpoint monitor can each exclude a node that is too
       slow in participating in their respective protocols.
       This exclusion was implemented simply by asking the
       failing node to shut down, which, if delayed for any
       reason, could prolong the duration of the GCP or LCP
       stall for other, unaffected nodes.
       To minimize this time, an isolation mechanism has been
       added to both protocols whereby any other live nodes
       forcibly disconnect the failing node after a
       predetermined amount of time. This allows the failing
       node the opportunity to shut down gracefully (after
       logging debugging and other information) if possible, but
       limits the time that other nodes must wait for this to
        occur. Now, once the remaining live nodes have processed
        the disconnection of any failing nodes, they can commence
        failure handling and restart the related protocol or
        protocols, even if the failed node takes an excessively
        long time to shut down. (Bug #19858151)
       References: See also Bug #20128256.

     * The matrix of values used for thread configuration when
       applying the setting of the MaxNoOfExecutionThreads
       configuration parameter has been improved to align with
       support for greater numbers of LDM threads. See
        Multi-Threading Configuration Parameters (ndbmtd)
        (http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-ndbd-definition.html#mysql-cluster-ndbd-definition-ndbmtd-parameters),
        for more information about the changes. (Bug #75220,
        Bug #20215689)
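       For reference, MaxNoOfExecutionThreads is set for ndbmtd
       data nodes in the cluster's config.ini; the value below is
       illustrative only, and the thread configuration it maps to
       is described at the link above:

       ```ini
       [ndbd default]
       # Illustrative value: higher settings now map to thread
       # configurations with greater numbers of LDM threads.
       MaxNoOfExecutionThreads=16
       ```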

     * When a new node failed after connecting to the president
       but not to any other live node, then reconnected and
       started again, a live node that did not see the original
       connection retained old state information. This caused
       the live node to send redundant signals to the president,
       causing it to fail. (Bug #75218, Bug #20215395)

     * In the NDB kernel, it was possible for a
       TransporterFacade object to reset a buffer while the data
       contained by the buffer was being sent, which could lead
       to a race condition. (Bug #75041, Bug #20112981)

     * mysql_upgrade failed to drop and recreate the ndbinfo
       database and its tables as expected. (Bug #74863, Bug
       #20031425)

     * Due to a lack of memory barriers, MySQL Cluster programs
       such as ndbmtd did not compile on POWER platforms. (Bug
       #74782, Bug #20007248)

     * In spite of the presence of a number of protection
       mechanisms against overloading signal buffers, it was
       still in some cases possible to do so. This fix adds
       block-level support in the NDB kernel (in SimulatedBlock)
       to make signal buffer overload protection more reliable
       than when implementing such protection on a case-by-case
       basis. (Bug #74639, Bug #19928269)

     * Copying of metadata during local checkpoints caused node
       restart times to be highly variable which could make it
       difficult to diagnose problems with restarts. The fix for
       this issue introduces signals (including PAUSE_LCP_IDLE,
       PAUSE_LCP_REQUESTED, and PAUSE_NOT_IN_LCP_COPY_META_DATA)
       to pause LCP execution and flush LCP reports, making it
       possible to block LCP reporting at times when LCPs during
       restarts become stalled in this fashion. (Bug #74594, Bug
       #19898269)

     * When a data node was restarted from its angel process
       (that is, following a node failure), it could be
       allocated a new node ID before failure handling was
       actually completed for the failed node. (Bug #74564, Bug
       #19891507)

     * In NDB version 7.4, node failure handling can require
       completing checkpoints on up to 64 fragments. (This
       checkpointing is performed by the DBLQH kernel block.)
        The requirement that master takeover wait for the
        completion of all such checkpoints could, in such cases,
        make takeover take an excessively long time to complete.
       To address these issues, the DBLQH kernel block can now
       report that it is ready for master takeover before it has
       completed any ongoing fragment checkpoints, and can
       continue processing these while the system completes the
       master takeover. (Bug #74320, Bug #19795217)

     * Local checkpoints were sometimes started earlier than
       necessary during node restarts, while the node was still
       waiting for copying of the data distribution and data
       dictionary to complete. (Bug #74319, Bug #19795152)

     * The check used to determine whether a node was restarting,
       and thus when to accelerate local checkpoints, sometimes
       reported a false positive. (Bug #74318, Bug #19795108)

     * Values in different columns of the ndbinfo tables
       disk_write_speed_aggregate and
       disk_write_speed_aggregate_node were reported using
       differing multiples of bytes. Now all of these columns
       display values in bytes.
       In addition, this fix corrects an error made when
       calculating the standard deviations used in the
       std_dev_backup_lcp_speed_last_10sec,
       std_dev_redo_speed_last_10sec,
       std_dev_backup_lcp_speed_last_60sec, and
       std_dev_redo_speed_last_60sec columns of the
       ndbinfo.disk_write_speed_aggregate table. (Bug #74317,
       Bug #19795072)
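       A quick way to inspect the now-uniform units is to query
       the table from any SQL node (a sketch; the node_id column
       is assumed from the ndbinfo table definition, while the
       std_dev columns are those named above):

       ```sql
       -- All speed columns now report plain bytes, so values can be
       -- compared directly across columns without unit conversion.
       SELECT node_id,
              std_dev_backup_lcp_speed_last_10sec,
              std_dev_redo_speed_last_10sec,
              std_dev_backup_lcp_speed_last_60sec,
              std_dev_redo_speed_last_60sec
       FROM ndbinfo.disk_write_speed_aggregate;
       ```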

     * Recursion in the internal method Dblqh::finishScanrec()
       led to an attempt to create two list iterators with the
       same head. This regression was introduced during work
       done to optimize scans for version 7.4 of the NDB storage
       engine. (Bug #73667, Bug #19480197)

     * Cluster API: It was possible to delete an
       Ndb_cluster_connection object while there remained
       instances of Ndb using references to it. Now the
       Ndb_cluster_connection destructor waits for all related
       Ndb objects to be released before completing. (Bug
       #19999242)
       References: See also Bug #19846392.

     * ClusterJ: ClusterJ reported a segmentation violation when
       an application closed a session factory while some
        sessions were still active. This was because MySQL
        Cluster allowed an Ndb_cluster_connection object to be
        deleted while some Ndb instances were still active, which
        could result in ClusterJ using null pointers. This fix
        prevents ClusterJ from closing a session factory while
        any of its sessions are still active. (Bug #19846392)
       References: See also Bug #19999242.

Regards
MySQL Release Engineering Team
