
Re: Max CPU During NDBD Restart
Posted by: Mikael Ronström
Date: April 14, 2016 06:02PM

Hi,

Chris Telinde Wrote:
-------------------------------------------------------
> I'm experiencing an issue with my data nodes:
> during a rolling restart of the cluster, when I
> restart each data node one by one, the restarting
> node begins to consume 100% of one CPU's
> resources.
>

You run with 1 LDM thread, and that is the thread maxing out
at 100%; this is perfectly normal. During the restart it reads
the database from the disk-stored checkpoint and inserts it
into memory. Then it runs the REDO log to catch up with the
transactions performed after the checkpoint. Then it rebuilds
the ordered indexes and the hash index.
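
If you want to confirm this on the data node itself, a per-thread
view in top will show one ndbmtd thread (the LDM thread) pinned
near 100% while the rest stay mostly idle. A minimal check on a
Linux host (the PID placeholder is yours to fill in):

  # Per-thread CPU view of the ndbmtd worker process; substitute
  # the actual PID (note ndbmtd normally runs as an angel + worker
  # pair, so pick the worker). During the restart, expect exactly
  # one thread near 100% CPU - the LDM thread.
  top -H -p <ndbmtd-worker-pid>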

If it is a node restart, it might also spend some time
synchronising its data with the other nodes in its node group.
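
You can also watch the restart progressing from the management
client; the start phases correspond roughly to the steps above
(checkpoint load, REDO execution, index rebuild, then
synchronisation for a node restart). For example:

  ndb_mgm -e "all status"

While a node is restarting, its line reads something like
"Node 3: starting (Last completed phase 4)", so you can tell the
node is making progress rather than being hung.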

So this is perfectly normal behaviour. If you had 4 LDM threads,
you would restart faster, since you would max out 4 CPUs instead
of 1.
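
As a sketch of what that configuration would look like (assuming
the data node VMs were first given at least 8 cores; with your
current 2 cores, 1 LDM thread is the right setting), in config.ini:

  [ndbd default]
  # 8 execution threads; per the NDB 7.4 thread-distribution table
  # this yields 4 LDM threads (check the MaxNoOfExecutionThreads
  # entry in the manual for the exact mapping in your version).
  MaxNoOfExecutionThreads=8

Alternatively, the ThreadConfig parameter lets you set the count of
each thread type (and its CPU binding) explicitly instead of using
MaxNoOfExecutionThreads.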

Rgrds Mikael Ronström

> My current cluster configuration is as follows:
>
> [ndbd default]
> # Options affecting ndbd processes on all data nodes:
> NoOfReplicas=2
> DataMemory=10240M
> IndexMemory=1024M
> MaxNoOfConcurrentOperations=5000000
> MaxNoOfExecutionThreads=2
> TimeBetweenLocalCheckpoints=6
> NoOfFragmentLogFiles=100
> TransactionDeadlockDetectionTimeout=120000
>
> [tcp default]
> OverloadLimit=1073741824
> SendBufferMemory=16M
> ReceiveBufferMemory=16M
>
> [ndb_mgmd]
> # Management Node 1
> NodeID=1
> Hostname=mgmtcluster01
> DataDir=/var/lib/mysql-cluster
>
> [ndb_mgmd]
> # Management Node 2
> NodeID=2
> Hostname=mgmtcluster02
> DataDir=/var/lib/mysql-cluster
>
> [ndbd]
> # Options for data node 1
> NodeID=3
> Hostname=datacluster01
> DataDir=/vol1/data/mysql-cluster-data
> ServerPort=50501
>
> [ndbd]
> # Options for data node 2
> NodeID=4
> Hostname=datacluster02
> DataDir=/vol1/data/mysql-cluster-data
> ServerPort=50502
>
> [ndbd]
> NodeID=5
> Hostname=datacluster03
> DataDir=/vol1/data/mysql-cluster-data
> ServerPort=50503
>
> [ndbd]
> NodeID=6
> Hostname=datacluster04
> DataDir=/vol1/data/mysql-cluster-data
> ServerPort=50504
>
> [mysqld]
> # Options for SQL node 1
> NodeID=7
> Hostname=sqlcluster01
>
> #[mysqld]
> #NodeID=8
> #Hostname=sqlcluster01
>
> [mysqld]
> # Options for SQL node 2
> NodeID=8
> Hostname=sqlcluster02
>
> #[mysqld]
> #NodeID=10
> #Hostname=sqlcluster02
>
> [mysqld]
> # Options for SQL node 3
> NodeID=9
> Hostname=sqlcluster03
>
> [mysqld]
> # Can be used for online backups
> # and for using the ndb_desc utility,
> # which requires an available nodeid.
>
> And here's my ndb_mgm -e show output:
>
> Cluster Configuration
> ---------------------
> [ndbd(NDB)] 4 node(s)
> id=3 @192.168.254.206 (mysql-5.6.25 ndb-7.4.7, Nodegroup: 0, *)
> id=4 @192.168.254.207 (mysql-5.6.25 ndb-7.4.7, Nodegroup: 0)
> id=5 @192.168.254.208 (mysql-5.6.25 ndb-7.4.7, Nodegroup: 1)
> id=6 @192.168.254.209 (mysql-5.6.25 ndb-7.4.7, Nodegroup: 1)
>
> [ndb_mgmd(MGM)] 2 node(s)
> id=1 @192.168.254.201 (mysql-5.6.25 ndb-7.4.7)
> id=2 @192.168.254.202 (mysql-5.6.25 ndb-7.4.7)
>
> [mysqld(API)] 4 node(s)
> id=7 @192.168.254.203 (mysql-5.6.25 ndb-7.4.7)
> id=8 @192.168.254.204 (mysql-5.6.25 ndb-7.4.7)
> id=9 @192.168.254.205 (mysql-5.6.25 ndb-7.4.7)
> id=10 (not connected, accepting connect from any host)
>
> As you can see from the configuration, I am using
> the ndbmtd process on the data nodes, specifying
> MaxNoOfExecutionThreads=2. All servers in the
> cluster are running in VMWare, and the data nodes
> have a single CPU with 2 cores.
>
> When the data nodes restart, the ndbmtd process
> will grab onto one "core" and max it out at 100%
> until the data node is finished restarting. Is
> this normal behavior, or does this suggest that it
> may be necessary to scale the data nodes out on
> the cluster even further than the 4 existing data
> nodes?
>
> Thank you for any and all assistance.
