
Re: MySQL Cluster: The table is full
Posted by: Mikael Ronström
Date: April 14, 2016 05:55PM

Hi,

superlifter Wrote:
-------------------------------------------------------
> I have a problem with understanding MySQL Cluster.
>
> I have one table:
>
> - 38 fields total
> - 22 of the fields have indexes defined on them (field type: int)
> - Other fields are double and bigint values
> - The table doesn't have a defined Primary Key
>

Since the table has no explicit primary key, it will get a
hidden primary key, which is an 8-byte auto-increment field.
The primary key consumes roughly 15 bytes per row.
This memory is taken from IndexMemory.

If I understand correctly, you have 22 indexes on the table.
Each such index consumes roughly 15 bytes per row.
This memory is taken from DataMemory.

There is also roughly a 25-byte overhead on each row
in DataMemory.

So with this, the numbers add up fairly well.
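
As a rough back-of-the-envelope check, using the figures quoted
below (244-byte rows, 22 indexes at ~15 bytes/row, ~25 bytes/row
overhead, 75M rows, NoOfReplicas=2); the real allocation is
page-based, so the exact numbers will differ a bit:

SELECT
    244 + 22 * 15 + 25 AS approx_bytes_per_row,                              -- 599
    ROUND(75e6 * (244 + 22 * 15 + 25) / POW(1024, 3), 1) AS one_copy_gb,     -- ~41.8
    ROUND(2 * 75e6 * (244 + 22 * 15 + 25) / POW(1024, 3), 1) AS replicas_gb, -- ~83.7
    ROUND(8 * 9970 / 1024, 1) AS total_datamemory_gb;                        -- ~77.9

The estimated ~84GB for two copies of the data is above the ~78GB
of total DataMemory, which matches the table filling up at around
75M rows.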

Not sure if you need all 22 indexes, but those
take up most of the space.
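
If some of them turn out to be unused, dropping them frees that
DataMemory again. The index name here is just a placeholder:

-- drop an unneeded ordered index (index name is hypothetical)
ALTER TABLE test_table DROP INDEX idx_field_07;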

I would also assume that you would be better off with
larger data nodes (larger memory, that is) and fewer of
them. It is rare to need more data nodes than MySQL
Servers for performance reasons (one could obviously
have them for other reasons).
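
For reference, one way to follow how DataMemory and IndexMemory
fill up per data node is the ndbinfo database (assuming a version
recent enough to have it); a minimal sketch:

-- per-node DataMemory and IndexMemory usage
SELECT node_id,
       memory_type,
       ROUND(used  / POW(1024, 3), 2) AS used_gb,
       ROUND(total / POW(1024, 3), 2) AS total_gb
FROM ndbinfo.memoryusage;

The ALL REPORT MEMORYUSAGE command in the management client gives
roughly the same picture.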

Rgrds Mikael Ronström


> My Environment (10 nodes):
>
> - data nodes: 8 (AWS EC2 instances, m4.xlarge, 16GB RAM, 750GB HDD)
> - management nodes: 2 (AWS EC2 instances, m4.2xlarge, 32GB RAM)
> - sql nodes: 2 (on the same VMs as the management nodes)
>
> MySQL Cluster settings (config.ini) are set to:
>
> .............................
> [NDBD DEFAULT]
> NoOfReplicas=2
> ServerPort=2200
> Datadir=/storage/data/mysqlcluster/
> FileSystemPathDD=/storage/data/mysqlcluster/
> BackupDataDir=/storage/data/mysqlcluster//backup/
> #FileSystemPathUndoFiles=/storage/data/mysqlcluster/
> #FileSystemPathDataFiles=/storage/data/mysqlcluster/
> DataMemory=9970M
> IndexMemory=1247M
> LockPagesInMainMemory=1
> 
> MaxNoOfConcurrentOperations=100000
> MaxNoOfConcurrentTransactions=16384
> 
> StringMemory=25
> MaxNoOfTables=4096
> MaxNoOfOrderedIndexes=2048
> MaxNoOfUniqueHashIndexes=512
> MaxNoOfAttributes=24576
> MaxNoOfTriggers=14336
> 
> ### Params for REDO LOG 
> FragmentLogFileSize=256M
> InitFragmentLogFiles=SPARSE
> NoOfFragmentLogFiles=39
> RedoBuffer=64M
> 
> TransactionBufferMemory=8M
> 
> TimeBetweenGlobalCheckpoints=1000
> TimeBetweenEpochs=100
> 
> TimeBetweenEpochsTimeout=0
> 
> ### Params for LCP 
> MinDiskWriteSpeed=10M
> MaxDiskWriteSpeed=20M
> MaxDiskWriteSpeedOtherNodeRestart=50M
> MaxDiskWriteSpeedOwnRestart=200M
> TimeBetweenLocalCheckpoints=20
> 
> ### Heartbeating 
> HeartbeatIntervalDbDb=15000
> HeartbeatIntervalDbApi=15000
> 
> ### Params for setting logging 
> MemReportFrequency=30
> BackupReportFrequency=10
> LogLevelStartup=15
> LogLevelShutdown=15
> LogLevelCheckpoint=8
> LogLevelNodeRestart=15
> 
> ### Params for BACKUP 
> BackupMaxWriteSize=1M
> BackupDataBufferSize=24M
> BackupLogBufferSize=16M
> BackupMemory=40M
> 
> ### Params for ODIRECT 
> #Reports indicate that odirect=1 can cause io errors (os err code 5) on some systems. You must test.
> #ODirect=1
> 
> ### Watchdog 
> TimeBetweenWatchdogCheckInitial=60000
> 
> ### TransactionInactiveTimeout - should be enabled in Production
> TransactionInactiveTimeout=60000
> ### New 7.1.10 redo logging parameters 
> RedoOverCommitCounter=3
> RedoOverCommitLimit=20
> ### REALTIME EXTENSIONS 
> #RealTimeScheduler=1
> ### REALTIME EXTENSIONS FOR 6.3 ONLY
> #SchedulerExecutionTimer=80
> #SchedulerSpinTimer=40
> 
> ### DISK DATA 
> SharedGlobalMemory=20M
> DiskPageBufferMemory=64M
> BatchSizePerLocalScan=512
> .............................
>
>
> After importing 75M records into my table I get the error
> (The table 'test_table' is full) and cannot import any more
> data into the table. I don't understand why this is so.
>
> I look at information_schema and can see that
> avg_record_size is 244. The full table size is ~19GB.
>
> Also, if I look at DataMemory used on each data
> node, I see ~94%. IndexMemory used is ~22%.
>
> But I have 8 data nodes with a total DataMemory of
> 8 * 9970M = ~80GB.
>
> My table is only 19GB. So even though I have replicas,
> the memory used must be 19*2 = 38GB.
>
> Could somebody explain to me what the situation is, and
> how I can configure the Cluster to import the maximum
> possible number of records? The full table in production
> will have 33 billion records.
>
> For tests on the given cluster I need to test 100M
> and 1B data sets.
>
> Thanks.
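
For completeness, the avg_record_size and table size figures quoted
above can be pulled with a query along these lines (the database
name is a placeholder):

SELECT table_name,
       avg_row_length,
       ROUND((data_length + index_length) / POW(1024, 3), 1) AS total_size_gb
FROM information_schema.tables
WHERE table_schema = 'test_db'    -- placeholder database name
  AND table_name = 'test_table';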


