MySQL Forums
Forum List  »  MyISAM

Re: Importing dump of huge table takes forever since 5.6.15
Posted by: Rick James
Date: January 16, 2014 04:48PM

How much RAM do you have?

> MyISAM Pros (Performance, Size, ...)

True, Size is different (typically 2x-3x). But performance is an old wives' tale: InnoDB has improved over the years, and is now faster than MyISAM for most workloads.

> We are in a Insert-and-update-dominated scenario of very large tables.

Note that MyISAM will lock that very large table for the duration of each INSERT / UPDATE. InnoDB would lock only the row(s) involved, thereby usually allowing concurrent access.
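If you decide to switch, the conversion is a one-statement sketch (the table name here is hypothetical; use the actual table from your dump, and expect the ALTER to take a long time on a very large table):

```sql
-- Hypothetical table name; substitute the real one from your dump.
-- Converts the storage engine so that INSERT/UPDATE lock rows, not the whole table.
ALTER TABLE fixings ENGINE=InnoDB;
```

Remember that InnoDB needs its own tuning (notably innodb_buffer_pool_size), so a straight conversion without adjusting settings may disappoint.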

> `fdate` date NOT NULL COMMENT 'Fixing date in days since 1.1.1970',
> `ftime` time NOT NULL COMMENT 'Fixing time in seconds of the day after 7:00',

In _most_ (not all) applications, it is better to use a single DATETIME or TIMESTAMP to hold both the date and time. SELECTs often become faster, even if slightly more complex to write. Perhaps yours is an exception, since I see

> PRIMARY KEY (`id`,`fdate`)
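If you did merge them, the migration might look like the following sketch (table and new column names are hypothetical; note the comment on `ftime` says it counts seconds *after 7:00*, so the real conversion may need an offset rather than a plain combine):

```sql
-- Hypothetical sketch: merge fdate + ftime into one DATETIME column.
ALTER TABLE fixings
    ADD COLUMN fdatetime DATETIME NOT NULL;

-- If ftime is a plain time-of-day, TIMESTAMP(date, time) combines them.
-- If it is really an offset from 07:00, add the 7-hour base first.
UPDATE fixings
    SET fdatetime = TIMESTAMP(fdate, ftime);
```

Range queries such as `WHERE fdatetime BETWEEN ... AND ...` can then use a single index instead of juggling two columns.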

> `bid` float DEFAULT NULL,

Not wise to use FLOAT or DOUBLE for money. DECIMAL(..., 2) is better. (The 2 assumes Euros or dollars.)
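As an illustration (the precision 12 is a guess; size it to your largest expected price), the column could be changed like so:

```sql
-- Sketch: DECIMAL stores exact values, avoiding binary-float rounding on money.
-- 12 total digits / 2 after the decimal point is an assumption, not a requirement.
ALTER TABLE fixings
    MODIFY COLUMN bid DECIMAL(12,2) DEFAULT NULL;
```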

Some analysis of your settings:

( local_infile ) = ON
-- local_infile = ON is a potential security issue

( read_buffer_size ) = 256M -- for MyISAM table scans (per thread). Before 5.6.4 the entire space is allocated up front; afterward it grows as needed.

( read_rnd_buffer_size ) = 256M -- for MyISAM random reads?

( query_cache_size ) = 256M -- Size of QC
-- Too large = too much overhead.
Recommend no more than 50M

( tmp_table_size ) = 512M -- Limit on size of temp tables used to support a SELECT
-- Decrease tmp_table_size to avoid running out of RAM. Perhaps no more than 64M.

( sort_buffer_size ) = 4,294,967,295 = 4096MB -- One per thread; malloced at full size before 5.6.4, so keep it low; after that, bigger is OK.
-- This may be eating into available RAM; recommend no more than 2M.

( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2

( expire_logs_days ) = 0 -- How soon to automatically purge binlog (after this many days)
-- Too large (or zero) = consumes disk space; too small = need to respond quickly to network/machine crash
Since you are talking about setting up a Slave, you need to have log_bin=ON _now_. With that ON, the binlogs will start to pile up on disk. I like expire_logs_days = 14, but your situation may be different.
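Pulling the suggestions above together, a my.cnf fragment might look like this (these are the recommendations from this post, not universal defaults; tune to your RAM and workload):

```ini
[mysqld]
# Suggested ceilings from the analysis above
query_cache_size  = 50M    # too large a QC adds overhead
tmp_table_size    = 64M    # avoid running out of RAM on big SELECTs
sort_buffer_size  = 2M     # one per thread, so keep it modest
long_query_time   = 2      # catch more slow queries in the slow log

# Needed before setting up the Slave
log_bin           = mysql-bin   # filename prefix is an assumption
expire_logs_days  = 14          # purge binlogs so they don't fill the disk
```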


