
Memory/cache usage with innodb_flush_method values
Posted by: Brad Alberts
Date: September 28, 2015 08:52PM

Hi. I'm running an Apache/MySQL Virtual Private Server and I thought I'd set innodb_flush_method to O_DIRECT so I could avoid the 'doubling up' of InnoDB data in the filesystem cache, and thus get more use out of my memory. I did some simple benchmarks using different values for innodb_flush_method and would appreciate help in understanding my results. :-)

I'm running a stock CentOS 6.3 server on a virtual machine with four 'cores'. The kernel is 2.6.32-279 and the MySQL version is 5.1.61-4.

The VM originally had 2GB of memory, and my tests were done with the small 512MB InnoDB buffer pool size I had configured for that setup. By the time I ran the tests, however, the VM had been upgraded to 4GB of RAM.
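
(For reference, confirming what the running server is actually using is just a matter of the standard SHOW VARIABLES through the stock client:)

# check the buffer pool size and flush method after each restart
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_method'"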

The InnoDB table on which I performed the benchmarks was a copy of the largest table we had in our application, an application-level audit log, consisting of a few int fields, a timestamp and a 'text' column. A 'select count(*)' told me that there were 3,659,153 rows in the table. The information_schema.tables entry for the table had the following:

VERSION - 10
ROW_FORMAT - Compact
TABLE_ROWS - 4191347
AVG_ROW_LENGTH - 249
DATA_LENGTH - 1045430272

As you can see from the above, the size of the table (DATA_LENGTH) was about 1GB. A mysqldump of the table produced an 800MB text file. An UPDATE statement which incremented one of the int columns for every row produced a binary log almost 1.5GB in size.
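
(Those figures come straight out of information_schema; a sketch of the query, with 'audit_log' standing in for the real table name:)

mysql -e "SELECT VERSION, ROW_FORMAT, TABLE_ROWS, AVG_ROW_LENGTH, DATA_LENGTH
          FROM information_schema.TABLES WHERE TABLE_NAME = 'audit_log'"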

The Virtual Machine is (unfortunately) throttled by our provider to have a disk I/O limit of 13MB/sec. If the timings for my tests on this 1GB table seem ultra-slow to you then that would be the reason.

My simple 'benchmark' consisted of a read test and an update test. For the read test I ran a simple SQL statement summing one of the integer columns across all rows:

SELECT sum(int_field) FROM table

For my update test I just incremented the column:

UPDATE table SET int_field = int_field + 1

I ran each test twice for each of three settings of innodb_flush_method - NULL (i.e. the default), O_DSYNC and O_DIRECT. Each SELECT/UPDATE run used the following sequence of steps (a scripted version of the sequence is sketched after the list):

1. Flush/empty Linux filesystem buffer cache - 'sync && echo 3 > /proc/sys/vm/drop_caches && free'
2. Restart mysqld - 'service mysqld restart'
3. Flush/empty the Linux filesystem buffer cache again (just to be sure mysqld would run my statements 'cold')
4. Run the read/update statement
5. Record the time taken (as reported by the 'mysql' client utility)
6. Run 'free' to see how much data had been read into the Linux filesystem cache.
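
In script form one iteration looks roughly like this (a sketch; 'mydb' and 'bench.sql' are stand-ins for the real database name and a file holding whichever of the two statements is being timed):

#!/bin/sh
# Sketch of one benchmark iteration; run as root.
sync && echo 3 > /proc/sys/vm/drop_caches && free   # 1. empty the filesystem cache
service mysqld restart                              # 2. restart mysqld
sync && echo 3 > /proc/sys/vm/drop_caches           # 3. flush again so the run starts 'cold'
time mysql mydb < bench.sql                         # 4/5. run the statement ('time' stands in for the timing the interactive mysql client prints)
free                                                # 6. see how much ended up in the filesystem cache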

I did an 'lsof' of the mysqld process to check how the InnoDB data and log files were opened under the three values of innodb_flush_method; as expected, with O_DSYNC the log files were opened 'SYNC', and with O_DIRECT the InnoDB data file was opened 'DIRECT'.
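
(The check itself is just lsof with the file-flag column turned on; ibdata* and ib_logfile* are the standard InnoDB file names:)

# show the kernel open-file flags (+fg) for the InnoDB data and log files
lsof +fg -p "$(pidof mysqld)" | grep -E 'ibdata|ib_logfile'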

My overall goal in all this was to prove to myself that setting innodb_flush_method to O_DIRECT would perform as well as or better than the default while leaving the Linux filesystem cache alone. The filesystem cache would then be free for all of the PHP file access and so on, rather than wastefully 'mirroring' data that InnoDB was already caching for itself in its buffer pool.

Summary - unfortunately it looks like O_DIRECT ends up being 20% *slower* in reads, although it was roughly 10% faster in the bulk-update test. So I guess I won't be switching to use O_DIRECT.

During the tests I had the web server and crond shut down to try to keep the system quiescent.
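
(On CentOS 6 that is just the stock service commands, assuming Apache runs under the usual 'httpd' service name:)

service httpd stop
service crond stop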

I can't see a way to format the results as a table so I'll do it longhand. 'bufs' is the number of buffers reported by the 'free' command; 'cached' is the KB that it reported as cached.
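
(For this version of free - the procps build that ships with CentOS 6 - a one-liner like the following pulls out just those two columns from the Mem: line; newer versions of free lay the output out differently:)

# print the 'buffers' and 'cached' columns from the Mem: line
free | awk '/^Mem:/ {print "bufs", $6, "cached", $7}'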

innodb_flush_method = NULL:

read #1 - 74s, bufs 4980, cached 1,077,308
read #2 - 73s, bufs 5048, cached 941,492

update #1 - 611s, bufs 4728, cached 2,842,408
update #2 - 650s, bufs 4000, cached 1,803,370

innodb_flush_method = O_DSYNC:

read #1 - 74s, bufs 4148, cached 936,304
read #2 - 74s, bufs 5348, cached 936,684

update #1 - 664s, bufs 2216, cached 2,856,400
update #2 - 715s, bufs 1800, cached 1,837,412

innodb_flush_method = O_DIRECT:

read #1 - 89s, bufs 4032, cached 21,188
read #2 - 90s, bufs 5204, cached 21,880

update #1 - 538s, bufs 7052, cached 2,829,996
update #2 - 566s, bufs 7648, cached 2,802,488

Discussion:

* I don't know how the Linux filesystem cache works; I guess the buffers are of varying size? There seemed to be no 'linear' relationship between the actual amount of memory cached and the number of buffers allocated. It's interesting, though, that the number of those buffers was so much higher in the O_DIRECT update tests.

* I don't know why the update tests ended up reporting - consistently - two different values for the amount of cached filesystem data - roughly 1.8GB on one test, then 2.8GB on the other (except for O_DIRECT, which had 2.8GB for both). The InnoDB table itself was roughly a gigabyte in size, and the binary log file, after the update, was 1.5GB, which is roughly consistent with the higher figure. Was something happening in some tests and not others? Or does Linux *empty* the cache sometimes?

* As desired, the O_DIRECT read tests showed that InnoDB was reading data straight into its own buffer pool and not going through the Linux filesystem cache. Unfortunately the reads were about 20% slower.

* The O_DSYNC updates were a little slower than default; I wonder why?

* The updates for O_DIRECT were a bit faster. The MySQL documentation says that the 'data files' are opened with O_DIRECT and the server "uses fsync() to flush both the data and log files". But surely it wouldn't bother doing an fsync() on the data file, since it is opened with O_DIRECT? Would the speed increase in updates be due simply to the lack of filesystem cache overhead in writing out dirty pages?
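
I suppose one way to poke at that question would be to watch the server's own fsync counters across an update run; these are stock status variables, so something like the following before and after the UPDATE would show whether the data files are still being fsync()ed under O_DIRECT:

# cumulative fsync() calls against the data files and the log files
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_data_fsyncs'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_fsyncs'"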

My questions:

1. Why are the reads slower for O_DIRECT?

2. Why would the amount of filesystem cache used for the updates be the same (2.8GB) for O_DIRECT, when the 1GB of actual data (as opposed to the log) presumably was not cached?

3. Would anyone know why there'd be a semi-consistent 1GB difference in the filesystem cache used between the pairs of update tests for NULL and O_DSYNC? My system was as quiescent as I could make it, and 1GB is a pretty big difference.

4. Why would the O_DSYNC updates be slower? (Just curious.)

I'm just trying to understand how MySQL/InnoDB does things. Many thanks for any advice or pointers!
