MySQL Forums
Forum List  »  Performance

Re: merge tables running out of file descriptors and causing mysql to crash
Posted by: Ulf Wendel
Date: May 18, 2005 03:28AM

Ahoy!

Michael Stoppelman wrote:
> I have a database with about 70 merge tables, the
> underlying MyISAM tables total to 4800 tables
> (one per day). Recently I ran into a cryptic
> thread dying bug in mysqld, so the database just
> segfaults and stops (which really is horrible). To
> combat this I set up a replica to offload the read
> activity which solved the problem for now. At peak
> usage my database was using over 600k file
> descriptors.

http://dev.mysql.com/doc/mysql/en/table-cache.html

> I've tried to use the 'flush_time' and 'flush'
> server variables to close tables more frequently.

You mean FLUSH TABLES?

> 'table_cache' to 10 and this didn't stop mysql
> from caching 300 tables. Is there a way to tell
> mysql to close all the tables after the query is
> over? This would help my situation a lot. I've

I don't know of such a possibility. Doing this would decrease performance: AFAIK open() involves a switch from user to kernel mode on Unix, and that's always slow.

Your table_cache setting of 10 is pretty low. Check the manual link above; it explains why you see the automatic expansion to 300.


> Also, it seems 'open_files_limit' maxes out at
> 65535 no matter what you set open-files-limit in
> my.cnf, anyone know anything about this
> upperbound?


What's the maximum value allowed by your operating system/kernel? MySQL is limited to the values accepted by the setrlimit() system call. If setrlimit() fails, MySQL falls back to the value that was in effect before it tried to raise the limit. It might well be that your OS limit is 65k.

Ulf
