Re: File descriptors
SHOW VARIABLES LIKE '%open%';
If the table cache is too small, you will get thrashing of file descriptors -- tables being repeatedly closed and reopened. And each entry in the table cache uses some RAM.
The open_files_limit is derived from the OS limit.
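If you want to check whether the cache is actually thrashing, something like this will show it (variable names are the 5.1 ones; in 5.0 the cache variable is table_cache, and the values below are only illustrative):

SHOW GLOBAL STATUS LIKE 'Opened_tables';        -- if this climbs rapidly under load, the cache is too small
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';  -- current cache size
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';  -- the OS-derived descriptor ceiling
SET GLOBAL table_open_cache = 2000;             -- illustrative value; also set it in my.cnf to survive a restart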
Back on the previous thread -- DELETEing rows does not return space to the OS, but it does free up space for subsequent INSERTs. So, if you are inserting and deleting about the same number of rows each day, the disk footprint will stay pretty constant. The drawback (MyISAM) is that the new rows will be scattered around the table (usually not an issue) and may be fragmented (broken into parts, thus requiring extra disk hits). OPTIMIZE solves that last issue (but, as you noted, takes a long time).
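If you want to see how much reusable-but-fragmented space a table is carrying, something like this (table name is a placeholder):

SHOW TABLE STATUS LIKE 'mytable';   -- Data_free shows the space DELETEs have left for reuse
OPTIMIZE TABLE mytable;             -- defragments (MyISAM); locks the table while it runs

If your INSERTs and DELETEs are roughly balanced, that Data_free number should level off rather than grow.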
I do not know for a fact whether JFS will have trouble with 750K files.
When doing a SELECT on a PARTITIONed table, _all_ partitions are opened (regardless of how many are needed) -- that is, 25 files (50, counting MyISAM's .MYD and .MYI per partition?) have to be opened. (This is a known 'bug'; it has not yet been addressed.) Do your SELECTs hit only one partitioned table? If so, a few SELECTs per second are probably not a problem -- the descriptor thrashing should not be 'too high'.
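You can at least confirm how many partitions the optimizer would prune down to (even though all of them get opened) with EXPLAIN PARTITIONS; table and column names here are placeholders:

EXPLAIN PARTITIONS SELECT COUNT(*) FROM mytable WHERE dt >= '2009-06-01';
-- the 'partitions' column lists only the partitions the query actually needs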
Bottom line: you may not be 'in trouble'.
Other approaches to consider: Archive. PARTITIONs of Archive. Infobright.
All of these take much less disk space and promise fast loads. And your 1-2 SELECTs/sec should not be a problem. These engines may be significantly faster or slower for a given SELECT, depending on the query and the data.
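A rough sketch of the "PARTITIONs of Archive" idea, assuming your 5.1 build allows partitioned ARCHIVE tables (names and ranges are made up):

CREATE TABLE log_archive (
  dt  DATETIME NOT NULL,
  msg VARCHAR(255)
) ENGINE=ARCHIVE
PARTITION BY RANGE (TO_DAYS(dt)) (
  PARTITION p200906 VALUES LESS THAN (TO_DAYS('2009-07-01')),
  PARTITION p200907 VALUES LESS THAN (TO_DAYS('2009-08-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);

Note that ARCHIVE supports only INSERT and SELECT -- no UPDATE or DELETE -- so purging old data would mean ALTER TABLE log_archive DROP PARTITION p200906 rather than DELETEing rows.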