MySQL Forums

Re: MySQL Server crashes - spawning too many threads.
Posted by: Steve Terpening
Date: August 19, 2005 02:56PM

Here's the gist of the problem:
Regardless of the "max_connections" setting in MySQL, and regardless of the amount of free memory on the system (dual-processor Apple G5 w/4G RAM), MySQL (or Mac OS) refuses to allocate more than 2500 concurrent connections. The specific error message returned from MySQL says something like "Failed allocating new thread... if you still have free memory, please check for an operating system bug...". The MySQL/J driver reports SQLException error code 1135, SQLSTATE S1000. The only Mac OS bug reference on mysql.com is for zombie connections that don't die in a timely fashion; I'm not trying to get rid of any.

Database consists of:
1. 2 InnoDB tables for read/update
2. A few MyISAM tables with read-only data
3. 1 big HEAP table w/130K rows for fast access

Application:
1. App/WebServer pools doing quick connect-query/update-drops
2. One Web client doing multi-row(30K) updates infrequently

Relevant MySQL startup params:
open_files_limit = 50,000
max_connections = 10,000

key_buffer_size = 384M
max_heap_table_size = 128M
innodb_buffer_pool_size = 8M
innodb_additional_mem_pool_size = 1M

thread_stack = 200K
net_buffer_length = 1K
read_buffer_size = 2M (tried as low as 64K)
read_rnd_buffer_size = 8M (tried as low as 64K)
join_buffer_size = 100K (tried as low as 64K)
sort_buffer_size = 2M (tried as low as 64K)
myisam_sort_buffer_size = 67M (tried as low as 64K)

(All others taken from my-huge.cnf)
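For reference, here are those settings as a my.cnf fragment (a sketch: my.cnf values take K/M suffixes with no commas, the server spells the variables thread_stack and innodb_additional_mem_pool_size, and I've used the low-end 64K values I tried for the per-connection buffers):

```ini
[mysqld]
open_files_limit        = 50000
max_connections         = 10000

key_buffer_size         = 384M
max_heap_table_size     = 128M
innodb_buffer_pool_size = 8M
innodb_additional_mem_pool_size = 1M

thread_stack            = 200K
net_buffer_length       = 1K
# Per-connection buffers, at the smallest values tried:
read_buffer_size        = 64K
read_rnd_buffer_size    = 64K
join_buffer_size        = 64K
sort_buffer_size        = 64K
myisam_sort_buffer_size = 64K
```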

mysqld VM size at:
1. server start: 470M
2. after load of Heap tables: 530M
3. At 2500 connections: 1.2G

Unix Kernel params I've tweaked:
kern.maxfiles = 60000 (default 12288)
kern.maxvnodes = 150000 (default 33762)
kern.maxproc = 10000 (default 2048)
kern.maxfilesperproc = 10240 (unchangeable via sysctl)
kern.maxprocperuid = 1000 (unchangeable via sysctl)
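For anyone repeating this, the tweaks above as an /etc/sysctl.conf fragment (a sketch; on this vintage of Mac OS X the same values can also be set at runtime with `sysctl -w` as root):

```
# /etc/sysctl.conf -- raise kernel limits at boot
kern.maxfiles=60000
kern.maxvnodes=150000
kern.maxproc=10000
# kern.maxfilesperproc and kern.maxprocperuid would also need raising,
# but were not settable via sysctl on this system.
```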


I've picked apart mysql.com, have an open thread in the forum, and scanned every book I could find in O'Reilly. I just don't know what else to try.

PS: My pseudo testing routine:
for (1 to 10K) { open a connection; select count(*) from a 20K-row InnoDB table; hold the connection open. }
The SQLException is caught somewhere between 2450 and 2550 connections.

TIA
- Steve Terpening

(Sorry for the format - I also mailed this to all my "connections" hoping for a hit)
Personally, I think the ~1G of RAM in use is coming from the thread stacks (2500 * 200K ≈ 500M), but there's still no reason to croak at 1G.
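That back-of-the-envelope math checks out: with thread_stack = 200K, 2500 threads account for roughly 488 MB of stack alone, before any per-statement buffers. A quick sketch (the class and method names are mine, and the comment about lazy buffer allocation is my understanding, not a complete accounting of mysqld's per-connection cost):

```java
// Rough memory estimate for thread stacks at the settings above.
// Note: read/sort/join buffers are generally allocated per statement
// as needed, so an idle held-open connection mostly costs its stack
// plus the small net buffer.
public class ConnMemEstimate {
    static final long KB = 1024;
    static final long THREAD_STACK = 200 * KB; // thread_stack = 200K

    static long stackBytes(int connections) {
        return connections * THREAD_STACK;
    }

    public static void main(String[] args) {
        int conns = 2500;
        long stack = stackBytes(conns);
        // 2500 * 200K = 500,000 KB, i.e. about 488 MB -- in the same
        // ballpark as the ~700 MB of VM growth seen between server
        // start (530M after Heap load) and 2500 connections (1.2G).
        System.out.printf("Stack for %d threads: %d MB%n",
                conns, stack / (1024 * 1024));
    }
}
```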

Don't have any relevant stats, as volumes are low right now.
Am looking up Oracle install as I send this.
