MySQL Forums
Forum List  »  Performance

Re: Inserting data into Database uses up lots of memory
Posted by: Rick James
Date: June 13, 2012 11:11AM

autocommit = 0 without an eventual COMMIT is a bad way to go. It leaves a transaction open "forever". (No, I don't know whether that leads to stuff clogging up memory.)

Infrequent INSERTs (< 100/sec): either autocommit = 1, or COMMIT after every one (or few).

>100 inserts/sec: LOAD DATA (which you don't want to do) or batched INSERTs. I like batches of 100. In your case ("streaming"), I might instead close off the batch after whichever of these comes first:
* 100 rows
* 1 second
* 5MB of text in the INSERT statement.
And COMMIT (if using InnoDB).
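The flush rule above (100 rows, 1 second, or 5MB, whichever comes first) can be sketched like this. All names here are hypothetical; the real call that sends SQL depends on whatever client library the script uses:

```python
import time

class InsertBatcher:
    """Accumulate rows and flush one multi-row INSERT when any limit is hit.

    Illustrative sketch only: table name, columns, and the execute callback
    are placeholders, not anything from the original post.
    """

    MAX_ROWS = 100
    MAX_AGE_SECONDS = 1.0
    MAX_BYTES = 5 * 1024 * 1024  # ~5MB of SQL text

    def __init__(self, execute):
        self.execute = execute       # callback that runs one SQL string
        self.rows = []               # pending, already-escaped "(a, b, c)" groups
        self.size = 0                # rough byte count of pending SQL text
        self.first_row_at = None     # when the oldest pending row arrived

    def add(self, values_sql):
        """values_sql is one pre-escaped '(v1, v2, v3)' group."""
        if self.first_row_at is None:
            self.first_row_at = time.monotonic()
        self.rows.append(values_sql)
        self.size += len(values_sql) + 1
        if (len(self.rows) >= self.MAX_ROWS
                or self.size >= self.MAX_BYTES
                or time.monotonic() - self.first_row_at >= self.MAX_AGE_SECONDS):
            self.flush()

    def flush(self):
        if not self.rows:
            return
        sql = "INSERT INTO tbl (a, b, c) VALUES " + ",".join(self.rows)
        self.execute(sql)
        self.execute("COMMIT")       # the COMMIT step, needed for InnoDB
        self.rows, self.size, self.first_row_at = [], 0, None
```

Note that the age check only runs when a new row arrives; a real streaming script would also flush from a timer so a quiet stream does not hold rows past the 1-second limit.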

What language is used for the "script"?

If you are using InnoDB, this is much too small:
innodb_buffer_pool_size | 8388608
Even with only 1GB to work with, I would raise that to 100M. But make sure that this does not lead to swapping. Swapping is worse than having things tuned low.
If you are not using InnoDB, that can be set to 0.
SHOW CREATE TABLE tblname; -- this will show you what ENGINE is being used for a given table.

Lower this if you don't need it this high:
max_connections | 100
You don't need it anywhere near that high, given your peak:
Max_used_connections | 3
Uptime | 801629
You have used only 3 connections at once; otherwise I would point out that per-connection memory is one of the few things that might look like a "memory leak".

These contradict each other:
query_cache_size | 0
query_cache_type | ON
Recommend 0 and OFF.

This leads to unnecessary work (a new thread must be created for every connection):
thread_cache_size | 0
Change it to 4.

This seems too small, but if you are not crashing, leave it:
thread_stack | 126976
Crash info will be found in mysqld.err.

This causes more I/O (trading speed for safety) in InnoDB:
innodb_flush_log_at_trx_commit | 1
If you get into I/O trouble, consider setting it to 2; the log is then flushed only about once per second, so a crash could lose up to a second of transactions.

If you are using MyISAM, this _may_ be too small:
key_buffer_size | 8384512
The key_buffer grows over time, up to that size. This is one of the few things that might look like a "memory leak".
Please provide
SHOW TABLE STATUS LIKE 'tblname';

Your query rate is extremely low (about 4067 queries in 801629 seconds of uptime, roughly 0.005 qps):
Questions | 4067
Uptime | 801629

There are various VARIABLES named "%_size" that are allocated and possibly freed; some are global, some are per-connection; some grow as needed. Still, none _should_ have caused what you describe on a 1GB machine. (Do not randomly change them; you could cause real crashes.)
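To see why those "%_size" variables rarely explain a leak on their own, here is a back-of-envelope worst-case estimate. Global buffers are counted once; per-connection buffers multiply by max_connections. All values below are illustrative defaults, not taken from your configuration:

```python
# Rough worst-case memory estimate for a MySQL server.
# Values are illustrative, not the original poster's actual settings.
GLOBAL_BUFFERS = {
    "innodb_buffer_pool_size": 100 * 1024**2,  # the suggested 100M
    "key_buffer_size": 8 * 1024**2,
    "query_cache_size": 0,
}
PER_CONNECTION = {
    "sort_buffer_size": 2 * 1024**2,
    "join_buffer_size": 256 * 1024,
    "read_buffer_size": 128 * 1024,
    "thread_stack": 126976,
}
max_connections = 100

# Globals are allocated once; per-connection buffers could, at worst,
# be allocated by every one of max_connections threads at the same time.
worst_case = (sum(GLOBAL_BUFFERS.values())
              + max_connections * sum(PER_CONNECTION.values()))
print(f"worst case ~ {worst_case / 1024**2:.0f} MB")
```

Even this pessimistic total stays well under 1GB, which is why the per-connection buffers are unlikely culprits here, especially with only 3 connections ever used at once.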

