Re: Inserting data into Database uses up lots of memory
autocommit = 0 without a COMMIT is a bad way to go: it leaves a transaction open "forever". (No, I don't know whether that leads to stuff clogging up memory.)
Infrequent INSERTs (under 100/sec): either autocommit = 1, or COMMIT after every one or every few.
>100 inserts/sec: LOAD DATA (which you don't want to do) or batched INSERTs. I like batches of 100. In your case ("streaming"), I might instead close off the batch after whichever of these comes first:
* 100 rows
* 1 second
* 5MB of text in the INSERT statement.
And COMMIT (if using InnoDB).
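As a rough sketch of that flush-on-whichever-comes-first batching (the table name, columns, and `execute_fn` callback are illustrative stand-ins for your own DB call, not anything from your script):

```python
import time

class BatchInserter:
    """Accumulate rows and emit one multi-row INSERT when any limit is hit."""
    MAX_ROWS = 100
    MAX_AGE_SEC = 1.0
    MAX_BYTES = 5 * 1024 * 1024   # ~5MB of INSERT text

    def __init__(self, execute_fn):
        self.execute_fn = execute_fn   # called with the finished SQL string
        self.rows = []
        self.bytes = 0
        self.first_row_time = None

    def add(self, value_tuple_sql):
        """value_tuple_sql is one VALUES tuple as a string, e.g. "(1,'abc')"."""
        if not self.rows:
            self.first_row_time = time.monotonic()
        self.rows.append(value_tuple_sql)
        self.bytes += len(value_tuple_sql) + 1   # +1 for the joining comma
        if (len(self.rows) >= self.MAX_ROWS
                or self.bytes >= self.MAX_BYTES
                or time.monotonic() - self.first_row_time >= self.MAX_AGE_SEC):
            self.flush()

    def flush(self):
        if not self.rows:
            return
        sql = "INSERT INTO tbl (a,b) VALUES " + ",".join(self.rows)
        self.execute_fn(sql)   # and COMMIT here if autocommit is off (InnoDB)
        self.rows, self.bytes, self.first_row_time = [], 0, None
```

For a real driver you would also escape the values properly (or use the driver's multi-row `executemany`) rather than building strings by hand.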
What language is used for the "script"?
If you are using InnoDB, this is much too small:
innodb_buffer_pool_size | 8388608
Even with only 1GB to work with, I would raise that to 100M. But make sure that this does not lead to swapping. Swapping is worse than having things tuned low.
If you are not using InnoDB, that can be set to 0.
SHOW CREATE TABLE tblname; -- this will show you what ENGINE is being used for a given table.
Lower this if you don't need it this high:
max_connections | 100
These show that you don't need it anywhere near that high:
Max_used_connections | 3
Uptime | 801629
In all that uptime you have never used more than 3 connections at once; if that number were much higher, per-connection memory would be one of the few things that might look like a "memory leak".
These contradict each other:
query_cache_size | 0
query_cache_type | ON
Recommend 0 and OFF.
This leads to unnecessary work:
thread_cache_size | 0
Change it to 4.
This seems too small, but if you are not crashing, leave it:
thread_stack | 126976
Crash info will be found in mysqld.err.
This causes more I/O (safety vs speed) in InnoDB:
innodb_flush_log_at_trx_commit | 1
If you get into I/O trouble, consider setting it to 2.
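Pulling the settings discussed so far into one place, a my.cnf sketch might look like the following (values are the suggestions above, assuming roughly 1GB of RAM and InnoDB; back off if the box starts swapping):

```ini
[mysqld]
innodb_buffer_pool_size = 100M   # up from 8M; reduce if swapping appears
query_cache_size = 0             # make size and type consistent
query_cache_type = OFF
thread_cache_size = 4            # avoids rebuilding a thread per connection
# innodb_flush_log_at_trx_commit = 2   # only if I/O is a problem (speed over safety)
```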
If you are using MyISAM, this _may_ be too small:
key_buffer_size | 8384512
The key_buffer grows over time, up to that size. This is one of the few things that might look like a "memory leak".
Please provide
SHOW TABLE STATUS LIKE 'tblname'
QPS is extremely low (about 0.005 queries/second):
Questions | 4067
Uptime | 801629
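The arithmetic behind that "extremely low" claim, using the two STATUS values above:

```python
# Questions / Uptime gives the average queries-per-second since startup.
questions = 4067       # total statements since the server started
uptime_sec = 801629    # roughly 9.3 days
qps = questions / uptime_sec
print(round(qps, 4))   # about 0.0051 queries/second
```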
There are various "%_size" VARIABLES that are allocated and possibly freed; some are global, some are per-connection; some grow as needed. Still, none _should_ have caused what you describe on a 1GB machine. (Do not randomly change them; you could cause real crashes.)