
Re: Huge memusage when operating on large hash/arrays in perl
Posted by: Bill Karwin
Date: April 20, 2006 06:02PM

I glanced at the code (http://search.cpan.org/src/MFX/Compress-LZO-1.08/LZO.xs) and I see that it uses malloc and free inside its compress() function.

I'd guess that something subtle is going on with the ref counts on the scalar returned by Compress::LZO::compress(), so the memory malloc'ed in the XS code never gets free'd. Each element of your @searchvector array holds a ref to the scalar allocated by compress(), so none of those buffers can be released.
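For what it's worth, here's roughly the pattern I'm picturing from your description. Only @searchvector and compress() come from your post; %data, @searchwords, and $vector are names I made up just to make the sketch self-contained:

  use strict;
  use warnings;
  use Compress::LZO;

  # made-up sample data, only so the sketch runs on its own
  my %data;
  $data{$_} = "some text about $_ " x 200 for qw(foo bar baz);
  my @searchwords = keys %data;

  my @searchvector;
  foreach my $searchword (@searchwords) {
      # compress() builds the compressed scalar in XS code that
      # mallocs (and is supposed to free) a work buffer
      my $vector = Compress::LZO::compress( $data{$searchword} );

      # each array element keeps a ref to that scalar, so its
      # ref count never drops and the memory stays pinned
      push @searchvector, \$vector;
  }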

But with your fix, copying into the scalar $vector2 and then overwriting it each time, the ref counts are reset for each searchword, so the old buffer can be freed.
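If I'm reading your fix right, it has roughly this shape (again my reconstruction, using the same made-up %data / @searchwords setup as above; only $vector2 is your variable name):

  my @searchvector;
  my $vector2;
  foreach my $searchword (@searchwords) {
      # reusing one scalar: assigning the new compressed data first drops
      # the ref count on the previous contents, so the old buffer gets
      # freed on every pass instead of accumulating
      $vector2 = Compress::LZO::compress( $data{$searchword} );

      # pushing the value (not a ref) copies the compressed string
      push @searchvector, $vector2;
  }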

Just a guess... I don't exactly see how the memory would not be freed, since the C code looks pretty straightforward, like it's guaranteed to free the memory. But malloc and free are pretty tricky sometimes, and Perl's ref count mechanism makes that even harder.

Anyway, good that you found a workaround.

Regards,
Bill K.





