Re: how to handle big tables, that does not fit into heap memory at once?
Posted by: Tetsuro Ikeda
Date: June 03, 2005 01:50PM

mkaktus wrote:
> Hi,
> how do I handle big tables that do not fit into
> heap memory at once?
> The ResultSet documentation says that only a
> forward-only, read-only result set, fetched row by
> row, is allowed, and it locks the involved tables
> completely. (MySQL Connector/J Documentation -
> 1.3.2 JDBC API Implementation Notes, ResultSet)
> How do I scan/iterate big tables row by row and
> make updates?
> (Sun's CachedRowSetImpl didn't work either)

Hi mkaktus,

Maybe the LIMIT clause is good for you?
The LIMIT clause can be used to constrain the number of rows returned by a SELECT statement. LIMIT takes one or two numeric arguments, which must be integer constants.

With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):

mysql> SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15

For compatibility with PostgreSQL, MySQL also supports the LIMIT row_count OFFSET offset syntax.

To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:

mysql> SELECT * FROM tbl LIMIT 95,18446744073709551615;

With one argument, the value specifies the number of rows to return from the beginning of the result set:

mysql> SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows

In other words, LIMIT n is equivalent to LIMIT 0,n.
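From JDBC, you can use LIMIT to scan a big table in pages, so only one page of rows is held in heap memory at a time. Here is a minimal sketch; the table name `tbl`, the page size, and the Connection setup are assumptions you would adapt to your own schema:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class LimitPager {

    // Builds a MySQL LIMIT clause for the given offset and page size.
    static String limitClause(long offset, int rowCount) {
        return "LIMIT " + offset + "," + rowCount;
    }

    // Scans the (hypothetical) table "tbl" in pages of pageSize rows.
    // Each iteration fetches one page; a short final page means we are done.
    static void scanInPages(Connection con, int pageSize) throws Exception {
        long offset = 0;
        while (true) {
            String sql = "SELECT * FROM tbl " + limitClause(offset, pageSize);
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                int rows = 0;
                while (rs.next()) {
                    rows++;
                    // process the current row here
                }
                if (rows < pageSize) {
                    break; // last page reached
                }
            }
            offset += pageSize;
        }
    }

    public static void main(String[] args) {
        // LIMIT 95,10 would retrieve rows 96-105
        System.out.println(limitClause(95, 10));
    }
}
```

One caveat: if you also update or delete rows while scanning, a shifting offset can skip or repeat rows; in that case it is safer to page on an indexed key (e.g. `WHERE id > ? ORDER BY id LIMIT n`) instead of an offset.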

-- Tetsuro
