Posted by: Mark Molloy
Date: October 24, 2004 09:44AM

I have a problem that doesn't seem to fit the database
patterns of use that I'm familiar with.

I will have an application with ~100K complex structures, each
containing multiple queues. (This is to start with,
eventually there could be millions of these structures.)

A given transaction may have to perform dequeue or enqueue, or
insert/delete operations from the middle of (so they're not
just queues, really) dozens of these structures. There might
be thousands of concurrent transactions, with significant
overlap between the sets of complex structures that each needs
to use.

Now, if I map these queues naively and just use the database's
locking to manage access to them, I will achieve instant
gridlock (if the database correctly detects and resolves all
deadly embraces), or instant system freeze if it does not.
Neither will do.

My thought is to have the application layer manage the queues.
Each queue would have a primary key associated with it. And,
each queue would be managed by exactly one process at any
given time. (The identity of the process corresponding to a
particular key would be stored in the database.) So, to
operate on a queue, a remote message would be sent to the
process managing that queue. (A given process would manage
many queues, but there would be many processes, on many
machines.)
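
To make the ownership idea concrete, here is a minimal sketch of that lookup, using a hypothetical QueueRegistry class that maps each queue's primary key to the id of its managing process. The names are illustrative only, and in the real design this mapping would live in the database rather than in local memory:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical registry: maps a queue's primary key to the single
// process that currently manages it. In the described design this
// mapping would be stored in the database, not in local memory.
public class QueueRegistry {
    private final Map<Long, String> ownerByQueueKey = new ConcurrentHashMap<>();

    // Claim a queue for a process; returns the resulting owner
    // (the claimer if no one held it, otherwise the existing owner).
    public String claim(long queueKey, String processId) {
        String prev = ownerByQueueKey.putIfAbsent(queueKey, processId);
        return prev == null ? processId : prev;
    }

    // Look up which process a remote queue operation should be sent to.
    public String ownerOf(long queueKey) {
        return ownerByQueueKey.get(queueKey);
    }

    public static void main(String[] args) {
        QueueRegistry reg = new QueueRegistry();
        System.out.println(reg.claim(42L, "proc-A")); // proc-A wins the claim
        System.out.println(reg.claim(42L, "proc-B")); // proc-A still owns it
        System.out.println(reg.ownerOf(42L));         // route messages to proc-A
    }
}
```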

By having this application layer maintain state consistent
with the database, I can prevent contention. If an exclusive
lock is required but already held, the loser transaction can
roll back and try again. More often, such as with dequeue
operations, the application layer will just return the first
unlocked element. This might result in slightly imperfect
FIFO behavior, but that is fine for my application. (For
example, TX1 takes element 1, then TX2 takes element 2, then
TX1 rolls back, so that element 1 is put back onto the front
of the queue, and TX3 then gets element 1. Element 2 thus
appears to have been taken off the queue and processed before
element 1. Fine.)
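
The dequeue behavior above can be sketched as follows. This is an illustrative in-memory structure (the class and method names are mine, not from any library): dequeue hands out the first element not locked by an in-flight transaction, and because a locked element stays in place until commit, a rollback only has to unlock it to "put it back on the front":

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Illustrative in-memory queue. Dequeue returns the first element not
// locked by another in-flight transaction; elements stay in the queue,
// locked, until their transaction commits.
public class InMemoryQueue {
    private final Deque<String> elements = new ArrayDeque<>();
    private final Set<String> locked = new HashSet<>();

    public void enqueue(String e) { elements.addLast(e); }

    // Return the first unlocked element and lock it for the caller's
    // transaction; null if nothing is available.
    public String dequeue() {
        for (String e : elements) {
            if (!locked.contains(e)) {
                locked.add(e);
                return e;
            }
        }
        return null;
    }

    // Transaction committed: the element really leaves the queue.
    public void commit(String e) {
        locked.remove(e);
        elements.remove(e);
    }

    // Transaction rolled back: the element was never removed, only
    // locked, so unlocking restores it at its original position.
    public void rollback(String e) {
        locked.remove(e);
    }
}
```

Walking through the example from the text: TX1 dequeues element 1, TX2 then gets element 2 (the first unlocked element), TX1 rolls back, and TX3's dequeue finds element 1 unlocked at the front, giving the slightly imperfect FIFO order described.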

Now, as each process operates on one of its queues, it has to
perform operations on the underlying database. So, dequeue
would decide based on in-memory state what the first element
of the queue was, and let the thread handling that request
have it. That thread would then have to change that element
within the database.

Because a single transaction might thus have to perform
operations within many processes, the transaction id has to be
propagated, resumed, and suspended properly by each server
process. Then, each such process has to be notified of the
transaction's outcome, so that it can keep its in-memory state
synchronized with that in the database. For example, if the
transaction rolls back, any dequeued element must be put back
onto the queue.
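
As a rough illustration of that notification step, here is a sketch of an after-completion callback that re-enqueues on rollback. The Synchronization interface below is a simplified local stand-in modeled loosely on javax.transaction.Synchronization (the real one's afterCompletion takes an int status code, not a boolean):

```java
import java.util.Deque;

// Sketch of a per-transaction completion callback. The queue-managing
// process would register one of these for each dequeue performed on
// behalf of a transaction, so it can restore in-memory state on rollback.
public class DequeueSynchronization {
    // Simplified stand-in for javax.transaction.Synchronization.
    interface Synchronization {
        void beforeCompletion();
        void afterCompletion(boolean committed);
    }

    static Synchronization forDequeuedElement(Deque<String> queue, String element) {
        return new Synchronization() {
            public void beforeCompletion() {
                // Last chance to flush pending work to the database
                // before the outcome is decided.
            }
            public void afterCompletion(boolean committed) {
                if (!committed) {
                    // Rolled back: the dequeued element goes back
                    // onto the front of the in-memory queue.
                    queue.addFirst(element);
                }
            }
        };
    }
}
```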

I have found the javax.transaction package, which appears
intended for just such usage. It has operations such as
suspend() and resume() of a transaction, and
registerSynchronization() to receive notification that the
transaction is about to commit, and then whether it actually
committed or rolled back.

My question is, which implementations of Java will combine
well with which database systems to provide a good (robust,
performant, scalable) implementation of javax.transaction,
and particularly of the suspend(), resume(), and
registerSynchronization() capabilities? JBoss with MySQL?

Please suggest! Thanks!

