Re: Insert large longtest failed on NDB
Hi Eric
Short answer: yes, I would say you should increase it to at least 5 x 37591.
Long answer:
Each transaction coordinator (TC) allows at most MaxNoOfConcurrentOperations concurrent write operations, so if one transaction coordinator should be able to handle 5 of your example transactions, you need to increase it to 5 times that value.
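For reference, MaxNoOfConcurrentOperations is set in the cluster configuration file (config.ini on the management server). A minimal sketch, assuming the value 37591 from your case and scaling it by 5 as suggested above:

```ini
# config.ini (sketch; adjust to your own workload)
# 5 x 37591 = 187955 concurrent operations per transaction coordinator
[ndbd default]
MaxNoOfConcurrentOperations=187955
```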
Each data node can act as a transaction coordinator, and the MySQL server will typically choose the TC on the same node where the row will be stored.
Since NDB uses hash-based distribution, if 5 users each do an insert, each insert is distributed to an effectively random node, and the chosen TC typically follows.
With two data nodes, each node would on average handle 2.5 of the 5 transactions. Still, the probability that all 5 transactions end up on one given data node is (1/2)^5 = 1/32, or about 3%.
If the question were about, say, 100 concurrent transactions, you would not need to increase the setting 100x, since it would be far less likely that all 100 transactions end up on the same data node ((1/2)^100 ≈ 1e-30). In this case, though, 100 such big concurrent transactions are not possible anyway due to lack of transaction and data memory.
Using more data nodes also decreases the probability that all transactions end up on the same data node. With, say, 4 data nodes, the probability would be (1/4)^5 = 1/2^10, or about 0.1%.
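The probabilities above can be checked with a quick sketch (my own illustration, not from the cluster code), assuming each transaction is routed to a coordinator node uniformly at random as described:

```python
def prob_all_on_one_given_node(n_transactions: int, n_data_nodes: int) -> float:
    """Probability that all n_transactions pick one given data node as TC,
    assuming each transaction is routed uniformly at random."""
    return (1.0 / n_data_nodes) ** n_transactions

print(prob_all_on_one_given_node(5, 2))    # 1/32, ~3%
print(prob_all_on_one_given_node(5, 4))    # 1/1024, ~0.1%
print(prob_all_on_one_given_node(100, 2))  # ~7.9e-31
```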
Regards,
Mauritz