MyISAM uses "table locking". Whenever a 'write' operation (INSERT, UPDATE, ALTER, etc.) occurs on a table, these actions are taken behind the scenes:
1. wait for all operations on the table to finish -- that is, wait for _all_ locks (on that table) to be released
2. get an 'exclusive' lock on the table
3. perform the write operation
4. unlock
A non-write operation (SELECT) does the following:
1. wait for any _exclusive_ lock to be released
2. get a 'shared read' lock. "Shared" means that multiple SELECTs can operate in parallel.
3. fetch the rows
4. unlock
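Those two implicit sequences can be approximated with explicit LOCK TABLES statements; this is only a sketch of the behavior (the table name `my_table` is hypothetical):

```sql
-- Session 1: roughly what a write implicitly does on a MyISAM table
LOCK TABLES my_table WRITE;   -- waits for ALL existing locks, then blocks everyone else
UPDATE my_table SET x = x + 1 WHERE id = 42;
UNLOCK TABLES;

-- Session 2: roughly what a SELECT implicitly does
LOCK TABLES my_table READ;    -- waits only for exclusive (WRITE) locks; other READs share
SELECT x FROM my_table WHERE id = 42;
UNLOCK TABLES;
```

Run the two sessions side by side and you can watch Session 2 stall until Session 1 issues UNLOCK TABLES.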
There is also the issue of caching. A 30-second query is usually I/O intensive. That is, it is hitting the disk because the needed data is not cached in RAM. Take, for example, a table scan like "SELECT * FROM big_table". It will effectively have to fetch every block of the table from disk. If a similar statement (same big table or a different one) starts while that one is still running, and the blocks have already been bumped out of cache, then this second query will be competing for access to the disk. The second query will run at half speed, and so will the first.
If the table is big enough, and the requests are staggered enough, 10 such queries could each take upwards of 10*30 seconds to complete. (This may explain your problem.)
In chasing down problems like this, I look for many things:
* MaxClients on Apache (to see if there might be 10, or even 200 queries fighting for the disk)
* SHOW VARIABLES LIKE '%buffer%' (to see how much cache is available)
* How much RAM (to see how much the cache should, or should not, be increased)
* SHOW TABLE STATUS (to see how big the table is, and whether it is MyISAM or InnoDB -- hence which cache to tune). I got tired of repeating myself, so I wrote
http://mysql.rjweb.org/doc.php/memory
* The SELECT/UPDATE/etc statement(s) (to see _what_ it is doing)
* EXPLAIN SELECT (to see _how_ it is doing it)
* SHOW CREATE TABLE (to see, among other things, what INDEXes are available or missing, and whether it is MyISAM or InnoDB)
* The SlowLog (to quickly spot the naughtiest query). That is what I use, almost exclusively, at work to monitor hundreds of machines.
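In practice, most of that checklist boils down to running a handful of statements; here is a sketch (the table name and query are placeholders):

```sql
SHOW VARIABLES LIKE '%buffer%';          -- key_buffer_size (MyISAM), innodb_buffer_pool_size (InnoDB)
SHOW TABLE STATUS LIKE 'big_table';      -- Engine, Rows, Data_length, Index_length
SHOW CREATE TABLE big_table;             -- columns, ENGINE, and what INDEXes exist
EXPLAIN SELECT * FROM big_table WHERE name = 'abc';   -- type=ALL means a full table scan
SHOW VARIABLES LIKE 'slow_query_log%';   -- is the SlowLog even turned on?
SHOW VARIABLES LIKE 'long_query_time';   -- threshold for landing in the SlowLog
```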
Assuming a single instance of a single query is causing the problem, I try to discern
* Is it I/O bound or CPU-bound?
* What INDEX(es) could be added to help?
* Could the query be sped up by rewriting it? (Subquery -> JOIN; avoid OR; group-wise max tricks; function misapplied to indexed column; etc)
* Do the query and improvements need to be tailored for the ENGINE? (Indexes work radically differently in MyISAM vs InnoDB; sometimes this matters.)
* "Count the disk hits".
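As one illustration of the "rewrite" bullet, a subquery can often be turned into a JOIN so the optimizer has more freedom; a sketch with hypothetical tables:

```sql
-- Before: the IN ( SELECT ... ) form can be executed poorly, especially on older versions
SELECT o.id, o.total
    FROM orders o
    WHERE o.user_id IN ( SELECT id FROM users WHERE country = 'US' );

-- After: an equivalent JOIN, which the optimizer can run in either direction
SELECT o.id, o.total
    FROM orders o
    JOIN users u ON u.id = o.user_id
    WHERE u.country = 'US';
```

Whether the rewrite actually helps depends on the version and the indexes; EXPLAIN both forms and compare.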
If it turns out to be interactions between multiple queries...
* Are they hitting the same table?
* Is cache being blown out?
* Do LOCKing differences between the engines matter? (InnoDB's row-level locking allows more parallelism, but it is not a panacea. MyISAM has a special case for INSERTing at the end of a clean table. What about OPTIMIZE TABLE? Etc.)
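When multiple queries are interacting, the quickest way to catch them in the act is to look at the live threads; a MyISAM table-lock pile-up typically shows many threads with the same State:

```sql
SHOW FULL PROCESSLIST;        -- many threads in State = 'Locked' on one table => MyISAM lock contention
SHOW ENGINE INNODB STATUS;    -- for InnoDB: look at the TRANSACTIONS section for lock waits
```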
Then there are the side items I look for:
* INT(2) does not mean what you think -- the (2) is only a display width, and the column still takes 4 bytes; it is inefficient.
* Prefix indexes.
* Redundant indexes.
* A table with no compound indexes sometimes means that the author does not understand the power of compound indexes, and usually that is the simple solution to his performance problem.
* Over-normalization; under-normalization.
* "Don't queue it, just do it" -- I rant against using MySQL for queuing. (Yeah, it is sometimes appropriate.)
* PARTITIONing does not automatically improve performance. In fact, it rarely does.
* Utf8 issues. I am still struggling with this. Details are in this work-in-progress for identifying and fixing garbage characters:
http://mysql.rjweb.org/doc.php/charcoll
* Maybe 'vertical' partitioning would help.
* (For especially big tables) Can the data be shrunk by tweaking datatypes?
* Big deletes -
http://mysql.rjweb.org/doc.php/deletebig
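To illustrate the compound-index bullet above, a sketch with a hypothetical table: two single-column indexes usually let the optimizer use only one of them, while one compound index handles the combined lookup in a single probe.

```sql
-- Two single-column indexes: the optimizer generally picks just one
CREATE INDEX ix_last  ON people(last_name);
CREATE INDEX ix_first ON people(first_name);

-- One compound index satisfies WHERE last_name = ? AND first_name = ?
-- in a single index lookup, and also covers WHERE last_name = ? alone,
-- which makes ix_last redundant (see the "redundant indexes" bullet)
ALTER TABLE people
    DROP INDEX ix_last,
    ADD INDEX ix_name (last_name, first_name);
```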
Oops, maybe I should not be telling all my secrets for quickly answering performance questions on these forums.