Re: innodb or myisam for logging
Every write you do on the Master is also done on the Slave. (That's how replication works.) So I focus on the Slave -- which has both reads and writes.
1000 INSERTs/sec -- still possible in either engine. But... you should do some form of batching: multiple rows per INSERT statement, or LOAD DATA. Either way, something like 100 rows or 1 second's worth of data would be a good batch size for your situation.
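As a sketch of what batching looks like (table and column names here are made up; adjust to your schema):

```sql
-- Batched INSERT: ~100 rows per statement instead of 100 statements.
INSERT INTO log (ts, host, msg) VALUES
  ('2013-05-01 12:00:00', 'web1', 'GET /index'),
  ('2013-05-01 12:00:00', 'web2', 'GET /login'),
  ('2013-05-01 12:00:01', 'web1', 'POST /cart');
  -- ... up to ~100 rows

-- Or accumulate a second's worth of rows in a CSV file, then:
LOAD DATA INFILE '/tmp/batch.csv'
  INTO TABLE log
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
  (ts, host, msg);
```

Either form replicates as a single event, which is far cheaper on the Slave than 100 individual INSERTs.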
My comment about summarizing as you insert leads to much less data being replicated, hence much less "write" load on the slaves.
It would not be too difficult to actually try both engines. But be sure to test the performance of the slave, not the master. ALTER TABLE foo ENGINE=xxx; lets you switch engines (assuming you aren't using CONSTRAINTS or the few other differences). The cache sizes require editing my.cnf and restarting mysqld.
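For example, switching engines and resizing the caches would look roughly like this (the sizes are placeholders; pick values appropriate for your RAM):

```sql
ALTER TABLE foo ENGINE=InnoDB;   -- or ENGINE=MyISAM to switch back
```

```
# my.cnf fragment -- requires a mysqld restart to take effect
[mysqld]
key_buffer_size         = 1G   # MyISAM index cache
innodb_buffer_pool_size = 4G   # InnoDB data+index cache
```

When benchmarking one engine, shrink the other engine's cache so you are not measuring a starved configuration.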
You could even have a different engine on the Master than on the Slave. But, again, I emphasize that the Slave is where the important benchmark will be.
Here's another approach: Gather 100-1000 rows in a MEMORY table (that is NOT replicated) on the Master, write it to a csv file, then do LOAD DATA into the real table. The slave won't see anything but the final LOAD.
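A rough sketch of that staging approach (names are hypothetical; keeping the MEMORY table out of replication can be done with, e.g., replicate-ignore-table on the Slave):

```sql
-- One-time setup on the Master:
CREATE TABLE staging (
  ts   DATETIME NOT NULL,
  host VARCHAR(30) NOT NULL,
  msg  VARCHAR(255) NOT NULL
) ENGINE=MEMORY;

-- Fast single-row INSERTs land in staging...
-- Then, every ~1 second or ~100-1000 rows:
SELECT ts, host, msg FROM staging
  INTO OUTFILE '/tmp/batch.csv'
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

LOAD DATA INFILE '/tmp/batch.csv' INTO TABLE log
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
  (ts, host, msg);

TRUNCATE TABLE staging;
```

The Slave replays only the LOAD DATA, not the individual INSERTs.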
If you need to scale to 10,000 rows per second (sustained), we'll have to get into more serious designs such as sharding.
Meanwhile, if your burst speed is, say, 20K rows per _minute_ (only about 333 rows/sec), you should be able to have a nicely running system.
The output of DESCRIBE is hard to read; please do SHOW CREATE TABLE tbl\G instead.
In particular, we may need to discuss the choice of PRIMARY KEY in the case of InnoDB. And remember: secondary indexes will be the killer of performance.
Using "datetime" for leftoff -- how do you handle duplicate seconds? How do you handle data arriving slightly out of order? Also, DATETIME is 8 bytes. These are the reasons I prefer an AUTO_INCREMENT, instead.
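A sketch of what I mean (again, hypothetical names): an AUTO_INCREMENT id gives you a unique, monotonically increasing bookmark, so "where I left off" is unambiguous even with duplicate timestamps.

```sql
CREATE TABLE log (
  id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  ts   DATETIME NOT NULL,
  host VARCHAR(30) NOT NULL,
  msg  VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)          -- InnoDB clusters the data on this
) ENGINE=InnoDB;

-- The consumer remembers the last id it processed:
SELECT id, ts, host, msg
  FROM log
 WHERE id > 12345           -- the saved "leftoff" value
 ORDER BY id
 LIMIT 1000;
```

Note there are no secondary indexes here; every lookup rides the clustered PRIMARY KEY.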