"I wanted the entire database to fit into memory" -- InnoDB uses the buffer pool for caching, so...
innodb_buffer_pool_size 8388608
That is terribly small (8388608 bytes is only 8MB), and will lead to performance problems due to lots of disk I/O. See this for what value to set it to:
http://mysql.rjweb.org/doc.php/memory
For your 12GB server, 2G would suffice for that table (890MB+530MB); 8G would be good if this machine is a dedicated MySQL box.
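A minimal my.cnf sketch of that change (assuming a dedicated 12GB box; before MySQL 5.7 a server restart is needed for it to take effect):

    [mysqld]
    # ~8G on a dedicated 12GB MySQL box; drop to ~2G if Apache etc. share the RAM
    innodb_buffer_pool_size = 8G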
When you were using MyISAM,
key_buffer_size 134217728
was relevant. That is where MyISAM caches the _indexes_; the _data_ blocks are cached by the OS, so most of your 12GB was still useful for data. (My web page, above, goes into more detail.)
A result coming from the "Query cache" rarely takes more than 0.1 seconds, not 7 seconds. So I suspect the speedup you saw came from some other caching. (And increasing the buffer_pool will help.)
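To see where you stand now, these standard SHOW commands are enough:

    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- 8388608 = only 8MB
    SHOW VARIABLES LIKE 'key_buffer_size';          -- matters only for MyISAM
    SHOW VARIABLES LIKE 'query_cache%';             -- is the Query cache even on?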
#1 `ip` tinytext NOT NULL,
KEY `ip` (`ip`(15))
If that is an IP-address, use VARBINARY(39), and do NOT prefix the index. IPv6 is coming, starting _this_ year.
Prefix indexes are rarely used by the optimizer!
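A sketch of the fix; `tbl` is a stand-in for your table name:

    ALTER TABLE tbl
        MODIFY `ip` VARBINARY(39) NOT NULL,  -- 39 chars holds the longest textual IPv6
        DROP INDEX `ip`,
        ADD INDEX `ip` (`ip`);               -- full column, no (15) prefix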
#2 URL TEXT -- TEXT is probably the best choice. A gnarly detail: if a query needs to generate a tmp table, a TEXT column will force it to use MyISAM (instead of MEMORY) under the covers.
#3 MyISAM and InnoDB both end up with all columns cached in RAM. (The low-level details differ: MyISAM leaves data caching to the OS; InnoDB caches data in the buffer_pool.) See #2 for the only(?) difference.
#4 Data_free -- What value do you have for innodb_file_per_table? If 0 (OFF), that is free space in ibdata1. If 1 (ON), it is in the table's .ibd file. The free space is available to other InnoDB allocations (block by block) -- either all tables, or just that one table, respectively. It is hard to return the free space to the OS. (See many threads in the InnoDB forum.)
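To check both (again, `tbl` is a stand-in for your table name):

    SHOW VARIABLES LIKE 'innodb_file_per_table';
    SHOW TABLE STATUS LIKE 'tbl';   -- look at the Data_free column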
#5 See my web page, but leave at least 1GB for Apache.
#6 `c5_id` tinytext NOT NULL,
Change it to VARCHAR(255). This will make certain queries more efficient. (This involves gnarly details about how tmp tables are handled.) Yes, you should probably increase some of those settings. Be careful: some are per-connection, and you don't want to run out of memory -- swapping is worse than having a setting set too low.
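The column change itself, with `tbl` standing in for your table name:

    ALTER TABLE tbl MODIFY `c5_id` VARCHAR(255) NOT NULL;  -- was TINYTEXT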
#7 I've already covered what it takes to get the table in RAM, without complication. Create "compound indexes" wherever appropriate for the queries.
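For example, if a common query were "WHERE c3_id = ... AND `datetime` > ..." (a guess at your workload), one compound index beats the two single-column ones:

    ALTER TABLE tbl ADD INDEX c3_dt (`c3_id`, `datetime`);  -- '=' column first, range column last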
#8 There is still a lot of disk I/O, but not much during SELECTs (once properly tuned). Every INSERT (in InnoDB) will write to
* the 'transaction log' (ib_logfile*)
* the binlog (if turned on)
* the block where the data and PRIMARY KEY are stored (they are stored together)
* a block per secondary index.
The aggressiveness of writes for the first two is controlled by innodb_flush_log_at_trx_commit and sync_binlog.
The other two are cached heavily (in the buffer_pool!), so you should not have to wait for the write to occur (if the buffer_pool is big enough!).
Once everything is cacheable, SELECTs won't touch the disk, except maybe for transaction semantics.
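For reference, the two flush settings as they might appear in my.cnf (one reasonable combination, not a prescription; 2 and 0 trade a little crash-safety for speed):

    [mysqld]
    innodb_flush_log_at_trx_commit = 2  # 1 (default) = flush at every COMMIT; 2 = about once per second
    sync_binlog = 0                     # 0 = let the OS decide when to flush the binlog; 1 = fsync every commit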
#9 There is never a 'last question' ;)
"How much soda do you want?" "Just cover the ice."
64GB -- It depends. A well-designed table and application can handle a terabyte database using 8GB of RAM. A poorly designed one needs everything to be cached. For example, let's consider the impact of one INSERT on your table, which has
PRIMARY KEY (`id`), -- AUTO_INCREMENT, so appends to data
KEY `datetime_c` (`datetime`), -- current time? Hot spot at 'end' of index
KEY `c3_id` (`c3_id`), -- ?
KEY `ip` (`ip`(15)) -- somewhat random
A tiny buffer_pool could handle the first two indexes because only a few blocks are involved for each new record. Let's say 100 rows (of data or index) can fit in a block. This means (roughly) that 1% of the time there will be an actual write to disk. The "ip" index may have some hot spots. (80% of the activity comes from 20% of the ips?) If so, caching is somewhat useful. If you had an index on an md5/sha1/checksum/rand() value, then virtually every insert would have to read-modify-write an index block.
What about SELECTs? If your typical SELECT says "WHERE `datetime` > '...'", and if the cache is big enough to hold those records, then you are fine. (Note that the data, the id, and the datetime_c index are pretty much marching in lock step. The "last" thousand blocks will contain the latest 100K records.)
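You can check that such a query stays in the index's hot end with EXPLAIN (the date here is made up):

    EXPLAIN SELECT COUNT(*) FROM tbl WHERE `datetime` > '2013-06-01';
    -- expect key = datetime_c and a 'rows' estimate covering only the recent records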
#10 -- Consider "summary" tables to avoid having to deal with 100M rows!
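A minimal sketch of the idea, with made-up names (daily_summary, dy, cnt): roll the detail rows up once a day, then report from the small table instead of the 100M-row one.

    CREATE TABLE daily_summary (
        dy    DATE NOT NULL,
        c3_id INT NOT NULL,           -- guessing at the type
        cnt   INT UNSIGNED NOT NULL,
        PRIMARY KEY (dy, c3_id)
    );

    INSERT INTO daily_summary (dy, c3_id, cnt)
        SELECT DATE(`datetime`), c3_id, COUNT(*)
        FROM tbl
        WHERE `datetime` >= CURDATE() - INTERVAL 1 DAY
          AND `datetime` <  CURDATE()
        GROUP BY DATE(`datetime`), c3_id;   -- run once a day (cron)

Reports then scan thousands of summary rows instead of 100M detail rows.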