MySQL Forums
Forum List  »  Performance

Re: GEO Search with large table is very slow
Posted by: Sean Gonsman
Date: March 18, 2009 09:27AM

Thanks to Rick, Markus, and Jack for your input on this matter. I have tried several of these options and don't have a definitive answer at this moment. Here is what I have done (and please let me know your thoughts):

1.) I created a new table called properties_2_stats that only includes the columns I need for this particular search. I did this so that the row size would be as small as possible; the large table had some text fields that I thought might be slowing things down. Once I did this and recreated the same CMASearch index, things were working really well: all of the queries completed in less than 0.25 seconds. However, the next day we were back to really slow queries (about 10-15 seconds each, which is better than before, but still very slow).

2.) I changed the queries so that listing_sold_time does not use a range. I used PHP to loop through all of the days in the year and rewrote the predicate as listing_sold_time IN ('2008-01-03','2008-01-02','2008-01-01',...). I figured this would let MySQL use the index better.

3.) Since the new table didn't seem to help enough, I removed the CMASearch index (which had all of the columns in one index) and created individual indexes for each column. I tested things out and it seems to be working pretty well. However, I assume that tomorrow things will be back to normal (slow).
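For reference, the slim table from step 1 was built roughly along these lines (a sketch on my part; the source table name properties_2 is just a placeholder for the large table, and the real column list is in the CREATE TABLE further down):

```sql
-- Sketch only: the source table name (properties_2) stands in for the
-- large table. Copy just the columns needed for the search so rows stay
-- small and fixed-length.
CREATE TABLE properties_2_stats
SELECT id, listing_time, listing_sold_time, listing_pending_time,
       listing_status_time, xref_status_id, beds, baths_full,
       list_price, sold_price, xref_properties_class_id,
       latitude, longitude
FROM properties_2;

ALTER TABLE properties_2_stats ADD PRIMARY KEY (id);
```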

In my research, I was under the impression that only one index can be used per query; MySQL picks the best one. I also read that in some cases MySQL may use more than one index if it thinks that will work better (that is as much info as I could find, so it doesn't explain when MySQL will choose to use more than one). So I decided to make indexes with several columns. My understanding was that MySQL would use all of the columns in the index as long as every column was being used in the query. I later found out that MySQL will only use one range condition (e.g. column > value) per index. This is why I didn't include longitude in the index. I also noticed that after a certain number of columns were added to an index, it didn't seem to help anymore; that threshold was reached roughly once the index grew larger than the table itself.
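To illustrate what I mean, a composite index along those lines would put the equality/IN columns first and the single range column last (index name and column order here are a sketch, not something I've tested at scale):

```sql
-- Equality/IN-tested columns first; the one allowed range column
-- (latitude) last, since MySQL stops using index columns after the
-- first range predicate. longitude is left out and filtered by the
-- WHERE clause instead.
ALTER TABLE properties_2_stats
  ADD INDEX cma_lookup (xref_status_id, xref_properties_class_id,
                        beds, baths_full, listing_sold_time, latitude);
```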

I have done some research on the spatial indexes, but feel a little confused so far. I am also not sure if this will help since I am not doing a true radius search (I was before, but I have been trying everything possible to make this faster).
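From what I've read so far, a spatial version might look something like the following (an untested sketch on my part; MyISAM supports SPATIAL indexes, and since I'm doing a bounding box rather than a true radius, MBRContains seems to fit):

```sql
-- Sketch: store the coordinates as a POINT and index it spatially.
-- MyISAM requires the geometry column to be NOT NULL for a SPATIAL index.
ALTER TABLE properties_2_stats
  ADD COLUMN coord POINT NOT NULL,
  ADD SPATIAL INDEX sp_coord (coord);

UPDATE properties_2_stats
   SET coord = GeomFromText(CONCAT('POINT(', longitude, ' ', latitude, ')'));

-- Bounding-box search: the polygon ring lists the four corners and
-- repeats the first point to close the ring.
SELECT id
  FROM properties_2_stats
 WHERE MBRContains(
         GeomFromText('POLYGON((-84.6051 33.3652, -84.5297 33.3652,
                                 -84.5297 33.4231, -84.6051 33.4231,
                                 -84.6051 33.3652))'),
         coord);
```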

One of my biggest annoyances at this point is the apparent caching of data that falsely speeds up the queries. It is not the query cache, because I slightly change the query each time. However, I think that some of the data is being stored in memory, so other queries are able to use it. Can anyone explain this? Could this be the problem? Do I need to have everything in memory for it to be quick?
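If I understand the docs correctly, MyISAM caches index blocks in its key buffer (key_buffer_size) while the data file is cached by the OS, which would explain why the first run after a quiet period is disk-bound and repeats are fast. Something like this is what I've been experimenting with to check (sketch, not a recommendation):

```sql
-- How big is the MyISAM index cache?
SHOW VARIABLES LIKE 'key_buffer_size';

-- Preload this table's index blocks into the key buffer so the first
-- queries after a restart are not paying for disk seeks.
LOAD INDEX INTO CACHE properties_2_stats;
```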

So, anyway, here is my new query on the new table:

				properties_2_stats AS prop
				AND prop.beds IN (3,4)
				AND prop.baths_full IN (2,3)
				AND prop.xref_properties_class_id = '1'
				AND prop.latitude >= 33.3652434399
				AND prop.latitude <= 33.4231305601
				AND prop.longitude >= -84.6051318491
				AND prop.longitude <= -84.5296601509
				AND prop.xref_status_id = '3'
				AND prop.listing_sold_time IN ('2008-03-18','2008-03-19','2008-03-20',...<removed to save space>)

And here is the EXPLAIN of the query:

id	select_type	table	type	possible_keys						key		key_len	ref	rows	Extra
1	SIMPLE		prop	range	beds,baths_full,xref_properties_class_id,listing_s...	latitude	7	NULL	16103	Using where

Here is the CREATE TABLE statement for the table:

CREATE TABLE `properties_2_stats` (
  `id` int(11) unsigned NOT NULL,
  `listing_time` date default NULL,
  `listing_sold_time` date default NULL,
  `listing_pending_time` date default NULL,
  `listing_status_time` date default NULL,
  `xref_status_id` int(2) unsigned NOT NULL default '0',
  `beds` int(2) unsigned NOT NULL default '0',
  `baths_full` int(2) unsigned NOT NULL default '0',
  `list_price` int(9) unsigned NOT NULL default '0',
  `sold_price` int(9) unsigned default NULL,
  `xref_properties_class_id` int(2) unsigned NOT NULL default '0',
  `latitude` decimal(10,7) default NULL,
  `longitude` decimal(10,7) default NULL,
  PRIMARY KEY  (`id`),
  KEY `beds` (`beds`),
  KEY `baths_full` (`baths_full`),
  KEY `xref_properties_class_id` (`xref_properties_class_id`),
  KEY `listing_sold_time` (`listing_sold_time`),
  KEY `xref_status_id` (`xref_status_id`),
  KEY `latitude` (`latitude`),
  KEY `longitude` (`longitude`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

Here is the Table Status:

           Name: properties_2_stats
         Engine: MyISAM
        Version: 10
     Row_format: Fixed
           Rows: 1250025
 Avg_row_length: 53
    Data_length: 66251325
Max_data_length: 14918173765664767
   Index_length: 109628416
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2009-03-18 10:49:35
    Update_time: 2009-03-18 10:49:45
     Check_time: 2009-03-18 10:50:18
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
