A Cache Solution to a MySQL Problem?
Posted: Wed Oct 27, 2010 11:52 am
I'm on a very slow machine doing a query that will find around 5,000 records from a table named LIST.
The LIST table has 126 fields. Yes I know it should be normalized/refactored but that's not an option at this time.
I'll be building an "advanced search form" for this so almost 100 of those 126 fields are eligible to wind up in the query that actually gets run.
If a user chooses to only see records with photos, then I will also be joining this table with another table, called Media, that holds image metadata (not binary files).
Media has over 20,000 records.
One LIST record owns many Media records (dozens), but I'm only interested in the Media record with the highest id for each LIST record. Since LIMIT 1 applies to the whole query, I can't use it to limit the join to one Media row per LIST record. So I wind up selecting all the Media rows, which means duplicate LIST data for each Media record, and then I have to use GROUP BY List.id to get rid of the duplicates.
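For what it's worth, the GROUP BY workaround can usually be avoided with a "group-wise maximum" pattern: restrict the join to the Media row whose id equals MAX(id) for that LIST record. Here is a minimal sketch using SQLite from Python purely for illustration; the same correlated-subquery SQL runs in MySQL. The table and column names (list, media, list_id, path) are assumptions based on the description, not the real schema.

```python
import sqlite3

# In-memory toy schema standing in for the real LIST and Media tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE list  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE media (id INTEGER PRIMARY KEY, list_id INTEGER, path TEXT);
    INSERT INTO list  VALUES (1, 'first'), (2, 'second');
    INSERT INTO media VALUES (10, 1, 'a.jpg'), (11, 1, 'b.jpg'),
                             (20, 2, 'c.jpg'), (21, 2, 'd.jpg');
""")

# Join each LIST row to only its highest-id Media row. Instead of joining
# every Media row and collapsing duplicates with GROUP BY, the correlated
# subquery keeps exactly one Media row per LIST record.
rows = conn.execute("""
    SELECT l.id, l.title, m.id AS media_id, m.path
    FROM list l
    JOIN media m ON m.list_id = l.id
    WHERE m.id = (SELECT MAX(m2.id) FROM media m2 WHERE m2.list_id = l.id)
    ORDER BY l.id
""").fetchall()

for row in rows:
    print(row)  # one row per LIST record, carrying its latest Media row
```

Each LIST record comes back exactly once, paired with its highest-id Media row, so no GROUP BY is needed to deduplicate.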
After this, when I display my Search Result Items, I do 4 more queries per item to turn the category ids in the LIST data into their textual values, pulled from 2 other tables.
Anyway, the point is that no matter how I structure my queries, MySQL always needs to write a temporary table to disk to produce the result of the main query. That's slow, and it's followed by a couple more queries that might not be the fastest way to do things either.
Until I can hire a MySQL wizard to sort this out, I'm wondering if caching could help me out.
I'm thinking of:
- changing my main query to select and return only LIST.id and the 4 category id fields, with no joins.
- looping through those and querying for the category strings and 1 media record for each LIST record.
- building an HTML table row search result item for each LIST record.
- caching each search result item individually.
Then when I run a search, as I loop through the results of the main query I can just pull the cached HTML by LIST.id instead of doing more individual queries or lumping the categories into the main query as 4 more joins.
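The per-item cache described above boils down to a lookup keyed by LIST.id with a build-on-miss fallback. Here is a rough sketch of that shape: a plain dict stands in for whatever cache backend (memcached, files, etc.) gets used, and the row markup and fetch_item helper are made-up placeholders, not the real code.

```python
# Cache of rendered search-result rows, keyed by LIST.id.
html_cache = {}

def build_row_html(list_id, fetch_item):
    """Return the cached <tr> for list_id, building it on a cache miss."""
    if list_id not in html_cache:
        # Slow path: this is where the per-record category and media
        # queries would run (fetch_item is a hypothetical stand-in).
        item = fetch_item(list_id)
        html_cache[list_id] = "<tr><td>%d</td><td>%s</td></tr>" % (list_id, item)
    return html_cache[list_id]

# Placeholder for the real per-record lookups.
def fetch_item(list_id):
    return "item-%d" % list_id

# The main query now returns only ids; repeated ids hit the cache and
# skip the extra queries entirely.
ids_from_main_query = [1, 2, 1]
page = [build_row_html(i, fetch_item) for i in ids_from_main_query]
```

The first time a LIST.id appears, its row is built and stored; every later appearance is a dict lookup, which is the saving the plan is counting on.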
Does this sound like a plan worth trying?