Optimizing small query on large dataset
Posted: Tue May 20, 2008 12:04 pm
Here we are again, dealing with potentially massive datasets. For testing purposes, I've kept my database at just under 2 million records. Basically, all I want to do is grab the 10 rows that were processed least recently (the oldest `processed` values). Currently my query takes about 40 seconds, and I can't figure out how to improve performance.
Below: first the query, then the table structure, then the EXPLAIN output.
After doing a bit of research, I can see that the query scans the entire 1.9 million rows, which is why it's taking so long, but I can't figure out why it does that. I should mention that all the `type` values are currently 'foobar'. We're still playing with the database structure, so the type field will eventually become a foreign key, but I seriously doubt that's what's causing my problem.
Can anyone provide insight? Thanks
Code: Select all
SELECT id, type, userkey, name
FROM test
WHERE type = 'foobar'
ORDER BY processed ASC
LIMIT 10

Code: Select all
CREATE TABLE `test` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`userkey` varchar(100) NOT NULL,
`name` varchar(255) NOT NULL,
`type` varchar(100) NOT NULL,
`entered` datetime NOT NULL,
`processed` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `type` (`type`),
KEY `userkey` (`userkey`)
) ENGINE=MyISAM AUTO_INCREMENT=1924053 DEFAULT CHARSET=latin1;

Code: Select all
id  select_type  table  type   possible_keys  key   key_len  ref   rows     Extra
1   SIMPLE       test   range  type           type  102      NULL  1924050  Using where; Using filesort