I have the following query running against a MySQL db every 30 seconds:
SELECT message.id FROM message WHERE userto='13689' AND tstampviewed IS NULL AND message.status != 'VOID';
It shows up in my slow query log a lot, but it looks to me like it is optimized as best as can be.
The result of EXPLAIN:
select_type: SIMPLE
table: message
type: ref
possible_keys: userto,tst,stat
key: userto
key_len: 53
ref: const
rows: 1
Extra: Using where
The keys userto, tst, and stat are all normal BTREE indexes, one for each of the varchar fields referenced in the WHERE clause of the query. It is a MyISAM table with 300K rows. Users write to the table regularly, but reads are far more common (about a 10:1 read-to-write ratio). The db server is Windows 2008 Enterprise with plenty of CPU and fast drives.
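For concreteness, the three single-column indexes described above were presumably created along these lines (a sketch; the index names are taken from possible_keys in the EXPLAIN output, and the column definitions are assumptions):

```sql
-- Assumed existing indexes (names from possible_keys above);
-- actual column types and sizes are guesses based on the description.
ALTER TABLE message ADD INDEX userto (userto);
ALTER TABLE message ADD INDEX tst (tstampviewed);
ALTER TABLE message ADD INDEX stat (status);
```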
Over the past month we keep hitting max_connections errors, even though I increased max_connections from 750 to 1500. Several times a day a query appears to hang (I can't verify this because I don't have real-time access to the process list), 1500 queries pile up behind it, and the connections max out. That in turn causes a lot of other problems.
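Since the process list can't be watched in real time, one option is to capture it from a scheduled job whenever connections spike; a minimal sketch:

```sql
-- Shows every open connection and the statement it is currently running:
SHOW FULL PROCESSLIST;
-- Current connection count versus the configured ceiling:
SHOW STATUS LIKE 'Threads_connected';
SHOW VARIABLES LIKE 'max_connections';
```

Logging that output each time the error fires would confirm whether one hung query really is blocking the rest.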
The above query is the one that consistently shows up in the slow query log, even though I thought it was optimized as well as possible. Can anyone tell me otherwise, or point me in the right direction to solve this problem?
Thanks in advance.
The following recommendations should help with your SQL tuning. MySQL generally uses only one index per table reference, so your three single-column indexes cannot be combined to satisfy this query; the EXPLAIN output shows only userto being used. A composite index covering all three columns lets the entire WHERE clause be resolved from a single index. Two changes are suggested below: the index to add, and the query rewritten with qualified column names:
ALTER TABLE `message` ADD INDEX `message_idx_userto_tstampview_status` (`userto`,`tstampviewed`,`status`);
SELECT message.id FROM message WHERE message.userto = '13689' AND message.tstampviewed IS NULL AND message.status != 'VOID';
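After adding the composite index, re-running EXPLAIN should show the new index being chosen (index name taken from the ALTER TABLE above):

```sql
EXPLAIN SELECT message.id
FROM message
WHERE message.userto = '13689'
  AND message.tstampviewed IS NULL
  AND message.status != 'VOID';
-- key should now report message_idx_userto_tstampview_status
-- instead of the single-column userto index.
```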