I'm getting some intermittent performance problems on my server:
I have the following database table:
5 columns, 1 index, currently 30k rows.
Eventually the table will need to be able to handle 10 million rows or more.
I understand that on an insert operation the index table will have to be updated. But even with 30k rows, sometimes the queries take 2 or 3 seconds. Does this sound right to you, or is my server under some other kind of load? Sometimes the same query executes in less than a tenth of a second. The table is the default type (whatever that is in MySQL).
Even if it's doing a table scan, it should still only take a fraction of a second to read 30k rows. Is the table locked (lots of updates/inserts)? Are you using transactions? Cursors? What types of processes access the table? It may well be worth switching to the InnoDB table type to get row-level locking. Try reading this page on locking.
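A quick way to check for lock contention is to look at what every connection is doing while a query is slow. A minimal sketch (`mytable` here is a stand-in for your actual table name):

```sql
-- Show every connection and its current state; connections stuck in a
-- "Locked" state for seconds at a time point to table-level lock contention
SHOW FULL PROCESSLIST;

-- MyISAM (the old MySQL default engine) locks the whole table for each write;
-- InnoDB locks only the affected rows, so readers aren't blocked by writers
ALTER TABLE mytable ENGINE=InnoDB;
```

If the processlist regularly shows reads waiting on writes, that alone would explain the same query flipping between 0.1 and 3 seconds.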
If you try accessing the table with nothing else going on (or create a copy of the table and data and use that) does it return results instantly? We still need to see the table definition and a sample query that's taking a lot of time.
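Making a copy and timing the query against it can be sketched like this (again `mytable` is a placeholder, and the WHERE clause is just a hypothetical example of your slow query):

```sql
-- Clone the structure, including indexes, then load the data
CREATE TABLE mytable_copy LIKE mytable;
INSERT INTO mytable_copy SELECT * FROM mytable;

-- Run the slow query against the idle copy; SQL_NO_CACHE stops the
-- query cache from returning a cached result and hiding the real cost
SELECT SQL_NO_CACHE * FROM mytable_copy WHERE some_col = 'x';
```

If the copy answers instantly while the live table doesn't, the problem is contention on the live table rather than the query itself.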
You may want to run 'top' and see if there is any iowait or high load due to some other issue. If one of your HDDs is having problems it may fail sooner or later. You may also want to check the table to make sure everything is in order, running ANALYZE and CHECK, with an OPTIMIZE after.
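The maintenance statements mentioned above look like this (assuming the table is named `mytable`):

```sql
ANALYZE TABLE mytable;   -- refresh the index statistics the query optimizer relies on
CHECK TABLE mytable;     -- scan the table and indexes for corruption
OPTIMIZE TABLE mytable;  -- defragment the data/index files and reclaim free space
```

On a write-heavy table, stale statistics alone can make the optimizer pick a bad plan, which would also explain intermittent slowness.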
We have 5 databases: 2 masters and 3 slaves. If the first master goes down, the writer role is picked up by the second one. The 3 slaves have the reader role only. In theory, read requests are supposed to be served by the 3 slaves. However, what we find is that when both masters go down, the application server is unable to connect to any of the 3 slaves. Even read requests are not being serviced, so essentially instead of 5 DBs we are dependent on just 2.