This test was done on a development server with no load and no records being changed while the test ran. (Times were similar on the production server during slow periods.)
This delay becomes problematic because our application joins this table onto itself about three times. Doing so takes about 0.77 seconds, which introduces a bottleneck when the query is run from our website.
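For illustration, a triple self-join of this shape might look roughly like the following sketch. The table name `t` and the join conditions are hypothetical (the actual schema isn't shown here); only the column names `AID` and `BID` come from the discussion below:

```sql
-- Hypothetical shape of a triple self-join on one table
-- (actual table name and join conditions are assumptions)
SELECT COUNT(*)
FROM t AS a
JOIN t AS b ON b.AID = a.BID
JOIN t AS c ON c.AID = b.BID
JOIN t AS d ON d.AID = c.BID;
```

Each additional self-join multiplies the number of index lookups MySQL must perform, which is why the row count touched by the query matters so much here.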
It seems that when our queries involve a large number of rows, the delay goes up significantly. MySQL is said to work well with large tables, but is there an issue when a large number of records are being accessed from a large table?
As you can see from SHOW INDEX, AID is indexed with a combined index over AID and BID, whereas BID has an additional single-field index. This means the AID index is much bigger, as it holds both fields' data, which may result in additional I/O waits if the index does not fit into the key buffer. Double-check your my.cnf for large enough key buffers if you want to keep your index structure. Your machine seems to have enough memory for 500 MB. If you use the default settings, most of your main memory will go untouched!
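A minimal my.cnf sketch of the setting in question, assuming MyISAM tables (key_buffer_size only caches MyISAM index blocks; for InnoDB the analogous setting is innodb_buffer_pool_size, which caches both data and indexes). The values shown are illustrative, not recommendations:

```ini
# my.cnf (excerpt) -- illustrative values only
[mysqld]
key_buffer_size = 512M            # MyISAM index block cache
# innodb_buffer_pool_size = 512M  # use this instead if the tables are InnoDB
```

You can confirm the live value with `SHOW VARIABLES LIKE 'key_buffer_size';` before and after the change.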
We have 600M for the key buffer; that should be enough room. I did run a test where I created a single-column index on AID, though that did not seem to improve speed. The more I look at it, the more it simply seems to be a function of how many rows need to be counted: the more rows that must be counted, the longer it takes, while the table's total row count seems to be less of a factor.
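For reference, the test described above can be sketched roughly as follows. The table name `t` and index name are assumptions; the point is to compare the plan and timing against the combined (AID, BID) index:

```sql
-- Hypothetical: add a single-column index on AID, then inspect the plan
CREATE INDEX idx_aid ON t (AID);
EXPLAIN SELECT COUNT(*) FROM t WHERE AID = 42;
```

If EXPLAIN reports the same number of examined rows either way, the bottleneck is the row count being scanned rather than index choice, which matches the observation above.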