Hello everyone, first time posting here; just browsing around, it looks like a great community.
First, the versions:
Running on AIX
One of our customers was noticing extremely slow response times. I ran some monitoring, and a snapshot of the database revealed a dramatically increasing rows-read count. I'm talking in the region of 3-4 million an hour! This is far too much for the system.
Is there a way of tracking down the offending table or query that is causing this?
Problem is, the database has hundreds of tables, so taking a snapshot of each table individually would be like looking for a needle in a haystack. I also have no idea which SQL statement would cause this. Maybe I do need to go through every table?
Download and use the monitor I mentioned earlier. It has separate sections for SQL and Tables, among others. You can then sort the lists by rows_read and quickly see the objects with the most.
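If the monitor tool isn't handy, you can get a similar ranking by parsing a raw snapshot dump yourself. A minimal sketch, assuming this is DB2 (the snapshot/monitor-switch terminology suggests it) and that the dump uses the usual "Rows read" / "Number of executions" / "Statement text" labels; the SAMPLE text below is made up for illustration:

```python
# Sketch: rank statements from a "get snapshot for dynamic sql" dump by
# rows read. Verify the field labels against your own snapshot output.
import re

SAMPLE = """\
 Number of executions               = 4
 Rows read                          = 7000
 Statement text                     = SELECT * FROM ORDERS WHERE CUSTID = ?

 Number of executions               = 120
 Rows read                          = 240
 Statement text                     = SELECT NAME FROM CUSTOMER WHERE ID = ?
"""

def rank_by_rows_read(snapshot_text):
    """Return (rows_read, executions, statement) tuples, highest reads first."""
    entries = []
    # Statement stanzas in the dump are separated by blank lines.
    for stanza in snapshot_text.split("\n\n"):
        rows = re.search(r"Rows read\s*=\s*(\d+)", stanza)
        execs = re.search(r"Number of executions\s*=\s*(\d+)", stanza)
        stmt = re.search(r"Statement text\s*=\s*(.+)", stanza)
        if rows and stmt:
            entries.append((int(rows.group(1)),
                            int(execs.group(1)) if execs else 0,
                            stmt.group(1).strip()))
    return sorted(entries, reverse=True)

for rows, execs, stmt in rank_by_rows_read(SAMPLE):
    print(f"{rows:>8} reads, {execs:>5} execs: {stmt}")
```

Save the snapshot output to a file, read it in, and the worst offenders float to the top of the list.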
Most of the high-read entries are a single statement with upwards of 7,000 reads each. I can understand that indexing the right column might reduce the reads and so on; interestingly, though, the same entries show 0 SQL executions.
Any reason why there would be so many reads without an execution?
I think it is how snapshots work. You probably have the monitor switches turned off, which is the default, so the counters are not being accumulated until you run the snapshot. If you keep the session live, turn on the monitor switches, and run subsequent snapshots, you should see the executions go up as time progresses.
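For reference, a minimal CLI session sketch, assuming this is DB2 on AIX and a database named MYDB (substitute your own database name); run it from the instance owner's shell and keep the same session open between snapshots so the counters keep accumulating:

```shell
# Turn on the statement and table monitor switches for this session.
db2 update monitor switches using statement on table on

# Confirm the switches are on and note the reset timestamps.
db2 get monitor switches

# Take snapshots now and again later; executions should climb between them.
db2 get snapshot for dynamic sql on MYDB > dynsql_1.txt
db2 get snapshot for tables on MYDB > tables_1.txt
```

If you want the switches on for every connection rather than just this session, the instance-wide defaults (dft_mon_stmt, dft_mon_table) can be set with `db2 update dbm cfg`, at the cost of some monitoring overhead.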