Using SQL Server 2000 SP1 with 4 GB max memory allocated to the instance. The problem is that one large table is hogging the cache and it's dragging down overall query performance. I realise it's in cache because it's being queried regularly. However, I need to know what options exist to get around this problem - to free up some cache for other tables and indexes. Of course, there is the option of archiving off some of the data in the table to reduce its size, and we will look at doing this, although it will not be as easy as it sounds.
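For reference, this is how I'm confirming the memory ceiling that's actually in effect (standard sp_configure usage; values are in MB, and 'show advanced options' has to be on before 'max server memory' is visible):

```sql
-- Enable advanced options so the memory settings are visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the configured and running values for the memory ceiling (MB).
EXEC sp_configure 'max server memory';
```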
I can imagine that there must be many databases that have at least one large table that is getting hit regularly and is left in cache more-or-less permanently. Therefore, I can't believe I have an unusual problem.
Are you sure it is the caching of this table that is causing the performance problems and nothing else? Have you read up on the lazy writer and how it decides what to keep in cache and what to release (in particular, how it calculates cost)?
My point is - if this table is queried regularly and SQL Server determines that it is best left in the cache - why would you want to force SQL Server to retrieve the data from the disk each and every time the table is queried? Wouldn't you expect this to cause the performance to dramatically decrease?
I'm not sure about a lot of things! I just find it very difficult to get definite info on what SQL Server is doing/reporting in some areas. For example, in Perfmon, the Page Life Expectancy counter... I would assume that seeing this counter dip to zero regularly and struggle to get above 200 seconds would be a bad thing? If so, I have a problem. However, at the same time, I see Buffer Cache Hit Ratio look healthy at 99% +. I can't reconcile these two pieces of information.
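For what it's worth, the same counters Perfmon shows can be pulled straight from the server via master.dbo.sysperfinfo, which is how I'm sampling them. A rough sketch (note that 'Buffer cache hit ratio' is stored as a raw pair and has to be divided by its base to get the percentage Perfmon displays; on a named instance the object_name prefix would differ):

```sql
-- Snapshot the two counters being compared, as exposed by the
-- SQLServer:Buffer Manager performance object.
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Buffer Manager'
  AND counter_name IN ('Page life expectancy',
                       'Buffer cache hit ratio',
                       'Buffer cache hit ratio base');
```

The hit ratio percentage is ('Buffer cache hit ratio' / 'Buffer cache hit ratio base') * 100, sampled over an interval rather than read as a single absolute value.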
Another example... In the article you suggested (http://www.sql-server-performance.co...ata_cache.asp), it mentions syscacheobjects, and BOL confirms that 'user tables' are a possible object type. Therefore, I am confused when I query syscacheobjects and see hardly any references to user tables at all on the system in question (on other SQL instances I do), and certainly no reference to the large and regularly queried tables in question. However, if I analyse the output from DBCC MEMUSAGE, I can definitely see regularly queried user tables reported as being in cache, as one would expect. Can you see why I find it so difficult to get a full picture of what is going on?
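One possible explanation for the discrepancy (an assumption on my part, not something I've confirmed in BOL): syscacheobjects describes the procedure/plan cache - compiled plans and cached schema entries - not the buffer pool where data pages live, which would be why user-table data pages show up in DBCC MEMUSAGE output but barely feature in syscacheobjects. A quick summary query against it looks like this:

```sql
-- Summarise what syscacheobjects actually holds, grouped by type;
-- on many instances this is dominated by plans, not user tables.
SELECT objtype, cacheobjtype,
       COUNT(*)       AS entries,
       SUM(pagesused) AS pages_used
FROM master.dbo.syscacheobjects
GROUP BY objtype, cacheobjtype
ORDER BY pages_used DESC;
```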
What I can say is that I've focused mainly on the info reported by DBCC MEMUSAGE and, from that, we've identified some large user tables that are hogging cache, and we have evidence through Page Life Expectancy and other sources that many other important tables don't get much cache time. I get the impression that SQL Server is busy thrashing pages in and out of cache and it would be better if it could leave a bigger proportion of them there. I've also seen that better indexing on a table can help a lot - SQL Server seems to cache a lot less of a user table once the indexing is sorted out. So, in some ways, things are improving here. However, as soon as we improve the situation for one table, the cache space made available seems to be quickly taken up and hogged by another large table. We're chipping away at the indexing, but it's not always possible or practical to create much of a cache space saving through improved indexing.
What I've tried to explain here is that if it were just a matter of one table staying in cache, you're right, it wouldn't be a problem. However, that is not the situation. The problem is that a few tables appear to fill most of the available cache, while a lot of other important table data doesn't seem to get much cache time, because the cache appears relatively small and doesn't grow despite the fact that a bigger memory pool has been assigned to SQL Server via the Max Memory setting. I'm going for a service restart this weekend, by the way.