The second time I run a query I would expect it to run quickly, because the table stays in memory. My Sybase server appears not to be caching the query results or the table. The server does physical reads from disk every time it runs the query, when it should only be doing logical reads. sp_sysmon "00:02:00" shows Cache Search Misses as 75% of total context switches. Does anybody have any idea why my server might not be caching correctly? (Not using a named cache; the default data cache is around 3GB.)
The Sybase cache policies (strict LRU and relaxed LRU, or clock) work together with the MRU policy. Which one to use is decided by the optimizer, i.e. you choose strict or relaxed, but the MRU policy is always available.
The (strict) LRU policy is implemented as a page chain with one MRU end (Most Recently Used) and one LRU end (Least Recently Used). Somewhere near the LRU end is a wash marker; as pages cross it, dirty pages are written to disk, and the pages will soon leave the cache. When a page is brought into cache under the MRU policy, it is placed at the wash marker instead of at the MRU end.
When you perform a range query, the MRU policy is chosen by the optimizer, and the pages brought into cache are very likely to be aged out of the cache. This means they will not be in cache the next time you run the query.
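One way to confirm this aging-out behavior is to watch the logical/physical read counts across two executions. The table name `ts_order` and column `create_date` come from later in this thread; the date range is illustrative:

```sql
-- Report logical and physical reads per statement for this session.
set statistics io on
go

-- Run the range query twice in a row.
select count(*)
from ts_order
where create_date between '2004-01-01' and '2004-06-30'
go

-- If the second execution still reports physical reads > 0 for ts_order,
-- the pages brought in by the first run were already aged out of cache.
```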
*) Force the optimizer to use the LRU policy. This should be possible with sp_cachestrategy (or an lru hint in the FROM clause). If it works, it may result in lower cache hits for other queries.
*) Make the wash section larger to prevent pages from aging out. This may be useless (or inefficient) if your table is large.
*) Create a special table to hold the number of records and maintain it through triggers on the ts_order table.
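The first and third workarounds can be sketched roughly as follows. The database name `mydb`, the side table, and the trigger names are my own illustrative choices, not anything from the original post:

```sql
-- Workaround 1 (hedged): turn the MRU (fetch-and-discard) strategy off for
-- the table, so its pages go to the MRU end of the chain instead of the
-- wash marker.
exec sp_cachestrategy mydb, ts_order, "table only", mru, "off"
go

-- Workaround 3: keep the row count in a one-row side table, maintained by
-- triggers, so the count query never scans ts_order at all.
create table ts_order_count (row_count int not null)
go
insert into ts_order_count values (0)
go

create trigger ts_order_ins on ts_order for insert as
    update ts_order_count
    set row_count = row_count + (select count(*) from inserted)
go

create trigger ts_order_del on ts_order for delete as
    update ts_order_count
    set row_count = row_count - (select count(*) from deleted)
go
```

The trigger approach trades a small write-time cost on every insert/delete for a constant-time count, which only pays off if the count is queried often.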
By the way, do you have an index on the create_date column?
If you have one, the query should not have to access the table data, only the index. You can then bind the index to its own named cache to make it even more efficient.
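A rough sketch of that setup, with an assumed index name, cache name, and cache size (all illustrative; depending on your ASE version, creating the cache may require a restart before the binding takes effect):

```sql
-- Index so the count over a create_date range can be covered by the index.
create index ix_ts_order_create_date on ts_order (create_date)
go

-- Create a small named cache and bind the index to it, so range scans on
-- the index no longer compete with the default data cache.
exec sp_cacheconfig "order_date_cache", "50M"
go
exec sp_bindcache "order_date_cache", mydb, "ts_order", "ix_ts_order_create_date"
go
```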
Thanks for the info matssve. I've been on a long vacation. Sucks to be back. Hehehe.
When I removed the pools in the default data cache my problem went away. For some reason the 4k and 16k pools were causing a problem. My 16k pool was fairly small, so I'm thinking that could have contributed to the memory issue. I configured this server based on a production server that doesn't have this problem. That's why it seemed so strange at first.
I eventually want to get the 16k pool back in there to cut down on reads, but I'm proceeding with caution.
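When re-adding the pool, something along these lines might work; the sizes are purely illustrative and should be tuned against sp_sysmon output rather than taken as recommendations:

```sql
-- Recreate a 16K large-I/O pool in the default data cache, big enough that
-- range scans don't immediately push their own pages past the wash marker.
exec sp_poolconfig "default data cache", "200M", "16K"
go

-- Give the pool an explicit wash size so pages linger a while before being
-- written out and recycled.
exec sp_poolconfig "default data cache", "16K", "wash=40M"
go
```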