I have a table in my db that's used as a temporary table. At the end of the day it contains thousands of records, which get summarized and then all deleted. I recently moved the application over to a new server with pretty much the same hardware config and noticed a big performance hit when running queries on this temp table (455 milliseconds as opposed to 1). In short, the fix was dropping the table and re-creating it.
Anybody know how this could be? I know that even if you delete records from a table, the table still seems to retain its physical size on disk; could this have something to do with it?
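For context, that retained size matches how DELETE behaves in most engines: the rows go away, but the pages the table has allocated are typically kept for reuse, so the file doesn't shrink. A minimal sketch of the difference, assuming a hypothetical table name `daily_staging` (the question doesn't say which DBMS is involved, and the exact reclamation behavior varies by engine):

```sql
-- DELETE removes rows one at a time and is fully logged;
-- the pages/extents already allocated to the table are
-- usually kept for reuse, so the on-disk size stays put.
DELETE FROM daily_staging;

-- TRUNCATE deallocates the table's pages outright, resetting
-- it to near empty-table state without a drop/re-create.
-- It's also far faster than DELETE for a full clear-out.
TRUNCATE TABLE daily_staging;
```

If the nightly job currently clears the table with DELETE, switching to TRUNCATE may be the least invasive change to test.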
What "fix"? You just moved the application to a new server. You don't say that you changed the code or the process in any way, so it makes no sense that it would suddenly start dropping the table.
Do you know for a fact that the table is being dropped and recreated?
How many records are in it at the end of the day? I wonder if the statistics aren't just out of date.
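If stale statistics turn out to be the culprit, refreshing them is a much lighter fix than dropping the table. A hedged sketch, since the DBMS isn't stated and `daily_staging` is a stand-in name:

```sql
-- SQL Server: force a full-scan refresh of the table's statistics
UPDATE STATISTICS daily_staging WITH FULLSCAN;

-- MySQL/InnoDB equivalent: rebuild the table's index statistics
ANALYZE TABLE daily_staging;

-- PostgreSQL equivalent
ANALYZE daily_staging;
```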
Basically, I have a table that gets thousands of rows added to it per day, and then at the end of the day the records are all deleted. It seems that after 9 months of performing this task on this table, queries on it started taking 450+ milliseconds to complete rather than 0-2 milliseconds (which it should be). I'm wondering if anyone knows why this happens or has experienced anything similar.
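For what it's worth, nine months of daily insert-everything/delete-everything cycles can leave a table's indexes and pages badly fragmented even though it holds few rows when you query it. If dropping and re-creating feels too blunt, a rebuild achieves much the same reset; a sketch under the same assumptions as above (engine unknown, `daily_staging` hypothetical):

```sql
-- SQL Server: rebuild every index on the table, compacting pages
ALTER INDEX ALL ON daily_staging REBUILD;

-- MySQL/InnoDB: recreate the table in place and reclaim free space
OPTIMIZE TABLE daily_staging;
```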