I have a complex procedure (300+ lines, but it used to be over 2000). The procedure uses one table variable and three temporary tables with indexes. The temporary tables are joined for the final output.
The problem is that the process takes 24 seconds if I execute the statements from Query Analyzer, but executing a procedure with the same logic causes the process to take nearly six times as long.
Any ideas on what might cause a process to run six times longer as a procedure than as a script? The execution plans appear to be the same, and the same login is being used for both.
Are there parameters passed to the procedure/script?
Just to be absolutely clear: if you cut and paste the meat of the procedure into Query Analyzer, the meat runs faster than the procedure itself? I just want to make sure that the same client (and the same network hops) are involved in both tests.
About how much data is being joined in the three tables and returned in the resultset?
I have been experiencing the same thing myself, though the difference between the script and the proc execution times is no more than 5 seconds.
It might have something to do with the execution plans and the way they are generated and reused for stored procedures versus ad hoc statements.
How about this: Using the meat (lightly basted, of course), put in different values for the parameters, and see if different query plans come out the other side.
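If you want to eyeball the plans without waiting for each run, something like this works in Query Analyzer (dbo.Orders and CustomerId are made-up names; substitute your own tables and values):

    -- Returns the estimated plan text instead of executing the batch
    SET SHOWPLAN_TEXT ON
    GO
    -- Paste the meat here with one set of literal values, e.g. a very selective one...
    SELECT OrderId, OrderDate
    FROM dbo.Orders              -- hypothetical table
    WHERE CustomerId = 1
    GO
    -- ...and again with a value that matches a large chunk of the table
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = 999
    GO
    SET SHOWPLAN_TEXT OFF
    GO

If the two plan texts differ noticeably, you are probably looking at a parameter-sensitive plan.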
I had one case where SQL Server actually chose a poor execution plan for a stored procedure. We had to make sure that the first set of parameters the procedure was called with was the one set that gave the best overall plan. Could you be seeing a similar effect?
That would regenerate the query plan for each execution (effectively that should have reduced it to just the meat). In my problem, I had three different query plans: one was outlandishly inappropriate even for the parameters it was generated from, one was middling, and one was acceptable for all combinations. This only works if the query plan coming out the other side of the optimizer is significantly different for each set of parameters.
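For anyone following along, assuming the suggestion being replied to was the recompile option, the per-execution recompile can be requested either when the procedure is created or on an individual call (procedure, table, and parameter names here are just placeholders):

    -- Recompile the plan on every execution of the procedure
    CREATE PROCEDURE dbo.usp_BigReport      -- hypothetical procedure
        @CustomerId int
    WITH RECOMPILE
    AS
    BEGIN
        SELECT OrderId, OrderDate
        FROM dbo.Orders                     -- hypothetical table
        WHERE CustomerId = @CustomerId
    END
    GO

    -- Or leave the procedure alone and force a fresh plan for a single call
    EXEC dbo.usp_BigReport @CustomerId = 42 WITH RECOMPILE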
Is it possible that the index statistics on the temp tables are not being considered, since those tables do not exist when the procedure's plan is compiled? As an experiment, what happens if you use permanent tables in place of the temp tables?
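A rough sketch of that experiment, in case it is useful (the names are invented, not from the original procedure):

    -- Permanent stand-in for one of the #temp tables
    CREATE TABLE dbo.WorkOrders
    (
        OrderId    int NOT NULL,
        CustomerId int NOT NULL
    )
    CREATE INDEX IX_WorkOrders_CustomerId ON dbo.WorkOrders (CustomerId)
    GO
    -- In the procedure, load dbo.WorkOrders where you used to load the #temp table,
    -- run UPDATE STATISTICS dbo.WorkOrders after the load, do the joins,
    -- and TRUNCATE TABLE dbo.WorkOrders at the end so the next run starts clean.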
If it was running fine last week, do you happen to know if any significant change has happened to the data? Purge, update, or load? You can check the last time statistics were updated for each index by running the STATS_DATE function. I think SQL Server automatically runs update statistics on a table if it notices a 10% difference in the number of rows, but maybe not necessarily on a mass update of values in the columns you join on.
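In case anyone needs the syntax, a quick way to see the last statistics update for every index on a table (substitute your own table name for dbo.Orders):

    SELECT name                  AS IndexName,
           STATS_DATE(id, indid) AS StatsLastUpdated
    FROM   sysindexes
    WHERE  id = OBJECT_ID('dbo.Orders')
      AND  indid BETWEEN 1 AND 254  -- skip the heap (0) and text/image (255) rows

    -- and to refresh them by hand:
    UPDATE STATISTICS dbo.Orders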
Low disk space in and of itself should not matter, but it could be a second symptom of a more general problem. It could be you are getting enough data in the database(s) to overwhelm the caching algorithms, and are now seeing more paging of the data. Kindly ask the users to stop putting so many orders into the database, as it could be slowing the system down ;-).
On a more serious note, see if you can get the admin to run a few dbcc showcontig statements on the tables and indexes you are using. You may be able to rebuild the indexes, and get more rows per read of the disk.
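If the admin wants a starting point, something along these lines (again, the table name is a placeholder):

    -- Report fragmentation for a table and all of its indexes
    DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES

    -- If scan density is low or logical fragmentation is high, rebuild:
    -- '' means all indexes, 90 is the fill factor to rebuild with
    DBCC DBREINDEX ('dbo.Orders', '', 90)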