  1. #1
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1

    Unanswered: Procedure vs Script

    I have a complex procedure (300+ lines, but it used to be over 2000). The procedure uses one table variable and three temporary tables with indexes. The temporary tables are joined for the final output.

    The problem is that the process takes 24 seconds if I execute the statements from Query Analyzer, but executing a procedure with the same logic causes the process to take nearly six times as long.

    Any ideas on what might cause a process to run six times longer as a procedure than as a script? The execution plans appear to be the same, and the same login is being used for both.
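    Roughly, the shape of the logic is this (all object and column names below are placeholders, not the real procedure):

        -- one table variable plus three indexed temp tables, joined for the final output
        DECLARE @Params TABLE (ParamKey int PRIMARY KEY, ParamValue varchar(100))

        CREATE TABLE #Stage1 (OrderID int, CustomerID int, Amount money)
        CREATE TABLE #Stage2 (OrderID int, ProductID int, Qty int)
        CREATE TABLE #Stage3 (CustomerID int, Region varchar(20))

        CREATE INDEX IX_Stage1_OrderID    ON #Stage1 (OrderID)
        CREATE INDEX IX_Stage2_OrderID    ON #Stage2 (OrderID)
        CREATE INDEX IX_Stage3_CustomerID ON #Stage3 (CustomerID)

        -- ...load the work rows, then join for the final result:
        SELECT s1.CustomerID, SUM(s1.Amount) AS Total
        FROM #Stage1 s1
        JOIN #Stage2 s2 ON s2.OrderID = s1.OrderID
        JOIN #Stage3 s3 ON s3.CustomerID = s1.CustomerID
        GROUP BY s1.CustomerID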

    blindman

  2. #2
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Blindman, just a few questions

    Are there parameters passed to the procedure/script?

    Just to be absolutely clear: if you cut and paste the meat of the procedure into Query Analyzer, the meat runs faster than the procedure itself? I just want to make sure the same client (and the same network hops) are involved in both tests.

    About how much data is being joined in the three tables and returned in the resultset?

  3. #3
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Yes, the procedure takes about a dozen optional parameters. I am supplying three in my test.

    Yes, the meat is the same (it tastes like chicken). I just remove the CREATE PROCEDURE header and replace it with explicit variable declarations.
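    In other words, something like this (parameter names made up for illustration):

        -- procedure version:
        -- CREATE PROCEDURE dbo.usp_ComplexReport
        --     @StartDate  datetime = NULL,
        --     @EndDate    datetime = NULL,
        --     @CustomerID int      = NULL
        -- AS ...

        -- script version pasted into Query Analyzer:
        DECLARE @StartDate datetime, @EndDate datetime, @CustomerID int
        SET @StartDate  = '20040101'
        SET @EndDate    = '20040131'
        SET @CustomerID = 42
        -- ...followed by the body of the procedure, unchanged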

    For my test parameters I am only returning five lines of data, though this is culled, calculated, and aggregated from many much larger tables.

    The temporary tables are being loaded with about 30,000 rows.

    blindman

  4. #4
    Join Date
    Feb 2002
    Posts
    2,232
    Did you run sp_recompile on the stored procedure?
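    For example (the procedure name here is just a placeholder):

        EXEC sp_recompile 'dbo.usp_ComplexReport'
        -- the cached plan is flagged for recompilation on the next execution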

  5. #5
    Join Date
    Feb 2002
    Posts
    2,232
    Have you tried using the table data type instead?
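    i.e. something along these lines (column names are only examples):

        DECLARE @Stage1 TABLE (
            OrderID    int PRIMARY KEY,
            CustomerID int,
            Amount     money
        )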

  6. #6
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Tried recompiling. No effect.

    Bounced the server. This made both executions run slow (so cache may have something to do with it).

    Can't use table data types for the remaining three temporary data sets, because I need indexes on them or the procedure takes forever.
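    That is, I can do this on a temp table but not on a table variable (names are placeholders); a table variable will only take an index through a PRIMARY KEY or UNIQUE constraint declared inline:

        CREATE TABLE #Stage2 (OrderID int, ProductID int, Qty int)
        CREATE INDEX IX_Stage2_OrderID ON #Stage2 (OrderID)

        -- not possible on a table variable:
        -- CREATE INDEX IX_Stage2_OrderID ON @Stage2 (OrderID)   -- syntax error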

    blindman

  7. #7
    Join Date
    Jul 2003
    Location
    The Dark Planet
    Posts
    1,401
    I have been experiencing the same thing myself ... though the difference between the script and proc execution times is not more than 5 seconds.
    It might have something to do with the execution plans and the way they are retrieved for stored procedures versus ad hoc statements.
    Get yourself a copy of The Holy Book

    The physical order of data in a database has no meaning. -- Brett Kaiser

  8. #8
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    How about this: Using the meat (lightly basted, of course), put in different values for the parameters, and see if different query plans come out the other side.

    I had one case where SQL Server actually chose a poor execution plan for a stored procedure. We had to make sure that the first set of parameters it was called with was the one set that gave the best overall plan. Could you be seeing a similar effect?
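    For example (parameter names invented here), execute the proc with a couple of different parameter sets and compare the plans; adding WITH RECOMPILE to an individual EXEC forces a fresh plan for just that call:

        EXEC dbo.usp_ComplexReport @StartDate = '20040101', @EndDate = '20040131' WITH RECOMPILE
        EXEC dbo.usp_ComplexReport @CustomerID = 42 WITH RECOMPILE
        -- turn on "Show Execution Plan" in Query Analyzer and compare the two plans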

  9. #9
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    I tried putting WITH RECOMPILE at the start of the procedure. Wouldn't this accomplish the same thing? It did not affect the execution time.
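    For reference, the option sits in the header like this (names are placeholders):

        CREATE PROCEDURE dbo.usp_ComplexReport
            @StartDate datetime = NULL,
            @EndDate   datetime = NULL
        WITH RECOMPILE
        AS
        BEGIN
            -- body unchanged; a new plan is generated on every execution
            SELECT 1
        END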

    blindman

  10. #10
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    That would regenerate the query plan for each execution (effectively, that should have reduced it to just the meat). In my problem, I had 3 different query plans: one was outlandishly inappropriate even for the parameters it was generated from, one was middling, and one was acceptable for all combinations. This only works if the query plan coming out the other side of the optimizer is significantly different for each set of parameters.

    Is it possible that the index statistics on the temp tables are not considered, since they are not available at runtime? As an experiment, what happens if you add permanent tables in place of the temp tables?
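    For the experiment, something along these lines (names invented):

        -- instead of:  CREATE TABLE #Stage1 (OrderID int, ...)
        CREATE TABLE dbo.Stage1_Test (OrderID int, CustomerID int, Amount money)
        CREATE INDEX IX_Stage1_Test_OrderID ON dbo.Stage1_Test (OrderID)
        -- the permanent table's statistics exist before the procedure is compiled,
        -- so the optimizer can take them into account when building the plan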

  11. #11
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    I have to use temporary tables so that the procedure can be run by multiple users at once.

    Plus, it appeared to be running fine last week.

    blindman

  12. #12
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    If it was running fine last week, do you happen to know if any significant change has happened to the data? Purge, update, or load? You can check the last time statistics were updated for each index by running the stats_date function. I think SQL Server automatically runs update statistics on a table if it notices a 10% difference in the number of rows, but maybe not necessarily after a mass update of values in the table you may want to join on.
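    For example, against one of the permanent tables in the join (the table name is a placeholder):

        SELECT  i.name                    AS index_name,
                STATS_DATE(i.id, i.indid) AS stats_last_updated
        FROM    sysindexes i
        WHERE   i.id = OBJECT_ID('dbo.Orders')
          AND   i.indid BETWEEN 1 AND 254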

  13. #13
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Our admin tells me we are running low on drive space. Perhaps that is affecting tempdb or drive caching.

    blindman

  14. #14
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Low disk space in and of itself should not matter, but it could be a second symptom of a more general problem. It could be you are getting enough data in the database(s) to overwhelm the caching algorithms, and are now seeing more paging of the data. Kindly ask the users to stop putting so many orders into the database, as it could be slowing the system down ;-).

    On a more serious note, see if you can get the admin to run a few dbcc showcontig statements on the tables and indexes you are using. You may be able to rebuild the indexes, and get more rows per read of the disk.
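    For example (table name is a placeholder):

        DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES, TABLERESULTS
        -- if scan density is low or logical fragmentation is high, rebuild:
        DBCC DBREINDEX ('dbo.Orders')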

  15. #15
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    This is a development database on a development server, so no new data is going into it and the table sizes are not changing.

    The DBCC SHOWCONTIG is a possibility I'll look into.

    In the meantime, I'm going to try running it on a server with more drive space just to see if it makes a difference.

    blindman
