  1. #1
    Join Date
    May 2009

    Unanswered: Indexes and Stored Procedure profiling confusion


    I've been looking at optimising a stored procedure that retrieves data from a table and orders the results by one of that table's columns. The query includes an INNER JOIN to one table and a LEFT JOIN to another.

    In testing the duration of the query, I decided to look at what impact an index would have by creating an index on the column used to order the results. This column is of type float and is not a primary or foreign key.
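    For reference, an index like the one described could be created along these lines. The table and column names (dbo.T1, val, id, username) are taken from the query shown later in the thread, and the index name is a hypothetical placeholder:

    ```sql
    -- Sketch only: a nonclustered index on the float column used in the
    -- ORDER BY, covering the other selected columns so the sort can be
    -- satisfied by an ordered index scan instead of a full sort.
    CREATE NONCLUSTERED INDEX IX_T1_val
        ON dbo.T1 (val)
        INCLUDE (id, username);
    ```
    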

    It appears that each time I update the stored procedure by generating the "Alter Procedure" script and running it, the duration of the query increases greatly, as does the number of reads needed to execute it, even if the SQL in the stored procedure has not changed.

    If I then drop and recreate the index, or rebuild it, the duration and the number of reads are much lower.
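    The rebuild and the drop-and-recreate correspond to statements like the following, assuming a hypothetical index named IX_T1_val on table dbo.T1:

    ```sql
    -- Rebuild in place; this also refreshes the index's statistics
    -- with the equivalent of a full scan.
    ALTER INDEX IX_T1_val ON dbo.T1 REBUILD;

    -- Or the drop-and-recreate equivalent:
    DROP INDEX IX_T1_val ON dbo.T1;
    CREATE NONCLUSTERED INDEX IX_T1_val
        ON dbo.T1 (val)
        INCLUDE (id, username);
    ```
    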

    The difference between the two results is as follows:
    * After resubmitting the stored procedure, the query takes 7 seconds and requires in excess of 1.8 million reads.

    * After rebuilding the index, the query takes less than 1 second and requires about 4,000 reads.

    There are approximately 400,000 rows in the main table and no NULL values (everything is in third normal form). No records are added to or deleted from these tables during profiling.

    The server had to be rebooted today, and that also caused the query to take a long time to execute until I rebuilt the index.

    Can someone point me in the right direction as to why the query requires so many more reads after resubmitting the stored procedure, and why I need to rebuild the index to restore performance, even though nothing should have changed?

    Many thanks,

  2. #2
    Join Date
    Nov 2004
    I would have to see the code personally, but here is my standard checklist for performance issues:

    * What are the stats on the index before the rebuild?
    * What is the execution plan showing me?
    * What happens when I free the procedure cache?
    * What happens if I add WITH RECOMPILE to the procedure (just for a few tests)?
    * Am I using local variables? (parameter sniffing)
    * What if I use a different datatype (convert my field), e.g. int instead of float?
    * Do I have any functions in the WHERE clause of my query?
    * Is anyone else executing the proc who could cause the execution plan to change between my executions?
    I am not sure whether any of this will solve your issue, but it's worth a try.
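    Several of the checks in the list above can be run directly in T-SQL. The object names used here (dbo.T1, IX_T1_val, dbo.MyProc) are hypothetical placeholders for the real schema:

    ```sql
    -- 1. Index stats before the rebuild: fragmentation and page counts.
    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.T1'), NULL, NULL, 'LIMITED');

    -- 2. When the index's statistics were last updated, and over how many rows.
    DBCC SHOW_STATISTICS ('dbo.T1', IX_T1_val) WITH STAT_HEADER;

    -- 3. Throw away all cached plans (test servers only).
    DBCC FREEPROCCACHE;

    -- 4. Force a fresh plan on every call, just while testing.
    ALTER PROCEDURE dbo.MyProc
        @numItems int
    WITH RECOMPILE
    AS
    BEGIN
        SELECT TOP (@numItems) id, username, val
        FROM dbo.T1
        ORDER BY val;
    END;
    ```
    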

  3. #3
    Join Date
    May 2009


    Thanks for your response.

    After using the profiler to work through your suggestions, it appears that when the query runs slowly, the profiler shows a "sort warning", suggesting that the rows being sorted do not fit into memory.

    This would make sense given the increased number of reads I'm seeing, but it appears to indicate that the index isn't being used after resubmitting the stored procedure, which is not what I would expect.

    The query in the stored procedure isn't doing very much. There's a case statement to choose the query required, and then it runs a query similar to:

    SELECT TOP (@numItems) id, username, val FROM T1 ORDER BY val

    In the test cases, the number of items requested is 2, but it could be any number, depending on what is passed into the stored procedure.

    Does anyone know why the index would not be used after resubmitting the stored procedure? The same thing also occurs after calling "dbcc freeproccache".

    Alternatively, am I missing an obvious way to get the same result by writing the query in a different manner?
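    One variation worth trying, in the spirit of the WITH RECOMPILE suggestion in the checklist, is a statement-level recompile hint so the plan is built for the actual @numItems value on each call rather than a previously sniffed one. This is a sketch under the assumption that parameter sniffing is behind the bad plan:

    ```sql
    -- Sketch only: recompile this statement on each execution so the
    -- optimizer sees the actual @numItems value instead of reusing a
    -- cached plan compiled for a different value.
    SELECT TOP (@numItems) id, username, val
    FROM T1
    ORDER BY val
    OPTION (RECOMPILE);
    ```
    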

