  1. #1
    Join Date
    Nov 2007
    Posts
    9

    Unanswered: Timing a SQL function

    I am trying to use statistics to get the time it takes to run a SQL function. When I use SET STATISTICS TIME ON it returns multiple results (one for each INSERT statement in my loop). Is there any way to get results for the ENTIRE function? Here is the loop that I am timing (it simply populates a calendar table):

    SET NOCOUNT ON

    DECLARE @Counter INT
    DECLARE @ActualDate DATETIME
    DECLARE @FirstDate DATETIME

    SET @Counter = 1
    SET @FirstDate = '1/1/1900'
    SET @ActualDate = @FirstDate

    WHILE @Counter < 43830
    BEGIN
        INSERT INTO Calendar (ActualDate)
        VALUES (@ActualDate)

        SET @ActualDate = DATEADD(day, @Counter, @FirstDate)
        SET @Counter = @Counter + 1
    END

  2. #2
    Join Date
    May 2004
    Location
    Seattle
    Posts
    1,313
    you can use profiler to see how long any query on the server is taking.

    otherwise you can do something simple like this:

    declare @then datetime
    set @then = getdate()

    -- do work here, whatever you want to time
    select * from master.dbo.sysobjects

    -- how long did it take?
    select datediff(ms, @then, getdate())


    or you can use this fancy CLR timer class:
    http://weblogs.sqlteam.com/mladenp/a.../02/39124.aspx

  3. #3
    Join Date
    Nov 2007
    Posts
    9
    Thanks Jezemine. SQL Server profiler worked great! I look forward to learning how to use it better.

  4. #4
    Join Date
    Nov 2005
    Posts
    122
    Also, if you put the whole thing inside a BEGIN TRAN ... COMMIT TRAN it will be much faster.

    To make it even faster you could use the method in this example instead of using a loop.
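    The linked example isn't shown above, but just to illustrate the idea, a set-based version of the calendar load could look something like this sketch (it assumes the same Calendar(ActualDate) table as in the first post, and borrows master.dbo.spt_values, cross-joined with itself, purely as a row source):
    Code:
    -- Set-based sketch: load every date in one INSERT instead of a loop.
    -- Assumes the Calendar(ActualDate) table from the first post; spt_values is
    -- only used here as a convenient source of rows to number.
    INSERT INTO Calendar (ActualDate)
    SELECT DATEADD(day, n.Number, '1/1/1900')
    FROM  (
            SELECT ROW_NUMBER() OVER (ORDER BY a.number) - 1 AS Number
            FROM   master.dbo.spt_values a
                   CROSS JOIN master.dbo.spt_values b
          ) AS n
    WHERE n.Number < 43829    -- same number of days as the original loop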

  5. #5
    Join Date
    Nov 2004
    Location
    on the wrong server
    Posts
    8,835
    Provided Answers: 6
    Quote Originally Posted by kaffenils
    Also, if you put the whole thing in a BEGIN TRAN COMMIT TRAN it will be much faster.
    ummmmm really? never heard that one before in my life. got a reference? in fact i have always noticed that multi-step explicit transactions can hinder an application's performance because of the locks they hold on tables until the whole transaction completes. This can block anyone else trying to execute code against the same tables.
    “If one brings so much courage to this world the world has to kill them or break them, so of course it kills them. The world breaks every one and afterward many are strong at the broken places. But those that will not break it kills. It kills the very good and the very gentle and the very brave impartially. If you are none of these you can be sure it will kill you too but there will be no special hurry.” Ernest Hemingway, A Farewell To Arms.

  6. #6
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    I was going to say!

    Transactions can't make it faster, surely?

    I like to be proven wrong though...
    George
    Home | Blog

  7. #7
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Quote Originally Posted by georgev
    I was going to say!

    Transactions can't make it faster, surely?

    I like to be proven wrong though...
    Well - there is always an implicit transaction. I'm not sure of kaffenils's point; however, IME committing transactions on each pass of the loop speeds things up when dealing with large volumes of data. IME committing 100 transactions of 100,000 records per transaction is faster than committing 1 transaction of 10,000,000 records. kaffenils's point seems to be the opposite....
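    For the record, the sort of batching I mean looks something like this sketch (a delete is used just to show the chunking; the table and column names are made up for illustration, not taken from this thread):
    Code:
    -- Batched commits for a large modification: each chunk gets its own
    -- transaction, so locks and log space are released as you go.
    -- dbo.BigTable and the Archived column are made-up names.
    SET NOCOUNT ON

    DECLARE @BatchSize int
    SET @BatchSize = 100000

    WHILE 1 = 1
    BEGIN
        BEGIN TRANSACTION

        DELETE TOP (@BatchSize)
        FROM   dbo.BigTable
        WHERE  Archived = 1

        IF @@ROWCOUNT = 0
        BEGIN
            COMMIT TRANSACTION
            BREAK
        END

        COMMIT TRANSACTION
    END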

  8. #8
    Join Date
    Nov 2005
    Posts
    122
    Don't believe me? Well, then try it for yourself. Wrap the whole statement in BEGIN TRAN ... COMMIT TRAN and time it. Then run it without the explicit transaction (so each INSERT is its own implicit transaction).

    To me it makes sense that 1 transaction with 47,000 INSERTs takes less time than 47,000 transactions with 1 INSERT each.

    I don't have any links to it, but I was told this on a tuning and optimization course held by a former SQL Server developer. I'll find a link to his site later.
    Last edited by kaffenils; 01-06-08 at 14:12.

  9. #9
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    Code:
    CREATE TABLE #myTable (
        someDate datetime
    )
    
    CREATE TABLE #resultTable (
        trans bit
      , time  int
    )
    
    DECLARE @date datetime
    DECLARE @then datetime
    DECLARE @i int
    DECLARE @j int
    
    SET @j = 0
    SET @date =	DateAdd(d, DateDiff(d, 0, GetDate()),0)
    
    SET NOCOUNT ON
    
    WHILE @j < 1000 BEGIN
    
      SET @i = 0
      SET @then = GetDate()
    
      WHILE @i <= 3650 BEGIN
        INSERT INTO #myTable (someDate)
        VALUES(DateAdd(d, @i, @date))
        
        SET @i = @i + 1
      END
    
      INSERT INTO #resultTable (trans, time)
      VALUES (0, DateDiff(ms, @then, GetDate()))
    
      SET @i = 0
      SET @then = GetDate()
    
      BEGIN TRANSACTION
        WHILE @i <= 3650 BEGIN
          INSERT INTO #myTable (someDate)
          VALUES(DateAdd(d, @i, @date))
          
          SET @i = @i + 1
        END
      COMMIT TRANSACTION
    
      INSERT INTO #resultTable (trans, time)
      VALUES (1, DateDiff(ms, @then, GetDate()))
    
      SET @j = @j + 1
    
    END
    
    SET NOCOUNT OFF
    
    SELECT trans
         , Avg(time) As [avgTime]
    FROM   #resultTable
    GROUP
        BY trans
    
    DROP TABLE #myTable
    DROP TABLE #resultTable
    Code:
    trans avgTime
    ----- -----------
    0     99
    1     100
    
    (2 row(s) affected)
    Last edited by gvee; 01-06-08 at 16:17.
    George
    Home | Blog

  10. #10
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    George - you'll probably need a bigger number than that to see an appreciable difference.
    kaffenils - I agree - I would expect that to be faster. But IME there is a critical mass where you are better off batching the changes (once you are hitting several million updates). I didn't make it clear in my post that I was talking about that sort of thing rather than singleton inserts.

  11. #11
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Also, I suspect inserting into a heap will skew results. I would imagine kaff's solution would work best when making a series of random inserts into a clustered index. If the key is monotonically increasing then I would guess there would not be a huge gain either. Not sure without testing though.
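    To be clear, by that I mean timing the same insert loop against tables along these lines (names invented just to show the three cases):
    Code:
    -- Three variants to time the insert loop against (illustrative names only):
    CREATE TABLE #heapTest (someDate datetime)    -- heap, as in the test above

    CREATE TABLE #randomKeyTest (
        id       uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
        someDate datetime    -- rows land all over the clustered index
    )

    CREATE TABLE #monotonicKeyTest (
        id       int IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
        someDate datetime    -- ever-increasing key, rows always go at the end
    )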

  12. #12
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    It was a quick test - it showed no real difference between the two methods...
    However you are completely right, there are no indexes (indices?); but the performance impact of adding them would surely affect both methods equally...

    Feel free to post contrary to this; I just don't have the time right now to play with the above script.
    George
    Home | Blog

  13. #13
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Ok - I am working on a similar basis to selects. You know that once a query returns a certain percentage of a table there is no point using a nonclustered index to get to the clustered index / leaf level. For singletons, yes, but beyond roughly 5% of the table a scan is more efficient. Similarly here, the loop forces 1,000 singleton inserts, whereas the single transaction allows SQL Server to consider the set as a whole.

    I'm not sure about earlier versions but SQL Server 2005 has two strategies for modifying data - table level and index level. It cannot select the most efficient of these strategies for the entire set of data without knowing what the entire set of data is. However, index level modification becomes more attractive to the engine the larger the set of data being modified.

    So - that's in a nutshell why I think Kaff is on to something.
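    For anyone who hasn't seen that select-side tipping point, a rough way to watch the switch from a seek to a scan is something like this (made-up table and numbers; run it in a scratch database):
    Code:
    -- Illustration of the selectivity tipping point (made-up table and numbers).
    CREATE TABLE dbo.TipTest (
        id       int IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
        someCode int NOT NULL,
        filler   char(200) NOT NULL DEFAULT 'x'
    )

    CREATE NONCLUSTERED INDEX IX_TipTest_someCode ON dbo.TipTest (someCode)

    -- load ~100,000 rows with someCode spread over 0-999
    INSERT INTO dbo.TipTest (someCode)
    SELECT TOP 100000 ABS(CHECKSUM(NEWID()) % 1000)
    FROM   master.dbo.spt_values a
           CROSS JOIN master.dbo.spt_values b

    SET STATISTICS IO ON

    -- highly selective: expect a seek on IX_TipTest_someCode plus lookups
    SELECT * FROM dbo.TipTest WHERE someCode = 5

    -- low selectivity: expect a clustered index scan instead
    SELECT * FROM dbo.TipTest WHERE someCode < 500

    SET STATISTICS IO OFF

    DROP TABLE dbo.TipTest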

  14. #14
    Join Date
    Nov 2005
    Posts
    122
    The number is too low. Try with a higher number like 100,000 or even 1,000,000.

    This runs in about 1.5 seconds on my test server.
    Code:
    use tempdb
    
    create table x(id int)
    set nocount on
    declare @i int, @start datetime, @end datetime
    set @i=0
    set @start=getdate()
    begin tran
    while @i<100000
    begin
    	insert into x(id) values(@i)
    	set @i=@i+1
    end
    commit tran
    set @end=getdate()
    
    print datediff(ms,@start,@end)
    
    drop table x
    And this takes about 10 seconds.
    Code:
    use tempdb
    
    create table x(id int)
    set nocount on
    declare @i int, @start datetime, @end datetime
    set @i=0
    set @start=getdate()
    while @i<100000
    begin
    	insert into x(id) values(@i)
    	set @i=@i+1
    end
    set @end=getdate()
    
    print datediff(ms,@start,@end)
    
    
    drop table x
    Another nice thing to know is that INSERT/DELETE/UPDATE in tempdb is faster than in other databases because the transaction log is treated differently.
    Ref http://www.microsoft.com/technet/pro...ithtempdb.mspx

    However, there is no need to REDO them because the contents of tempdb do not persist across SQL Server restarts. Because the transaction log does not need to be flushed, transactions are committed faster in tempdb than in user databases.
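    If you want to see that for yourself, one way (just a sketch - "YourUserDb" stands in for any scratch database you own, it isn't from this thread) is to run the non-transaction loop above against a user database and compare the printed time with the tempdb run:
    Code:
    -- Same loop as the second script above, pointed at a user database.
    -- "YourUserDb" is a placeholder name.
    use YourUserDb

    create table x(id int)
    set nocount on
    declare @i int, @start datetime, @end datetime
    set @i=0
    set @start=getdate()
    while @i<100000
    begin
    	insert into x(id) values(@i)
    	set @i=@i+1
    end
    set @end=getdate()

    print datediff(ms,@start,@end)

    drop table x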

  15. #15
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    I think you'll find that my code ran 7,300,000 inserts.

    I've run yours, and the transaction version does indeed perform quicker.

    Is this because the transaction log treats this as a bulk insert and therefore only logs once?

    Thanks for the tip about tempdb - I never knew that!
    George
    Home | Blog
