  1. #1
    Join Date
    Jan 2007
    Location
    California, USA
    Posts
    520

    Unanswered: How to time various procedure methods?

    I have several ideas I want to try, but I want to be able to time each individual method so I can choose the quickest one. These ideas are on how to update SQL tables from an Access database. Currently I'm more interested in how to time these tests than in discussing the details of the test.
    Thanks,

  2. #2
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Interesting problem, since you need to take as many moving pieces out of the problem as you can, without removing so many of them that you pollute your measurement.

    In order to get you the best possible answer, I'm going to move this thread to the MS-Access forum. There are way too many quirks in MS-Access that I might miss, and one or more of the regulars there have probably worked this down to a science by now.

    -PatP

  3. #3
    Join Date
    May 2005
    Location
    Nevada, USA
    Posts
    2,888
    Provided Answers: 6
    I use this api:

    Public Declare Function timeGetTime Lib "winmm.dll" () As Long

    Basically assign that value to a variable, run your process, assign that value to another variable. The difference between the 2 variables is the elapsed time in milliseconds.
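    For example, a rough sketch of that pattern (TimeMyProcess and the variable names are just placeholders):

    Sub TimeMyProcess()
        ' assumes the timeGetTime Declare above sits at the top of a standard module
        Dim lngStart As Long, lngEnd As Long

        lngStart = timeGetTime()
        ' ... run the process you want to measure here ...
        lngEnd = timeGetTime()

        Debug.Print "Elapsed: " & (lngEnd - lngStart) & " ms"
    End Sub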
    Paul

  4. #4
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Quote Originally Posted by pbaldy
    I use this api:

    Public Declare Function timeGetTime Lib "winmm.dll" () As Long

    Basically assign that value to a variable, run your process, assign that value to another variable. The difference between the 2 variables is the elapsed time in milliseconds.
    Ok - gotta ask - why are Now() and DateDiff() not sufficient?

    BTW - Vic, I do more or less the same.
    Testimonial:
    pootle flump
    ur codings are working excelent.

  5. #5
    Join Date
    Jan 2007
    Location
    California, USA
    Posts
    520
    Paul, thanks.

    pootle, Now() only resolves to whole seconds, whereas timeGetTime() returns milliseconds (1000 for one second), so there's much more precision. That means I don't have to run 1,000 or 10,000 loops; more like 10 or 100 gets me results in the same ballpark.
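    So what I have in mind is something like this rough sketch (MethodA is just a stand-in for whichever update method I'm testing):

    Sub TimeLoop()
        ' uses Paul's timeGetTime Declare; MethodA is a placeholder for the routine under test
        Const LOOPS As Long = 100
        Dim lngStart As Long, i As Long

        lngStart = timeGetTime()
        For i = 1 To LOOPS
            MethodA
        Next i
        Debug.Print "Average per call: " & (timeGetTime() - lngStart) / LOOPS & " ms"
    End Sub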

    pootle, not sure what you mean by you do the same, more or less?

  6. #6
    Join Date
    Jan 2007
    Location
    California, USA
    Posts
    520
    PatP,
    And if I wanted to do this same type of timing within SQL Server, how would I do it? I'm assuming it would be inside a stored procedure, but how exactly?

    Thanks,

  7. #7
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Beg your pardon - I see. Most of the stuff I time runs pretty long (at my current place we measure lots of stuff in hours and a couple of really major processes in days - funny how you get used to that and forget about finer granularity) and/or I run lots of iterations.

    You can use GETDATE() in SQL Server - assign it to a datetime datatype and you have accuracy to roughly 3.3 milliseconds. In fact, all of our procs are based on a template that, amongst other things, records the start and end time of every execution of every proc (roughly like the sketch after the list below).

    This tells you:
    1) Which procs run most often
    2) Which procs run the longest
    3) Which procs' execution times have grown over time
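    A stripped-down sketch of the idea (the log table, proc, and column names here are made up for illustration, not our actual template):

    CREATE TABLE dbo.ProcLog (
        ProcName  SYSNAME,
        StartTime DATETIME,
        EndTime   DATETIME
    )
    GO

    CREATE PROCEDURE dbo.usp_Example
    AS
    BEGIN
        DECLARE @Start DATETIME
        SET @Start = GETDATE()

        -- ... the real work of the proc goes here ...

        -- log start/end; DATEDIFF(ms, StartTime, EndTime) gives the duration later
        INSERT INTO dbo.ProcLog (ProcName, StartTime, EndTime)
        VALUES ('usp_Example', @Start, GETDATE())
    END
    GO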
    Testimonial:
    pootle flump
    ur codings are working excelent.

  8. #8
    Join Date
    Nov 2004
    Location
    Norway
    Posts
    441
    VB(A) timing - QueryPerformanceCounter?
    http://support.microsoft.com/kb/172338
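    Roughly like this, using the Currency trick from that article (the sub and variable names are just illustrative):

    Private Declare Function QueryPerformanceCounter Lib "kernel32" (lpPerformanceCount As Currency) As Long
    Private Declare Function QueryPerformanceFrequency Lib "kernel32" (lpFrequency As Currency) As Long

    Sub TimeWithQPC()
        Dim curStart As Currency, curEnd As Currency, curFreq As Currency

        QueryPerformanceFrequency curFreq    ' counts per second (same Currency scaling as the counters)
        QueryPerformanceCounter curStart
        ' ... run the process you want to measure here ...
        QueryPerformanceCounter curEnd

        Debug.Print "Elapsed: " & Format((curEnd - curStart) / curFreq, "0.000000") & " seconds"
    End Sub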
    Roy-Vidar

  9. #9
    Join Date
    May 2005
    Location
    Nevada, USA
    Posts
    2,888
    Provided Answers: 6
    Quote Originally Posted by pootle flump
    Ok - gotta ask - why are Now() and DateDiff() not sufficient?
    What Vic said Senor Flump, better precision. You must be dealing with some massive data, to have processes take that long. You work for Ebay or something?
    Paul

  10. #10
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Quote Originally Posted by pbaldy
    What Vic said Senor Flump, better precision. You must be dealing with some massive data, to have processes take that long. You work for Ebay or something?
    Nah - I just design total crap

    They are VLDBs (or at least they were a few years ago - I recently read that a modern VLDB is 10TB+) that are loaded monthly. They sit there doing nothing for 30 or so days and then suddenly have several billion rows BCPed in and processed. As such I'm a little off-kilter when it comes to timescales.
    Testimonial:
    pootle flump
    ur codings are working excelent.

  11. #11
    Join Date
    May 2005
    Location
    Nevada, USA
    Posts
    2,888
    Provided Answers: 6
    I very much doubt the "total crap" part. Several billion...that makes the couple million rows I deal with seem rather insignificant.
    Paul

  12. #12
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    TBH it is not really any different. The challenges are just the same. In a way it is easier because if you have mucked something up somewhere you really know about it. There's no "was that 2 milliseconds slower that time?". You get an inkling something is wrong when you come in the next morning and the query is still running.
    Testimonial:
    pootle flump
    ur codings are working excelent.

  13. #13
    Join Date
    Jan 2007
    Location
    California, USA
    Posts
    520
    pootle,

    Thank you so much for keeping all your design crap at work. What you do here is really good. Actually, I agree with Paul, in that I also doubt the "total crap" part.

  14. #14
    Join Date
    Dec 2004
    Location
    Madison, WI
    Posts
    3,926
    The biggest killer for me with a few million records was having relational tables joined together on 2 fields. Of course, indexing tremendously improved return times. These were all SQL Server tables linked into an Access front-end.

    If it's worth anything, the one thing I did that cut the time down quite a bit was to also store, in the main data table itself, a lot of the fields I would normally keep only in a relational table (I know... I broke the rules of normalization and duplicated data). But these days, with gigabytes of disk space, is it really necessary to break everything apart and join 5-6 tables in a query just to total data? I would store the text field along with the ID field of the relational table in the main data table, using triggers to handle changes. It was like a cross between a data cube and a data warehouse philosophy, with a "flat" type main data table.

    This way I could either total on the main data table, grouping on whatever I needed, or total on just the relational table, depending on the situation. If I only needed totals that involved just the relational table, I could use that table alone. If I needed totals which encompassed data from several relational tables, I only had to use the main data table. I got it down to where I could group and total on just one table versus having multiple tables in the query, and users only had to wait seconds for totals on 5 million or so records. I compared using multiple relational tables in the query versus a query with just one table and wow, what a difference.
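    Just to sketch the idea (these table, field, and trigger names are made up for illustration, not my actual schema):

    -- lookup table
    CREATE TABLE dbo.Category (
        CategoryID   INT PRIMARY KEY,
        CategoryName VARCHAR(50)
    )
    GO

    -- main data table carries both the ID and a duplicated copy of the name
    CREATE TABLE dbo.MainData (
        MainDataID   INT IDENTITY PRIMARY KEY,
        CategoryID   INT REFERENCES dbo.Category (CategoryID),
        CategoryName VARCHAR(50),
        Amount       MONEY
    )
    GO

    -- trigger keeps the duplicated name in sync when the lookup changes
    CREATE TRIGGER trg_Category_Update ON dbo.Category
    AFTER UPDATE
    AS
    BEGIN
        UPDATE m
        SET m.CategoryName = i.CategoryName
        FROM dbo.MainData m
        INNER JOIN inserted i ON m.CategoryID = i.CategoryID
    END
    GO

    -- totals now come off the one table, no joins needed
    SELECT CategoryName, SUM(Amount) AS Total
    FROM dbo.MainData
    GROUP BY CategoryName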

    You probably think this philosophy is totally crazy and idiotic (PK's idiot theme) but it really worked very well, was lightning fast, and my totals were always exact or within a .003% discrepancy. The .003% was caused by users selecting wrong fields or entering bad values.
    Last edited by pkstormy; 08-07-07 at 22:55.
    Expert Database Programming
    MSAccess since 1.0, SQL Server since 6.5, Visual Basic (5.0, 6.0)

  15. #15
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Well - you know my philosophy on that.
    For a very similar reason, though, I avoid surrogate keys when there is a good natural key to be had.
    Testimonial:
    pootle flump
    ur codings are working excelent.
