  1. #1
    Join Date
    Jan 2003

    Unanswered: Query run time variations

    Hello All,

    I am running a query that fires a few triggers involving a couple of servers (SQL 7.0). These triggers invoke distributed transactions across linked servers. The first time I run the query it takes nearly 30 seconds; the second time it takes a second or less. After a period with no interaction with the linked servers or databases, the triggers/queries take the usual 30 seconds again. The only thing I can gather is that the first run establishes, and then appears to hold, a connection to these servers/databases, which is what makes the second run faster. Is this true?

    As I am incorporating these triggers into an app, I was wondering if there is anything I can configure or do to ensure the query runs at its fastest every time, or to keep a connection open to these linked servers so that all queries run at the same lightning speed.
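    One common workaround (a sketch, not a guaranteed fix) is to keep the linked-server connection warm with a trivial "keep-alive" query run on a schedule, e.g. from a SQL Server Agent job every few minutes. The server name `LINKEDSRV` below is a placeholder for your own linked server:

    -- Hypothetical keep-alive: a cheap query against the linked server,
    -- scheduled so the connection never sits idle long enough to be torn down.
    -- Querying a small system table keeps the round trip trivial.
    SELECT TOP 1 name
    FROM LINKEDSRV.master.dbo.sysobjects

    If the 30-second delay really is connection setup (MSDTC enlistment plus the OLE DB login to the remote server), keeping the session alive this way should make every user-facing run hit the fast path.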

    If anyone can help with this one it would be greatly appreciated.

  2. #2
    Join Date
    Jan 2003
    London, England
    I'm no expert on this, but I think SQL Server caches the data and the connections you use. I have a procedure (running on one stand-alone server, no linked servers at all) that takes about 20 seconds to return data for one month. When I loop it and ask for data for each month of a full year (the same procedure called 12 times), it takes some 22 seconds. There has to be some caching involved here... there is just no other way...
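    A rough way to check the caching theory on a *test* server is to time the procedure, flush the caches, and time it again; if the "cold" run is slow and the "warm" run is fast, caching is your answer. `dbo.MyMonthlyReport` is a made-up procedure name, and the DBCC commands below were undocumented in the SQL 7.0 era, so treat this as a sketch:

    SET STATISTICS TIME ON
    EXEC dbo.MyMonthlyReport @Month = 1   -- first run: may be slow (cold)
    EXEC dbo.MyMonthlyReport @Month = 1   -- second run: fast if cached

    DBCC DROPCLEANBUFFERS                 -- empty the data (buffer) cache
    DBCC FREEPROCCACHE                    -- discard cached query plans

    EXEC dbo.MyMonthlyReport @Month = 1   -- timed "cold" again
    SET STATISTICS TIME OFF

    Note this only explains data/plan caching on a single server; the original poster's 30-second delay is more likely connection setup to the linked servers than buffer cache.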
    "Real programmers don't document, if it was hard to write it should be hard to understand!"
