  1. #1
    Join Date
    Oct 2005
    Posts
    183

    Unanswered: NOLOCK query waits forever

    Back in the days of SQL Server 7.0 I did a lot of ODBC SELECT querying from VB applications, in which I used NOLOCK to prevent the primary business applications from being locked out of tables while the queries ran.

    Now, quite a few years later, I'm busy converting a lot of old Access-based forms and queries to T-SQL on SQL Server 2000, and I'm wondering why NOLOCK queries (simple SELECTs) are immensely slower than a standard SELECT statement.

    Code:
    SELECT * FROM employees
    This would be much, much faster than the code below, but users would get "The current record could not be accessed, as it is being used by another user", evidently because I'm locking the records while producing the output.

    Code:
    SELECT * FROM employees (nolock)
    So this code should - as I remember it - do a dirty read on the table, not obstructing other users, and give me a snapshot of the data as they are, even though some rows might be locked for edit.
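
    As far as I understand it, the hint is supposed to be equivalent to reading at the READ UNCOMMITTED isolation level, i.e. these two should behave the same:

    Code:
    SELECT * FROM employees (NOLOCK)

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    SELECT * FROM employees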

    Could anyone explain to me why the NOLOCK query fails to give me any output? It is as if the NOLOCK request is waiting for the table/records to be freed, in which case I'll never be able to run the query.

    Cheers in advance, Trin
    IT squid: networks, servers, firewalls, routers and I dabble a little with SQL-server and Access as well....

  2. #2
    Join Date
    Jan 2004
    Location
    In a large office with bad lighting
    Posts
    1,040
    are you running this from a front end app or from query analyzer?

    try it in QA and set the results window to results to text. start your query ... do you start getting results?

    if it continues to run, open another window in QA and execute sp_who2. inspect the results for any blocking, or high cpu and/or disk io operations.
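
    for example:

    Code:
    -- in the second QA window, while the SELECT is still running
    EXEC sp_who2
    -- check the BlkBy column for a blocking spid, and the
    -- CPU and DiskIO columns for unusually heavy activity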

    report back with results.

    -- This is all just a Figment of my Imagination --

  3. #3
    Join Date
    Oct 2005
    Posts
    183
    I'm running it through QA.

    I've tried setting the transaction isolation level, but from what I can read, it's just a question of setting the default to read committed/uncommitted.

    55,RUNNABLE,sa,GRAMEX-D13,EMP_UDV,SELECT,2123079,4007,04/22 13:50:58,SQL Query Analyzer,55

    So it's pretty CPU-intensive for a SELECT: 5 columns, no WHERE clause, no grouping or casting of output. Not even an ORDER BY.

    That just doesn't make sense, when I get the results pretty much instantaneously without NOLOCK.

    I appreciate you looking at the matter, as I'm tearing my hair out and gulping down coffee like never before :-S

    Cheers, Trin

    EDIT: No blocking btw, just a single dbcc inputbuffer(58), everything else is background or sleeping. I've just checked that transaction log backups or maintenance plans aren't active either. All quiet on the western front.
    Last edited by Trinsan; 04-22-08 at 09:06.
    IT squid: networks, servers, firewalls, routers and I dabble a little with SQL-server and Access as well....

  4. #4
    Join Date
    Nov 2004
    Location
    on the wrong server
    Posts
    8,835
    Provided Answers: 6
    is employees (if that is indeed its real name) a big heap?
    “If one brings so much courage to this world the world has to kill them or break them, so of course it kills them. The world breaks every one and afterward many are strong at the broken places. But those that will not break it kills. It kills the very good and the very gentle and the very brave impartially. If you are none of these you can be sure it will kill you too but there will be no special hurry.” Earnest Hemingway, A Farewell To Arms.

  5. #5
    Join Date
    Oct 2005
    Posts
    183
    Roughly half a million rows.

    A SELECT * FROM in QA without the NOLOCK displays the whole bunch in 1 minute 32 seconds; with NOLOCK I simply stopped the query after 30 minutes.
    I mean, there's no point in letting it run if it takes >30 minutes using NOLOCK, when it - as far as I know - ought to be just as fast, or faster.

    Mean anything to you?
    IT squid: networks, servers, firewalls, routers and I dabble a little with SQL-server and Access as well....

  6. #6
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    The CPU number in the sp_who2 output is cumulative over the life of the connection. It is not the last query to be run. Run
    Code:
    sp_who2 @@spid
    a few times in a row to see that in action.

    DBCC INPUTBUFFER is usually what Enterprise Manager runs to get the command being run by any spid.

  7. #7
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Try running this query to get some information on the spid that is running the long running query:
    Code:
    select spid, ecid, hostprocess, open_tran, blocked, waittype, cpu, physical_io,
           hostname, loginame, db_name(dbid), waitresource, cmd, program_name
    from master..sysprocesses p
    where p.spid in (select blocked from master..sysprocesses)
       or (p.waittype > 0 and p.spid > 40)
    order by spid, ecid
    The waittype values can be looked up in KB article 822101, which should shed some light on what is going on.

  8. #8
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Quote Originally Posted by Trinsan
    That just doesn't make sense, when I get the results pretty much instantaneously without NOLOCK.
    Are you sure? Perhaps in one instance you are getting results back as SQL Server generates them, while in the other the server is waiting until the entire result set is generated before returning anything to QA.
    So maybe you are just getting SOME results instantaneously.
    Try using both methods to select into temporary tables (no data output to QA) and see if there is a difference in execution times.
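    A rough sketch of that test, using your table name:

    Code:
    SET STATISTICS TIME ON

    SELECT * INTO #locked FROM employees
    SELECT * INTO #dirty FROM employees (NOLOCK)

    SET STATISTICS TIME OFF

    DROP TABLE #locked
    DROP TABLE #dirty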
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  9. #9
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Quote Originally Posted by blindman
    Try using both methods to select into temporary tables (no data output to QA) and see if there is a difference in execution times.
    Good point. You could instead use COUNT(*) as an alternative to building temp tables.
    Testimonial:
    pootle flump
    ur codings are working excelent.

  10. #10
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    No, that may just grab index data from the system table, or the optimizer may take other shortcuts. To fully test, you need to be sure the actual dataset is generated, and you do that by loading the results into a temporary table.
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  11. #11
    Join Date
    Aug 2007
    Posts
    17
    The key problem is that the table you query is always being read or written, so I have a suggestion.

    Code:
    SELECT * INTO employees_query FROM employees
    Then you can create a trigger on employees for INSERT and DELETE (if UPDATE is rarely used). When a row is inserted into employees, the trigger also inserts it into employees_query, and the same for deletes.

    Then you can query employees_query, which is much faster than querying employees.
    Last edited by marydan; 04-23-08 at 04:12.

  12. #12
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Quote Originally Posted by blindman
    No, that may just grab index data from the system table, or the optimizer may take other shortcuts. To fully test, you need to be sure the actual dataset is generated, and you do that by loading the results into a temporary table.
    Interesting - I've never seen it take stuff from the system tables, but yes, it might scan a narrow nonclustered index instead of the table. I'm pretty confident a COUNT(some_non_indexed_column) would work, but hey - it is a minor point in the extreme (not that that normally stops me )
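    i.e. something like this, where some_non_indexed_column stands for whatever non-indexed column the table actually has:

    Code:
    SELECT COUNT(some_non_indexed_column) FROM employees (NOLOCK)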
    Testimonial:
    pootle flump
    ur codings are working excelent.

  13. #13
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Quote Originally Posted by marydan
    The key problem is that the table you query is always being read or written, so I have a suggestion.

    "SELECT * INTO employees_query FROM employees"
    Then you can create a trigger on employees for INSERT and DELETE (if UPDATE is rarely used). When a row is inserted into employees, the trigger also inserts it into employees_query, and the same for deletes.

    Then you can query employees_query, which is much faster than querying employees.
    Please don't do this....
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  14. #14
    Join Date
    Aug 2007
    Posts
    17
    Well, another suggestion:
    Generally, a massive table revolves around something like a datetime column, so we can divide the table into several tables by date.

    For example, say there is a table tt(name, tel, buy_num, insert_dt) used to record a supermarket's purchase details. We read and write it all the time, so it is obviously slow.

    Then we create another table named tt_buffer1, which is used to store the rows from

    Code:
    SELECT * FROM tt WHERE insert_dt <= DATEADD(day, -3, GETDATE())
    followed by

    Code:
    DELETE FROM tt WHERE insert_dt <= DATEADD(day, -3, GETDATE())
    run every day at 23:00:00.

    Table tt then stores only a few days of data, which is what is constantly written and read. We can still query the whole data set with a UNION of tt and tt_buffer1.
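
    One caveat with the above: GETDATE() moves on between the two statements, so a row inserted in that instant could be deleted without ever having been copied. A safer sketch captures the cutoff once and runs both statements in one transaction (tt and tt_buffer1 as named above):

    Code:
    DECLARE @cutoff datetime
    SET @cutoff = DATEADD(day, -3, GETDATE())

    BEGIN TRANSACTION
        INSERT INTO tt_buffer1
        SELECT * FROM tt WHERE insert_dt <= @cutoff

        DELETE FROM tt WHERE insert_dt <= @cutoff
    COMMIT TRANSACTION

    -- the full history is then:
    -- SELECT * FROM tt UNION ALL SELECT * FROM tt_buffer1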

  15. #15
    Join Date
    Aug 2007
    Posts
    17
    Quote Originally Posted by blindman
    Please don't do this....
    You are right. The I/O pressure is actually made worse.
