This package, "SQL Server Live Lock" (Live_Lock_Monitor),
detects persistent locking situations on SQL Server
that are NOT, for whatever reason, resolved by SQL Server itself (see the "Live Locks vs. Dead-Locks" details below).
It monitors SQL Server activity, detects "persistent locking" events
(the existence of a live lock for several minutes),
analyzes the "live locking chain", determines the credentials of the process that created the lock
(the "head of the locking chain"), raises an error at the SQL Server level,
and can be configured to notify the DBA (by Net Send or e-mail).
The package also measures SQL Server response time and periodically logs the actual response time (configurable).
It raises a SQL Server error if the response time is too slow (configurable).
Note: this package was NOT created for dead-lock monitoring (although it can incidentally capture details of dead-locks).
It detects and monitors details of "persistent live locking" (also simply called a "live lock").
Unlike dead-locks, live-locks are NOT automatically resolved.
This package (publication) is FREEWARE.
It can be freely distributed and used for non-profit purposes.
Contacting the author and owner of the package:
you may ask questions about the package, report its shortcomings,
or propose additional functionality.
The author provides SQL Server DBA, developer, ETL and DB design services,
including part-time database monitoring and administration,
and holds the following certifications: MCDBA, MCSD (SQL Server 7.0) and OCP (Oracle 8i).
Details of functionality:
MONITORING OF LIVE LOCKING:
This package creates a separate database called "DBA".
It also programmatically creates a new job, "Server_HeartBeat_Monitor", schedules it (execution every minute, configurable),
runs it, and monitors the results of its execution.
It does not affect any existing databases (or any of their objects).
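The job creation described above can be sketched with the standard msdb job procedures. This is an illustrative sketch only, not the package's actual code: the step name, the DBA..sp_HeartBeat entry point, and the schedule values are assumptions.

```sql
-- Sketch (assumed names): create and schedule a "Server_HeartBeat_Monitor"
-- job programmatically on SQL Server 2000 using msdb stored procedures.
USE msdb
EXEC sp_add_job @job_name = 'Server_HeartBeat_Monitor'
EXEC sp_add_jobstep @job_name = 'Server_HeartBeat_Monitor',
     @step_name = 'Run heartbeat', @subsystem = 'TSQL',
     @command = 'EXEC DBA..sp_HeartBeat'          -- hypothetical entry point
EXEC sp_add_jobschedule @job_name = 'Server_HeartBeat_Monitor',
     @name = 'Every minute',
     @freq_type = 4, @freq_interval = 1,          -- daily
     @freq_subday_type = 4, @freq_subday_interval = 1  -- every 1 minute
EXEC sp_add_jobserver @job_name = 'Server_HeartBeat_Monitor'
```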
The package distinguishes between "dead-lock" scenarios (which may be LOGGED, but raise no errors)
and "live lock" scenarios.
If a live lock is "persistent" (lasts for a specified amount of time),
a SQL Server error is raised and (if configured) the production DBA is notified.
When a locking situation is detected, the package analyzes the "locking chain"
(all processes that are affected) and gets to the "head of the locking chain"
(the process that created the lock in the first place).
The package LOGS this data (for future DBA reference and auditing) in a table named tbMonitor.
The functionality is driven by the value persist_Lock_Count: you will see sequentially increasing
values if the SAME user continues to block other processes (this is the definition of a "persistent lock").
After a configurable amount of time (by default 5 minutes),
a SQL Server error is raised with all pertinent data, for example:
sp_HeartBeat DBA package, PERSISTING LIVE LOCK DETECTED: Locking User Name:xxx lock_Started:xxx
SPID:xx Login:xxx Computer that Created lock:xxx DB:xxx Command:xxx Lock Duration(Sec):xxx Program:xxx
It can also notify the DBA (configurable).
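On SQL Server 2000 such a DBA notification could be implemented with xp_cmdshell + NET SEND or with xp_sendmail. This is a hedged sketch of that idea, not the package's own code; the workstation name and e-mail address are placeholders.

```sql
-- Sketch: two common SQL Server 2000 notification mechanisms.
-- Net Send (requires the Messenger service; workstation name is a placeholder):
EXEC master..xp_cmdshell 'net send DBA_WORKSTATION "Persistent live lock detected"'

-- Or e-mail, if SQL Mail is configured (recipient is a placeholder):
EXEC master..xp_sendmail
     @recipients = 'dba@example.com',
     @subject    = 'Persistent live lock detected',
     @message    = 'See tbMonitor_Lock in the DBA database for details.'
```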
After the live lock clears (we do not go into the ways and methods here),
in about 1 minute another SQL Server error is raised:
DATABASE LOCK WAS JUST CLEARED/RELEASED...
You can see the lock_Started and lock_Cleared columns in the tbMonitor_Lock table. See also the
persist_Lock_Count column in the same table.
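A DBA reviewing the log afterwards could query those columns directly. A minimal sketch, assuming only the column names mentioned above (the full table definition is in the attachment):

```sql
-- Sketch: review logged locking incidents, newest first, with computed duration.
-- Column names are taken from the description above; anything else is assumed.
SELECT persist_Lock_Count, lock_Started, lock_Cleared,
       DATEDIFF(ss, lock_Started, lock_Cleared) AS lock_duration_sec
FROM DBA..tbMonitor_Lock
ORDER BY lock_Started DESC
```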
MONITORING RESPONSE TIME:
After installation (technically, within 10 minutes) the package performs an "automatic calibration":
it determines the average response time of a query that the package periodically issues.
You will see an entry in the SQL Server log with a message such as:
"sp_HeartBeat DBA package, determined SQL Server average response time= 112 milliseconds."
(Your response time will depend on your hardware, and should be in the range of 30 ms
for a typical production server.)
The package stores this value in a table; afterwards, it defines a slow response
as a multiple of this stored baseline (thus making detection of slow response time NOT dependent on the hardware used).
Once every minute (configurable) the package issues a simple SQL query (inside a scheduled job).
If the server does NOT respond, for any reason, within a specified amount of time (configurable),
a SQL Server error is raised, for example (time is in ms):
SQL Server message: sp_HeartBeat DBA package: VERY_SLOW_RESPONSE= 1130.
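The heartbeat check described above can be sketched as follows. This is an assumed illustration of the technique, not the package's actual procedure; the probe query and the threshold are placeholders (the real threshold would be a multiple of the calibrated baseline).

```sql
-- Sketch: time a trivial probe query and raise a logged error if it is slow.
DECLARE @t0 DATETIME, @ms INT
SET @t0 = GETDATE()
SELECT COUNT(*) FROM master..sysobjects        -- trivial probe query (assumption)
SET @ms = DATEDIFF(ms, @t0, GETDATE())
IF @ms > 1000                                  -- placeholder threshold in ms
    RAISERROR('sp_HeartBeat DBA package: VERY_SLOW_RESPONSE= %d.', 16, 1, @ms) WITH LOG
```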
This command will get lock and duration data for you:
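The command itself is not reproduced in this post; a query along these lines (an assumption, built on the SQL Server 2000 master..sysprocesses table) would return similar lock and duration data:

```sql
-- Sketch (assumed equivalent): show blocked processes and their blockers,
-- with wait and idle-time data, from master..sysprocesses.
SELECT spid, blocked, waittime, lastwaittype,
       loginame, hostname, program_name, cmd,
       DATEDIFF(ss, last_batch, GETDATE()) AS seconds_since_last_batch
FROM master..sysprocesses
WHERE blocked <> 0                                        -- waiting processes
   OR spid IN (SELECT blocked FROM master..sysprocesses
               WHERE blocked <> 0)                        -- their blockers
```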
The package was created and tested on SQL Server 2000 SP1 (and has since been used under SP2 and SP4).
It was not tested under SQL Server 7.0 (but will probably work).
End of description of functionality.
Some background material follows:
Live Locking (Active Locking) vs. Dead-Locks and SQL Server Accessibility
A dead lock occurs when several processes are waiting for each other's resources (e.g. pages of a table).
Dead locks are detected by the engine at run-time and automatically resolved (one of the processes is killed and an error is logged).
An active lock (live lock) occurs when several processes are waiting for a resource (e.g. a table, or a set of table pages or extents) that is temporarily unavailable.
During a live lock, other processes attempting to reach the table are automatically lined up (queued). They wait until the first process (transaction) finishes.
This is normal behavior in a database.
There are some additional scenarios in which live locks can originate
(this is an update by Tibor Karaszi, SQL Server MVP):
A has shared lock on resource X
B requests exclusive lock. Is blocked.
C requests shared lock, is granted.
A releases lock, but C has its shared lock. B is still blocked.
D requests shared lock, is granted.
E requests shared lock, is granted.
C releases lock, but D and E have their shared locks. B is still blocked.
And so on.
The end result is that B would never get access to the resource, since new shared locks are granted,
overlapping in time. To the best of my knowledge, SQL Server handles this by queuing the lock
requests. There are some other alternatives for live locks, such as when conflicting requests are made and
error handling results in these requests being re-made all the time and never granted. Your
description, from what I read, seems to be more about "blocking", where one or several processes are
simply waiting for a resource to become available.
(End of update by Tibor Karaszi)
If inserts, updates and selects are correctly written and transactions (explicitly or implicitly created) are short, live locks are not a problem.
However, in a number of scenarios, live locks become PERSISTENT.
Live locks may last from milliseconds to many hours.
For live locks there is (at the back-end level):
1) no automatic detection;
2) no automatic resolution;
3) no logging and no errors raised.
The only way live locks typically reveal themselves is through timeouts at the application level.
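An application (or a DBA session) can make such timeouts explicit instead of waiting indefinitely. A minimal sketch, assuming a T-SQL session; the 5-second value is an arbitrary example:

```sql
-- Sketch: bound how long this session waits on a live lock.
-- After 5 seconds of waiting, the statement fails with error 1222
-- ("Lock request time out period exceeded") instead of hanging.
SET LOCK_TIMEOUT 5000
SELECT COUNT(*) FROM DBA..tbMonitor_Lock   -- any query that might hit a lock
```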
1) A user deletes several hundred rows from a heap table with 20 million records. Practice shows that any access to this table is delayed for the whole duration of the delete
(see accessibility of data, discussed below).
2) A user deletes several hundred rows from a large table that has 10 FK constraints to other tables (some of them heaps). The same result: a live lock, and no accessibility of the
table for the whole duration of the delete.
3) A user bulk-inserts several thousand rows. In practice the underlying table(s) are not accessible (even if Microsoft might happen to deny this).
4) An application developer blissfully writes an embedded SQL statement: "select * from tbTransactionDetails". He does not expect any trouble, even if the table contains 100
columns or may grow to 20 million rows. After all, it's only a select. The developer does not understand that the very first record (as consumed by the application)
must be locked by some mechanism and NOT be changed until the LAST row is pumped to the application server.
Let's rethink the previous paragraph from the point of view of common sense: assume the first row contains your checking account balance, and the last row is the record of your
savings account. You expect Microsoft to guarantee that these accounts do not contradict each other (inside any select statement), regardless of how many
processes may attempt to modify the first or the second account (or both) for the duration of the select.
5) An application developer specifies WHERE conditions for a SELECT clause, and it works fine. However, the conditions are dynamic, and (especially) if they rely on non-clustered
indexes, then as long as statistics for the table are not updated, or a large number of rows must be retrieved for the specified conditions, the select statement will live-lock the table for
its whole duration.
Using sp_who2 you will see the "BlkBy" ("blocked by") column. This is the SPID of the process that created the live lock.
Once again: live locking is a normal activity in a database, unless it is a persistent live lock.
Typically, the DBA kills the process with the ID specified in this column if it is blocking other processes for a long time.
If several processes show different numbers (i.e. are blocked by different processes), this is an indication of a LOCKING CHAIN.
Example: SPIDs 55 and 57 are waiting for process 88. This means process 88 is the HEAD of the locking chain.
Sometimes the hierarchy inside the locking chain may contain 3-4 layers: e.g. SPID 55 waits for SPID 57, which waits for SPID 88, etc.
It is useless to kill process 57 (because it is NOT the head of the locking chain).
Only killing the head of the locking chain will break up the whole locking sequence.
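The head-of-chain search described above can be sketched as a query over master..sysprocesses: the head is a process that blocks others but is not itself blocked. An assumed sketch (SQL Server 2000), not the package's code:

```sql
-- Sketch: find head-of-chain SPIDs -- processes that block others
-- but are not themselves waiting on anyone.
SELECT DISTINCT p.blocked AS head_spid
FROM master..sysprocesses p
WHERE p.blocked <> 0
  AND p.blocked NOT IN (SELECT spid FROM master..sysprocesses
                        WHERE blocked <> 0)
-- The DBA may then decide, after review, to run: KILL <head_spid>
```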
ACCESSIBILITY OF DATA
(For technical reasons - there is a limit on post size on dbforums - this material can be found inside the attachment itself.)
Last edited by alakimov; 01-03-06 at 00:15.