I was trying to find a workaround to avoid lock escalation, and I tested three "solutions" on SQL Server 2000 SP1/SP2.
On my machine (with more than 700MB RAM), doing an update on a table containing 100,000 records, I found that:
- with SP1, updating more than 765 records triggers lock escalation and locks the entire table, so I have concurrency problems, i.e. other sessions working on the table block and wait.
- with SP2, in my opinion, the threshold is 4860: if your update affects more than 4860 records, SQL Server escalates the lock from rows to the table.
I came to the conclusion that lock escalation depends on the number of rows affected; I can't believe it's a matter of resources.
I can send you the test scenario if you want to verify what I'm saying.
As I said, I found 3 possible solutions, but I'd like to hear from you if they are correct.
- The first one uses the undocumented trace flag 1211, which you have to set in the registry among the SQL Server startup parameters. When the number of rows involved in the update is over the threshold, this flag forces SQL Server not to escalate from row locks to a table lock.
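For anyone who wants to try this, here is a sketch of the two usual ways to turn trace flag 1211 on (I'm assuming SQL Server 2000; the startup-parameter route is the one described above):

[code]
-- 1) As a startup parameter (set via Enterprise Manager -> Server Properties
--    -> Startup Parameters, which stores it in the registry):
--      -T1211
-- 2) At runtime, for all connections (-1 makes it server-wide):
DBCC TRACEON (1211, -1)
-- Check whether it is active:
DBCC TRACESTATUS (1211, -1)
[/code]

As I understand it, the reason this flag is discouraged is that it disables escalation entirely, so a very large update can exhaust lock memory instead of falling back to a cheap table lock.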
Performance is good and so is concurrency.
I don't know why this flag is not recommended: can anybody explain to me what the disadvantages of using it are?
- The second solution uses a cursor: performance is not good, but concurrency is guaranteed.
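A rough sketch of what I mean by the cursor approach (table and column names are placeholders, not from my actual test scenario); updating one row at a time keeps the lock count per statement far below the escalation threshold:

[code]
DECLARE @id int
DECLARE upd_cur CURSOR FOR
    SELECT id FROM MyTable WHERE SomeCol = 'old'
    FOR UPDATE OF SomeCol
OPEN upd_cur
FETCH NEXT FROM upd_cur INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- one-row update: never enough locks in one statement to escalate
    UPDATE MyTable SET SomeCol = 'new' WHERE CURRENT OF upd_cur
    FETCH NEXT FROM upd_cur INTO @id
END
CLOSE upd_cur
DEALLOCATE upd_cur
[/code]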
- The third one is very simple: how can you prevent SQL Server from locking the entire table? The answer is: by locking a single row not involved in the update operation.
So I start a transaction which locks a single row not used by any other transaction in my application; then I go on with a new transaction that executes the big work (the big update), and because of the previously locked row, SQL Server is not allowed to escalate the lock.
I verified that in this case SQL Server uses only row locks, neither page locks nor a table lock.
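To make the trick concrete, here is a sketch with placeholder names (I'm assuming the blocking transaction runs on a separate connection, and that id = -1 is a row nothing else ever touches):

[code]
-- Connection 1: take and hold a row lock on the unused row
BEGIN TRAN
UPDATE MyTable SET SomeCol = SomeCol WHERE id = -1
-- ...leave this transaction open while connection 2 runs...

-- Connection 2: the big update; the escalation attempt fails because
-- connection 1 already holds an incompatible lock on the table, so
-- SQL Server carries on with row locks
BEGIN TRAN
UPDATE MyTable SET SomeCol = 'new' WHERE SomeCol = 'old'
COMMIT TRAN

-- Connection 1, afterwards:
COMMIT TRAN
[/code]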
It looks like you've done a lot of research here. I don't know if this will help, but have you looked at SP_INDEXOPTION?
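In case it's useful, this is roughly how you'd use it (index and table names are made up); it lets you disallow row or page locks on a per-index basis:

[code]
EXEC sp_indexoption 'MyTable.PK_MyTable', 'DisAllowPageLocks', TRUE
-- Verify the setting:
SELECT INDEXPROPERTY(OBJECT_ID('MyTable'), 'PK_MyTable',
                     'IsPageLockDisallowed')
[/code]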
I am interested in your situation and have a few questions, if you don't mind.
[list=1][*]This mass update, does this happen all the time by the client application? So that end-users update 100's of records and therefore lock each other out?[*]Is this process also locking out clients that are just reading the data?[/list=1]
I think that every row lock allocates 96 bytes, so when SQL Server (or the whole machine) starts to run out of memory, SQL Server changes all the row locks to a table lock, which doesn't consume that amount of memory.
There is, however, a parameter you can try to change (it could be the same thing you're describing). It's called "locks" and you can access it through "sp_configure" with the advanced options on. It requires a restart of SQL Server. I would be careful about setting the parameter too high, though (I have never tried it).
Have you tried to apply some table hints? I don't know if they are applicable in your solution.
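For reference, this is the sequence I mean (the value 100000 is just an example):

[code]
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'locks', 100000   -- 0 = dynamic allocation (the default)
RECONFIGURE
-- ...then restart SQL Server for the new limit to take effect
[/code]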
Please let me know if you come up with some new interesting stuff!
Does SQL Server have anything like Sybase ASE's 'lock promotion' configuration? I know in ASE you can specify high and low water marks for page (or row)-level locks to full table locks globally with sp_configure "page lock promotion..." or on a table-by-table basis with sp_setrowlockpromote.
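Writing from memory, the ASE side looks something like this (syntax may be slightly off, and the table name is a placeholder); the three values are the low water mark, high water mark, and percentage:

[code]
-- Server-wide:
sp_configure "row lock promotion HWM", 2000
-- Per table:
sp_setrowlockpromote "table", "mytable", 500, 2000, 100
[/code]

I was wondering whether SQL Server exposes anything equivalent.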
Hi everybody !
Answering achorozy: I can say that, unfortunately, it's not a matter of SP_INDEXOPTION, because with this option you cannot disable table locks.
Besides, when the application does mass updates, SQL Server escalates to an exclusive table lock and other sessions cannot even read rows.
I can tell Jonte that I've checked, and I am not exceeding the maximum number of locks defined for the server. I've also tried to increase this parameter, but it didn't work.
Answering MattR: I didn't find any sp_configure option allowing me to define the lock granularity.
As far as other users reading information goes, you could issue a NOLOCK hint: this will allow users to read data while the update is happening and will not block the update from happening.
Within stored procedures here at this site we issue a
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
at the beginning of the procedure and
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
at the end; this takes the place of issuing NOLOCK on each table.
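The pattern looks like this in full (the procedure and table names are invented for illustration):

[code]
CREATE PROCEDURE usp_ReadStuff AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    -- dirty reads: not blocked by the big update, and does not block it
    SELECT * FROM MyTable
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
END
[/code]

The per-table equivalent would be SELECT * FROM MyTable WITH (NOLOCK), repeated on every table in the query.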
Also, the "locks" option for sp_configure does not set the lock escalation threshold:
Use the locks option to set the maximum number of available locks, limiting the amount of memory Microsoft® SQL Server™ uses for locks. The default setting is 0, which allows SQL Server to allocate and deallocate locks dynamically based on changing system requirements.
When the server is started with locks set to 0, the lock manager allocates two percent of the memory allocated to SQL Server to an initial pool of lock structures. As the pool of locks is exhausted, additional locks are allocated. The dynamic lock pool does not allocate more than 40 percent of the memory allocated to SQL Server.