I am using a DB2 7.2 database. I know its default locking is row level. I am having a problem with my application: when we update/insert records, the database gets locked and the application hangs. I am not sure whether my database is using the default locking or is being locked from somewhere else. So I want to explicitly use row-level locking and set the lock mode to Update.
Any help is appreciated.
Originally posted by alligatorsql.com: Hello,
which database do you use? Some databases cannot run in row-locking mode!
In DB2 UDB there are only two levels of locking: row and table. You can change the default level from row locking to table locking using ALTER TABLE ... LOCKSIZE ROW | TABLE. The default is row, as it has been for the last 12 years, and I imagine you want to keep it that way.
Example: ALTER TABLE EMPLOYEE LOCKSIZE TABLE
There is no way for you to control locking explicitly, and trust me, you don't want to; that's why they call it a database manager. Instead you must let DB2 UDB know what your intentions are, i.e. "FOR FETCH ONLY", "FOR UPDATE OF", "WITH UR", "WITH CS", etc.
You might also want to check whether AUTOCOMMIT is turned ON or OFF. Are you committing frequently? Do it as often as you can; it's a good thing.
What is the size of your LOCKLIST? If it is too small you will escalate to table locks. The default size is always too small, unless you are running the sample database. What is the value of LOCKTIMEOUT? You probably want to set it to 90, then work down to 30 or 10 seconds. The default value of -1 means that you will never time out; it is the default because at one time there was no LOCKTIMEOUT, and it remains backward compatible.
Finally, take some snapshots and look at who is holding the locks and who is waiting on these locks.
We have LOCKLIST=15 (good enough for our testing right now) and LOCKTIMEOUT=-1 (I am testing by changing this to 90).
We have one Detail table and one Summary table. We insert one record into the Detail table and maybe 1-50,000 records into the Summary table. We cannot commit until all the Summary records for the corresponding Detail record have been inserted.
The problem shows up when we try to insert more than 20,000 records into the Summary table. This may be escalating to a table lock and blocking even read access to these tables.
Why is it locking the table? How can we keep row-level locking while inserting records?
It sounds like your LOCKLIST is too small. Here is the math: each lock takes 32 bytes, except the first lock on an object, which takes 64 bytes. So a single row lock is 32 bytes and a single table lock is 32 bytes, unless it is the first lock, in which case it is 64 bytes.
Your lock list is 15 (4K pages), so you have 15 * 4096 = 61440 bytes to support all your database locks, row or table. In other words, you can hold up to 61440 / 32 = 1920 locks. After you reach 1920 locks, DB2 starts lock escalation from row to table.
Now if your MAXLOCKS = 22 (the default), then any one application can only obtain 22% of the LOCKLIST, so after only 1920 * 0.22 = 422 locks from your application, DB2 will start lock escalation (from row to table).
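The arithmetic above can be sketched as follows. This is a minimal illustration only: the 32-byte lock size and 4 KB LOCKLIST page size are taken from the post and vary by DB2 version and platform, and the function name is made up for the example.

```python
# Sketch of the LOCKLIST/MAXLOCKS lock-escalation arithmetic.
# Assumes 32 bytes per lock entry and 4 KB LOCKLIST pages, as stated
# in the post; real values differ across DB2 versions and platforms.

LOCK_BYTES = 32    # bytes per lock entry (64 for the first lock on an object)
PAGE_BYTES = 4096  # size of one LOCKLIST page

def escalation_threshold(locklist_pages, maxlocks_pct):
    """Return (total lock capacity, per-application escalation point)."""
    total_locks = locklist_pages * PAGE_BYTES // LOCK_BYTES
    per_app = int(total_locks * maxlocks_pct / 100)
    return total_locks, per_app

# With LOCKLIST=15 and MAXLOCKS=22 (the defaults discussed above):
total, per_app = escalation_threshold(15, 22)
print(total, per_app)  # 1920 total locks; one application escalates at ~422
```

Bumping LOCKLIST to 1000 pages with MAXLOCKS=25 raises the per-application threshold from roughly 422 locks to 32,000, which is why a larger LOCKLIST avoids escalation during big inserts.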
The defaults are LOCKLIST=50 and MAXLOCKS=22. I think these were the defaults back in 1991 when I started working with the product; back then a server with 16 MB was rare.
I usually start out with LOCKLIST=1000 and MAXLOCKS=25 without question. This should be a no-brainer on today's servers with 4, 8, 16, and 32 GB of memory standard (OK, maybe 32 GB is not standard, but they're out there).
LOCKLIST could be the solution to our problem, but we don't want thousands of locks floating around; we are storing something like 20,000-50,000 records.
After all the struggle we decided to commit after every 100 records and maintain a status flag. Once all the data is stored, we update the status to something like "done"; otherwise we roll back (a batch process checks the status and rolls back the data if necessary).
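The batched-commit approach described above can be sketched roughly like this. This is an illustration only: it uses Python's built-in sqlite3 as a stand-in for a DB2 connection (which would normally go through CLI/JDBC), and the table names, the status column, and the load() helper are all hypothetical.

```python
import sqlite3

BATCH_SIZE = 100  # commit after every 100 Summary rows, as described above

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE detail (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE summary (detail_id INTEGER, payload TEXT)")

def load(detail_id, rows):
    # Insert the Detail row first, marked 'pending', so the batch cleanup
    # job can find and roll back incomplete loads after a failure.
    conn.execute("INSERT INTO detail (id, status) VALUES (?, 'pending')",
                 (detail_id,))
    conn.commit()
    for i, payload in enumerate(rows, start=1):
        conn.execute("INSERT INTO summary (detail_id, payload) VALUES (?, ?)",
                     (detail_id, payload))
        if i % BATCH_SIZE == 0:
            conn.commit()  # release the row locks held so far
    # All Summary rows are in; flip the status to 'done' in the final
    # transaction so the load is marked complete atomically.
    conn.execute("UPDATE detail SET status = 'done' WHERE id = ?",
                 (detail_id,))
    conn.commit()

load(1, ["row %d" % n for n in range(250)])
```

Each intermediate commit keeps the lock count per transaction well under the escalation threshold, at the cost of needing the status flag to detect half-finished loads.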
It was a good exercise for me, as I am not a database guy. I learned a lot of things here.