  1. #1
    Join Date
    Jul 2005
    Location
    Irvine, CA
    Posts
    23

    Unanswered: Locklist - maxlocks question

    I'm looking for opinions on what I should do with a locking situation with a 3rd party application during very high row count data updates (over 3 million rows at a time).

    System: AIX 5.1, DB2 8 fp4a, WebSphere, + a 3rd party java application (I cannot disclose who they are).

    Only 30 concurrent connections during the day and no lock contention or escalations with typical user interface use.

    The Problem - a nightly data migration inserts and updates millions of rows of data. This is absolutely out of our control: the process is "owned" by the vendor, and they insist the methods they use are correct. The result is a lot of lock escalations from row to table locks - not to mention the need for excessively large and numerous log files.

    The locklist is already 153,600 with maxlocks at 10%. My (possibly faulty) estimates show that even the 524,288 maximum for the locklist, combined with setting maxlocks to 100%, still would not provide enough locks to allow row locking.
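For anyone checking these estimates, a rough way to compute the per-application lock capacity is sketched below. The 4 KB page size comes from the thread, but the 72-bytes-per-lock figure is an assumption (the worst-case lock entry size on 32-bit DB2 v8; 64-bit instances use larger entries), so treat the result as an order-of-magnitude estimate only:

```python
# Rough per-application lock capacity estimate for DB2 v8.
# ASSUMPTION: 72 bytes per lock (worst case on 32-bit; 64-bit uses more).

PAGE_SIZE = 4096        # LOCKLIST is counted in 4 KB pages
BYTES_PER_LOCK = 72     # assumed worst-case lock entry size

def per_app_lock_capacity(locklist_pages, maxlocks_pct,
                          bytes_per_lock=BYTES_PER_LOCK):
    """Locks one application can hold before MAXLOCKS triggers escalation."""
    lock_memory = locklist_pages * PAGE_SIZE
    return (lock_memory * maxlocks_pct) // 100 // bytes_per_lock

# Current settings from the thread: LOCKLIST=153600, MAXLOCKS=10
print(per_app_lock_capacity(153600, 10))  # roughly 874K locks, short of 3M rows
```

Under this assumption one application escalates long before it can hold 3 million row locks, which matches the behavior described above.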

    Solution? - Could I alter the "huge" tables to default to table locking just before the nightly run, then switch them back to row locking after the run completes? No two processes access the same table at once, so this may work. I haven't tested it yet, though.

    Also, does anyone else out there update millions of rows at a time - all in a single transaction? How did you resolve row lock issues without ending up with escalations to table locks? Remember, we cannot ask the vendor to "split up" the updates and commit every 10,000 rows or so.

    Thanks in advance for any advice.

    -- Steve

  2. #2
    Join Date
    Sep 2003
    Posts
    237
    LOCKLIST is the number of 4K pages of memory used to hold locks; it is allocated as needed. MAXLOCKS is the maximum percentage of the lock list that can be used by ONE application session before escalation. You could set LOCKLIST and MAXLOCKS high enough that LOCKLIST × MAXLOCKS > the number of records being updated by your app.
    mota

  3. #3
    Join Date
    Sep 2003
    Posts
    237
    The above formula is wrong; your current LOCKLIST × MAXLOCKS supports 150,000 locks, so increase the parameters until the new LOCKLIST × MAXLOCKS supports more than 3,000,000 locks. If L is the number of locks per page, you are saying 150,000 = current(LOCKLIST × MAXLOCKS) × L; you want 3,000,000 = new(LOCKLIST × MAXLOCKS) × L, so new(LOCKLIST × MAXLOCKS) = 20 × current(LOCKLIST × MAXLOCKS).
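Plugging the thread's numbers into this formula gives the scale factor directly. The particular split between raising LOCKLIST and raising MAXLOCKS shown at the end is just one illustrative combination, not a recommendation:

```python
# Scale factor needed so LOCKLIST x MAXLOCKS covers the nightly update.
current_locks = 150_000   # locks supported by today's settings (per the post)
needed_locks = 3_000_000  # rows touched in one transaction

factor = needed_locks / current_locks
print(factor)  # 20.0

# One illustrative way to reach a 20x product from LOCKLIST=153600, MAXLOCKS=10:
# raise MAXLOCKS 10 -> 100 (10x) and double LOCKLIST 153600 -> 307200 (2x).
print((100 / 10) * (307200 / 153600))  # 20.0
```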
    mota

  4. #4
    Join Date
    Jul 2005
    Location
    Irvine, CA
    Posts
    23
    Thanks dbamota

    My current allocation is about 628 MB, so I would need to allocate about 12 GB. Wow, that's a lot of lock memory just for a data migration script. Maybe this ammunition will help me persuade the third-party tool's developers.

    Thanks again,

    Steve

  5. #5
    Join Date
    Sep 2003
    Posts
    237
    What are your LOCKLIST and MAXLOCKS? We are OK with 5000 (so memory is 20 MB) and 80, respectively. You could increase MAXLOCKS to 80 and increase LOCKLIST to 2.5 times the current value (going from 10 to 80 is 8x, and 8 × 2.5 = 20, the factor derived above).
    mota

  6. #6
    Join Date
    Aug 2001
    Location
    UK
    Posts
    4,650
    If this is a nightly load, I wouldn't worry about lock escalations. More precisely, there is no point in preventing lock escalations: by doing so, you only increase the overhead of obtaining row locks.

    If possible, I would recommend that you alter the tables to have LOCKSIZE TABLE. But if there are a few dozen tables, and you are talking about a single transaction doing all the inserts/updates on a particular table, you may as well ignore locksize/maxlocks and tune these parameters for online access during the day.

    Cheers
    Sathyaram
