
Thread: Locks - Why?

  1. #1

    Unanswered: Locks - Why?

    Hi there,

I'm transferring data from one table to another. The tables are identical; I've just created the second one with a larger page size to handle more data.

There is around 60GB of data to be transferred from table 1 to table 2, so I'm doing it in chunks. My command looks like this:

insert into table2 select * from table1 where key between a and b (where a and b are integers)

I've just been increasing a and b as I go (i.e., 1 to 1000, then 1001 to 2000, and so forth). Everything had been going fine, but the transfers have been taking longer and longer as a and b increase. The last transfer took over 5 hours, and now I have a whole bunch of locks on both tables.

    Why am I getting locking issues when I'm simply reading the data from one table and putting it into another?

    And now that the tables are locked, how do I "unlock" them?

    Please help!

    VX

  2. #2
I don't know what you mean by "locked". Locks are released when a commit is issued, so unless you have turned off auto-commit (in the CLI or another interface), no locks should be held once the command finishes.
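If you want to see what is actually holding locks, you can check from the CLP. A minimal sketch, assuming a database named MYDB (the name is illustrative) and the lock monitor switch turned on:

db2 connect to MYDB
db2 get snapshot for locks on MYDB
db2 list applications

If an old session shows up with uncommitted work, a COMMIT from that session (or simply disconnecting it) releases its locks.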

    When doing massive inserts, there are some performance considerations:

1. If possible, do not create the indexes on the target table until the data is moved. If you must create the PK index (either explicitly, or implicitly when DB2 creates it for the PK), then try to defer creating any other indexes until all the data is moved.

    2. If possible, defer creating any foreign keys until after the data is loaded.

3. If possible, turn off DB2 logging. If you created the table with NOT LOGGED INITIALLY, then you can turn off logging temporarily (until you do a commit). Execute the following script with auto-commit off (+c option on CLI):

alter table table2 activate not logged initially;
    insert into table2 select * from table1;
    commit;

Then create the indexes. (A combined sketch of all three steps follows.)
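Putting the three suggestions together, here is a sketch of the whole move. It assumes table2 was created with NOT LOGGED INITIALLY; the index name and the foreign key columns are illustrative:

-- run this file with auto-commit off, e.g.: db2 +c -tvf move.sql
alter table table2 activate not logged initially;
insert into table2 select * from table1;
commit;
-- now build the remaining indexes and constraints
create index table2_ix1 on table2 (key);
alter table table2 add foreign key (dept_id) references departments;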

If you want to do massive inserts with logging, then optimize DB2 for massive inserts (example commands follow the list):

- increase the number of asynchronous page cleaners (NUM_IOCLEANERS) to 3 or higher
- decrease the changed pages threshold (CHNGPGS_THRESH) to 20 percent
- increase the log buffer size (LOGBUFSZ) to 128 (4 KB pages) or higher
- make sure your buffer pools are sufficiently large
- increase the size and number of log files (LOGFILSIZ, LOGPRIMARY, LOGSECOND)
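These are all database configuration parameters, so the corresponding CLP commands look like this; the database name MYDB and the values are illustrative, and most of these take effect on the next database activation:

db2 update db cfg for MYDB using NUM_IOCLEANERS 4
db2 update db cfg for MYDB using CHNGPGS_THRESH 20
db2 update db cfg for MYDB using LOGBUFSZ 128
db2 update db cfg for MYDB using LOGFILSIZ 10000 LOGPRIMARY 10 LOGSECOND 20
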
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390
