  1. #1
    Join Date
    Sep 2006
    Location
    Columbus, OH
    Posts
    69

    Transaction log filling up

    DB2 v8.2
    W2k


    DBD::DB2::st execute failed: [IBM][CLI Driver][DB2/NT] SQL0964C The transaction log for the database is full. SQLSTATE=57011


    I have archival logging on. It was my understanding that the transaction log could only fill up with circular logging. I checked the hard drive and there was plenty of space left, so I am confused about how the logs are filling up and what I can do to prevent this. User Exit is on, shipping archive logs to a Veritas server.


    Thanks
    Charlie

  2. #2
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    Your problem is not archiving the logs, it is that your active logs are not big enough to hold all the currently active transactions. Either you are experiencing very large transactions (not committing often enough), or you need to increase the logging parameters (logprimary, logsecond, logfilsiz).

    Andy

  3. #3
    Join Date
    Jan 2007
    Location
    Jena, Germany
    Posts
    2,721
    Another potential problem could be that there is not enough disk space to hold all log entries. Note that seeing free disk space after the rollback doesn't tell you anything: if secondary log files were used, they are freed as part of the rollback, if possible.
    Knut Stolze
    IBM DB2 Analytics Accelerator
    IBM Germany Research & Development

  4. #4
    Join Date
    Jan 2003
    Posts
    1,605
    ARWinner and stolze both have a point.

    PKPChuck, try calculating this formula to get the disk size required:
    total_disk_size_required = (LOGPRIMARY + LOGSECOND) * LOGFILSIZ
    Don't over-allocate the disk space.
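
    The formula above can be checked with a quick shell calculation (all values here are hypothetical; note that LOGFILSIZ is specified in 4 KB pages, so the result must be multiplied by the page size to get bytes):

    ```shell
    # Hypothetical log configuration; LOGFILSIZ is in 4 KB pages.
    LOGPRIMARY=20
    LOGSECOND=40
    LOGFILSIZ=10000

    # Maximum disk space the active log path may need, in bytes:
    echo $(( (LOGPRIMARY + LOGSECOND) * LOGFILSIZ * 4096 ))   # prints 2457600000 (~2.3 GB)
    ```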

    The default settings are small, so increase LOGFILSIZ and LOGPRIMARY. Use LOGSECOND only if there are periods when more logs are required, e.g. aggressive batch jobs at night that consume more log space.

    To get info about logs:
    db2 get db cfg for database_name

    To increase a parameter:
    db2 update db cfg for database_name using parameter_name new_value
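
    For instance, assuming a database named MYDB (the name and values are illustrative only):

    ```shell
    # Show only the log-related configuration parameters
    db2 get db cfg for MYDB | grep -i log

    # Raise the log settings (example values, not recommendations)
    db2 update db cfg for MYDB using LOGFILSIZ 10000
    db2 update db cfg for MYDB using LOGPRIMARY 20
    db2 update db cfg for MYDB using LOGSECOND 40
    ```

    On Windows, replace grep with findstr /i log. If I remember right, changes to LOGFILSIZ and LOGPRIMARY only take effect after all applications disconnect and the database is reactivated.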

    Hope this helps,
    Grofaty

  5. #5
    Join Date
    Sep 2006
    Location
    Columbus, OH
    Posts
    69
    I doubled the LOGFILSIZ and LOGSECOND of the database and now the query runs without filling up the logs. Thanks for the advice.

    Right now I'm just running a pruning script that has to DELETE 60,000 records for each hour of data. I wondered if there is a more effective way to prune this many records. I want to avoid locking, performance issues, and tons of logs.

  6. #6
    Join Date
    Sep 2006
    Location
    Columbus, OH
    Posts
    69
    Quote Originally Posted by PKPChuck
    Right now I'm just running a pruning script that has to DELETE 60,000 records for each hour of data. I wondered if there is a more effective way to prune this many records. I want to avoid locking, performance issues, and tons of logs.

    I believe what is happening is that the DELETE of 60,000 records from table B escalates from row locks to a table lock on table B. The application does rapid inserts into table A, which has a trigger to update table B. Is there a way to DELETE this many records without locking the table, which stalls the application?

    Charlie

  7. #7
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    To reduce the likelihood of lock escalation, increase the size of the LOCKLIST in the database configuration. The default is way too small. A value of 4096 (4K pages) is probably OK for most applications. Also, you should probably increase the MAXLOCKS to about 60.
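
    A minimal sketch of those two changes, assuming a database named MYDB:

    ```shell
    # LOCKLIST is in 4 KB pages (4096 pages = 16 MB); MAXLOCKS is the percentage
    # of the lock list one application may fill before its locks are escalated.
    db2 update db cfg for MYDB using LOCKLIST 4096
    db2 update db cfg for MYDB using MAXLOCKS 60
    ```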

    You could create a stored procedure that does cursor processing that commits after every delete, and only locks one row at a time (or locks n rows at a time if you only commit every n rows deleted). The stored procedure can be called from a script. This may take a little longer to execute, but it will minimize lock contention.
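
    A rough sketch of the chunked-delete idea from a CLP script, committing every 1000 rows (database, table, and column names here are made up; the deletable-fullselect form may require a recent v8 fix pack):

    ```shell
    #!/bin/sh
    # Delete old rows from TABLEB in chunks of 1000, committing after each
    # chunk so locks are released and log usage stays small.
    db2 connect to MYDB

    while : ; do
      db2 +c "DELETE FROM (SELECT 1 FROM TABLEB
               WHERE TS < CURRENT TIMESTAMP - 24 HOURS
               FETCH FIRST 1000 ROWS ONLY) AS chunk"
      rc=$?
      db2 commit
      # CLP returns 1 (SQL0100W) once no rows match; stop then.
      [ $rc -ne 0 ] && break
    done

    db2 connect reset
    ```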
    Last edited by Marcus_A; 01-19-07 at 19:08.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  8. #8
    Join Date
    Jan 2007
    Location
    Jena, Germany
    Posts
    2,721
    Depending on your environment, range partitioning could also be an answer for quickly deleting a bunch of rows: detach the partition, then drop the table that formerly was the partition.

    P.S.: This is a V9 feature.
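
    On V9 that could look roughly like this (table and partition names are made up):

    ```shell
    # Detach the oldest range into a standalone table, then drop it.
    db2 "ALTER TABLE SALES DETACH PARTITION PART_2006 INTO OLD_SALES"
    db2 "DROP TABLE OLD_SALES"
    ```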
    Knut Stolze
    IBM DB2 Analytics Accelerator
    IBM Germany Research & Development

  9. #9
    Join Date
    Sep 2006
    Location
    Columbus, OH
    Posts
    69
    I increased LOCKLIST and MAXLOCKS significantly, but it's still escalating to a table lock. I'm not sure those parameters will help me, since I am trying to delete about 60,000 records at a time.

    Is there any way to pass a flag to the DELETE command so it doesn't lock down the table? I'm using DB2 v8.2 right now. Or maybe a trigger to delete records as they come in... It's looking like I may have to do this offline.

  10. #10
    Join Date
    Sep 2006
    Location
    Columbus, OH
    Posts
    69
    Quote Originally Posted by PKPChuck
    I increased LOCKLIST and MAXLOCKS significantly, but it's still escalating to a table lock. I'm not sure those parameters will help me, since I am trying to delete about 60,000 records at a time.

    Is there any way to pass a flag to the DELETE command so it doesn't lock down the table? I'm using DB2 v8.2 right now. Or maybe a trigger to delete records as they come in... It's looking like I may have to do this offline.
    I bumped LOCKLIST up to 3000 pages. This seems to be adequate to DELETE 60K records. I haven't received a lock escalation since.


    Thanks,
    Charlie

  11. #11
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by PKPChuck
    I bumped LOCKLIST up to 3000 pages. This seems to be adequate to DELETE 60K records. I haven't received a lock escalation since.

    Thanks,
    Charlie
    You seem a bit squeamish about using 4096 pages as I suggested originally. 4096 pages is only about 16 MB of memory.

    Most of the DB2 defaults are the same going all the way back to 1989, when the average Intel 486 server running OS/2 (the only OS DB2 ran on originally) had a maximum memory of 16 MB.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390
