  1. #1
    Join Date
    Feb 2013
    Posts
    29

    Answered: Transaction logs full

    Hello all !

    I have a situation on my SAP-on-DB2 server when upgrading SAP (actually another engineer does the SAP upgrade; I only help if there are DB2 problems).
    During the upgrade there was a 'transaction log full' error, so I guess the database filled up all of the primary and secondary logs (20 primary, 40 secondary, each 64 MB in size).
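    For scale, the total active log capacity with those settings works out to only a few GB (a quick back-of-the-envelope check; logfilsiz is expressed directly in MB here for simplicity):

```shell
# Total active log space: 20 primary + 40 secondary files, 64 MB each.
logfilsiz_mb=64
logprimary=20
logsecond=40
total_mb=$(( logfilsiz_mb * (logprimary + logsecond) ))
echo "${total_mb} MB of active log space"   # 3840 MB, i.e. ~3.75 GB
```

    A single large uncommitted SAP upgrade transaction can easily exceed that.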

    When that happened I suggested running the command: db2 archive log for db <dbname> (log archiving is done on TSM) which actually helped and the upgrade continued and finished successfully.
    The engineer kept running the mentioned command every few minutes.

    My question is: is there any other way of preventing the 'transaction log full' error besides increasing the logfilsiz parameter or the number of primary/secondary logs?

    Because as I see it, I basically have two solutions:

    - As said, increase logfilsiz along with the number of primary/secondary logs, and ensure that the disk location for all of the logs is big enough (the problem only happens during upgrades and very large transactions without committing)
    - Create a cron job that executes the command: db2 archive log for db <dbname> every 5 minutes
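    For reference, the cron-job option could look like this (a sketch only; MYDB and the log file path are placeholders, and as later replies in this thread explain, 'archive log' does not actually free active log space held by open transactions):

```
# Hypothetical crontab entry (crontab -e as the instance owner, e.g. db2inst1):
# force a log archive for MYDB every 5 minutes during the upgrade window.
*/5 * * * * db2 archive log for db MYDB >> /tmp/db2_archive_log.out 2>&1
```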

    Because this is a customer that is not very keen on resizing disk partitions, I am leaning toward implementing a cron job whenever upgrading SAP, but I would really like to read your opinions on which of the two solutions you think is better, or whether you have a better one.

    The DB2 version:
    Instance "db2inst1" uses "64" bits and DB2 code release "SQL09055" with
    level identifier "06060107".
    Informational tokens are "DB2 v9.5.0.5", "special_23289", "U829462_23289", and
    Fix Pack "5".

    OS version:

    AIX: 6100-06-02-1044

    Thanks in advance,
    kwhtre


  3. #2
    Join Date
    Apr 2006
    Location
    Belgium
    Posts
    2,514
    Provided Answers: 11
    logsecond can be increased if space is available.
    Does SAP allow an upgrade of SAP on a non-supported DB2 version?
    If upgrading SAP, why not upgrade DB2 also?
    Best Regards, Guy Przytula
    Database Software Consultant
    Good DBAs are not formed in a week or a month. They are created little by little, day by day. Protracted and patient effort is needed to develop good DBAs.
    Spoon feeding : To treat (another) in a way that discourages independent thought or action, as by overindulgence.
    DB2 UDB LUW Certified V7-V8-V9-V9.7-V10.1-V10.5 DB Admin - Advanced DBA -Dprop..
    Information Server Datastage Certified
    http://www.infocura.be

  4. #3
    Join Date
    Feb 2013
    Posts
    29
    Yes, I knew that about logsecond; I was only wondering which of the two possible solutions is 'better' and less invasive on the system. Like I said, I am leaning toward implementing a cron job with the archive log command. Would you, as an experienced DBA, say that is a good idea, given that transaction logs only fill up during upgrades?

    I know there is a big question about upgrading the database as well, but right now I am focusing on solving the 'transaction log full' nuisance. (Not because I am lazy, but it is a difficult customer.)

    Regards
    Last edited by kwhtre; 09-29-15 at 10:06.

  5. #4
    Join Date
    Jul 2013
    Location
    Moscow, Russia
    Posts
    666
    Provided Answers: 55
    Hello,

    I'm curious how the 'archive log' command could resolve the 'transaction log full' issue. The two things are unrelated.
    From an administrative point of view, the only practical way to resolve such errors online is to increase the value of the logsecond parameter when you see that log utilization is getting close to the current limit.
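    If I understand correctly, that online fix would be something like this (a sketch; MYDB and the value 100 are placeholders — logsecond is one of the log parameters that can be updated while the database stays online):

```
# Bump logsecond online when log utilization approaches the limit.
db2 connect to MYDB
db2 update db cfg for MYDB using LOGSECOND 100
db2 connect reset
```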
    Regards,
    Mark.

  6. #5
    Join Date
    Feb 2013
    Posts
    29
    I was under the impression that the command archives an active log file, making room for a new active log file. Basically it doesn't wait for the logs to be archived automatically but archives them manually. Or am I completely wrong here.

    At least that is how I understand it:
    http://www-01.ibm.com/support/knowle.../r0004476.html (Usage notes)


    Regards

  7. #6
    Join Date
    Jul 2013
    Location
    Moscow, Russia
    Posts
    666
    Provided Answers: 55
    You are completely wrong here.
    'Archive log' doesn't cause any transaction to commit or roll back, so it doesn't free any space in the transaction log.
    Moreover, since the archived log file is truncated (reduced in size) but still counts toward the same (logprimary + logsecond) * logfilsiz limit, you are only making things worse.

    Look at the warning at the link you provided:
    "Frequent use of this command can drastically reduce the amount of the active log space available for transactions."
    Regards,
    Mark.

  8. #7
    Join Date
    Feb 2013
    Posts
    29
    OK, thanks for your help, but how then do you explain that the database stopped with a 'transaction log full' error and resumed operations immediately after I executed the command to archive the logs? Isn't this command equivalent to Oracle's 'alter system switch logfile'? It dumps the transaction log to an archive log (freeing the redo log in Oracle, or in DB2: making room in the active log directory for a new active log).

    Again, thank you for your help.

    Regards

  9. #8
    Join Date
    May 2012
    Location
    Canberra, Australia
    Posts
    176
    Provided Answers: 6
    The log size you have is small. DB2 will only archive an active log file if you have enabled infinite logging. A snapshot will tell you which application is holding the oldest log. You should have your active logs in a separate filesystem from both the data and the archived logs. Set your logfilsiz, logprimary and logsecond to make full use of the log filesystem. Make logfilsiz fairly big so that you are not creating new log files too quickly at high-activity times. If the logs are not archiving regularly enough in quiet times, then schedule a cron job to force an archive every hour or whatever interval you want.
    For instance, we have logfilsiz 262144, logprimary 16, logsecond 75 (100 GB log filesystem) and force an archive every hour. There are some very long-running batch jobs that run periodically.
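    Those numbers check out (logfilsiz is specified in 4 KB pages, so 262144 pages = 1 GiB per log file):

```shell
logfilsiz=262144   # in 4 KB pages
logprimary=16
logsecond=75
file_gib=$(( logfilsiz * 4096 / 1024 / 1024 / 1024 ))
total_gib=$(( file_gib * (logprimary + logsecond) ))
echo "${file_gib} GiB per log file, ${total_gib} GiB maximum"   # 1 GiB per file, 91 GiB max
```

    91 GiB of active log fits comfortably in the 100 GB log filesystem.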
    Andy

    "All parts should go together without forcing. You must remember that the parts you are reassembling were disassembled by you. Therefore, if you can’t get them together again, there must be a reason. By all means, do not use hammer.” — IBM maintenance manual, 1975 "

  10. #9
    Join Date
    Feb 2013
    Posts
    29
    OK, thank you all for your answers.

  11. #10
    Join Date
    Jul 2013
    Location
    Moscow, Russia
    Posts
    666
    Provided Answers: 55
    I can only guess what happened there. The oldest transaction, which was holding the oldest log, might have finished just after you got the 'log full' error, freeing the log space.
    DB2 manages logs slightly differently from Oracle (I'm not an expert at Oracle, though). 'Archive log' doesn't free up any log space available for transactions. It simply stops writing to the current log file and truncates it. This file still counts toward the number of log files (logprimary + logsecond) if it contains uncommitted transactions, despite the fact that it was archived and may have been copied to the archive log path if you configured one. This is why the documentation warns against running 'archive log' too frequently.
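    To identify the holder next time this happens, the database snapshot mentioned earlier in the thread reports which application holds the oldest transaction (a sketch; MYDB is a placeholder):

```
# The snapshot output includes a line like
# "Appl id holding the oldest transaction = <agent id>".
db2 get snapshot for database on MYDB | grep -i "oldest transaction"
```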

    Note that if logsecond <> -1, the archived log files stay in the active log path while they are needed for crash recovery (but that's not related to your problem).
    Regards,
    Mark.

  12. #11
    Join Date
    Feb 2013
    Posts
    29
    Thank you, Mark. I tested some things on my virtual machine (by setting logprimary, logsecond and logfilsiz to really low values and running an update on a table with a million records) and managed to reproduce the error.
    I see now that the problem can only be solved by increasing those parameters. Like you said, archiving manually doesn't make any difference here.

    Thanks and best regards,
    kwhtre
